Test Report: KVM_Linux_crio 19672

d6d2a37830b251a8a712eec07ee86a534797346d:2024-09-20:36297

Failed tests (32/311)

Order  Failed test  Duration (s)
33 TestAddons/parallel/Registry 75.32
34 TestAddons/parallel/Ingress 156.02
36 TestAddons/parallel/MetricsServer 359.18
147 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 7.02
163 TestMultiControlPlane/serial/StopSecondaryNode 141.46
164 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 5.6
165 TestMultiControlPlane/serial/RestartSecondaryNode 6.58
167 TestMultiControlPlane/serial/RestartClusterKeepsNodes 379.85
170 TestMultiControlPlane/serial/StopCluster 141.64
230 TestMultiNode/serial/RestartKeepsNodes 333.25
232 TestMultiNode/serial/StopMultiNode 144.96
239 TestPreload 271.16
247 TestKubernetesUpgrade 343.76
272 TestPause/serial/SecondStartNoReconfiguration 60.94
313 TestStartStop/group/old-k8s-version/serial/FirstStart 282.73
338 TestStartStop/group/no-preload/serial/Stop 138.96
341 TestStartStop/group/embed-certs/serial/Stop 138.96
344 TestStartStop/group/default-k8s-diff-port/serial/Stop 138.99
345 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.38
346 TestStartStop/group/old-k8s-version/serial/DeployApp 0.5
347 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 95.8
349 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
351 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
355 TestStartStop/group/old-k8s-version/serial/SecondStart 732.79
356 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 544.43
357 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 544.58
358 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 544.3
359 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.74
360 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 455.45
361 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 426.45
362 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 371.98
363 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 146.68
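
Any entry above can be re-run in isolation with Go's subtest filter. The sketch below is an approximation of the CI invocation, not the exact Jenkins command: it assumes a minikube checkout at the commit above, a prebuilt out/minikube-linux-amd64, and that the job's kvm2/crio selection flags are supplied separately by the job's setup scripts.

    # Hedged re-run sketch (assumptions: test/integration layout, make target name;
    # driver/runtime flags must be added to match this kvm2 + crio job).
    cd minikube
    make out/minikube-linux-amd64
    go test -v -timeout 90m ./test/integration -run 'TestAddons/parallel/Registry'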
TestAddons/parallel/Registry (75.32s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:328: registry stabilized in 4.07592ms
addons_test.go:330: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-7swkh" [1e3cfba8-c77f-46f3-b6b1-46c7a36ae3a4] Running
addons_test.go:330: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.070926504s
addons_test.go:333: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-ggl6q" [a467b141-5827-4440-b11f-9203739b4a10] Running
addons_test.go:333: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.007354566s
addons_test.go:338: (dbg) Run:  kubectl --context addons-489802 delete po -l run=registry-test --now
addons_test.go:343: (dbg) Run:  kubectl --context addons-489802 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:343: (dbg) Non-zero exit: kubectl --context addons-489802 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.098664792s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:345: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-489802 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:349: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:357: (dbg) Run:  out/minikube-linux-amd64 -p addons-489802 ip
2024/09/20 16:56:57 [DEBUG] GET http://192.168.39.89:5000
addons_test.go:386: (dbg) Run:  out/minikube-linux-amd64 -p addons-489802 addons disable registry --alsologtostderr -v=1
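
The failed check can be replayed by hand against the same profile. The commands below are taken from the run output above; they assume the addons-489802 cluster is still up and that the registry addon is re-enabled after the disable step recorded above.

    # In-cluster probe, as run at addons_test.go:343
    kubectl --context addons-489802 run --rm registry-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

    # Host-side probe of the node registry endpoint (equivalent of the DEBUG GET above)
    out/minikube-linux-amd64 -p addons-489802 ip
    curl -v http://192.168.39.89:5000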
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-489802 -n addons-489802
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-489802 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-489802 logs -n 25: (2.300461072s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-858543 | jenkins | v1.34.0 | 20 Sep 24 16:43 UTC |                     |
	|         | -p download-only-858543                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.34.0 | 20 Sep 24 16:44 UTC | 20 Sep 24 16:44 UTC |
	| delete  | -p download-only-858543                                                                     | download-only-858543 | jenkins | v1.34.0 | 20 Sep 24 16:44 UTC | 20 Sep 24 16:44 UTC |
	| start   | -o=json --download-only                                                                     | download-only-349545 | jenkins | v1.34.0 | 20 Sep 24 16:44 UTC |                     |
	|         | -p download-only-349545                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.34.0 | 20 Sep 24 16:44 UTC | 20 Sep 24 16:44 UTC |
	| delete  | -p download-only-349545                                                                     | download-only-349545 | jenkins | v1.34.0 | 20 Sep 24 16:44 UTC | 20 Sep 24 16:44 UTC |
	| delete  | -p download-only-858543                                                                     | download-only-858543 | jenkins | v1.34.0 | 20 Sep 24 16:44 UTC | 20 Sep 24 16:44 UTC |
	| delete  | -p download-only-349545                                                                     | download-only-349545 | jenkins | v1.34.0 | 20 Sep 24 16:44 UTC | 20 Sep 24 16:44 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-811854 | jenkins | v1.34.0 | 20 Sep 24 16:44 UTC |                     |
	|         | binary-mirror-811854                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:34057                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-811854                                                                     | binary-mirror-811854 | jenkins | v1.34.0 | 20 Sep 24 16:44 UTC | 20 Sep 24 16:44 UTC |
	| addons  | disable dashboard -p                                                                        | addons-489802        | jenkins | v1.34.0 | 20 Sep 24 16:44 UTC |                     |
	|         | addons-489802                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-489802        | jenkins | v1.34.0 | 20 Sep 24 16:44 UTC |                     |
	|         | addons-489802                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-489802 --wait=true                                                                | addons-489802        | jenkins | v1.34.0 | 20 Sep 24 16:44 UTC | 20 Sep 24 16:47 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-489802        | jenkins | v1.34.0 | 20 Sep 24 16:55 UTC | 20 Sep 24 16:55 UTC |
	|         | -p addons-489802                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-489802        | jenkins | v1.34.0 | 20 Sep 24 16:55 UTC | 20 Sep 24 16:55 UTC |
	|         | addons-489802                                                                               |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-489802        | jenkins | v1.34.0 | 20 Sep 24 16:55 UTC | 20 Sep 24 16:55 UTC |
	|         | -p addons-489802                                                                            |                      |         |         |                     |                     |
	| addons  | addons-489802 addons disable                                                                | addons-489802        | jenkins | v1.34.0 | 20 Sep 24 16:55 UTC | 20 Sep 24 16:56 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-489802 addons disable                                                                | addons-489802        | jenkins | v1.34.0 | 20 Sep 24 16:55 UTC | 20 Sep 24 16:56 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-489802        | jenkins | v1.34.0 | 20 Sep 24 16:56 UTC | 20 Sep 24 16:56 UTC |
	|         | addons-489802                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-489802 ssh cat                                                                       | addons-489802        | jenkins | v1.34.0 | 20 Sep 24 16:56 UTC | 20 Sep 24 16:56 UTC |
	|         | /opt/local-path-provisioner/pvc-b8225ab7-cae8-4ab5-8ca1-5e74b7712f98_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-489802 addons disable                                                                | addons-489802        | jenkins | v1.34.0 | 20 Sep 24 16:56 UTC | 20 Sep 24 16:56 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-489802 ssh curl -s                                                                   | addons-489802        | jenkins | v1.34.0 | 20 Sep 24 16:56 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ip      | addons-489802 ip                                                                            | addons-489802        | jenkins | v1.34.0 | 20 Sep 24 16:56 UTC | 20 Sep 24 16:56 UTC |
	| addons  | addons-489802 addons disable                                                                | addons-489802        | jenkins | v1.34.0 | 20 Sep 24 16:56 UTC | 20 Sep 24 16:56 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 16:44:18
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 16:44:18.178711   16686 out.go:345] Setting OutFile to fd 1 ...
	I0920 16:44:18.178820   16686 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 16:44:18.178830   16686 out.go:358] Setting ErrFile to fd 2...
	I0920 16:44:18.178837   16686 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 16:44:18.179018   16686 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-8777/.minikube/bin
	I0920 16:44:18.179615   16686 out.go:352] Setting JSON to false
	I0920 16:44:18.180405   16686 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1601,"bootTime":1726849057,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 16:44:18.180501   16686 start.go:139] virtualization: kvm guest
	I0920 16:44:18.182896   16686 out.go:177] * [addons-489802] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 16:44:18.184216   16686 notify.go:220] Checking for updates...
	I0920 16:44:18.184222   16686 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 16:44:18.185469   16686 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 16:44:18.186874   16686 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19672-8777/kubeconfig
	I0920 16:44:18.188324   16686 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-8777/.minikube
	I0920 16:44:18.190351   16686 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 16:44:18.191922   16686 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 16:44:18.193502   16686 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 16:44:18.225366   16686 out.go:177] * Using the kvm2 driver based on user configuration
	I0920 16:44:18.226431   16686 start.go:297] selected driver: kvm2
	I0920 16:44:18.226443   16686 start.go:901] validating driver "kvm2" against <nil>
	I0920 16:44:18.226453   16686 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 16:44:18.227135   16686 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 16:44:18.227230   16686 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19672-8777/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 16:44:18.242065   16686 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 16:44:18.242112   16686 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 16:44:18.242404   16686 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 16:44:18.242437   16686 cni.go:84] Creating CNI manager for ""
	I0920 16:44:18.242490   16686 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 16:44:18.242500   16686 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0920 16:44:18.242555   16686 start.go:340] cluster config:
	{Name:addons-489802 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-489802 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 16:44:18.242664   16686 iso.go:125] acquiring lock: {Name:mkba95ef0488e46f622333e9f317f43def93040b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 16:44:18.244379   16686 out.go:177] * Starting "addons-489802" primary control-plane node in "addons-489802" cluster
	I0920 16:44:18.245561   16686 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 16:44:18.245610   16686 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0920 16:44:18.245618   16686 cache.go:56] Caching tarball of preloaded images
	I0920 16:44:18.245687   16686 preload.go:172] Found /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 16:44:18.245698   16686 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0920 16:44:18.246011   16686 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/config.json ...
	I0920 16:44:18.246032   16686 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/config.json: {Name:mka75e2e382f021a76fc6885b0195d64c12ed744 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 16:44:18.246164   16686 start.go:360] acquireMachinesLock for addons-489802: {Name:mkfeedb385cf08b5d2aa00913e85815d02a180c2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 16:44:18.246208   16686 start.go:364] duration metric: took 31.448µs to acquireMachinesLock for "addons-489802"
	I0920 16:44:18.246223   16686 start.go:93] Provisioning new machine with config: &{Name:addons-489802 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:addons-489802 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 16:44:18.246282   16686 start.go:125] createHost starting for "" (driver="kvm2")
	I0920 16:44:18.247940   16686 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0920 16:44:18.248080   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:44:18.248117   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:44:18.262329   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33039
	I0920 16:44:18.262809   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:44:18.263337   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:44:18.263357   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:44:18.263710   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:44:18.263878   16686 main.go:141] libmachine: (addons-489802) Calling .GetMachineName
	I0920 16:44:18.263996   16686 main.go:141] libmachine: (addons-489802) Calling .DriverName
	I0920 16:44:18.264148   16686 start.go:159] libmachine.API.Create for "addons-489802" (driver="kvm2")
	I0920 16:44:18.264173   16686 client.go:168] LocalClient.Create starting
	I0920 16:44:18.264205   16686 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem
	I0920 16:44:18.669459   16686 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem
	I0920 16:44:18.951878   16686 main.go:141] libmachine: Running pre-create checks...
	I0920 16:44:18.951905   16686 main.go:141] libmachine: (addons-489802) Calling .PreCreateCheck
	I0920 16:44:18.952422   16686 main.go:141] libmachine: (addons-489802) Calling .GetConfigRaw
	I0920 16:44:18.952871   16686 main.go:141] libmachine: Creating machine...
	I0920 16:44:18.952893   16686 main.go:141] libmachine: (addons-489802) Calling .Create
	I0920 16:44:18.953060   16686 main.go:141] libmachine: (addons-489802) Creating KVM machine...
	I0920 16:44:18.954192   16686 main.go:141] libmachine: (addons-489802) DBG | found existing default KVM network
	I0920 16:44:18.954932   16686 main.go:141] libmachine: (addons-489802) DBG | I0920 16:44:18.954771   16708 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002211f0}
	I0920 16:44:18.954987   16686 main.go:141] libmachine: (addons-489802) DBG | created network xml: 
	I0920 16:44:18.955015   16686 main.go:141] libmachine: (addons-489802) DBG | <network>
	I0920 16:44:18.955034   16686 main.go:141] libmachine: (addons-489802) DBG |   <name>mk-addons-489802</name>
	I0920 16:44:18.955053   16686 main.go:141] libmachine: (addons-489802) DBG |   <dns enable='no'/>
	I0920 16:44:18.955078   16686 main.go:141] libmachine: (addons-489802) DBG |   
	I0920 16:44:18.955099   16686 main.go:141] libmachine: (addons-489802) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0920 16:44:18.955108   16686 main.go:141] libmachine: (addons-489802) DBG |     <dhcp>
	I0920 16:44:18.955115   16686 main.go:141] libmachine: (addons-489802) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0920 16:44:18.955126   16686 main.go:141] libmachine: (addons-489802) DBG |     </dhcp>
	I0920 16:44:18.955132   16686 main.go:141] libmachine: (addons-489802) DBG |   </ip>
	I0920 16:44:18.955142   16686 main.go:141] libmachine: (addons-489802) DBG |   
	I0920 16:44:18.955152   16686 main.go:141] libmachine: (addons-489802) DBG | </network>
	I0920 16:44:18.955180   16686 main.go:141] libmachine: (addons-489802) DBG | 
	I0920 16:44:18.961544   16686 main.go:141] libmachine: (addons-489802) DBG | trying to create private KVM network mk-addons-489802 192.168.39.0/24...
	I0920 16:44:19.029008   16686 main.go:141] libmachine: (addons-489802) Setting up store path in /home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802 ...
	I0920 16:44:19.029031   16686 main.go:141] libmachine: (addons-489802) DBG | private KVM network mk-addons-489802 192.168.39.0/24 created
	I0920 16:44:19.029050   16686 main.go:141] libmachine: (addons-489802) Building disk image from file:///home/jenkins/minikube-integration/19672-8777/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso
	I0920 16:44:19.029076   16686 main.go:141] libmachine: (addons-489802) Downloading /home/jenkins/minikube-integration/19672-8777/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19672-8777/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso...
	I0920 16:44:19.029097   16686 main.go:141] libmachine: (addons-489802) DBG | I0920 16:44:19.028953   16708 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19672-8777/.minikube
	I0920 16:44:19.344578   16686 main.go:141] libmachine: (addons-489802) DBG | I0920 16:44:19.344398   16708 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802/id_rsa...
	I0920 16:44:19.462008   16686 main.go:141] libmachine: (addons-489802) DBG | I0920 16:44:19.461879   16708 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802/addons-489802.rawdisk...
	I0920 16:44:19.462055   16686 main.go:141] libmachine: (addons-489802) DBG | Writing magic tar header
	I0920 16:44:19.462065   16686 main.go:141] libmachine: (addons-489802) DBG | Writing SSH key tar header
	I0920 16:44:19.462072   16686 main.go:141] libmachine: (addons-489802) DBG | I0920 16:44:19.462027   16708 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802 ...
	I0920 16:44:19.462210   16686 main.go:141] libmachine: (addons-489802) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802
	I0920 16:44:19.462252   16686 main.go:141] libmachine: (addons-489802) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-8777/.minikube/machines
	I0920 16:44:19.462263   16686 main.go:141] libmachine: (addons-489802) Setting executable bit set on /home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802 (perms=drwx------)
	I0920 16:44:19.462287   16686 main.go:141] libmachine: (addons-489802) Setting executable bit set on /home/jenkins/minikube-integration/19672-8777/.minikube/machines (perms=drwxr-xr-x)
	I0920 16:44:19.462302   16686 main.go:141] libmachine: (addons-489802) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-8777/.minikube
	I0920 16:44:19.462312   16686 main.go:141] libmachine: (addons-489802) Setting executable bit set on /home/jenkins/minikube-integration/19672-8777/.minikube (perms=drwxr-xr-x)
	I0920 16:44:19.462324   16686 main.go:141] libmachine: (addons-489802) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-8777
	I0920 16:44:19.462340   16686 main.go:141] libmachine: (addons-489802) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0920 16:44:19.462350   16686 main.go:141] libmachine: (addons-489802) DBG | Checking permissions on dir: /home/jenkins
	I0920 16:44:19.462361   16686 main.go:141] libmachine: (addons-489802) DBG | Checking permissions on dir: /home
	I0920 16:44:19.462374   16686 main.go:141] libmachine: (addons-489802) Setting executable bit set on /home/jenkins/minikube-integration/19672-8777 (perms=drwxrwxr-x)
	I0920 16:44:19.462383   16686 main.go:141] libmachine: (addons-489802) DBG | Skipping /home - not owner
	I0920 16:44:19.462409   16686 main.go:141] libmachine: (addons-489802) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0920 16:44:19.462428   16686 main.go:141] libmachine: (addons-489802) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0920 16:44:19.462441   16686 main.go:141] libmachine: (addons-489802) Creating domain...
	I0920 16:44:19.463291   16686 main.go:141] libmachine: (addons-489802) define libvirt domain using xml: 
	I0920 16:44:19.463308   16686 main.go:141] libmachine: (addons-489802) <domain type='kvm'>
	I0920 16:44:19.463315   16686 main.go:141] libmachine: (addons-489802)   <name>addons-489802</name>
	I0920 16:44:19.463321   16686 main.go:141] libmachine: (addons-489802)   <memory unit='MiB'>4000</memory>
	I0920 16:44:19.463328   16686 main.go:141] libmachine: (addons-489802)   <vcpu>2</vcpu>
	I0920 16:44:19.463335   16686 main.go:141] libmachine: (addons-489802)   <features>
	I0920 16:44:19.463346   16686 main.go:141] libmachine: (addons-489802)     <acpi/>
	I0920 16:44:19.463360   16686 main.go:141] libmachine: (addons-489802)     <apic/>
	I0920 16:44:19.463368   16686 main.go:141] libmachine: (addons-489802)     <pae/>
	I0920 16:44:19.463375   16686 main.go:141] libmachine: (addons-489802)     
	I0920 16:44:19.463386   16686 main.go:141] libmachine: (addons-489802)   </features>
	I0920 16:44:19.463393   16686 main.go:141] libmachine: (addons-489802)   <cpu mode='host-passthrough'>
	I0920 16:44:19.463402   16686 main.go:141] libmachine: (addons-489802)   
	I0920 16:44:19.463408   16686 main.go:141] libmachine: (addons-489802)   </cpu>
	I0920 16:44:19.463415   16686 main.go:141] libmachine: (addons-489802)   <os>
	I0920 16:44:19.463424   16686 main.go:141] libmachine: (addons-489802)     <type>hvm</type>
	I0920 16:44:19.463435   16686 main.go:141] libmachine: (addons-489802)     <boot dev='cdrom'/>
	I0920 16:44:19.463445   16686 main.go:141] libmachine: (addons-489802)     <boot dev='hd'/>
	I0920 16:44:19.463472   16686 main.go:141] libmachine: (addons-489802)     <bootmenu enable='no'/>
	I0920 16:44:19.463497   16686 main.go:141] libmachine: (addons-489802)   </os>
	I0920 16:44:19.463520   16686 main.go:141] libmachine: (addons-489802)   <devices>
	I0920 16:44:19.463534   16686 main.go:141] libmachine: (addons-489802)     <disk type='file' device='cdrom'>
	I0920 16:44:19.463547   16686 main.go:141] libmachine: (addons-489802)       <source file='/home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802/boot2docker.iso'/>
	I0920 16:44:19.463558   16686 main.go:141] libmachine: (addons-489802)       <target dev='hdc' bus='scsi'/>
	I0920 16:44:19.463570   16686 main.go:141] libmachine: (addons-489802)       <readonly/>
	I0920 16:44:19.463577   16686 main.go:141] libmachine: (addons-489802)     </disk>
	I0920 16:44:19.463584   16686 main.go:141] libmachine: (addons-489802)     <disk type='file' device='disk'>
	I0920 16:44:19.463592   16686 main.go:141] libmachine: (addons-489802)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0920 16:44:19.463600   16686 main.go:141] libmachine: (addons-489802)       <source file='/home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802/addons-489802.rawdisk'/>
	I0920 16:44:19.463608   16686 main.go:141] libmachine: (addons-489802)       <target dev='hda' bus='virtio'/>
	I0920 16:44:19.463614   16686 main.go:141] libmachine: (addons-489802)     </disk>
	I0920 16:44:19.463623   16686 main.go:141] libmachine: (addons-489802)     <interface type='network'>
	I0920 16:44:19.463633   16686 main.go:141] libmachine: (addons-489802)       <source network='mk-addons-489802'/>
	I0920 16:44:19.463643   16686 main.go:141] libmachine: (addons-489802)       <model type='virtio'/>
	I0920 16:44:19.463651   16686 main.go:141] libmachine: (addons-489802)     </interface>
	I0920 16:44:19.463660   16686 main.go:141] libmachine: (addons-489802)     <interface type='network'>
	I0920 16:44:19.463672   16686 main.go:141] libmachine: (addons-489802)       <source network='default'/>
	I0920 16:44:19.463681   16686 main.go:141] libmachine: (addons-489802)       <model type='virtio'/>
	I0920 16:44:19.463703   16686 main.go:141] libmachine: (addons-489802)     </interface>
	I0920 16:44:19.463722   16686 main.go:141] libmachine: (addons-489802)     <serial type='pty'>
	I0920 16:44:19.463732   16686 main.go:141] libmachine: (addons-489802)       <target port='0'/>
	I0920 16:44:19.463738   16686 main.go:141] libmachine: (addons-489802)     </serial>
	I0920 16:44:19.463745   16686 main.go:141] libmachine: (addons-489802)     <console type='pty'>
	I0920 16:44:19.463755   16686 main.go:141] libmachine: (addons-489802)       <target type='serial' port='0'/>
	I0920 16:44:19.463762   16686 main.go:141] libmachine: (addons-489802)     </console>
	I0920 16:44:19.463767   16686 main.go:141] libmachine: (addons-489802)     <rng model='virtio'>
	I0920 16:44:19.463776   16686 main.go:141] libmachine: (addons-489802)       <backend model='random'>/dev/random</backend>
	I0920 16:44:19.463784   16686 main.go:141] libmachine: (addons-489802)     </rng>
	I0920 16:44:19.463793   16686 main.go:141] libmachine: (addons-489802)     
	I0920 16:44:19.463807   16686 main.go:141] libmachine: (addons-489802)     
	I0920 16:44:19.463822   16686 main.go:141] libmachine: (addons-489802)   </devices>
	I0920 16:44:19.463837   16686 main.go:141] libmachine: (addons-489802) </domain>
	I0920 16:44:19.463852   16686 main.go:141] libmachine: (addons-489802) 
	I0920 16:44:19.470320   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:86:10:bf in network default
	I0920 16:44:19.470900   16686 main.go:141] libmachine: (addons-489802) Ensuring networks are active...
	I0920 16:44:19.470920   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:19.471767   16686 main.go:141] libmachine: (addons-489802) Ensuring network default is active
	I0920 16:44:19.472031   16686 main.go:141] libmachine: (addons-489802) Ensuring network mk-addons-489802 is active
	I0920 16:44:19.472810   16686 main.go:141] libmachine: (addons-489802) Getting domain xml...
	I0920 16:44:19.473428   16686 main.go:141] libmachine: (addons-489802) Creating domain...
	I0920 16:44:20.958983   16686 main.go:141] libmachine: (addons-489802) Waiting to get IP...
	I0920 16:44:20.959942   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:20.960292   16686 main.go:141] libmachine: (addons-489802) DBG | unable to find current IP address of domain addons-489802 in network mk-addons-489802
	I0920 16:44:20.960332   16686 main.go:141] libmachine: (addons-489802) DBG | I0920 16:44:20.960280   16708 retry.go:31] will retry after 218.466528ms: waiting for machine to come up
	I0920 16:44:21.180891   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:21.181202   16686 main.go:141] libmachine: (addons-489802) DBG | unable to find current IP address of domain addons-489802 in network mk-addons-489802
	I0920 16:44:21.181228   16686 main.go:141] libmachine: (addons-489802) DBG | I0920 16:44:21.181159   16708 retry.go:31] will retry after 269.124789ms: waiting for machine to come up
	I0920 16:44:21.451562   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:21.451985   16686 main.go:141] libmachine: (addons-489802) DBG | unable to find current IP address of domain addons-489802 in network mk-addons-489802
	I0920 16:44:21.452021   16686 main.go:141] libmachine: (addons-489802) DBG | I0920 16:44:21.451946   16708 retry.go:31] will retry after 418.879425ms: waiting for machine to come up
	I0920 16:44:21.872595   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:21.873035   16686 main.go:141] libmachine: (addons-489802) DBG | unable to find current IP address of domain addons-489802 in network mk-addons-489802
	I0920 16:44:21.873056   16686 main.go:141] libmachine: (addons-489802) DBG | I0920 16:44:21.873002   16708 retry.go:31] will retry after 379.463169ms: waiting for machine to come up
	I0920 16:44:22.254754   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:22.255179   16686 main.go:141] libmachine: (addons-489802) DBG | unable to find current IP address of domain addons-489802 in network mk-addons-489802
	I0920 16:44:22.255208   16686 main.go:141] libmachine: (addons-489802) DBG | I0920 16:44:22.255151   16708 retry.go:31] will retry after 621.089592ms: waiting for machine to come up
	I0920 16:44:22.877890   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:22.878236   16686 main.go:141] libmachine: (addons-489802) DBG | unable to find current IP address of domain addons-489802 in network mk-addons-489802
	I0920 16:44:22.878254   16686 main.go:141] libmachine: (addons-489802) DBG | I0920 16:44:22.878215   16708 retry.go:31] will retry after 896.419124ms: waiting for machine to come up
	I0920 16:44:23.776119   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:23.776531   16686 main.go:141] libmachine: (addons-489802) DBG | unable to find current IP address of domain addons-489802 in network mk-addons-489802
	I0920 16:44:23.776580   16686 main.go:141] libmachine: (addons-489802) DBG | I0920 16:44:23.776503   16708 retry.go:31] will retry after 792.329452ms: waiting for machine to come up
	I0920 16:44:24.570579   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:24.571007   16686 main.go:141] libmachine: (addons-489802) DBG | unable to find current IP address of domain addons-489802 in network mk-addons-489802
	I0920 16:44:24.571032   16686 main.go:141] libmachine: (addons-489802) DBG | I0920 16:44:24.570964   16708 retry.go:31] will retry after 1.123730634s: waiting for machine to come up
	I0920 16:44:25.695981   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:25.696433   16686 main.go:141] libmachine: (addons-489802) DBG | unable to find current IP address of domain addons-489802 in network mk-addons-489802
	I0920 16:44:25.696455   16686 main.go:141] libmachine: (addons-489802) DBG | I0920 16:44:25.696382   16708 retry.go:31] will retry after 1.437323391s: waiting for machine to come up
	I0920 16:44:27.136109   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:27.136681   16686 main.go:141] libmachine: (addons-489802) DBG | unable to find current IP address of domain addons-489802 in network mk-addons-489802
	I0920 16:44:27.136706   16686 main.go:141] libmachine: (addons-489802) DBG | I0920 16:44:27.136631   16708 retry.go:31] will retry after 2.286987635s: waiting for machine to come up
	I0920 16:44:29.425015   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:29.425554   16686 main.go:141] libmachine: (addons-489802) DBG | unable to find current IP address of domain addons-489802 in network mk-addons-489802
	I0920 16:44:29.425597   16686 main.go:141] libmachine: (addons-489802) DBG | I0920 16:44:29.425518   16708 retry.go:31] will retry after 1.976852311s: waiting for machine to come up
	I0920 16:44:31.404712   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:31.405218   16686 main.go:141] libmachine: (addons-489802) DBG | unable to find current IP address of domain addons-489802 in network mk-addons-489802
	I0920 16:44:31.405240   16686 main.go:141] libmachine: (addons-489802) DBG | I0920 16:44:31.405170   16708 retry.go:31] will retry after 3.060545694s: waiting for machine to come up
	I0920 16:44:34.467106   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:34.467532   16686 main.go:141] libmachine: (addons-489802) DBG | unable to find current IP address of domain addons-489802 in network mk-addons-489802
	I0920 16:44:34.467559   16686 main.go:141] libmachine: (addons-489802) DBG | I0920 16:44:34.467474   16708 retry.go:31] will retry after 3.246517198s: waiting for machine to come up
	I0920 16:44:37.717806   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:37.718239   16686 main.go:141] libmachine: (addons-489802) DBG | unable to find current IP address of domain addons-489802 in network mk-addons-489802
	I0920 16:44:37.718274   16686 main.go:141] libmachine: (addons-489802) DBG | I0920 16:44:37.718168   16708 retry.go:31] will retry after 4.118490306s: waiting for machine to come up
	I0920 16:44:41.841226   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:41.841726   16686 main.go:141] libmachine: (addons-489802) Found IP for machine: 192.168.39.89
	I0920 16:44:41.841743   16686 main.go:141] libmachine: (addons-489802) Reserving static IP address...
	I0920 16:44:41.841755   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has current primary IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:41.842160   16686 main.go:141] libmachine: (addons-489802) DBG | unable to find host DHCP lease matching {name: "addons-489802", mac: "52:54:00:bf:85:db", ip: "192.168.39.89"} in network mk-addons-489802
	I0920 16:44:41.913230   16686 main.go:141] libmachine: (addons-489802) Reserved static IP address: 192.168.39.89
	I0920 16:44:41.913257   16686 main.go:141] libmachine: (addons-489802) Waiting for SSH to be available...
	I0920 16:44:41.913265   16686 main.go:141] libmachine: (addons-489802) DBG | Getting to WaitForSSH function...
	I0920 16:44:41.915767   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:41.916236   16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:minikube Clientid:01:52:54:00:bf:85:db}
	I0920 16:44:41.916267   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:41.916422   16686 main.go:141] libmachine: (addons-489802) DBG | Using SSH client type: external
	I0920 16:44:41.916446   16686 main.go:141] libmachine: (addons-489802) DBG | Using SSH private key: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802/id_rsa (-rw-------)
	I0920 16:44:41.916467   16686 main.go:141] libmachine: (addons-489802) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.89 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 16:44:41.916475   16686 main.go:141] libmachine: (addons-489802) DBG | About to run SSH command:
	I0920 16:44:41.916485   16686 main.go:141] libmachine: (addons-489802) DBG | exit 0
	I0920 16:44:42.045938   16686 main.go:141] libmachine: (addons-489802) DBG | SSH cmd err, output: <nil>: 
	I0920 16:44:42.046220   16686 main.go:141] libmachine: (addons-489802) KVM machine creation complete!
	I0920 16:44:42.046564   16686 main.go:141] libmachine: (addons-489802) Calling .GetConfigRaw
	I0920 16:44:42.047127   16686 main.go:141] libmachine: (addons-489802) Calling .DriverName
	I0920 16:44:42.047334   16686 main.go:141] libmachine: (addons-489802) Calling .DriverName
	I0920 16:44:42.047475   16686 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0920 16:44:42.047490   16686 main.go:141] libmachine: (addons-489802) Calling .GetState
	I0920 16:44:42.049083   16686 main.go:141] libmachine: Detecting operating system of created instance...
	I0920 16:44:42.049109   16686 main.go:141] libmachine: Waiting for SSH to be available...
	I0920 16:44:42.049116   16686 main.go:141] libmachine: Getting to WaitForSSH function...
	I0920 16:44:42.049122   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
	I0920 16:44:42.051309   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:42.051675   16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
	I0920 16:44:42.051731   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:42.051767   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHPort
	I0920 16:44:42.051947   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
	I0920 16:44:42.052082   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
	I0920 16:44:42.052201   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHUsername
	I0920 16:44:42.052358   16686 main.go:141] libmachine: Using SSH client type: native
	I0920 16:44:42.052546   16686 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0920 16:44:42.052561   16686 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0920 16:44:42.153288   16686 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 16:44:42.153332   16686 main.go:141] libmachine: Detecting the provisioner...
	I0920 16:44:42.153344   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
	I0920 16:44:42.156232   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:42.156583   16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
	I0920 16:44:42.156612   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:42.156760   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHPort
	I0920 16:44:42.156968   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
	I0920 16:44:42.157119   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
	I0920 16:44:42.157234   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHUsername
	I0920 16:44:42.157410   16686 main.go:141] libmachine: Using SSH client type: native
	I0920 16:44:42.157610   16686 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0920 16:44:42.157626   16686 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0920 16:44:42.254380   16686 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0920 16:44:42.254438   16686 main.go:141] libmachine: found compatible host: buildroot
	I0920 16:44:42.254444   16686 main.go:141] libmachine: Provisioning with buildroot...
	I0920 16:44:42.254451   16686 main.go:141] libmachine: (addons-489802) Calling .GetMachineName
	I0920 16:44:42.254703   16686 buildroot.go:166] provisioning hostname "addons-489802"
	I0920 16:44:42.254734   16686 main.go:141] libmachine: (addons-489802) Calling .GetMachineName
	I0920 16:44:42.254884   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
	I0920 16:44:42.257868   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:42.258311   16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
	I0920 16:44:42.258354   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:42.258809   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHPort
	I0920 16:44:42.259005   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
	I0920 16:44:42.259172   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
	I0920 16:44:42.259323   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHUsername
	I0920 16:44:42.259521   16686 main.go:141] libmachine: Using SSH client type: native
	I0920 16:44:42.259670   16686 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0920 16:44:42.259683   16686 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-489802 && echo "addons-489802" | sudo tee /etc/hostname
	I0920 16:44:42.370953   16686 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-489802
	
	I0920 16:44:42.370980   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
	I0920 16:44:42.373616   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:42.373970   16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
	I0920 16:44:42.374002   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:42.374153   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHPort
	I0920 16:44:42.374357   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
	I0920 16:44:42.374531   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
	I0920 16:44:42.374634   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHUsername
	I0920 16:44:42.374808   16686 main.go:141] libmachine: Using SSH client type: native
	I0920 16:44:42.374994   16686 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0920 16:44:42.375012   16686 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-489802' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-489802/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-489802' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 16:44:42.482921   16686 main.go:141] libmachine: SSH cmd err, output: <nil>: 
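The hostname script above follows the common Debian-style 127.0.1.1 convention: if no entry for addons-489802 exists, one is appended, otherwise an existing 127.0.1.1 line is rewritten. Assuming the guest image ships with no such entry, the end state in /etc/hosts is a single line like the sketch below (illustrative, not captured from this run):

	127.0.1.1 addons-489802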
	I0920 16:44:42.482949   16686 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19672-8777/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-8777/.minikube}
	I0920 16:44:42.482989   16686 buildroot.go:174] setting up certificates
	I0920 16:44:42.482998   16686 provision.go:84] configureAuth start
	I0920 16:44:42.483007   16686 main.go:141] libmachine: (addons-489802) Calling .GetMachineName
	I0920 16:44:42.483254   16686 main.go:141] libmachine: (addons-489802) Calling .GetIP
	I0920 16:44:42.486082   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:42.486435   16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
	I0920 16:44:42.486458   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:42.486591   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
	I0920 16:44:42.489005   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:42.489385   16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
	I0920 16:44:42.489412   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:42.489530   16686 provision.go:143] copyHostCerts
	I0920 16:44:42.489599   16686 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem (1082 bytes)
	I0920 16:44:42.489774   16686 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem (1123 bytes)
	I0920 16:44:42.489920   16686 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem (1675 bytes)
	I0920 16:44:42.490019   16686 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem org=jenkins.addons-489802 san=[127.0.0.1 192.168.39.89 addons-489802 localhost minikube]
	I0920 16:44:42.556359   16686 provision.go:177] copyRemoteCerts
	I0920 16:44:42.556423   16686 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 16:44:42.556446   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
	I0920 16:44:42.559402   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:42.559884   16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
	I0920 16:44:42.559911   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:42.560233   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHPort
	I0920 16:44:42.560402   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
	I0920 16:44:42.560524   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHUsername
	I0920 16:44:42.560649   16686 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802/id_rsa Username:docker}
	I0920 16:44:42.640095   16686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0920 16:44:42.664291   16686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0920 16:44:42.687271   16686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0920 16:44:42.709976   16686 provision.go:87] duration metric: took 226.963662ms to configureAuth
	I0920 16:44:42.710011   16686 buildroot.go:189] setting minikube options for container-runtime
	I0920 16:44:42.710210   16686 config.go:182] Loaded profile config "addons-489802": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 16:44:42.710288   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
	I0920 16:44:42.713157   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:42.713576   16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
	I0920 16:44:42.713605   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:42.713861   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHPort
	I0920 16:44:42.714050   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
	I0920 16:44:42.714198   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
	I0920 16:44:42.714335   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHUsername
	I0920 16:44:42.714575   16686 main.go:141] libmachine: Using SSH client type: native
	I0920 16:44:42.714732   16686 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0920 16:44:42.714746   16686 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 16:44:42.936196   16686 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 16:44:42.936230   16686 main.go:141] libmachine: Checking connection to Docker...
	I0920 16:44:42.936255   16686 main.go:141] libmachine: (addons-489802) Calling .GetURL
	I0920 16:44:42.937633   16686 main.go:141] libmachine: (addons-489802) DBG | Using libvirt version 6000000
	I0920 16:44:42.940023   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:42.940360   16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
	I0920 16:44:42.940383   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:42.940608   16686 main.go:141] libmachine: Docker is up and running!
	I0920 16:44:42.940623   16686 main.go:141] libmachine: Reticulating splines...
	I0920 16:44:42.940629   16686 client.go:171] duration metric: took 24.676449957s to LocalClient.Create
	I0920 16:44:42.940649   16686 start.go:167] duration metric: took 24.676502405s to libmachine.API.Create "addons-489802"
	I0920 16:44:42.940665   16686 start.go:293] postStartSetup for "addons-489802" (driver="kvm2")
	I0920 16:44:42.940675   16686 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 16:44:42.940691   16686 main.go:141] libmachine: (addons-489802) Calling .DriverName
	I0920 16:44:42.940982   16686 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 16:44:42.941005   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
	I0920 16:44:42.943365   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:42.943725   16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
	I0920 16:44:42.943749   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:42.943950   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHPort
	I0920 16:44:42.944124   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
	I0920 16:44:42.944283   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHUsername
	I0920 16:44:42.944440   16686 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802/id_rsa Username:docker}
	I0920 16:44:43.023999   16686 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 16:44:43.028231   16686 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 16:44:43.028271   16686 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8777/.minikube/addons for local assets ...
	I0920 16:44:43.028362   16686 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8777/.minikube/files for local assets ...
	I0920 16:44:43.028391   16686 start.go:296] duration metric: took 87.721087ms for postStartSetup
	I0920 16:44:43.028430   16686 main.go:141] libmachine: (addons-489802) Calling .GetConfigRaw
	I0920 16:44:43.029004   16686 main.go:141] libmachine: (addons-489802) Calling .GetIP
	I0920 16:44:43.032101   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:43.032392   16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
	I0920 16:44:43.032420   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:43.032651   16686 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/config.json ...
	I0920 16:44:43.032872   16686 start.go:128] duration metric: took 24.786580765s to createHost
	I0920 16:44:43.032897   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
	I0920 16:44:43.035034   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:43.035343   16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
	I0920 16:44:43.035377   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:43.035500   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHPort
	I0920 16:44:43.035665   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
	I0920 16:44:43.035848   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
	I0920 16:44:43.035974   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHUsername
	I0920 16:44:43.036134   16686 main.go:141] libmachine: Using SSH client type: native
	I0920 16:44:43.036283   16686 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0920 16:44:43.036293   16686 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 16:44:43.134258   16686 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726850683.106297733
	
	I0920 16:44:43.134281   16686 fix.go:216] guest clock: 1726850683.106297733
	I0920 16:44:43.134318   16686 fix.go:229] Guest: 2024-09-20 16:44:43.106297733 +0000 UTC Remote: 2024-09-20 16:44:43.032884764 +0000 UTC m=+24.887429631 (delta=73.412969ms)
	I0920 16:44:43.134347   16686 fix.go:200] guest clock delta is within tolerance: 73.412969ms
	I0920 16:44:43.134354   16686 start.go:83] releasing machines lock for "addons-489802", held for 24.88813735s
	I0920 16:44:43.134375   16686 main.go:141] libmachine: (addons-489802) Calling .DriverName
	I0920 16:44:43.134602   16686 main.go:141] libmachine: (addons-489802) Calling .GetIP
	I0920 16:44:43.137503   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:43.137857   16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
	I0920 16:44:43.137885   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:43.138022   16686 main.go:141] libmachine: (addons-489802) Calling .DriverName
	I0920 16:44:43.138471   16686 main.go:141] libmachine: (addons-489802) Calling .DriverName
	I0920 16:44:43.138655   16686 main.go:141] libmachine: (addons-489802) Calling .DriverName
	I0920 16:44:43.138740   16686 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 16:44:43.138784   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
	I0920 16:44:43.138890   16686 ssh_runner.go:195] Run: cat /version.json
	I0920 16:44:43.138911   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
	I0920 16:44:43.141496   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:43.141700   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:43.141814   16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
	I0920 16:44:43.141848   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:43.141984   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHPort
	I0920 16:44:43.142122   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
	I0920 16:44:43.142207   16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
	I0920 16:44:43.142233   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:43.142240   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHUsername
	I0920 16:44:43.142382   16686 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802/id_rsa Username:docker}
	I0920 16:44:43.142400   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHPort
	I0920 16:44:43.142527   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
	I0920 16:44:43.142639   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHUsername
	I0920 16:44:43.142738   16686 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802/id_rsa Username:docker}
	I0920 16:44:43.214377   16686 ssh_runner.go:195] Run: systemctl --version
	I0920 16:44:43.255061   16686 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 16:44:43.407471   16686 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 16:44:43.413920   16686 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 16:44:43.413984   16686 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 16:44:43.430049   16686 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 16:44:43.430083   16686 start.go:495] detecting cgroup driver to use...
	I0920 16:44:43.430165   16686 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 16:44:43.445755   16686 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 16:44:43.460072   16686 docker.go:217] disabling cri-docker service (if available) ...
	I0920 16:44:43.460130   16686 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 16:44:43.473445   16686 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 16:44:43.486406   16686 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 16:44:43.599287   16686 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 16:44:43.771188   16686 docker.go:233] disabling docker service ...
	I0920 16:44:43.771285   16686 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 16:44:43.786254   16686 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 16:44:43.799345   16686 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 16:44:43.929040   16686 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 16:44:44.054620   16686 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 16:44:44.068879   16686 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 16:44:44.087412   16686 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 16:44:44.087482   16686 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 16:44:44.098030   16686 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 16:44:44.098093   16686 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 16:44:44.108462   16686 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 16:44:44.119209   16686 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 16:44:44.130359   16686 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 16:44:44.141802   16686 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 16:44:44.152585   16686 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 16:44:44.169299   16686 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
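Taken together, the sed/grep edits above (pause image, cgroup manager, conmon cgroup, removal of /etc/cni/net.mk, and the unprivileged-port sysctl) should leave /etc/crio/crio.conf.d/02-crio.conf with values along these lines; this is a sketch reconstructed only from the commands shown, with section headers omitted, not a dump taken from the VM:

	pause_image = "registry.k8s.io/pause:3.10"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]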
	I0920 16:44:44.179293   16686 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 16:44:44.188257   16686 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 16:44:44.188326   16686 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 16:44:44.200400   16686 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
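The sysctl failure above is expected on a fresh guest: /proc/sys/net/bridge/ only appears once the br_netfilter module is loaded, which is why modprobe is run next and IPv4 forwarding is switched on afterwards. Assuming shell access to a similar VM, both can be confirmed with:

	lsmod | grep br_netfilter
	sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward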
	I0920 16:44:44.210617   16686 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 16:44:44.322851   16686 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 16:44:44.414303   16686 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 16:44:44.414398   16686 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 16:44:44.418774   16686 start.go:563] Will wait 60s for crictl version
	I0920 16:44:44.418851   16686 ssh_runner.go:195] Run: which crictl
	I0920 16:44:44.422352   16686 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 16:44:44.464229   16686 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 16:44:44.464345   16686 ssh_runner.go:195] Run: crio --version
	I0920 16:44:44.492112   16686 ssh_runner.go:195] Run: crio --version
	I0920 16:44:44.519927   16686 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 16:44:44.520939   16686 main.go:141] libmachine: (addons-489802) Calling .GetIP
	I0920 16:44:44.523216   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:44.523500   16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
	I0920 16:44:44.523521   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:44.523769   16686 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0920 16:44:44.527526   16686 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 16:44:44.539346   16686 kubeadm.go:883] updating cluster {Name:addons-489802 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-489802 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.89 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 16:44:44.539450   16686 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 16:44:44.539491   16686 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 16:44:44.570607   16686 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0920 16:44:44.570672   16686 ssh_runner.go:195] Run: which lz4
	I0920 16:44:44.574305   16686 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 16:44:44.578003   16686 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 16:44:44.578036   16686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0920 16:44:45.832824   16686 crio.go:462] duration metric: took 1.258544501s to copy over tarball
	I0920 16:44:45.832907   16686 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0920 16:44:49.851668   16686 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (4.018714604s)
	I0920 16:44:49.851726   16686 crio.go:469] duration metric: took 4.01886728s to extract the tarball
	I0920 16:44:49.851737   16686 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0920 16:44:49.896630   16686 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 16:44:49.944783   16686 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 16:44:49.944818   16686 cache_images.go:84] Images are preloaded, skipping loading
	I0920 16:44:49.944827   16686 kubeadm.go:934] updating node { 192.168.39.89 8443 v1.31.1 crio true true} ...
	I0920 16:44:49.944968   16686 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-489802 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.89
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-489802 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
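The [Unit]/[Service] text above is the kubelet drop-in that is copied to the guest a few lines below as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. On a running node the effective override can be checked with, for example:

	systemctl cat kubelet | grep -A2 ExecStart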
	I0920 16:44:49.945079   16686 ssh_runner.go:195] Run: crio config
	I0920 16:44:50.001938   16686 cni.go:84] Creating CNI manager for ""
	I0920 16:44:50.001967   16686 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 16:44:50.001981   16686 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 16:44:50.002006   16686 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.89 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-489802 NodeName:addons-489802 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.89"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.89 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 16:44:50.002170   16686 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.89
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-489802"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.89
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.89"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 16:44:50.002231   16686 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 16:44:50.013339   16686 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 16:44:50.013411   16686 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 16:44:50.024767   16686 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0920 16:44:50.045363   16686 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 16:44:50.062898   16686 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
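The kubeadm config rendered a few lines above is staged as /var/tmp/minikube/kubeadm.yaml.new and later copied into place (the cp appears further down) before kubeadm init runs. A low-risk way to preview what such a config would do on the node, without changing any state, is kubeadm's dry-run mode (illustrative only; the test itself runs the full init shown further down):

	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run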
	I0920 16:44:50.080572   16686 ssh_runner.go:195] Run: grep 192.168.39.89	control-plane.minikube.internal$ /etc/hosts
	I0920 16:44:50.085773   16686 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.89	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 16:44:50.098757   16686 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 16:44:50.240556   16686 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 16:44:50.258141   16686 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802 for IP: 192.168.39.89
	I0920 16:44:50.258209   16686 certs.go:194] generating shared ca certs ...
	I0920 16:44:50.258255   16686 certs.go:226] acquiring lock for ca certs: {Name:mkc7ef6c737c6bdc3fdd9dcff8f57029c020d8f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 16:44:50.258438   16686 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key
	I0920 16:44:50.381564   16686 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt ...
	I0920 16:44:50.381596   16686 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt: {Name:mkba49b4d048d5af44df48f4edd690a694a33473 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 16:44:50.381797   16686 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key ...
	I0920 16:44:50.381808   16686 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key: {Name:mk653576ff784ce50de2dfa9e3a0facde1d60271 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 16:44:50.381907   16686 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key
	I0920 16:44:50.546530   16686 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.crt ...
	I0920 16:44:50.546555   16686 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.crt: {Name:mk67c6a6b77428ba0cdac9b9e34d49fcf308bb8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 16:44:50.546726   16686 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key ...
	I0920 16:44:50.546738   16686 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key: {Name:mkd7ae4f2d01ceba146c4dc9b43c4a1a5ab41e93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 16:44:50.546824   16686 certs.go:256] generating profile certs ...
	I0920 16:44:50.546886   16686 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/client.key
	I0920 16:44:50.546900   16686 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/client.crt with IP's: []
	I0920 16:44:50.626758   16686 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/client.crt ...
	I0920 16:44:50.626785   16686 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/client.crt: {Name:mkc5f095f711647000f5605c19ca0db353359e2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 16:44:50.626972   16686 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/client.key ...
	I0920 16:44:50.626986   16686 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/client.key: {Name:mk3f0c684e304c5dc541f54b7034757bf95d7fbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 16:44:50.627082   16686 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/apiserver.key.1bac25cc
	I0920 16:44:50.627100   16686 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/apiserver.crt.1bac25cc with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.89]
	I0920 16:44:50.846521   16686 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/apiserver.crt.1bac25cc ...
	I0920 16:44:50.846553   16686 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/apiserver.crt.1bac25cc: {Name:mkb99a44e1af5a4a578b6ff7445cbfc9f6d1c4b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 16:44:50.846716   16686 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/apiserver.key.1bac25cc ...
	I0920 16:44:50.846729   16686 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/apiserver.key.1bac25cc: {Name:mk1ce5fd024a94836fd45952b6c3038de9bbeaae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 16:44:50.846799   16686 certs.go:381] copying /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/apiserver.crt.1bac25cc -> /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/apiserver.crt
	I0920 16:44:50.846874   16686 certs.go:385] copying /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/apiserver.key.1bac25cc -> /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/apiserver.key
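The apiserver serving certificate generated above is signed for the service/cluster IPs and node address listed at the crypto.go:68 line (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.39.89). On a comparable profile directory the SANs can be inspected with openssl, e.g.:

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/apiserver.crt \
	  | grep -A1 'Subject Alternative Name'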
	I0920 16:44:50.846919   16686 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/proxy-client.key
	I0920 16:44:50.846934   16686 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/proxy-client.crt with IP's: []
	I0920 16:44:51.074511   16686 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/proxy-client.crt ...
	I0920 16:44:51.074548   16686 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/proxy-client.crt: {Name:mk593c697632b0437e75154f622f66ff162758f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 16:44:51.074697   16686 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/proxy-client.key ...
	I0920 16:44:51.074708   16686 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/proxy-client.key: {Name:mkd7afdfda0e263fcdc4ad0882491ad3726f4657 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 16:44:51.074875   16686 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 16:44:51.074907   16686 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem (1082 bytes)
	I0920 16:44:51.074929   16686 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem (1123 bytes)
	I0920 16:44:51.074950   16686 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem (1675 bytes)
	I0920 16:44:51.075572   16686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 16:44:51.104195   16686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 16:44:51.128646   16686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 16:44:51.153291   16686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0920 16:44:51.177482   16686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0920 16:44:51.202143   16686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 16:44:51.226168   16686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 16:44:51.251069   16686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 16:44:51.274951   16686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 16:44:51.298272   16686 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 16:44:51.314508   16686 ssh_runner.go:195] Run: openssl version
	I0920 16:44:51.320418   16686 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 16:44:51.331616   16686 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 16:44:51.336211   16686 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0920 16:44:51.336270   16686 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 16:44:51.341681   16686 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
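The b5213941.0 name used for the symlink above is OpenSSL's subject-hash lookup form; since the CA's subject is the fixed minikubeCA name, the hash printed by the openssl command two lines up is what the link name is built from. A quick check on a similar host:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# expected to print b5213941, matching the /etc/ssl/certs/b5213941.0 symlink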
	I0920 16:44:51.351994   16686 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 16:44:51.356403   16686 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0920 16:44:51.356470   16686 kubeadm.go:392] StartCluster: {Name:addons-489802 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-489802 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.89 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 16:44:51.356584   16686 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 16:44:51.356645   16686 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 16:44:51.396773   16686 cri.go:89] found id: ""
	I0920 16:44:51.396839   16686 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 16:44:51.407827   16686 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 16:44:51.417398   16686 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 16:44:51.426423   16686 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 16:44:51.426443   16686 kubeadm.go:157] found existing configuration files:
	
	I0920 16:44:51.426481   16686 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 16:44:51.435274   16686 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 16:44:51.435338   16686 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 16:44:51.444427   16686 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 16:44:51.453046   16686 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 16:44:51.453111   16686 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 16:44:51.462277   16686 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 16:44:51.470882   16686 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 16:44:51.470938   16686 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 16:44:51.480053   16686 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 16:44:51.488382   16686 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 16:44:51.488450   16686 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
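
The four grep/rm cycles above are the stale-config cleanup step: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443, otherwise it is deleted so the following kubeadm init can regenerate it. A minimal Go sketch of that check-and-remove pattern (an illustration based on the log, not minikube's actual kubeadm.go code):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// Endpoint and file list taken from the log above.
	endpoint := "https://control-plane.minikube.internal:8443"
	confs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, c := range confs {
		data, err := os.ReadFile(c)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing or pointing at a different API server: remove it
			// so kubeadm init writes a fresh copy.
			if rmErr := os.Remove(c); rmErr != nil && !os.IsNotExist(rmErr) {
				fmt.Fprintln(os.Stderr, rmErr)
			}
		}
	}
}
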
	I0920 16:44:51.497406   16686 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 16:44:51.541221   16686 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0920 16:44:51.541351   16686 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 16:44:51.633000   16686 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 16:44:51.633106   16686 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 16:44:51.633217   16686 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0920 16:44:51.641465   16686 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 16:44:51.643561   16686 out.go:235]   - Generating certificates and keys ...
	I0920 16:44:51.643637   16686 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 16:44:51.643707   16686 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 16:44:51.974976   16686 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0920 16:44:52.212429   16686 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0920 16:44:52.725412   16686 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0920 16:44:52.824449   16686 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0920 16:44:52.884139   16686 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0920 16:44:52.884436   16686 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-489802 localhost] and IPs [192.168.39.89 127.0.0.1 ::1]
	I0920 16:44:53.064017   16686 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0920 16:44:53.064225   16686 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-489802 localhost] and IPs [192.168.39.89 127.0.0.1 ::1]
	I0920 16:44:53.110684   16686 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0920 16:44:53.439405   16686 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0920 16:44:53.523372   16686 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0920 16:44:53.523450   16686 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 16:44:53.894835   16686 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 16:44:54.063405   16686 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0920 16:44:54.134012   16686 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 16:44:54.252802   16686 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 16:44:54.496063   16686 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 16:44:54.498352   16686 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 16:44:54.501105   16686 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 16:44:54.502882   16686 out.go:235]   - Booting up control plane ...
	I0920 16:44:54.503004   16686 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 16:44:54.503113   16686 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 16:44:54.503192   16686 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 16:44:54.517820   16686 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 16:44:54.525307   16686 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 16:44:54.525359   16686 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 16:44:54.642832   16686 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0920 16:44:54.642977   16686 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0920 16:44:55.143793   16686 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.346631ms
	I0920 16:44:55.143884   16686 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0920 16:45:00.142510   16686 kubeadm.go:310] [api-check] The API server is healthy after 5.001658723s
	I0920 16:45:00.161952   16686 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0920 16:45:00.199831   16686 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0920 16:45:00.237142   16686 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0920 16:45:00.237431   16686 kubeadm.go:310] [mark-control-plane] Marking the node addons-489802 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0920 16:45:00.267465   16686 kubeadm.go:310] [bootstrap-token] Using token: pxuown.8491ndv1zucibr8t
	I0920 16:45:00.269321   16686 out.go:235]   - Configuring RBAC rules ...
	I0920 16:45:00.269445   16686 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0920 16:45:00.277244   16686 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0920 16:45:00.297062   16686 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0920 16:45:00.303392   16686 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0920 16:45:00.310726   16686 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0920 16:45:00.317990   16686 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0920 16:45:00.550067   16686 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0920 16:45:00.983547   16686 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0920 16:45:01.549916   16686 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0920 16:45:01.549943   16686 kubeadm.go:310] 
	I0920 16:45:01.550082   16686 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0920 16:45:01.550165   16686 kubeadm.go:310] 
	I0920 16:45:01.550391   16686 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0920 16:45:01.550403   16686 kubeadm.go:310] 
	I0920 16:45:01.550435   16686 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0920 16:45:01.550520   16686 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0920 16:45:01.550590   16686 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0920 16:45:01.550601   16686 kubeadm.go:310] 
	I0920 16:45:01.550668   16686 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0920 16:45:01.550680   16686 kubeadm.go:310] 
	I0920 16:45:01.550751   16686 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0920 16:45:01.550761   16686 kubeadm.go:310] 
	I0920 16:45:01.550847   16686 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0920 16:45:01.550942   16686 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0920 16:45:01.551031   16686 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0920 16:45:01.551040   16686 kubeadm.go:310] 
	I0920 16:45:01.551130   16686 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0920 16:45:01.551241   16686 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0920 16:45:01.551252   16686 kubeadm.go:310] 
	I0920 16:45:01.551332   16686 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token pxuown.8491ndv1zucibr8t \
	I0920 16:45:01.551422   16686 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:736f3fb521855813decb4a7214f2b8beff5f81c3971b60decfa20a8807626a2d \
	I0920 16:45:01.551443   16686 kubeadm.go:310] 	--control-plane 
	I0920 16:45:01.551456   16686 kubeadm.go:310] 
	I0920 16:45:01.551575   16686 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0920 16:45:01.551586   16686 kubeadm.go:310] 
	I0920 16:45:01.551676   16686 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token pxuown.8491ndv1zucibr8t \
	I0920 16:45:01.551784   16686 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:736f3fb521855813decb4a7214f2b8beff5f81c3971b60decfa20a8807626a2d 
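
The --discovery-token-ca-cert-hash printed in the join command is the SHA-256 of the cluster CA's Subject Public Key Info in DER form. A hedged Go sketch that recomputes it from the default kubeadm CA path, which can be used to verify the value shown above (illustration only, not code from this test run):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	// Default kubeadm CA location on the control-plane node.
	pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatal("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA key.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		log.Fatal(err)
	}
	sum := sha256.Sum256(spki)
	fmt.Printf("sha256:%s\n", hex.EncodeToString(sum[:]))
}
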
	I0920 16:45:01.552616   16686 kubeadm.go:310] W0920 16:44:51.520638     818 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 16:45:01.553045   16686 kubeadm.go:310] W0920 16:44:51.522103     818 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 16:45:01.553171   16686 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 16:45:01.553193   16686 cni.go:84] Creating CNI manager for ""
	I0920 16:45:01.553204   16686 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 16:45:01.554912   16686 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 16:45:01.556375   16686 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 16:45:01.567185   16686 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
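
The scp above drops a bridge CNI conflist into /etc/cni/net.d so crio can wire pod networking. The exact 496-byte file minikube copies is not reproduced in the log; the Go sketch below writes a generic bridge + host-local conflist of the same shape, purely to illustrate the format (the JSON content and subnet are assumptions, not the shipped configuration):

package main

import (
	"log"
	"os"
)

// Generic bridge + host-local conflist; treat the content as an example of
// the CNI conflist format, not minikube's actual 1-k8s.conflist.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		log.Fatal(err)
	}
}
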
	I0920 16:45:01.590373   16686 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 16:45:01.590503   16686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 16:45:01.590518   16686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-489802 minikube.k8s.io/updated_at=2024_09_20T16_45_01_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=0626f22cf0d915d75e291a5bce701f94395056e1 minikube.k8s.io/name=addons-489802 minikube.k8s.io/primary=true
	I0920 16:45:01.611693   16686 ops.go:34] apiserver oom_adj: -16
	I0920 16:45:01.740445   16686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 16:45:02.241564   16686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 16:45:02.740509   16686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 16:45:03.241160   16686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 16:45:03.740876   16686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 16:45:04.241125   16686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 16:45:04.740796   16686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 16:45:05.241433   16686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 16:45:05.740524   16686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 16:45:05.862361   16686 kubeadm.go:1113] duration metric: took 4.271922428s to wait for elevateKubeSystemPrivileges
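
The repeated "kubectl get sa default" runs above are a poll loop: kubeadm init returns before the controller manager has created the default ServiceAccount, so the command is retried roughly every 500ms until it succeeds (about 4.27s in this run). A rough Go sketch of such a poll, shelling out to kubectl (illustrative only; the 2-minute timeout is an assumption, not a value from the log):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA polls `kubectl get sa default` until it succeeds or the
// deadline passes, mirroring the ~500ms retry cadence visible in the log.
func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		cmd := exec.Command("kubectl", "--kubeconfig", kubeconfig, "get", "sa", "default")
		if err := cmd.Run(); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("default service account not created within %s", timeout)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
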
	I0920 16:45:05.862397   16686 kubeadm.go:394] duration metric: took 14.505940675s to StartCluster
	I0920 16:45:05.862414   16686 settings.go:142] acquiring lock: {Name:mk2a13c58cdc0faf8cddca5d6716175d45db9bfd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 16:45:05.862558   16686 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19672-8777/kubeconfig
	I0920 16:45:05.862903   16686 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/kubeconfig: {Name:mkf32a4c736808e023459b2f0e40188618a38db1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 16:45:05.863101   16686 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.89 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 16:45:05.863138   16686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0920 16:45:05.863158   16686 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0920 16:45:05.863290   16686 addons.go:69] Setting yakd=true in profile "addons-489802"
	I0920 16:45:05.863282   16686 addons.go:69] Setting default-storageclass=true in profile "addons-489802"
	I0920 16:45:05.863308   16686 addons.go:234] Setting addon yakd=true in "addons-489802"
	I0920 16:45:05.863317   16686 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-489802"
	I0920 16:45:05.863312   16686 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-489802"
	I0920 16:45:05.863314   16686 addons.go:69] Setting cloud-spanner=true in profile "addons-489802"
	I0920 16:45:05.863340   16686 host.go:66] Checking if "addons-489802" exists ...
	I0920 16:45:05.863341   16686 addons.go:234] Setting addon cloud-spanner=true in "addons-489802"
	I0920 16:45:05.863342   16686 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-489802"
	I0920 16:45:05.863361   16686 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-489802"
	I0920 16:45:05.863363   16686 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-489802"
	I0920 16:45:05.863375   16686 host.go:66] Checking if "addons-489802" exists ...
	I0920 16:45:05.863390   16686 host.go:66] Checking if "addons-489802" exists ...
	I0920 16:45:05.863391   16686 host.go:66] Checking if "addons-489802" exists ...
	I0920 16:45:05.863391   16686 config.go:182] Loaded profile config "addons-489802": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 16:45:05.863448   16686 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-489802"
	I0920 16:45:05.863461   16686 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-489802"
	I0920 16:45:05.863793   16686 addons.go:69] Setting gcp-auth=true in profile "addons-489802"
	I0920 16:45:05.863800   16686 addons.go:69] Setting ingress-dns=true in profile "addons-489802"
	I0920 16:45:05.863804   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:45:05.863808   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:45:05.863821   16686 addons.go:69] Setting ingress=true in profile "addons-489802"
	I0920 16:45:05.863824   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:45:05.863831   16686 addons.go:69] Setting metrics-server=true in profile "addons-489802"
	I0920 16:45:05.863821   16686 addons.go:69] Setting inspektor-gadget=true in profile "addons-489802"
	I0920 16:45:05.863839   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:45:05.863843   16686 addons.go:234] Setting addon metrics-server=true in "addons-489802"
	I0920 16:45:05.863845   16686 addons.go:69] Setting volcano=true in profile "addons-489802"
	I0920 16:45:05.863812   16686 mustload.go:65] Loading cluster: addons-489802
	I0920 16:45:05.863852   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:45:05.863856   16686 addons.go:234] Setting addon volcano=true in "addons-489802"
	I0920 16:45:05.863865   16686 host.go:66] Checking if "addons-489802" exists ...
	I0920 16:45:05.863881   16686 host.go:66] Checking if "addons-489802" exists ...
	I0920 16:45:05.863918   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:45:05.863925   16686 addons.go:69] Setting registry=true in profile "addons-489802"
	I0920 16:45:05.863943   16686 addons.go:234] Setting addon registry=true in "addons-489802"
	I0920 16:45:05.863943   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:45:05.863955   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:45:05.863978   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:45:05.864003   16686 config.go:182] Loaded profile config "addons-489802": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 16:45:05.864008   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:45:05.864067   16686 addons.go:69] Setting storage-provisioner=true in profile "addons-489802"
	I0920 16:45:05.864077   16686 addons.go:234] Setting addon storage-provisioner=true in "addons-489802"
	I0920 16:45:05.864162   16686 addons.go:69] Setting volumesnapshots=true in profile "addons-489802"
	I0920 16:45:05.864180   16686 addons.go:234] Setting addon volumesnapshots=true in "addons-489802"
	I0920 16:45:05.864214   16686 host.go:66] Checking if "addons-489802" exists ...
	I0920 16:45:05.864241   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:45:05.864270   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:45:05.863833   16686 addons.go:234] Setting addon ingress=true in "addons-489802"
	I0920 16:45:05.864312   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:45:05.864337   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:45:05.863812   16686 addons.go:234] Setting addon ingress-dns=true in "addons-489802"
	I0920 16:45:05.864407   16686 host.go:66] Checking if "addons-489802" exists ...
	I0920 16:45:05.863847   16686 addons.go:234] Setting addon inspektor-gadget=true in "addons-489802"
	I0920 16:45:05.863810   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:45:05.864596   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:45:05.864641   16686 host.go:66] Checking if "addons-489802" exists ...
	I0920 16:45:05.864662   16686 host.go:66] Checking if "addons-489802" exists ...
	I0920 16:45:05.864741   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:45:05.864770   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:45:05.864799   16686 host.go:66] Checking if "addons-489802" exists ...
	I0920 16:45:05.864991   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:45:05.864993   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:45:05.865016   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:45:05.865021   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:45:05.865128   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:45:05.865158   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:45:05.865250   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:45:05.865287   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:45:05.865605   16686 host.go:66] Checking if "addons-489802" exists ...
	I0920 16:45:05.873149   16686 out.go:177] * Verifying Kubernetes components...
	I0920 16:45:05.875354   16686 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 16:45:05.886351   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:45:05.886408   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:45:05.886439   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:45:05.886493   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:45:05.886542   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36167
	I0920 16:45:05.886778   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45461
	I0920 16:45:05.886908   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40277
	I0920 16:45:05.887721   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:45:05.887867   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:45:05.887935   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:45:05.888511   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:45:05.888539   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:45:05.888665   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:45:05.888682   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:45:05.889051   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:45:05.889074   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:45:05.889168   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39407
	I0920 16:45:05.889340   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:45:05.889387   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:45:05.889430   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:45:05.889990   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:45:05.890030   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:45:05.890136   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:45:05.890165   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:45:05.894535   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:45:05.895113   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:45:05.895154   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:45:05.904311   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:45:05.904341   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:45:05.905034   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:45:05.905227   16686 main.go:141] libmachine: (addons-489802) Calling .GetState
	I0920 16:45:05.910612   16686 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-489802"
	I0920 16:45:05.910663   16686 host.go:66] Checking if "addons-489802" exists ...
	I0920 16:45:05.911040   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:45:05.911095   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:45:05.911196   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43009
	I0920 16:45:05.912127   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36389
	I0920 16:45:05.912633   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:45:05.913296   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:45:05.913317   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:45:05.913620   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43891
	I0920 16:45:05.913784   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41005
	I0920 16:45:05.913785   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:45:05.914527   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:45:05.914569   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:45:05.914814   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:45:05.914815   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:45:05.915345   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:45:05.915366   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:45:05.915470   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:45:05.915488   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:45:05.916370   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:45:05.916574   16686 main.go:141] libmachine: (addons-489802) Calling .GetState
	I0920 16:45:05.916621   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:45:05.917159   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:45:05.917200   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:45:05.917629   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:45:05.918192   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:45:05.918213   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:45:05.918613   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:45:05.918669   16686 host.go:66] Checking if "addons-489802" exists ...
	I0920 16:45:05.919045   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:45:05.919074   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:45:05.922095   16686 main.go:141] libmachine: (addons-489802) Calling .GetState
	I0920 16:45:05.925413   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39609
	I0920 16:45:05.926161   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:45:05.926895   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:45:05.926919   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:45:05.927445   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:45:05.928038   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:45:05.928083   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:45:05.930652   16686 addons.go:234] Setting addon default-storageclass=true in "addons-489802"
	I0920 16:45:05.930702   16686 host.go:66] Checking if "addons-489802" exists ...
	I0920 16:45:05.931084   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:45:05.931143   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:45:05.932706   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34593
	I0920 16:45:05.933363   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:45:05.934073   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:45:05.934093   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:45:05.934558   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:45:05.935171   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:45:05.935210   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:45:05.941706   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43133
	I0920 16:45:05.942347   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:45:05.943149   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:45:05.943173   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:45:05.943717   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:45:05.949811   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36123
	I0920 16:45:05.950710   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:45:05.950769   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:45:05.951083   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:45:05.951845   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:45:05.951868   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:45:05.952349   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:45:05.952538   16686 main.go:141] libmachine: (addons-489802) Calling .DriverName
	I0920 16:45:05.953123   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33577
	I0920 16:45:05.954739   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:45:05.955577   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38365
	I0920 16:45:05.956118   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46489
	I0920 16:45:05.956311   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:45:05.956877   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:45:05.956902   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:45:05.957263   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:45:05.957283   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:45:05.958119   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:45:05.958195   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41117
	I0920 16:45:05.958880   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:45:05.958921   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:45:05.959186   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:45:05.959739   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:45:05.959761   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:45:05.959785   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:45:05.960399   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:45:05.960985   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:45:05.961025   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:45:05.961535   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:45:05.961729   16686 main.go:141] libmachine: (addons-489802) Calling .GetState
	I0920 16:45:05.961940   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:45:05.961958   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:45:05.962782   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:45:05.963365   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:45:05.963414   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:45:05.963800   16686 main.go:141] libmachine: (addons-489802) Calling .DriverName
	I0920 16:45:05.966313   16686 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0920 16:45:05.967714   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36999
	I0920 16:45:05.967733   16686 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0920 16:45:05.967750   16686 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0920 16:45:05.967775   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
	I0920 16:45:05.971362   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42171
	I0920 16:45:05.972858   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33479
	I0920 16:45:05.974844   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:45:05.975487   16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
	I0920 16:45:05.975517   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:45:05.975763   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHPort
	I0920 16:45:05.975965   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
	I0920 16:45:05.976140   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHUsername
	I0920 16:45:05.976363   16686 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802/id_rsa Username:docker}
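
Each addon installer in this phase opens its own SSH session to the node using the per-machine key shown in the log line above. A hedged sketch of building an equivalent client with golang.org/x/crypto/ssh, using the address, user, and key path from that line (an illustration, not minikube's sshutil implementation):

package main

import (
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path, user, and address taken from the log line above.
	keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	}
	client, err := ssh.Dial("tcp", "192.168.39.89:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()
	out, err := session.CombinedOutput("uname -a")
	if err != nil {
		log.Fatal(err)
	}
	os.Stdout.Write(out)
}
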
	I0920 16:45:05.977671   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42025
	I0920 16:45:05.978187   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:45:05.981448   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46501
	I0920 16:45:05.981604   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44741
	I0920 16:45:05.982424   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:45:05.982550   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:45:05.982830   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:45:05.982881   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:45:05.983467   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:45:05.983492   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:45:05.983551   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:45:05.983961   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:45:05.983979   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:45:05.984042   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:45:05.984224   16686 main.go:141] libmachine: (addons-489802) Calling .GetState
	I0920 16:45:05.984715   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35263
	I0920 16:45:05.984871   16686 main.go:141] libmachine: (addons-489802) Calling .GetState
	I0920 16:45:05.984923   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:45:05.985197   16686 main.go:141] libmachine: (addons-489802) Calling .GetState
	I0920 16:45:05.986711   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:45:05.987367   16686 main.go:141] libmachine: (addons-489802) Calling .DriverName
	I0920 16:45:05.987635   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:45:05.987654   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:45:05.987994   16686 main.go:141] libmachine: (addons-489802) Calling .DriverName
	I0920 16:45:05.988156   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:45:05.988566   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42959
	I0920 16:45:05.989594   16686 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0920 16:45:05.990395   16686 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0920 16:45:05.991212   16686 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0920 16:45:05.991233   16686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0920 16:45:05.991257   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
	I0920 16:45:05.991416   16686 main.go:141] libmachine: (addons-489802) Calling .DriverName
	I0920 16:45:05.992716   16686 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0920 16:45:05.992737   16686 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0920 16:45:05.992760   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
	I0920 16:45:05.992873   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39923
	I0920 16:45:05.993699   16686 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0920 16:45:05.995293   16686 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0920 16:45:05.995314   16686 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0920 16:45:05.995337   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
	I0920 16:45:05.995421   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHPort
	I0920 16:45:05.995474   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:45:05.995494   16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
	I0920 16:45:05.995520   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:45:05.995539   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39693
	I0920 16:45:06.002124   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:45:06.002163   16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
	I0920 16:45:06.002180   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:45:06.002186   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHPort
	I0920 16:45:06.002226   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
	I0920 16:45:06.002256   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHPort
	I0920 16:45:06.002186   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:45:06.002304   16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
	I0920 16:45:06.002330   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:45:06.002392   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:45:06.002441   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:45:06.002794   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:45:06.002895   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:45:06.003001   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:45:06.003084   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:45:06.003168   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHUsername
	I0920 16:45:06.003348   16686 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802/id_rsa Username:docker}
	I0920 16:45:06.003599   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:45:06.003651   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:45:06.003661   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:45:06.003693   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
	I0920 16:45:06.003693   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:45:06.003708   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:45:06.003715   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
	I0920 16:45:06.003952   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:45:06.003969   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:45:06.004102   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:45:06.004235   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:45:06.004248   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:45:06.004312   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:45:06.004332   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:45:06.004348   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:45:06.004535   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:45:06.004574   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHUsername
	I0920 16:45:06.004738   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHUsername
	I0920 16:45:06.004727   16686 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802/id_rsa Username:docker}
	I0920 16:45:06.004793   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:45:06.005068   16686 main.go:141] libmachine: (addons-489802) Calling .GetState
	I0920 16:45:06.005104   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:45:06.005120   16686 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802/id_rsa Username:docker}
	I0920 16:45:06.005134   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:45:06.005135   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:45:06.005145   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:45:06.006374   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:45:06.006382   16686 main.go:141] libmachine: (addons-489802) Calling .GetState
	I0920 16:45:06.006398   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:45:06.006377   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:45:06.007189   16686 main.go:141] libmachine: (addons-489802) Calling .GetState
	I0920 16:45:06.007202   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36911
	I0920 16:45:06.007213   16686 main.go:141] libmachine: (addons-489802) Calling .GetState
	I0920 16:45:06.007251   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46439
	I0920 16:45:06.007358   16686 main.go:141] libmachine: (addons-489802) Calling .DriverName
	I0920 16:45:06.007582   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:45:06.007618   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:45:06.008305   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:45:06.009013   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:45:06.009036   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:45:06.009097   16686 main.go:141] libmachine: (addons-489802) Calling .DriverName
	I0920 16:45:06.009454   16686 main.go:141] libmachine: Making call to close driver server
	I0920 16:45:06.009483   16686 main.go:141] libmachine: (addons-489802) Calling .Close
	I0920 16:45:06.011482   16686 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0920 16:45:06.011667   16686 main.go:141] libmachine: (addons-489802) DBG | Closing plugin on server side
	I0920 16:45:06.011700   16686 main.go:141] libmachine: Successfully made call to close driver server
	I0920 16:45:06.011718   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:45:06.011719   16686 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 16:45:06.011730   16686 main.go:141] libmachine: Making call to close driver server
	I0920 16:45:06.011738   16686 main.go:141] libmachine: (addons-489802) Calling .Close
	I0920 16:45:06.011780   16686 main.go:141] libmachine: (addons-489802) Calling .DriverName
	I0920 16:45:06.012083   16686 main.go:141] libmachine: (addons-489802) DBG | Closing plugin on server side
	I0920 16:45:06.012119   16686 main.go:141] libmachine: Successfully made call to close driver server
	I0920 16:45:06.012127   16686 main.go:141] libmachine: Making call to close connection to plugin binary
	W0920 16:45:06.012215   16686 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0920 16:45:06.013040   16686 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0920 16:45:06.013057   16686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0920 16:45:06.013076   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
	I0920 16:45:06.013854   16686 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0920 16:45:06.013875   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:45:06.014222   16686 main.go:141] libmachine: (addons-489802) Calling .DriverName
	I0920 16:45:06.014278   16686 main.go:141] libmachine: (addons-489802) Calling .GetState
	I0920 16:45:06.015566   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:45:06.015585   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:45:06.016191   16686 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0920 16:45:06.016298   16686 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0920 16:45:06.016476   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:45:06.016889   16686 main.go:141] libmachine: (addons-489802) Calling .DriverName
	I0920 16:45:06.017494   16686 main.go:141] libmachine: (addons-489802) Calling .GetState
	I0920 16:45:06.018839   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:45:06.019261   16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
	I0920 16:45:06.019283   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:45:06.019485   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHPort
	I0920 16:45:06.019664   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
	I0920 16:45:06.019716   16686 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0920 16:45:06.019816   16686 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 16:45:06.019996   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHUsername
	I0920 16:45:06.020051   16686 main.go:141] libmachine: (addons-489802) Calling .DriverName
	I0920 16:45:06.020211   16686 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802/id_rsa Username:docker}
	I0920 16:45:06.020731   16686 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 16:45:06.021987   16686 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0920 16:45:06.022029   16686 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0920 16:45:06.022093   16686 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 16:45:06.022300   16686 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 16:45:06.022755   16686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 16:45:06.022776   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
	I0920 16:45:06.023143   16686 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0920 16:45:06.023160   16686 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0920 16:45:06.023177   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
	I0920 16:45:06.024174   16686 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0920 16:45:06.024191   16686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0920 16:45:06.024275   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
	I0920 16:45:06.024664   16686 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0920 16:45:06.025980   16686 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0920 16:45:06.027309   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:45:06.027785   16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
	I0920 16:45:06.027815   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:45:06.027929   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:45:06.028009   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHPort
	I0920 16:45:06.028181   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
	I0920 16:45:06.028474   16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
	I0920 16:45:06.028495   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:45:06.028615   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHUsername
	I0920 16:45:06.028701   16686 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0920 16:45:06.028891   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:45:06.028889   16686 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802/id_rsa Username:docker}
	I0920 16:45:06.028923   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHPort
	I0920 16:45:06.029196   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
	I0920 16:45:06.029192   16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
	I0920 16:45:06.029222   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:45:06.029483   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHPort
	I0920 16:45:06.029709   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHUsername
	I0920 16:45:06.029887   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
	I0920 16:45:06.029906   16686 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802/id_rsa Username:docker}
	I0920 16:45:06.030033   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHUsername
	I0920 16:45:06.030190   16686 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802/id_rsa Username:docker}
	I0920 16:45:06.031196   16686 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0920 16:45:06.032725   16686 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0920 16:45:06.032746   16686 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0920 16:45:06.032780   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
	I0920 16:45:06.034644   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42341
	I0920 16:45:06.035197   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37823
	I0920 16:45:06.035340   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:45:06.036022   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:45:06.036041   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:45:06.036112   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:45:06.036407   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:45:06.036475   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:45:06.036695   16686 main.go:141] libmachine: (addons-489802) Calling .GetState
	I0920 16:45:06.036796   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:45:06.036813   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:45:06.037369   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHPort
	I0920 16:45:06.037379   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:45:06.037431   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46467
	I0920 16:45:06.037435   16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
	I0920 16:45:06.037447   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:45:06.037568   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
	I0920 16:45:06.037633   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36745
	I0920 16:45:06.037767   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:45:06.037792   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHUsername
	I0920 16:45:06.037889   16686 main.go:141] libmachine: (addons-489802) Calling .GetState
	I0920 16:45:06.037985   16686 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802/id_rsa Username:docker}
	I0920 16:45:06.038291   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:45:06.038315   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:45:06.038531   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:45:06.038620   16686 main.go:141] libmachine: (addons-489802) Calling .DriverName
	I0920 16:45:06.038675   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:45:06.038861   16686 main.go:141] libmachine: (addons-489802) Calling .GetState
	I0920 16:45:06.039491   16686 main.go:141] libmachine: (addons-489802) Calling .DriverName
	I0920 16:45:06.039654   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:45:06.039669   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:45:06.040233   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:45:06.040465   16686 main.go:141] libmachine: (addons-489802) Calling .GetState
	I0920 16:45:06.040605   16686 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0920 16:45:06.040832   16686 main.go:141] libmachine: (addons-489802) Calling .DriverName
	I0920 16:45:06.041303   16686 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 16:45:06.041318   16686 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 16:45:06.041334   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
	I0920 16:45:06.041615   16686 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0920 16:45:06.042140   16686 main.go:141] libmachine: (addons-489802) Calling .DriverName
	I0920 16:45:06.043269   16686 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0920 16:45:06.043289   16686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0920 16:45:06.043306   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
	I0920 16:45:06.044349   16686 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0920 16:45:06.044617   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:45:06.044625   16686 out.go:177]   - Using image docker.io/busybox:stable
	I0920 16:45:06.045036   16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
	I0920 16:45:06.045057   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:45:06.045261   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHPort
	I0920 16:45:06.045420   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
	I0920 16:45:06.045924   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHUsername
	I0920 16:45:06.046045   16686 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0920 16:45:06.046062   16686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0920 16:45:06.046076   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
	I0920 16:45:06.046233   16686 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802/id_rsa Username:docker}
	I0920 16:45:06.046927   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:45:06.047431   16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
	I0920 16:45:06.047463   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:45:06.047597   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHPort
	I0920 16:45:06.047765   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
	I0920 16:45:06.047891   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHUsername
	I0920 16:45:06.048008   16686 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802/id_rsa Username:docker}
	I0920 16:45:06.048154   16686 out.go:177]   - Using image docker.io/registry:2.8.3
	I0920 16:45:06.049631   16686 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0920 16:45:06.049649   16686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0920 16:45:06.049663   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:45:06.049676   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
	I0920 16:45:06.050129   16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
	I0920 16:45:06.050156   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:45:06.050430   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHPort
	I0920 16:45:06.050586   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
	I0920 16:45:06.050750   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHUsername
	I0920 16:45:06.050868   16686 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802/id_rsa Username:docker}
	I0920 16:45:06.052498   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:45:06.052871   16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
	I0920 16:45:06.052900   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:45:06.053033   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHPort
	I0920 16:45:06.053170   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
	I0920 16:45:06.053326   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHUsername
	I0920 16:45:06.053496   16686 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802/id_rsa Username:docker}
	I0920 16:45:06.353051   16686 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0920 16:45:06.353074   16686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0920 16:45:06.375750   16686 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 16:45:06.375808   16686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0920 16:45:06.391326   16686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0920 16:45:06.493613   16686 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0920 16:45:06.493638   16686 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0920 16:45:06.505773   16686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0920 16:45:06.532977   16686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0920 16:45:06.533515   16686 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0920 16:45:06.533534   16686 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0920 16:45:06.540683   16686 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0920 16:45:06.540708   16686 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0920 16:45:06.543084   16686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 16:45:06.544984   16686 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0920 16:45:06.545000   16686 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0920 16:45:06.551458   16686 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0920 16:45:06.551479   16686 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0920 16:45:06.556172   16686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0920 16:45:06.557507   16686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0920 16:45:06.566682   16686 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0920 16:45:06.566703   16686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0920 16:45:06.627313   16686 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0920 16:45:06.627340   16686 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0920 16:45:06.640927   16686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 16:45:06.670548   16686 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0920 16:45:06.670574   16686 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0920 16:45:06.763522   16686 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0920 16:45:06.763549   16686 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0920 16:45:06.783481   16686 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0920 16:45:06.783521   16686 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0920 16:45:06.819177   16686 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0920 16:45:06.819204   16686 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0920 16:45:06.839272   16686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0920 16:45:06.896200   16686 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0920 16:45:06.896230   16686 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0920 16:45:06.910579   16686 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0920 16:45:06.910614   16686 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0920 16:45:06.930437   16686 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0920 16:45:06.930463   16686 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0920 16:45:06.940831   16686 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 16:45:06.940867   16686 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0920 16:45:07.047035   16686 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0920 16:45:07.047062   16686 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0920 16:45:07.215806   16686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 16:45:07.218901   16686 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0920 16:45:07.218932   16686 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0920 16:45:07.223882   16686 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0920 16:45:07.223905   16686 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0920 16:45:07.227082   16686 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0920 16:45:07.227103   16686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0920 16:45:07.256340   16686 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0920 16:45:07.256375   16686 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0920 16:45:07.464044   16686 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0920 16:45:07.464078   16686 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0920 16:45:07.493814   16686 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0920 16:45:07.493851   16686 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0920 16:45:07.582458   16686 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 16:45:07.582479   16686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0920 16:45:07.603848   16686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0920 16:45:07.828047   16686 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0920 16:45:07.828070   16686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0920 16:45:07.844298   16686 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0920 16:45:07.844335   16686 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0920 16:45:08.029971   16686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 16:45:08.174001   16686 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0920 16:45:08.174023   16686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0920 16:45:08.192445   16686 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0920 16:45:08.192475   16686 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0920 16:45:08.510930   16686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0920 16:45:08.524911   16686 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0920 16:45:08.524942   16686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0920 16:45:08.726846   16686 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0920 16:45:08.726879   16686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0920 16:45:09.009410   16686 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0920 16:45:09.009447   16686 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0920 16:45:09.024627   16686 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.648835712s)
	I0920 16:45:09.024679   16686 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.648847664s)
	I0920 16:45:09.024704   16686 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0920 16:45:09.024765   16686 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.633411979s)
	I0920 16:45:09.024811   16686 main.go:141] libmachine: Making call to close driver server
	I0920 16:45:09.024825   16686 main.go:141] libmachine: (addons-489802) Calling .Close
	I0920 16:45:09.025119   16686 main.go:141] libmachine: Successfully made call to close driver server
	I0920 16:45:09.025144   16686 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 16:45:09.025153   16686 main.go:141] libmachine: Making call to close driver server
	I0920 16:45:09.025161   16686 main.go:141] libmachine: (addons-489802) Calling .Close
	I0920 16:45:09.025404   16686 main.go:141] libmachine: Successfully made call to close driver server
	I0920 16:45:09.025445   16686 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 16:45:09.025920   16686 node_ready.go:35] waiting up to 6m0s for node "addons-489802" to be "Ready" ...
	I0920 16:45:09.035518   16686 node_ready.go:49] node "addons-489802" has status "Ready":"True"
	I0920 16:45:09.035609   16686 node_ready.go:38] duration metric: took 9.661904ms for node "addons-489802" to be "Ready" ...
	I0920 16:45:09.035637   16686 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 16:45:09.051148   16686 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-nqbzq" in "kube-system" namespace to be "Ready" ...
	I0920 16:45:09.322288   16686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0920 16:45:09.534546   16686 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-489802" context rescaled to 1 replicas
	I0920 16:45:11.158586   16686 pod_ready.go:103] pod "coredns-7c65d6cfc9-nqbzq" in "kube-system" namespace has status "Ready":"False"
	I0920 16:45:12.692545   16686 pod_ready.go:93] pod "coredns-7c65d6cfc9-nqbzq" in "kube-system" namespace has status "Ready":"True"
	I0920 16:45:12.692574   16686 pod_ready.go:82] duration metric: took 3.641395186s for pod "coredns-7c65d6cfc9-nqbzq" in "kube-system" namespace to be "Ready" ...
	I0920 16:45:12.692587   16686 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-tm9vr" in "kube-system" namespace to be "Ready" ...
	I0920 16:45:12.993726   16686 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0920 16:45:12.993782   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
	I0920 16:45:12.997095   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:45:12.997468   16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
	I0920 16:45:12.997509   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:45:12.997646   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHPort
	I0920 16:45:12.997868   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
	I0920 16:45:12.998029   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHUsername
	I0920 16:45:12.998260   16686 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802/id_rsa Username:docker}
	I0920 16:45:13.539202   16686 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0920 16:45:13.682847   16686 addons.go:234] Setting addon gcp-auth=true in "addons-489802"
	I0920 16:45:13.682906   16686 host.go:66] Checking if "addons-489802" exists ...
	I0920 16:45:13.683199   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:45:13.683239   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:45:13.702441   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33011
	I0920 16:45:13.702905   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:45:13.703420   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:45:13.703442   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:45:13.703814   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:45:13.704438   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:45:13.704485   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:45:13.722380   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33875
	I0920 16:45:13.723033   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:45:13.723749   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:45:13.723776   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:45:13.724178   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:45:13.724416   16686 main.go:141] libmachine: (addons-489802) Calling .GetState
	I0920 16:45:13.726164   16686 main.go:141] libmachine: (addons-489802) Calling .DriverName
	I0920 16:45:13.726406   16686 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0920 16:45:13.726432   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
	I0920 16:45:13.729255   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:45:13.729760   16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
	I0920 16:45:13.729791   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:45:13.729945   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHPort
	I0920 16:45:13.730109   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
	I0920 16:45:13.730294   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHUsername
	I0920 16:45:13.730440   16686 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802/id_rsa Username:docker}
	I0920 16:45:13.776226   16686 pod_ready.go:98] pod "coredns-7c65d6cfc9-tm9vr" in "kube-system" namespace has status phase "Failed" (skipping!): {Phase:Failed Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 16:45:13 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 16:45:06 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 16:45:06 +0000 UTC Reason:PodFailed Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 16:45:06 +0000 UTC Reason:PodFailed Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 16:45:06 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.89 HostIPs:[{IP:192.168.39.89}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-09-20 16:45:06 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2024-09-20 16:45:10 +0000 UTC,FinishedAt:2024-09-20 16:45:11 +0000 UTC,ContainerID:cri-o://918bc5fe873828ba31e8b226c084835ff9648d49d56fd967df98c04026fcd9c4,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://918bc5fe873828ba31e8b226c084835ff9648d49d56fd967df98c04026fcd9c4 Started:0xc00179695c AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc001a1e830} {Name:kube-api-access-l4wh8 MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc001a1e840}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0920 16:45:13.776273   16686 pod_ready.go:82] duration metric: took 1.083676607s for pod "coredns-7c65d6cfc9-tm9vr" in "kube-system" namespace to be "Ready" ...
	E0920 16:45:13.776285   16686 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-7c65d6cfc9-tm9vr" in "kube-system" namespace has status phase "Failed" (skipping!): {Phase:Failed Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 16:45:13 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 16:45:06 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 16:45:06 +0000 UTC Reason:PodFailed Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 16:45:06 +0000 UTC Reason:PodFailed Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 16:45:06 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.89 HostIPs:[{IP:192.168.39.89}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-09-20 16:45:06 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2024-09-20 16:45:10 +0000 UTC,FinishedAt:2024-09-20 16:45:11 +0000 UTC,ContainerID:cri-o://918bc5fe873828ba31e8b226c084835ff9648d49d56fd967df98c04026fcd9c4,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://918bc5fe873828ba31e8b226c084835ff9648d49d56fd967df98c04026fcd9c4 Started:0xc00179695c AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc001a1e830} {Name:kube-api-access-l4wh8 MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc001a1e840}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0920 16:45:13.776297   16686 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-489802" in "kube-system" namespace to be "Ready" ...
	I0920 16:45:13.895071   16686 pod_ready.go:93] pod "etcd-addons-489802" in "kube-system" namespace has status "Ready":"True"
	I0920 16:45:13.895098   16686 pod_ready.go:82] duration metric: took 118.793361ms for pod "etcd-addons-489802" in "kube-system" namespace to be "Ready" ...
	I0920 16:45:13.895111   16686 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-489802" in "kube-system" namespace to be "Ready" ...
	I0920 16:45:14.014764   16686 pod_ready.go:93] pod "kube-apiserver-addons-489802" in "kube-system" namespace has status "Ready":"True"
	I0920 16:45:14.014787   16686 pod_ready.go:82] duration metric: took 119.668585ms for pod "kube-apiserver-addons-489802" in "kube-system" namespace to be "Ready" ...
	I0920 16:45:14.014841   16686 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-489802" in "kube-system" namespace to be "Ready" ...
	I0920 16:45:14.127671   16686 pod_ready.go:93] pod "kube-controller-manager-addons-489802" in "kube-system" namespace has status "Ready":"True"
	I0920 16:45:14.127694   16686 pod_ready.go:82] duration metric: took 112.838527ms for pod "kube-controller-manager-addons-489802" in "kube-system" namespace to be "Ready" ...
	I0920 16:45:14.127705   16686 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xr4bt" in "kube-system" namespace to be "Ready" ...
	I0920 16:45:14.150341   16686 pod_ready.go:93] pod "kube-proxy-xr4bt" in "kube-system" namespace has status "Ready":"True"
	I0920 16:45:14.150367   16686 pod_ready.go:82] duration metric: took 22.655966ms for pod "kube-proxy-xr4bt" in "kube-system" namespace to be "Ready" ...
	I0920 16:45:14.150376   16686 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-489802" in "kube-system" namespace to be "Ready" ...
	I0920 16:45:14.206202   16686 pod_ready.go:93] pod "kube-scheduler-addons-489802" in "kube-system" namespace has status "Ready":"True"
	I0920 16:45:14.206226   16686 pod_ready.go:82] duration metric: took 55.843139ms for pod "kube-scheduler-addons-489802" in "kube-system" namespace to be "Ready" ...
	I0920 16:45:14.206238   16686 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-54hhx" in "kube-system" namespace to be "Ready" ...
	I0920 16:45:15.135704   16686 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.629885928s)
	I0920 16:45:15.135777   16686 main.go:141] libmachine: Making call to close driver server
	I0920 16:45:15.135782   16686 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.602774066s)
	I0920 16:45:15.135815   16686 main.go:141] libmachine: Making call to close driver server
	I0920 16:45:15.135832   16686 main.go:141] libmachine: (addons-489802) Calling .Close
	I0920 16:45:15.135837   16686 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.592733845s)
	I0920 16:45:15.135860   16686 main.go:141] libmachine: Making call to close driver server
	I0920 16:45:15.135874   16686 main.go:141] libmachine: (addons-489802) Calling .Close
	I0920 16:45:15.135791   16686 main.go:141] libmachine: (addons-489802) Calling .Close
	I0920 16:45:15.135976   16686 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.579777747s)
	I0920 16:45:15.136071   16686 main.go:141] libmachine: Making call to close driver server
	I0920 16:45:15.136137   16686 main.go:141] libmachine: Successfully made call to close driver server
	I0920 16:45:15.136165   16686 main.go:141] libmachine: (addons-489802) DBG | Closing plugin on server side
	I0920 16:45:15.136165   16686 main.go:141] libmachine: Successfully made call to close driver server
	I0920 16:45:15.136176   16686 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 16:45:15.136187   16686 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 16:45:15.136191   16686 main.go:141] libmachine: Making call to close driver server
	I0920 16:45:15.136195   16686 main.go:141] libmachine: Making call to close driver server
	I0920 16:45:15.136195   16686 main.go:141] libmachine: (addons-489802) DBG | Closing plugin on server side
	I0920 16:45:15.136202   16686 main.go:141] libmachine: (addons-489802) Calling .Close
	I0920 16:45:15.136241   16686 main.go:141] libmachine: (addons-489802) Calling .Close
	I0920 16:45:15.136199   16686 main.go:141] libmachine: (addons-489802) Calling .Close
	I0920 16:45:15.136269   16686 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.578731979s)
	I0920 16:45:15.136290   16686 main.go:141] libmachine: Successfully made call to close driver server
	I0920 16:45:15.136196   16686 main.go:141] libmachine: (addons-489802) DBG | Closing plugin on server side
	I0920 16:45:15.136312   16686 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 16:45:15.136322   16686 main.go:141] libmachine: Making call to close driver server
	I0920 16:45:15.136299   16686 main.go:141] libmachine: Making call to close driver server
	I0920 16:45:15.136332   16686 main.go:141] libmachine: (addons-489802) Calling .Close
	I0920 16:45:15.136345   16686 main.go:141] libmachine: (addons-489802) Calling .Close
	I0920 16:45:15.136388   16686 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.297083849s)
	I0920 16:45:15.136410   16686 main.go:141] libmachine: Making call to close driver server
	I0920 16:45:15.136420   16686 main.go:141] libmachine: (addons-489802) Calling .Close
	I0920 16:45:15.136467   16686 main.go:141] libmachine: (addons-489802) DBG | Closing plugin on server side
	I0920 16:45:15.136492   16686 main.go:141] libmachine: Successfully made call to close driver server
	I0920 16:45:15.136499   16686 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 16:45:15.136506   16686 main.go:141] libmachine: Making call to close driver server
	I0920 16:45:15.136513   16686 main.go:141] libmachine: (addons-489802) Calling .Close
	I0920 16:45:15.136540   16686 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.920700025s)
	I0920 16:45:15.136560   16686 main.go:141] libmachine: Making call to close driver server
	I0920 16:45:15.136569   16686 main.go:141] libmachine: (addons-489802) Calling .Close
	I0920 16:45:15.136342   16686 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.495383696s)
	I0920 16:45:15.136654   16686 main.go:141] libmachine: Making call to close driver server
	I0920 16:45:15.136666   16686 main.go:141] libmachine: (addons-489802) Calling .Close
	I0920 16:45:15.136665   16686 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.532769315s)
	I0920 16:45:15.136718   16686 main.go:141] libmachine: Making call to close driver server
	I0920 16:45:15.136726   16686 main.go:141] libmachine: (addons-489802) Calling .Close
	I0920 16:45:15.136765   16686 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.106759371s)
	I0920 16:45:15.136781   16686 main.go:141] libmachine: (addons-489802) DBG | Closing plugin on server side
	W0920 16:45:15.136792   16686 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0920 16:45:15.136807   16686 main.go:141] libmachine: Successfully made call to close driver server
	I0920 16:45:15.136815   16686 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 16:45:15.136815   16686 retry.go:31] will retry after 374.579066ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0920 16:45:15.136939   16686 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.625889401s)
	I0920 16:45:15.136963   16686 main.go:141] libmachine: Making call to close driver server
	I0920 16:45:15.136976   16686 main.go:141] libmachine: (addons-489802) Calling .Close
	I0920 16:45:15.137039   16686 main.go:141] libmachine: (addons-489802) DBG | Closing plugin on server side
	I0920 16:45:15.137050   16686 main.go:141] libmachine: (addons-489802) DBG | Closing plugin on server side
	I0920 16:45:15.137071   16686 main.go:141] libmachine: Successfully made call to close driver server
	I0920 16:45:15.137102   16686 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 16:45:15.137131   16686 main.go:141] libmachine: Successfully made call to close driver server
	I0920 16:45:15.137137   16686 main.go:141] libmachine: Making call to close driver server
	I0920 16:45:15.137152   16686 main.go:141] libmachine: (addons-489802) Calling .Close
	I0920 16:45:15.137158   16686 main.go:141] libmachine: (addons-489802) DBG | Closing plugin on server side
	I0920 16:45:15.137108   16686 main.go:141] libmachine: Successfully made call to close driver server
	I0920 16:45:15.137170   16686 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 16:45:15.137178   16686 main.go:141] libmachine: Making call to close driver server
	I0920 16:45:15.137186   16686 main.go:141] libmachine: (addons-489802) Calling .Close
	I0920 16:45:15.137875   16686 main.go:141] libmachine: (addons-489802) DBG | Closing plugin on server side
	I0920 16:45:15.137908   16686 main.go:141] libmachine: Successfully made call to close driver server
	I0920 16:45:15.137915   16686 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 16:45:15.137922   16686 main.go:141] libmachine: Making call to close driver server
	I0920 16:45:15.137929   16686 main.go:141] libmachine: (addons-489802) Calling .Close
	I0920 16:45:15.137975   16686 main.go:141] libmachine: (addons-489802) DBG | Closing plugin on server side
	I0920 16:45:15.137994   16686 main.go:141] libmachine: Successfully made call to close driver server
	I0920 16:45:15.137999   16686 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 16:45:15.138006   16686 main.go:141] libmachine: Making call to close driver server
	I0920 16:45:15.138013   16686 main.go:141] libmachine: (addons-489802) Calling .Close
	I0920 16:45:15.138047   16686 main.go:141] libmachine: (addons-489802) DBG | Closing plugin on server side
	I0920 16:45:15.138061   16686 main.go:141] libmachine: (addons-489802) DBG | Closing plugin on server side
	I0920 16:45:15.138078   16686 main.go:141] libmachine: Successfully made call to close driver server
	I0920 16:45:15.138084   16686 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 16:45:15.138093   16686 addons.go:475] Verifying addon registry=true in "addons-489802"
	I0920 16:45:15.138895   16686 main.go:141] libmachine: Successfully made call to close driver server
	I0920 16:45:15.138916   16686 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 16:45:15.138927   16686 main.go:141] libmachine: Making call to close driver server
	I0920 16:45:15.138936   16686 main.go:141] libmachine: (addons-489802) Calling .Close
	I0920 16:45:15.139035   16686 main.go:141] libmachine: Successfully made call to close driver server
	I0920 16:45:15.139050   16686 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 16:45:15.137073   16686 main.go:141] libmachine: Successfully made call to close driver server
	I0920 16:45:15.139271   16686 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 16:45:15.137144   16686 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 16:45:15.139348   16686 addons.go:475] Verifying addon ingress=true in "addons-489802"
	I0920 16:45:15.139477   16686 main.go:141] libmachine: (addons-489802) DBG | Closing plugin on server side
	I0920 16:45:15.137089   16686 main.go:141] libmachine: (addons-489802) DBG | Closing plugin on server side
	I0920 16:45:15.139526   16686 main.go:141] libmachine: (addons-489802) DBG | Closing plugin on server side
	I0920 16:45:15.139550   16686 main.go:141] libmachine: Successfully made call to close driver server
	I0920 16:45:15.139564   16686 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 16:45:15.139719   16686 main.go:141] libmachine: Successfully made call to close driver server
	I0920 16:45:15.139735   16686 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 16:45:15.139509   16686 main.go:141] libmachine: Successfully made call to close driver server
	I0920 16:45:15.139873   16686 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 16:45:15.139884   16686 main.go:141] libmachine: Making call to close driver server
	I0920 16:45:15.139894   16686 main.go:141] libmachine: (addons-489802) Calling .Close
	I0920 16:45:15.140278   16686 main.go:141] libmachine: (addons-489802) DBG | Closing plugin on server side
	I0920 16:45:15.140316   16686 main.go:141] libmachine: Successfully made call to close driver server
	I0920 16:45:15.140328   16686 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 16:45:15.141359   16686 main.go:141] libmachine: Successfully made call to close driver server
	I0920 16:45:15.141378   16686 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 16:45:15.141387   16686 addons.go:475] Verifying addon metrics-server=true in "addons-489802"
	I0920 16:45:15.141742   16686 out.go:177] * Verifying ingress addon...
	I0920 16:45:15.141861   16686 out.go:177] * Verifying registry addon...
	I0920 16:45:15.142395   16686 main.go:141] libmachine: Successfully made call to close driver server
	I0920 16:45:15.142416   16686 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 16:45:15.142438   16686 main.go:141] libmachine: (addons-489802) DBG | Closing plugin on server side
	I0920 16:45:15.144272   16686 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-489802 service yakd-dashboard -n yakd-dashboard
	
	I0920 16:45:15.144625   16686 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0920 16:45:15.144652   16686 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0920 16:45:15.182676   16686 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0920 16:45:15.182707   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:15.183762   16686 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0920 16:45:15.183790   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:15.473454   16686 main.go:141] libmachine: Making call to close driver server
	I0920 16:45:15.473474   16686 main.go:141] libmachine: (addons-489802) Calling .Close
	I0920 16:45:15.473959   16686 main.go:141] libmachine: Successfully made call to close driver server
	I0920 16:45:15.473976   16686 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 16:45:15.479442   16686 main.go:141] libmachine: Making call to close driver server
	I0920 16:45:15.479466   16686 main.go:141] libmachine: (addons-489802) Calling .Close
	I0920 16:45:15.479704   16686 main.go:141] libmachine: Successfully made call to close driver server
	I0920 16:45:15.479721   16686 main.go:141] libmachine: Making call to close connection to plugin binary
	W0920 16:45:15.479879   16686 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0920 16:45:15.512325   16686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 16:45:15.658712   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:15.659607   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:16.155622   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:16.160001   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:16.241480   16686 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-54hhx" in "kube-system" namespace has status "Ready":"False"
	I0920 16:45:16.517442   16686 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.195100107s)
	I0920 16:45:16.517489   16686 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.791061379s)
	I0920 16:45:16.517497   16686 main.go:141] libmachine: Making call to close driver server
	I0920 16:45:16.517513   16686 main.go:141] libmachine: (addons-489802) Calling .Close
	I0920 16:45:16.517795   16686 main.go:141] libmachine: (addons-489802) DBG | Closing plugin on server side
	I0920 16:45:16.517795   16686 main.go:141] libmachine: Successfully made call to close driver server
	I0920 16:45:16.517817   16686 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 16:45:16.517843   16686 main.go:141] libmachine: Making call to close driver server
	I0920 16:45:16.517851   16686 main.go:141] libmachine: (addons-489802) Calling .Close
	I0920 16:45:16.518062   16686 main.go:141] libmachine: Successfully made call to close driver server
	I0920 16:45:16.518079   16686 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 16:45:16.518089   16686 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-489802"
	I0920 16:45:16.519716   16686 out.go:177] * Verifying csi-hostpath-driver addon...
	I0920 16:45:16.519723   16686 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 16:45:16.521078   16686 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0920 16:45:16.521713   16686 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0920 16:45:16.522238   16686 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0920 16:45:16.522258   16686 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0920 16:45:16.561413   16686 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0920 16:45:16.561441   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:16.652853   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:16.654932   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:16.670493   16686 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0920 16:45:16.670518   16686 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0920 16:45:16.788959   16686 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0920 16:45:16.788986   16686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0920 16:45:16.869081   16686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0920 16:45:17.027599   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:17.156633   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:17.157163   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:17.527462   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:17.650521   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:17.650643   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:17.734897   16686 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.222504857s)
	I0920 16:45:17.734961   16686 main.go:141] libmachine: Making call to close driver server
	I0920 16:45:17.734978   16686 main.go:141] libmachine: (addons-489802) Calling .Close
	I0920 16:45:17.735373   16686 main.go:141] libmachine: Successfully made call to close driver server
	I0920 16:45:17.735395   16686 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 16:45:17.735414   16686 main.go:141] libmachine: Making call to close driver server
	I0920 16:45:17.735423   16686 main.go:141] libmachine: (addons-489802) Calling .Close
	I0920 16:45:17.735676   16686 main.go:141] libmachine: (addons-489802) DBG | Closing plugin on server side
	I0920 16:45:17.735715   16686 main.go:141] libmachine: Successfully made call to close driver server
	I0920 16:45:17.735732   16686 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 16:45:18.039389   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:18.191248   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:18.192032   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:18.226929   16686 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.357782077s)
	I0920 16:45:18.227006   16686 main.go:141] libmachine: Making call to close driver server
	I0920 16:45:18.227027   16686 main.go:141] libmachine: (addons-489802) Calling .Close
	I0920 16:45:18.227352   16686 main.go:141] libmachine: Successfully made call to close driver server
	I0920 16:45:18.227371   16686 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 16:45:18.227380   16686 main.go:141] libmachine: Making call to close driver server
	I0920 16:45:18.227388   16686 main.go:141] libmachine: (addons-489802) Calling .Close
	I0920 16:45:18.227596   16686 main.go:141] libmachine: Successfully made call to close driver server
	I0920 16:45:18.227608   16686 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 16:45:18.229117   16686 addons.go:475] Verifying addon gcp-auth=true in "addons-489802"
	I0920 16:45:18.230928   16686 out.go:177] * Verifying gcp-auth addon...
	I0920 16:45:18.233132   16686 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0920 16:45:18.302814   16686 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-54hhx" in "kube-system" namespace has status "Ready":"False"
	I0920 16:45:18.303833   16686 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0920 16:45:18.303849   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:18.526206   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:18.650162   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:18.650906   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:18.737130   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:19.027359   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:19.151083   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:19.152167   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:19.237097   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:19.530489   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:19.651552   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:19.651799   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:19.737916   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:20.027552   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:20.150028   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:20.150617   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:20.237634   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:20.527445   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:20.651604   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:20.652378   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:20.712902   16686 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-54hhx" in "kube-system" namespace has status "Ready":"False"
	I0920 16:45:20.736944   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:21.029114   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:21.149408   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:21.150699   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:21.236999   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:21.527442   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:21.967907   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:21.968174   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:22.070927   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:22.072675   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:22.149613   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:22.150237   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:22.237824   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:22.531579   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:22.650997   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:22.651735   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:22.714124   16686 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-54hhx" in "kube-system" namespace has status "Ready":"False"
	I0920 16:45:22.738003   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:23.036430   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:23.154161   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:23.155271   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:23.274914   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:23.528959   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:23.662172   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:23.665690   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:23.747609   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:24.028698   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:24.163651   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:24.164456   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:24.248826   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:24.526972   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:24.652716   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:24.653397   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:24.715653   16686 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-54hhx" in "kube-system" namespace has status "Ready":"False"
	I0920 16:45:24.740107   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:25.028341   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:25.150991   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:25.153743   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:25.634814   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:25.635566   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:25.651776   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:25.652748   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:25.736431   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:26.032193   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:26.150517   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:26.150967   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:26.238433   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:26.527250   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:26.650016   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:26.650451   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:26.737952   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:27.027290   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:27.150220   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:27.150405   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:27.213074   16686 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-54hhx" in "kube-system" namespace has status "Ready":"True"
	I0920 16:45:27.213099   16686 pod_ready.go:82] duration metric: took 13.006853784s for pod "nvidia-device-plugin-daemonset-54hhx" in "kube-system" namespace to be "Ready" ...
	I0920 16:45:27.213106   16686 pod_ready.go:39] duration metric: took 18.177423912s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 16:45:27.213122   16686 api_server.go:52] waiting for apiserver process to appear ...
	I0920 16:45:27.213169   16686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 16:45:27.236400   16686 api_server.go:72] duration metric: took 21.373270823s to wait for apiserver process to appear ...
	I0920 16:45:27.236426   16686 api_server.go:88] waiting for apiserver healthz status ...
	I0920 16:45:27.236445   16686 api_server.go:253] Checking apiserver healthz at https://192.168.39.89:8443/healthz ...
	I0920 16:45:27.239701   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:27.242110   16686 api_server.go:279] https://192.168.39.89:8443/healthz returned 200:
	ok
	I0920 16:45:27.243105   16686 api_server.go:141] control plane version: v1.31.1
	I0920 16:45:27.243132   16686 api_server.go:131] duration metric: took 6.699495ms to wait for apiserver health ...
	I0920 16:45:27.243142   16686 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 16:45:27.251414   16686 system_pods.go:59] 17 kube-system pods found
	I0920 16:45:27.251443   16686 system_pods.go:61] "coredns-7c65d6cfc9-nqbzq" [734f1782-975a-486b-adf3-32f60c376a9a] Running
	I0920 16:45:27.251451   16686 system_pods.go:61] "csi-hostpath-attacher-0" [8fc733e6-4135-418b-a554-490bd25dabe7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0920 16:45:27.251458   16686 system_pods.go:61] "csi-hostpath-resizer-0" [85755d16-e8fa-4878-9184-45658ba8d8ac] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0920 16:45:27.251465   16686 system_pods.go:61] "csi-hostpathplugin-hglqr" [0aeb8bcc-1f9f-40f6-8aa1-4822a64115f2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0920 16:45:27.251469   16686 system_pods.go:61] "etcd-addons-489802" [7f35387b-c7f1-4436-8369-77849b9c2383] Running
	I0920 16:45:27.251475   16686 system_pods.go:61] "kube-apiserver-addons-489802" [9c4029f4-8e01-4d5d-a866-518e553ac713] Running
	I0920 16:45:27.251481   16686 system_pods.go:61] "kube-controller-manager-addons-489802" [30219691-4d43-476d-8720-80aa4f2b6b54] Running
	I0920 16:45:27.251488   16686 system_pods.go:61] "kube-ingress-dns-minikube" [1f722d5e-9dee-4b0e-8661-9c4181ea4f9b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0920 16:45:27.251495   16686 system_pods.go:61] "kube-proxy-xr4bt" [7a20cb9e-3e82-4bda-9529-7e024f9681a4] Running
	I0920 16:45:27.251504   16686 system_pods.go:61] "kube-scheduler-addons-489802" [8b17a764-82bc-4003-8b0c-9d46c614e15d] Running
	I0920 16:45:27.251512   16686 system_pods.go:61] "metrics-server-84c5f94fbc-txlrn" [b6d2625e-ba6e-44e1-b245-0edc2adaa243] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 16:45:27.251518   16686 system_pods.go:61] "nvidia-device-plugin-daemonset-54hhx" [b022f644-f7de-4d74-aed4-63ad47ef0b71] Running
	I0920 16:45:27.251526   16686 system_pods.go:61] "registry-66c9cd494c-7swkh" [1e3cfba8-c77f-46f3-b6b1-46c7a36ae3a4] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0920 16:45:27.251534   16686 system_pods.go:61] "registry-proxy-ggl6q" [a467b141-5827-4440-b11f-9203739b4a10] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0920 16:45:27.251542   16686 system_pods.go:61] "snapshot-controller-56fcc65765-2hz6g" [0d531a52-cced-4b3d-adfd-5d62357591e8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 16:45:27.251549   16686 system_pods.go:61] "snapshot-controller-56fcc65765-4l9hv" [eccfc252-ad9c-4b70-bb1c-d81a71214556] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 16:45:27.251553   16686 system_pods.go:61] "storage-provisioner" [1e04b7e0-a0fe-4e65-9ba5-63be2690da1d] Running
	I0920 16:45:27.251561   16686 system_pods.go:74] duration metric: took 8.412514ms to wait for pod list to return data ...
	I0920 16:45:27.251568   16686 default_sa.go:34] waiting for default service account to be created ...
	I0920 16:45:27.254735   16686 default_sa.go:45] found service account: "default"
	I0920 16:45:27.254760   16686 default_sa.go:55] duration metric: took 3.185589ms for default service account to be created ...
	I0920 16:45:27.254770   16686 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 16:45:27.261725   16686 system_pods.go:86] 17 kube-system pods found
	I0920 16:45:27.261752   16686 system_pods.go:89] "coredns-7c65d6cfc9-nqbzq" [734f1782-975a-486b-adf3-32f60c376a9a] Running
	I0920 16:45:27.261759   16686 system_pods.go:89] "csi-hostpath-attacher-0" [8fc733e6-4135-418b-a554-490bd25dabe7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0920 16:45:27.261766   16686 system_pods.go:89] "csi-hostpath-resizer-0" [85755d16-e8fa-4878-9184-45658ba8d8ac] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0920 16:45:27.261772   16686 system_pods.go:89] "csi-hostpathplugin-hglqr" [0aeb8bcc-1f9f-40f6-8aa1-4822a64115f2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0920 16:45:27.261776   16686 system_pods.go:89] "etcd-addons-489802" [7f35387b-c7f1-4436-8369-77849b9c2383] Running
	I0920 16:45:27.261780   16686 system_pods.go:89] "kube-apiserver-addons-489802" [9c4029f4-8e01-4d5d-a866-518e553ac713] Running
	I0920 16:45:27.261784   16686 system_pods.go:89] "kube-controller-manager-addons-489802" [30219691-4d43-476d-8720-80aa4f2b6b54] Running
	I0920 16:45:27.261791   16686 system_pods.go:89] "kube-ingress-dns-minikube" [1f722d5e-9dee-4b0e-8661-9c4181ea4f9b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0920 16:45:27.261795   16686 system_pods.go:89] "kube-proxy-xr4bt" [7a20cb9e-3e82-4bda-9529-7e024f9681a4] Running
	I0920 16:45:27.261799   16686 system_pods.go:89] "kube-scheduler-addons-489802" [8b17a764-82bc-4003-8b0c-9d46c614e15d] Running
	I0920 16:45:27.261805   16686 system_pods.go:89] "metrics-server-84c5f94fbc-txlrn" [b6d2625e-ba6e-44e1-b245-0edc2adaa243] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 16:45:27.261809   16686 system_pods.go:89] "nvidia-device-plugin-daemonset-54hhx" [b022f644-f7de-4d74-aed4-63ad47ef0b71] Running
	I0920 16:45:27.261815   16686 system_pods.go:89] "registry-66c9cd494c-7swkh" [1e3cfba8-c77f-46f3-b6b1-46c7a36ae3a4] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0920 16:45:27.261820   16686 system_pods.go:89] "registry-proxy-ggl6q" [a467b141-5827-4440-b11f-9203739b4a10] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0920 16:45:27.261828   16686 system_pods.go:89] "snapshot-controller-56fcc65765-2hz6g" [0d531a52-cced-4b3d-adfd-5d62357591e8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 16:45:27.261858   16686 system_pods.go:89] "snapshot-controller-56fcc65765-4l9hv" [eccfc252-ad9c-4b70-bb1c-d81a71214556] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 16:45:27.261868   16686 system_pods.go:89] "storage-provisioner" [1e04b7e0-a0fe-4e65-9ba5-63be2690da1d] Running
	I0920 16:45:27.261877   16686 system_pods.go:126] duration metric: took 7.099706ms to wait for k8s-apps to be running ...
	I0920 16:45:27.261887   16686 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 16:45:27.261932   16686 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 16:45:27.276406   16686 system_svc.go:56] duration metric: took 14.508978ms WaitForService to wait for kubelet
	I0920 16:45:27.276438   16686 kubeadm.go:582] duration metric: took 21.413312681s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 16:45:27.276460   16686 node_conditions.go:102] verifying NodePressure condition ...
	I0920 16:45:27.280248   16686 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 16:45:27.280278   16686 node_conditions.go:123] node cpu capacity is 2
	I0920 16:45:27.280291   16686 node_conditions.go:105] duration metric: took 3.825237ms to run NodePressure ...
	I0920 16:45:27.280304   16686 start.go:241] waiting for startup goroutines ...
	I0920 16:45:27.526718   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:27.649095   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:27.649421   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:27.737354   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:28.027233   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:28.150225   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:28.150730   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:28.236702   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:28.528434   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:28.650325   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:28.650405   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:28.740070   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:29.026096   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:29.149445   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:29.150058   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:29.237452   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:29.527135   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:29.649902   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:29.649932   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:29.737559   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:30.026698   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:30.150115   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:30.150769   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:30.238484   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:30.527374   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:30.648850   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:30.649272   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:30.738810   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:31.028473   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:31.150589   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:31.156282   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:31.237373   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:31.527393   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:31.649166   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:31.650780   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:31.736824   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:32.027837   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:32.152463   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:32.153143   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:32.237068   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:32.528272   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:32.649079   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:32.650818   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:32.738352   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:33.026553   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:33.149902   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:33.150275   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:33.237524   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:33.537491   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:33.649781   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:33.650261   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:33.737265   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:34.028817   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:34.150791   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:34.152125   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:34.237490   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:34.526864   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:34.649685   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:34.650181   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:34.736977   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:35.029888   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:35.150945   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:35.155795   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:35.240335   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:35.527786   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:35.654336   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:35.655062   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:35.737485   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:36.027635   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:36.151566   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:36.152493   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:36.238231   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:36.527246   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:36.655057   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:36.655723   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:36.738138   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:37.030365   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:37.150592   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:37.150821   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:37.236830   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:37.526749   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:37.650962   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:37.652318   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:37.738164   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:38.031402   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:38.155846   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:38.156510   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:38.252531   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:38.528674   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:38.655016   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:38.658754   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:38.739024   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:39.026715   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:39.151013   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:39.154202   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:39.238586   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:39.527713   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:39.649075   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:39.649203   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:39.737480   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:40.027567   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:40.150474   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:40.151696   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:40.250888   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:40.526616   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:40.652188   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:40.652389   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:40.736985   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:41.026770   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:41.150827   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:41.151842   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:41.237101   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:41.526185   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:41.650288   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:41.650519   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:41.737186   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:42.027683   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:42.149240   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:42.150504   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:42.491904   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:42.592635   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:42.650756   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:42.651320   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:42.737069   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:43.029825   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:43.149551   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:43.149935   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:43.237114   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:43.528788   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:43.650325   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:43.650461   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:43.739219   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:44.027085   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:44.150296   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:44.150650   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:44.238279   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:44.527675   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:44.649728   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:44.650268   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:44.737823   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:45.028181   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:45.150501   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:45.151145   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:45.237285   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:45.527586   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:45.649593   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:45.650452   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:45.738407   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:46.030564   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:46.150486   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:46.150734   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:46.237087   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:46.551259   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:46.651342   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:46.653245   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:46.737384   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:47.029654   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:47.150343   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:47.150347   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:47.238187   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:47.535430   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:47.650178   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:47.651863   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:47.739041   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:48.029210   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:48.150091   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:48.154252   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:48.240363   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:48.529142   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:48.653143   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:48.655833   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:48.738746   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:49.027666   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:49.150751   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:49.151834   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:49.236647   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:49.530861   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:49.651140   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:49.651675   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:49.740617   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:50.026729   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:50.159867   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:50.160090   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:50.239757   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:50.527622   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:50.654766   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:50.655361   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:50.737483   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:51.027995   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:51.149643   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:51.149801   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:51.237905   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:51.526411   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:51.649489   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:51.650326   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:51.738210   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:52.036253   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:52.149599   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:52.151253   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:52.237057   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:52.527569   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:52.648975   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:52.650153   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:52.737191   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:53.027592   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:53.150060   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:53.150479   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:53.236403   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:53.526504   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:53.649297   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:53.651436   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:53.737405   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:54.028487   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:54.150980   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:54.151321   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:54.237711   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:54.527354   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:54.650301   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:54.650677   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:54.737955   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:55.031032   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:55.149243   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:55.150181   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:55.238167   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:55.528915   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:55.649892   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:55.650313   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:55.738797   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:56.028783   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:56.151114   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:56.151294   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:56.237410   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:56.527498   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:56.650436   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:56.650776   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:56.736898   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:57.026952   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:57.149669   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:57.150915   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:57.237031   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:57.526939   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:57.648982   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:57.650547   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:57.737696   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:58.026729   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:58.150041   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:58.150968   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:58.237146   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:58.527288   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:58.651780   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:58.652013   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:58.738908   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:59.026605   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:59.149437   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:59.149648   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:59.237722   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:59.527090   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:59.650035   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:59.651041   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:59.737351   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:00.027912   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:00.558370   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:46:00.561620   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:00.563942   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:00.565779   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:00.661977   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:46:00.662874   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:00.739219   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:01.029865   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:01.154749   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:46:01.155165   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:01.237401   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:01.530045   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:01.649221   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:46:01.649554   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:01.740003   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:02.026763   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:02.150502   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:02.150590   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:46:02.236863   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:02.529068   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:02.650888   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:46:02.651000   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:02.750263   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:03.026716   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:03.149149   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:46:03.149545   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:03.237369   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:03.534553   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:03.650442   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:46:03.650862   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:03.737614   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:04.026913   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:04.149387   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:46:04.149593   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:04.243360   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:04.527336   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:04.650842   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:46:04.651139   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:04.739255   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:05.027878   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:05.150204   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:46:05.150545   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:05.244231   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:05.529349   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:05.652867   16686 kapi.go:107] duration metric: took 50.508229978s to wait for kubernetes.io/minikube-addons=registry ...
	I0920 16:46:05.652925   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:05.739640   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:06.033981   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:06.149185   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:06.237046   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:06.528004   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:06.649435   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:06.895278   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:07.026949   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:07.149429   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:07.237034   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:07.526452   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:07.649544   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:07.737620   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:08.028390   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:08.150933   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:08.237962   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:08.529026   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:08.650034   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:08.737105   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:09.027687   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:09.149020   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:09.239286   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:09.529929   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:09.666377   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:09.746102   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:10.030699   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:10.155669   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:10.239033   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:10.530724   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:10.651556   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:10.737407   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:11.027890   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:11.149069   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:11.236960   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:11.527373   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:11.649887   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:11.737323   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:12.027469   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:12.149540   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:12.237298   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:12.527280   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:12.650565   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:12.750782   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:13.027210   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:13.149266   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:13.236795   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:13.527089   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:13.650076   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:13.739568   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:14.028427   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:14.150142   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:14.238716   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:14.529618   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:14.649719   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:14.737439   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:15.029527   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:15.149916   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:15.236871   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:15.527484   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:15.660993   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:15.737550   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:16.027986   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:16.149414   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:16.237560   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:16.528143   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:16.649180   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:16.749844   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:17.027012   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:17.149822   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:17.237094   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:17.527302   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:17.650815   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:17.737697   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:18.027958   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:18.151414   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:18.237081   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:18.755707   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:18.756298   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:18.756334   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:19.027579   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:19.149746   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:19.237870   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:19.532636   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:19.649362   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:19.743684   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:20.029394   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:20.152735   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:20.238771   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:20.528220   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:20.650381   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:20.739497   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:21.028952   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:21.149828   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:21.238039   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:21.532796   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:21.648825   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:21.736739   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:22.025994   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:22.149742   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:22.237902   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:22.526869   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:22.651053   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:22.754073   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:23.029507   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:23.150844   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:23.236975   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:23.530954   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:23.649940   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:23.737663   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:24.027816   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:24.149027   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:24.236905   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:24.528126   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:24.649610   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:24.737256   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:25.029079   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:25.168465   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:25.279560   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:25.529941   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:25.649862   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:25.738675   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:26.031710   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:26.149047   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:26.237178   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:26.527079   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:26.649467   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:26.737219   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:27.027260   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:27.150392   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:27.237951   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:27.526593   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:27.649815   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:27.738065   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:28.026169   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:28.150226   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:28.237640   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:28.526680   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:28.649544   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:28.737407   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:29.027688   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:29.150021   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:29.236763   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:29.563052   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:29.652576   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:29.739028   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:30.029796   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:30.150520   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:30.240233   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:30.526626   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:30.651044   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:30.739007   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:31.027062   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:31.541329   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:31.546535   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:31.546967   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:31.652149   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:31.736761   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:32.026342   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:32.149699   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:32.238624   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:32.526975   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:32.650436   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:32.740112   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:33.028897   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:33.150155   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:33.250978   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:33.528932   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:33.649886   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:33.743165   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:34.028352   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:34.150042   16686 kapi.go:107] duration metric: took 1m19.005386454s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0920 16:46:34.237404   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:34.526686   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:34.740025   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:35.033014   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:35.241504   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:35.527579   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:35.738045   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:36.034900   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:36.242839   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:36.528649   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:36.738556   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:37.027713   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:37.237641   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:37.527114   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:37.736812   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:38.027753   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:38.240755   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:38.526552   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:38.739220   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:39.027014   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:39.240347   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:39.534783   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:39.739002   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:40.032069   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:40.239670   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:40.527751   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:40.742044   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:41.026894   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:41.237898   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:41.526185   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:41.737861   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:42.026935   16686 kapi.go:107] duration metric: took 1m25.505217334s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0920 16:46:42.236807   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:42.738034   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:43.237393   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:43.739267   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:44.237884   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:44.738051   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:45.236733   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:45.737720   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:46.236788   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:46.739281   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:47.237290   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:47.737521   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:48.237326   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:48.737915   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:49.238707   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:49.738314   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:50.237798   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:50.737959   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:51.237197   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:51.737289   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:52.236949   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:52.737530   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:53.237179   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:53.737635   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:54.237901   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:54.737648   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:55.238274   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:55.738085   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:56.237671   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:56.737704   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:57.237419   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:57.737353   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:58.237702   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:58.737197   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:59.237153   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:59.737583   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:00.238191   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:00.737084   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:01.237072   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:01.737245   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:02.237128   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:02.737215   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:03.237530   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:03.737290   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:04.237086   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:04.737817   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:05.237856   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:05.738321   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:06.237429   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:06.737202   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:07.236740   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:07.738137   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:08.237395   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:08.738090   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:09.237251   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:09.847229   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:10.237467   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:10.737639   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:11.237672   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:11.737856   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:12.237892   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:12.737947   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:13.236851   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:13.737127   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:14.236749   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:14.737645   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:15.240515   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:15.737944   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:16.236760   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:16.737628   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:17.237203   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:17.736930   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:18.237666   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:18.737293   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:19.253355   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:19.738180   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:20.239996   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:20.737102   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:21.239307   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:21.737634   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:22.237896   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:22.738438   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:23.237672   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:23.737184   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:24.239150   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:24.737464   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:25.237351   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:25.737539   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:26.237905   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:26.737559   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:27.237704   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:27.738056   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:28.237766   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:28.737159   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:29.237477   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:29.737337   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:30.238578   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:30.737543   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:31.237419   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:31.737583   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:32.237893   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:32.737619   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:33.237679   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:33.737168   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:34.237268   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:34.737264   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:35.237495   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:35.738039   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:36.238149   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:36.737649   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:37.237524   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:37.737017   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:38.238138   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:38.737568   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:39.237391   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:39.736477   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:40.238059   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:40.738010   16686 kapi.go:107] duration metric: took 2m22.504874191s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0920 16:47:40.740079   16686 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-489802 cluster.
	I0920 16:47:40.741424   16686 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0920 16:47:40.742789   16686 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0920 16:47:40.744449   16686 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, inspektor-gadget, ingress-dns, storage-provisioner, metrics-server, yakd, default-storageclass, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0920 16:47:40.745981   16686 addons.go:510] duration metric: took 2m34.882823136s for enable addons: enabled=[nvidia-device-plugin cloud-spanner inspektor-gadget ingress-dns storage-provisioner metrics-server yakd default-storageclass volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0920 16:47:40.746064   16686 start.go:246] waiting for cluster config update ...
	I0920 16:47:40.746085   16686 start.go:255] writing updated cluster config ...
	I0920 16:47:40.746667   16686 ssh_runner.go:195] Run: rm -f paused
	I0920 16:47:40.832742   16686 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 16:47:40.834777   16686 out.go:177] * Done! kubectl is now configured to use "addons-489802" cluster and "default" namespace by default
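
Note on the gcp-auth messages above: the log names a `gcp-auth-skip-secret` pod label for opting a pod out of credential mounting, and suggests rerunning the addon with --refresh so existing pods pick up credentials. As a rough sketch only (the log confirms the label key and the --refresh hint; the pod name "skip-demo" and the label value "true" are assumptions, not taken from this run), that could look like:

  kubectl --context addons-489802 run skip-demo --image=gcr.io/k8s-minikube/busybox --labels="gcp-auth-skip-secret=true" --restart=Never -- sleep 300
  minikube -p addons-489802 addons enable gcp-auth --refresh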
	
	
	==> CRI-O <==
	Sep 20 16:56:59 addons-489802 crio[664]: time="2024-09-20 16:56:59.218037211Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726851419218008762,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:550631,},InodesUsed:&UInt64Value{Value:187,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b823f095-4010-439d-84ce-2019842c72ba name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 16:56:59 addons-489802 crio[664]: time="2024-09-20 16:56:59.218649776Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f085ab55-fe88-4464-af73-edc0da3b0789 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 16:56:59 addons-489802 crio[664]: time="2024-09-20 16:56:59.218730027Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f085ab55-fe88-4464-af73-edc0da3b0789 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 16:56:59 addons-489802 crio[664]: time="2024-09-20 16:56:59.220653955Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b3b98df31c510e2b7d8467304c108770bfabad7ebb2494f12313d8f912b2482c,PodSandboxId:ddccd18e28f19bcd554a80347c0802f4ddf6d7bad08d4b2ac6f27eb3e102b20d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726851396918964948,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4c34572d-1118-4bb3-8265-b67b3104bc59,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7855191edf9dac6a02cb338d5c06ae79feb999af96c1205e987920277065d079,PodSandboxId:94183eceae2996500c16f3e0182cde507f6cd239b42d7e04c4be2ec3899c6a6f,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,State:CONTAINER_EXITED,CreatedAt:1726851392690228669,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-delete-pvc-b8225ab7-cae8-4ab5-8ca1-5e74b7712f98,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 3fe17849-a80b-47ae-adf6-77c01273238d,},Annotations:map[string]string{io.kubernetes.container.
hash: 973dbf55,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88e1bcda36f90d21b7130116c5ea6b1229ea9a0700e45bbff41b308db6dbc33c,PodSandboxId:e98930c60622fe0f7bdc4bf6d6d08bc7526fc617d34122970d0d7182bd9138e3,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:9186e638ccc30c5d1a2efd5a2cd632f49bb5013f164f6f85c48ed6fce90fe38f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6fd955f66c231c1a946653170d096a28ac2b2052a02080c0b84ec082a07f7d12,State:CONTAINER_EXITED,CreatedAt:1726851385595495335,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: test-local-path,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5a52579e-aa38-4262-8d40-663925dc3ec1,},Annotations:map[string]string{io.kubernetes.container.hash: dd3595
ac,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5038dfb91d9e8dca86d31517d2006b94b6c908631d5f20394c86871e56d1d08,PodSandboxId:21621b7034dfd4946db396ab2ecb322c86b682626f5c0285738a87ba88bfbf23,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,State:CONTAINER_EXITED,CreatedAt:1726851379562446175,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-create-pvc-b8225ab7-cae8-4ab5-8ca1-5e74b7712f98,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 5e840cda-1451-4279-88ae-f9ba29c00bec,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 973dbf55,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c1fd10705c644580fdef2fc3075d7c9349c8e4b44d3899910dd41e40c87e2ce,PodSandboxId:66f4ad3477a6cc6a00655cc193d28db01097870bd3585db50c33f9e7cc96f8cf,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726850860245098258,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-wzvr2,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 9688d654-b2f1-4e67-b21f-737c57cb6d4f,},Annota
tions:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3789e1deba3b5c9ce3ea828aadfae5635edc167c040fa930464707e91be53341,PodSandboxId:a00df88c1f82fc3492928da8501518bce8b0f2ccb5d6274f59769e673a724852,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1726850801062921486,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-
hglqr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aeb8bcc-1f9f-40f6-8aa1-4822a64115f2,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2170a5568649b01c67765f29e8fdff73695d347ea33e872cffc2746fb679bb35,PodSandboxId:a00df88c1f82fc3492928da8501518bce8b0f2ccb5d6274f59769e673a724852,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1726850799105793588,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kube
rnetes.pod.name: csi-hostpathplugin-hglqr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aeb8bcc-1f9f-40f6-8aa1-4822a64115f2,},Annotations:map[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d20afd16f541ca333f7bd36f8da7782ea9a69ae24093ca113e872faea4de2b70,PodSandboxId:a00df88c1f82fc3492928da8501518bce8b0f2ccb5d6274f59769e673a724852,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1726850796453830227,Labels:map[string]string{io.kubernetes.contain
er.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-hglqr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aeb8bcc-1f9f-40f6-8aa1-4822a64115f2,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a74c2b2a3ee355c5e919249c37a775e1de74552c52cbd119a7bcde2f5ef8ff6,PodSandboxId:a00df88c1f82fc3492928da8501518bce8b0f2ccb5d6274f59769e673a724852,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1726850794957838931,Labels:map[string]s
tring{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-hglqr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aeb8bcc-1f9f-40f6-8aa1-4822a64115f2,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29c24274c3f958be71bf70e73d568bc6a4bb1bb6c65a5881e3fc34fefcc9fcf2,PodSandboxId:a115fb5bcdd70dd9eaddc86f186e4f6e55036b28dc0a72cf68edf7dae1530096,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38
d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1726850793113655585,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-79mpt,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f93f931b-28ea-417f-9956-b9dce76ebe38,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:4141de5542403c5675e25ca0d8c438d502a45b49559475f
46261d4f34feaa611,PodSandboxId:a00df88c1f82fc3492928da8501518bce8b0f2ccb5d6274f59769e673a724852,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1726850784957523000,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-hglqr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aeb8bcc-1f9f-40f6-8aa1-4822a64115f2,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Co
ntainer{Id:0580426e8d27f7c106f5d251425428617a0b35941fbdbeb0cef1280abf386f6c,PodSandboxId:7a34dc197c7221a0f7968767406de9d9088af78de12ad00aa7c9e7602d006f7e,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1726850782617829339,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8fc733e6-4135-418b-a554-490bd25dabe7,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminat
ionGracePeriod: 30,},},&Container{Id:8af8ae710b7bb0d44edc885792516e5b3d3019d460fe9988723ecff6c6361291,PodSandboxId:91d588b9442b7a5883fc1e6ec70b3073b793fe9fa8e955c8f9ca0da9ba64c130,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1726850780862773926,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85755d16-e8fa-4878-9184-45658ba8d8ac,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c378811c5ad20a87fbb4de0cc32b2c86dc1e847531f104f45e8945f74db49ebf,PodSandboxId:a00df88c1f82fc3492928da8501518bce8b0f2ccb5d6274f59769e673a724852,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1726850778878102572,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-hglqr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aeb8bcc-1f9f-40f6-8aa1-4822a64115f2,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container
.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5e85742448a79b5c1857677ad2a134c6852e9495035cbf9c25e3a7521dd6bb2,PodSandboxId:61840f6d138dd81b5b65efdfcdb4db6fc37465b1ee033b0bee2142714f07f4ae,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726850777385583505,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-b6mtt,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: fd711ef0-0010-45af-a950-49c84a55c942,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a9b75a453cd62d16bb90bc10f20dd616029cfd1dbb3300fdd9d3b272d5c1367,PodSandboxId:85dbe34d0b929d7356ea58dd7954b02069f214007674d69cc2313ed32dff2fc1,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726850776662845564,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-h7lw7,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 52fba05c-46c5-4916-b5e4-386dadb0ae61,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a52dd73b64e18e4b092d4faca5a851b5873993d6017437104456eb51f3e1465a,PodSandboxId:1c6c09297d2606b08525d2ccba830943316f2d00ad82e5c753cf47556db96a02,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1726850774360560772,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-4l9hv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eccfc252-ad9c-4b70-bb1c-d81a71214556,},Annotations:map[string]string{io.kubernetes
.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6aa4419694f45aa6bf3df3b53e48fb2c8f23061822c55c47d7081f7e546a623,PodSandboxId:57c48f1670b2a1a06e6ff7871e9504d83d44e44f6b4cc4c9e901990d02cd4cd3,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1726850774210107694,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-2hz6g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d531a52
-cced-4b3d-adfd-5d62357591e8,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0690e87ddb4f9357eefd739e00c9cec1ec022eda1379279b535ba4678c33b26,PodSandboxId:36aedadeb2582fe5df950ad8776d82e03104ab51a608a29ba00e9113b19e678e,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1726850772239877059,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-rhmqb,io.kuberne
tes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: e5f1d3f8-1767-4ad2-b5b8-eb5bf18bc163,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a0d036505e72ed6b62c09226aa4d219c30e6e162e73ebffc595f568b216931d,PodSandboxId:1ae7bada2f668e2292fc48d3426bfa34e41215d4336864f62a4f90b4ee95709f,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726850737987301783,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.po
d.name: metrics-server-84c5f94fbc-txlrn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6d2625e-ba6e-44e1-b245-0edc2adaa243,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09902a512e79f577d8d4f2a8784f5484d2134b53bde422c6322893204f62b00a,PodSandboxId:3843f8105dc892830a295ca6b48e8f9f1e0a84e15e2eab1bd63dabf67e0567e1,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc2
9,State:CONTAINER_RUNNING,CreatedAt:1726850735299111720,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f722d5e-9dee-4b0e-8661-9c4181ea4f9b,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a981c68e927108571692d174ebc0cf47e600882543d6dd401c23cbcd805d49d,PodSandboxId:11b2a45f795d401fe4c78cf74478d3d702eff22fef4bdd814d8198ee5072d604,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,Ru
ntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726850713649115674,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e04b7e0-a0fe-4e65-9ba5-63be2690da1d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70c74f4f1e0bde75fc553a034aa664a515c218d2b72725850921f92314b6ec06,PodSandboxId:cfda686abf7f1fef69de6a34f633f44ac3c87637d6ec92d05dc4a45a4d5652b1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef
:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726850710125894103,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nqbzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 734f1782-975a-486b-adf3-32f60c376a9a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c60a90d5ed294c1a5015ea6f6b5c5259e8d437a6b5dd0f9dd758bb62d91c7b7,PodSandboxId:b53a284c395cfb6bdea6622b664327da6733b43d0375a7570cfa3dac443563e5,Metadata:&ContainerMetad
ata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726850707153952752,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xr4bt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a20cb9e-3e82-4bda-9529-7e024f9681a4,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44c347dc4cb2326d9ce7eef959abf86dcaee69ecf824e59fbe44600500e8a0f4,PodSandboxId:0ccdde3d3e8e30fec62b1f315de346cf5989b81e93276bfcf9792ae014efb9d5,Metadata:&ContainerMetadata{Name:kube-controller-manager,
Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726850695786767603,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-489802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8db84d1368c7024c014f2f2f0d973aae,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79fb233450407c6fdf879eb55124a5840bf49aaa572c10f7add06512d38df264,PodSandboxId:b3c515c903cd8c54cc3829530f8702fa82f07287a4bcae50433ffb0e6100c34b,Metadata:&ContainerMetadata{Name:kube-apiserver
,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726850695761844946,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-489802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de814a9694fb61ae23ac46f9b9deb6e7,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ebda0675cfbe9e7b3e6c1ca40351339db78cf3954608a12cc779850ee452a23,PodSandboxId:ce3e5a61bc6e6a8044b701e61a79b033d814fb58851347acc4b4eaab63045047,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSp
ec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726850695741523021,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-489802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 016cfe34770e4cbd59f73407149e44ff,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53631bbb5fc199153283953dffaf83c3d2a2b4cdbda98ab81770b42af5dfe30e,PodSandboxId:c9a4930506bbb11794aa02ab9a68cfe8370b91453dd7ab2cce5eac61a155cacf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e
4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726850695699198150,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-489802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50faea81a2001503e00d2a0be1ceba9e,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f085ab55-fe88-4464-af73-edc0da3b0789 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 16:56:59 addons-489802 crio[664]: time="2024-09-20 16:56:59.285192167Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=78573faf-182c-4676-8292-a95111294789 name=/runtime.v1.RuntimeService/Version
	Sep 20 16:56:59 addons-489802 crio[664]: time="2024-09-20 16:56:59.285299806Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=78573faf-182c-4676-8292-a95111294789 name=/runtime.v1.RuntimeService/Version
	Sep 20 16:56:59 addons-489802 crio[664]: time="2024-09-20 16:56:59.286200454Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=44e79bb9-a2d0-40c6-87fe-b9f09dfd7b46 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 16:56:59 addons-489802 crio[664]: time="2024-09-20 16:56:59.287397725Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726851419287328046,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:550631,},InodesUsed:&UInt64Value{Value:187,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=44e79bb9-a2d0-40c6-87fe-b9f09dfd7b46 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 16:56:59 addons-489802 crio[664]: time="2024-09-20 16:56:59.288018220Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{io.kubernetes.pod.namespace: kube-system,},},}" file="otel-collector/interceptors.go:62" id=4f40ed99-aab0-4e0a-ae85-a62ea3efb94c name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 16:56:59 addons-489802 crio[664]: time="2024-09-20 16:56:59.288092446Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4f40ed99-aab0-4e0a-ae85-a62ea3efb94c name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 16:56:59 addons-489802 crio[664]: time="2024-09-20 16:56:59.288571936Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3789e1deba3b5c9ce3ea828aadfae5635edc167c040fa930464707e91be53341,PodSandboxId:a00df88c1f82fc3492928da8501518bce8b0f2ccb5d6274f59769e673a724852,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1726850801062921486,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-hglqr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aeb8bcc-1f9f-40f6-8aa1-4822a64115f2,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2170a5568649b01c67765f29e8fdff73695d347ea33e872cffc2746fb679bb35,PodSandboxId:a00df88c1f82fc3492928da8501518bce8b0f2ccb5d6274f59769e673a724852,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1726850799105793588,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-hglqr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aeb8bcc-1f9f-40f6-8aa1-4822a64115f2,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d20afd16f541ca333f7bd36f8da7782ea9a69ae24093ca113e872faea4de2b70,PodSandboxId:a00df88c1f82fc3492928da8501518bce8b0f2ccb5d6274f59769e673a724852,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1726850796453830227,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-hglqr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aeb8bcc-1f9f-40f6-8aa1-4822a64115f2,},Annotations:map[
string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a74c2b2a3ee355c5e919249c37a775e1de74552c52cbd119a7bcde2f5ef8ff6,PodSandboxId:a00df88c1f82fc3492928da8501518bce8b0f2ccb5d6274f59769e673a724852,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1726850794957838931,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-hglqr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aeb8bcc-1f9f-40f6-8aa1-4822a64115
f2,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4141de5542403c5675e25ca0d8c438d502a45b49559475f46261d4f34feaa611,PodSandboxId:a00df88c1f82fc3492928da8501518bce8b0f2ccb5d6274f59769e673a724852,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1726850784957523000,Labels:map[string]string{io.kubernetes.container.name: node-driver
-registrar,io.kubernetes.pod.name: csi-hostpathplugin-hglqr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aeb8bcc-1f9f-40f6-8aa1-4822a64115f2,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0580426e8d27f7c106f5d251425428617a0b35941fbdbeb0cef1280abf386f6c,PodSandboxId:7a34dc197c7221a0f7968767406de9d9088af78de12ad00aa7c9e7602d006f7e,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1726850782617829339,Labels:map[string]string{io.ku
bernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8fc733e6-4135-418b-a554-490bd25dabe7,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8af8ae710b7bb0d44edc885792516e5b3d3019d460fe9988723ecff6c6361291,PodSandboxId:91d588b9442b7a5883fc1e6ec70b3073b793fe9fa8e955c8f9ca0da9ba64c130,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1726850780862773926,Labels
:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85755d16-e8fa-4878-9184-45658ba8d8ac,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c378811c5ad20a87fbb4de0cc32b2c86dc1e847531f104f45e8945f74db49ebf,PodSandboxId:a00df88c1f82fc3492928da8501518bce8b0f2ccb5d6274f59769e673a724852,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585d
c6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1726850778878102572,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-hglqr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aeb8bcc-1f9f-40f6-8aa1-4822a64115f2,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a52dd73b64e18e4b092d4faca5a851b5873993d6017437104456eb51f3e1465a,PodSandboxId:1c6c09297d2606b08525d2ccba830943316f2d00ad82e5c753cf47556db96a02,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,Ru
ntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1726850774360560772,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-4l9hv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eccfc252-ad9c-4b70-bb1c-d81a71214556,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6aa4419694f45aa6bf3df3b53e48fb2c8f23061822c55c47d7081f7e546a623,PodSandboxId:57c48f1670b2a1a06e6ff7871e9504d83d44e44f6b4cc4c9e901990d02cd4cd3,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b
20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1726850774210107694,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-2hz6g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d531a52-cced-4b3d-adfd-5d62357591e8,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a0d036505e72ed6b62c09226aa4d219c30e6e162e73ebffc595f568b216931d,PodSandboxId:1ae7bada2f668e2292fc48d3426bfa34e41215d4336864f62a4f90b4ee95709f,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics
-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726850737987301783,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-txlrn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6d2625e-ba6e-44e1-b245-0edc2adaa243,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09902a512e79f577d8d4f2a8784f5484d2134b53bde422c6322893204f62b00a,PodSandboxId:3843f8105dc892830a295ca6b48e8f9f1e0a84e15e2ea
b1bd63dabf67e0567e1,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1726850735299111720,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f722d5e-9dee-4b0e-8661-9c4181ea4f9b,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},
&Container{Id:5a981c68e927108571692d174ebc0cf47e600882543d6dd401c23cbcd805d49d,PodSandboxId:11b2a45f795d401fe4c78cf74478d3d702eff22fef4bdd814d8198ee5072d604,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726850713649115674,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e04b7e0-a0fe-4e65-9ba5-63be2690da1d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{I
d:70c74f4f1e0bde75fc553a034aa664a515c218d2b72725850921f92314b6ec06,PodSandboxId:cfda686abf7f1fef69de6a34f633f44ac3c87637d6ec92d05dc4a45a4d5652b1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726850710125894103,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nqbzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 734f1782-975a-486b-adf3-32f60c376a9a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restar
tCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c60a90d5ed294c1a5015ea6f6b5c5259e8d437a6b5dd0f9dd758bb62d91c7b7,PodSandboxId:b53a284c395cfb6bdea6622b664327da6733b43d0375a7570cfa3dac443563e5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726850707153952752,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xr4bt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a20cb9e-3e82-4bda-9529-7e024f9681a4,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container
.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44c347dc4cb2326d9ce7eef959abf86dcaee69ecf824e59fbe44600500e8a0f4,PodSandboxId:0ccdde3d3e8e30fec62b1f315de346cf5989b81e93276bfcf9792ae014efb9d5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726850695786767603,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-489802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8db84d1368c7024c014f2f2f0d973aae,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes
.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79fb233450407c6fdf879eb55124a5840bf49aaa572c10f7add06512d38df264,PodSandboxId:b3c515c903cd8c54cc3829530f8702fa82f07287a4bcae50433ffb0e6100c34b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726850695761844946,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-489802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de814a9694fb61ae23ac46f9b9deb6e7,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ebda0675cfbe9e7b3e6c1ca40351339db78cf3954608a12cc779850ee452a23,PodSandboxId:ce3e5a61bc6e6a8044b701e61a79b033d814fb58851347acc4b4eaab63045047,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726850695741523021,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-489802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 016cfe34770e4cbd59f73407149e44ff,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuber
netes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53631bbb5fc199153283953dffaf83c3d2a2b4cdbda98ab81770b42af5dfe30e,PodSandboxId:c9a4930506bbb11794aa02ab9a68cfe8370b91453dd7ab2cce5eac61a155cacf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726850695699198150,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-489802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50faea81a2001503e00d2a0be1ceba9e,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.t
erminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4f40ed99-aab0-4e0a-ae85-a62ea3efb94c name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 16:56:59 addons-489802 crio[664]: time="2024-09-20 16:56:59.298278601Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f29bbe75-a164-4220-bf44-fda39f240527 name=/runtime.v1.RuntimeService/Version
	Sep 20 16:56:59 addons-489802 crio[664]: time="2024-09-20 16:56:59.298428020Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f29bbe75-a164-4220-bf44-fda39f240527 name=/runtime.v1.RuntimeService/Version
	Sep 20 16:56:59 addons-489802 crio[664]: time="2024-09-20 16:56:59.302561148Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f5a21df8-a2ee-40fb-96de-d6df997f7100 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 16:56:59 addons-489802 crio[664]: time="2024-09-20 16:56:59.303689460Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726851419303658626,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:550631,},InodesUsed:&UInt64Value{Value:187,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f5a21df8-a2ee-40fb-96de-d6df997f7100 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 16:56:59 addons-489802 crio[664]: time="2024-09-20 16:56:59.304484468Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e65c9dbf-df2c-4711-853e-7fef800c1a57 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 16:56:59 addons-489802 crio[664]: time="2024-09-20 16:56:59.304563397Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e65c9dbf-df2c-4711-853e-7fef800c1a57 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 16:56:59 addons-489802 crio[664]: time="2024-09-20 16:56:59.305278429Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b3b98df31c510e2b7d8467304c108770bfabad7ebb2494f12313d8f912b2482c,PodSandboxId:ddccd18e28f19bcd554a80347c0802f4ddf6d7bad08d4b2ac6f27eb3e102b20d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726851396918964948,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4c34572d-1118-4bb3-8265-b67b3104bc59,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7855191edf9dac6a02cb338d5c06ae79feb999af96c1205e987920277065d079,PodSandboxId:94183eceae2996500c16f3e0182cde507f6cd239b42d7e04c4be2ec3899c6a6f,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,State:CONTAINER_EXITED,CreatedAt:1726851392690228669,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-delete-pvc-b8225ab7-cae8-4ab5-8ca1-5e74b7712f98,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 3fe17849-a80b-47ae-adf6-77c01273238d,},Annotations:map[string]string{io.kubernetes.container.
hash: 973dbf55,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88e1bcda36f90d21b7130116c5ea6b1229ea9a0700e45bbff41b308db6dbc33c,PodSandboxId:e98930c60622fe0f7bdc4bf6d6d08bc7526fc617d34122970d0d7182bd9138e3,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:9186e638ccc30c5d1a2efd5a2cd632f49bb5013f164f6f85c48ed6fce90fe38f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6fd955f66c231c1a946653170d096a28ac2b2052a02080c0b84ec082a07f7d12,State:CONTAINER_EXITED,CreatedAt:1726851385595495335,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: test-local-path,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5a52579e-aa38-4262-8d40-663925dc3ec1,},Annotations:map[string]string{io.kubernetes.container.hash: dd3595
ac,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5038dfb91d9e8dca86d31517d2006b94b6c908631d5f20394c86871e56d1d08,PodSandboxId:21621b7034dfd4946db396ab2ecb322c86b682626f5c0285738a87ba88bfbf23,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,State:CONTAINER_EXITED,CreatedAt:1726851379562446175,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-create-pvc-b8225ab7-cae8-4ab5-8ca1-5e74b7712f98,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 5e840cda-1451-4279-88ae-f9ba29c00bec,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 973dbf55,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c1fd10705c644580fdef2fc3075d7c9349c8e4b44d3899910dd41e40c87e2ce,PodSandboxId:66f4ad3477a6cc6a00655cc193d28db01097870bd3585db50c33f9e7cc96f8cf,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726850860245098258,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-wzvr2,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 9688d654-b2f1-4e67-b21f-737c57cb6d4f,},Annota
tions:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3789e1deba3b5c9ce3ea828aadfae5635edc167c040fa930464707e91be53341,PodSandboxId:a00df88c1f82fc3492928da8501518bce8b0f2ccb5d6274f59769e673a724852,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1726850801062921486,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-
hglqr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aeb8bcc-1f9f-40f6-8aa1-4822a64115f2,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2170a5568649b01c67765f29e8fdff73695d347ea33e872cffc2746fb679bb35,PodSandboxId:a00df88c1f82fc3492928da8501518bce8b0f2ccb5d6274f59769e673a724852,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1726850799105793588,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kube
rnetes.pod.name: csi-hostpathplugin-hglqr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aeb8bcc-1f9f-40f6-8aa1-4822a64115f2,},Annotations:map[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d20afd16f541ca333f7bd36f8da7782ea9a69ae24093ca113e872faea4de2b70,PodSandboxId:a00df88c1f82fc3492928da8501518bce8b0f2ccb5d6274f59769e673a724852,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1726850796453830227,Labels:map[string]string{io.kubernetes.contain
er.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-hglqr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aeb8bcc-1f9f-40f6-8aa1-4822a64115f2,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a74c2b2a3ee355c5e919249c37a775e1de74552c52cbd119a7bcde2f5ef8ff6,PodSandboxId:a00df88c1f82fc3492928da8501518bce8b0f2ccb5d6274f59769e673a724852,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1726850794957838931,Labels:map[string]s
tring{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-hglqr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aeb8bcc-1f9f-40f6-8aa1-4822a64115f2,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29c24274c3f958be71bf70e73d568bc6a4bb1bb6c65a5881e3fc34fefcc9fcf2,PodSandboxId:a115fb5bcdd70dd9eaddc86f186e4f6e55036b28dc0a72cf68edf7dae1530096,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38
d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1726850793113655585,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-79mpt,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f93f931b-28ea-417f-9956-b9dce76ebe38,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:4141de5542403c5675e25ca0d8c438d502a45b49559475f
46261d4f34feaa611,PodSandboxId:a00df88c1f82fc3492928da8501518bce8b0f2ccb5d6274f59769e673a724852,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1726850784957523000,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-hglqr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aeb8bcc-1f9f-40f6-8aa1-4822a64115f2,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Co
ntainer{Id:0580426e8d27f7c106f5d251425428617a0b35941fbdbeb0cef1280abf386f6c,PodSandboxId:7a34dc197c7221a0f7968767406de9d9088af78de12ad00aa7c9e7602d006f7e,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1726850782617829339,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8fc733e6-4135-418b-a554-490bd25dabe7,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminat
ionGracePeriod: 30,},},&Container{Id:8af8ae710b7bb0d44edc885792516e5b3d3019d460fe9988723ecff6c6361291,PodSandboxId:91d588b9442b7a5883fc1e6ec70b3073b793fe9fa8e955c8f9ca0da9ba64c130,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1726850780862773926,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85755d16-e8fa-4878-9184-45658ba8d8ac,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c378811c5ad20a87fbb4de0cc32b2c86dc1e847531f104f45e8945f74db49ebf,PodSandboxId:a00df88c1f82fc3492928da8501518bce8b0f2ccb5d6274f59769e673a724852,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1726850778878102572,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-hglqr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aeb8bcc-1f9f-40f6-8aa1-4822a64115f2,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container
.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5e85742448a79b5c1857677ad2a134c6852e9495035cbf9c25e3a7521dd6bb2,PodSandboxId:61840f6d138dd81b5b65efdfcdb4db6fc37465b1ee033b0bee2142714f07f4ae,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726850777385583505,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-b6mtt,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: fd711ef0-0010-45af-a950-49c84a55c942,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a9b75a453cd62d16bb90bc10f20dd616029cfd1dbb3300fdd9d3b272d5c1367,PodSandboxId:85dbe34d0b929d7356ea58dd7954b02069f214007674d69cc2313ed32dff2fc1,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726850776662845564,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-h7lw7,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 52fba05c-46c5-4916-b5e4-386dadb0ae61,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a52dd73b64e18e4b092d4faca5a851b5873993d6017437104456eb51f3e1465a,PodSandboxId:1c6c09297d2606b08525d2ccba830943316f2d00ad82e5c753cf47556db96a02,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1726850774360560772,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-4l9hv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eccfc252-ad9c-4b70-bb1c-d81a71214556,},Annotations:map[string]string{io.kubernetes
.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6aa4419694f45aa6bf3df3b53e48fb2c8f23061822c55c47d7081f7e546a623,PodSandboxId:57c48f1670b2a1a06e6ff7871e9504d83d44e44f6b4cc4c9e901990d02cd4cd3,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1726850774210107694,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-2hz6g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d531a52
-cced-4b3d-adfd-5d62357591e8,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0690e87ddb4f9357eefd739e00c9cec1ec022eda1379279b535ba4678c33b26,PodSandboxId:36aedadeb2582fe5df950ad8776d82e03104ab51a608a29ba00e9113b19e678e,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1726850772239877059,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-rhmqb,io.kuberne
tes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: e5f1d3f8-1767-4ad2-b5b8-eb5bf18bc163,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a0d036505e72ed6b62c09226aa4d219c30e6e162e73ebffc595f568b216931d,PodSandboxId:1ae7bada2f668e2292fc48d3426bfa34e41215d4336864f62a4f90b4ee95709f,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726850737987301783,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.po
d.name: metrics-server-84c5f94fbc-txlrn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6d2625e-ba6e-44e1-b245-0edc2adaa243,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09902a512e79f577d8d4f2a8784f5484d2134b53bde422c6322893204f62b00a,PodSandboxId:3843f8105dc892830a295ca6b48e8f9f1e0a84e15e2eab1bd63dabf67e0567e1,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc2
9,State:CONTAINER_RUNNING,CreatedAt:1726850735299111720,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f722d5e-9dee-4b0e-8661-9c4181ea4f9b,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a981c68e927108571692d174ebc0cf47e600882543d6dd401c23cbcd805d49d,PodSandboxId:11b2a45f795d401fe4c78cf74478d3d702eff22fef4bdd814d8198ee5072d604,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,Ru
ntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726850713649115674,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e04b7e0-a0fe-4e65-9ba5-63be2690da1d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70c74f4f1e0bde75fc553a034aa664a515c218d2b72725850921f92314b6ec06,PodSandboxId:cfda686abf7f1fef69de6a34f633f44ac3c87637d6ec92d05dc4a45a4d5652b1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef
:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726850710125894103,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nqbzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 734f1782-975a-486b-adf3-32f60c376a9a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c60a90d5ed294c1a5015ea6f6b5c5259e8d437a6b5dd0f9dd758bb62d91c7b7,PodSandboxId:b53a284c395cfb6bdea6622b664327da6733b43d0375a7570cfa3dac443563e5,Metadata:&ContainerMetad
ata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726850707153952752,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xr4bt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a20cb9e-3e82-4bda-9529-7e024f9681a4,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44c347dc4cb2326d9ce7eef959abf86dcaee69ecf824e59fbe44600500e8a0f4,PodSandboxId:0ccdde3d3e8e30fec62b1f315de346cf5989b81e93276bfcf9792ae014efb9d5,Metadata:&ContainerMetadata{Name:kube-controller-manager,
Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726850695786767603,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-489802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8db84d1368c7024c014f2f2f0d973aae,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79fb233450407c6fdf879eb55124a5840bf49aaa572c10f7add06512d38df264,PodSandboxId:b3c515c903cd8c54cc3829530f8702fa82f07287a4bcae50433ffb0e6100c34b,Metadata:&ContainerMetadata{Name:kube-apiserver
,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726850695761844946,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-489802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de814a9694fb61ae23ac46f9b9deb6e7,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ebda0675cfbe9e7b3e6c1ca40351339db78cf3954608a12cc779850ee452a23,PodSandboxId:ce3e5a61bc6e6a8044b701e61a79b033d814fb58851347acc4b4eaab63045047,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSp
ec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726850695741523021,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-489802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 016cfe34770e4cbd59f73407149e44ff,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53631bbb5fc199153283953dffaf83c3d2a2b4cdbda98ab81770b42af5dfe30e,PodSandboxId:c9a4930506bbb11794aa02ab9a68cfe8370b91453dd7ab2cce5eac61a155cacf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e
4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726850695699198150,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-489802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50faea81a2001503e00d2a0be1ceba9e,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e65c9dbf-df2c-4711-853e-7fef800c1a57 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 16:56:59 addons-489802 crio[664]: time="2024-09-20 16:56:59.369240188Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=661ba300-a9d1-4e7c-9b6f-ae49cd336a4c name=/runtime.v1.RuntimeService/Version
	Sep 20 16:56:59 addons-489802 crio[664]: time="2024-09-20 16:56:59.369407666Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=661ba300-a9d1-4e7c-9b6f-ae49cd336a4c name=/runtime.v1.RuntimeService/Version
	Sep 20 16:56:59 addons-489802 crio[664]: time="2024-09-20 16:56:59.375101511Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c84d685c-7246-430e-9d8d-bfe848e890cd name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 16:56:59 addons-489802 crio[664]: time="2024-09-20 16:56:59.378135343Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726851419378088365,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:550631,},InodesUsed:&UInt64Value{Value:187,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c84d685c-7246-430e-9d8d-bfe848e890cd name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 16:56:59 addons-489802 crio[664]: time="2024-09-20 16:56:59.379090381Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=76012d66-539a-43b6-9bd3-5f55aff72e04 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 16:56:59 addons-489802 crio[664]: time="2024-09-20 16:56:59.379169370Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=76012d66-539a-43b6-9bd3-5f55aff72e04 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 16:56:59 addons-489802 crio[664]: time="2024-09-20 16:56:59.379870987Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b3b98df31c510e2b7d8467304c108770bfabad7ebb2494f12313d8f912b2482c,PodSandboxId:ddccd18e28f19bcd554a80347c0802f4ddf6d7bad08d4b2ac6f27eb3e102b20d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726851396918964948,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4c34572d-1118-4bb3-8265-b67b3104bc59,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7855191edf9dac6a02cb338d5c06ae79feb999af96c1205e987920277065d079,PodSandboxId:94183eceae2996500c16f3e0182cde507f6cd239b42d7e04c4be2ec3899c6a6f,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,State:CONTAINER_EXITED,CreatedAt:1726851392690228669,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-delete-pvc-b8225ab7-cae8-4ab5-8ca1-5e74b7712f98,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 3fe17849-a80b-47ae-adf6-77c01273238d,},Annotations:map[string]string{io.kubernetes.container.
hash: 973dbf55,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88e1bcda36f90d21b7130116c5ea6b1229ea9a0700e45bbff41b308db6dbc33c,PodSandboxId:e98930c60622fe0f7bdc4bf6d6d08bc7526fc617d34122970d0d7182bd9138e3,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:9186e638ccc30c5d1a2efd5a2cd632f49bb5013f164f6f85c48ed6fce90fe38f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6fd955f66c231c1a946653170d096a28ac2b2052a02080c0b84ec082a07f7d12,State:CONTAINER_EXITED,CreatedAt:1726851385595495335,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: test-local-path,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5a52579e-aa38-4262-8d40-663925dc3ec1,},Annotations:map[string]string{io.kubernetes.container.hash: dd3595
ac,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5038dfb91d9e8dca86d31517d2006b94b6c908631d5f20394c86871e56d1d08,PodSandboxId:21621b7034dfd4946db396ab2ecb322c86b682626f5c0285738a87ba88bfbf23,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,State:CONTAINER_EXITED,CreatedAt:1726851379562446175,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-create-pvc-b8225ab7-cae8-4ab5-8ca1-5e74b7712f98,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 5e840cda-1451-4279-88ae-f9ba29c00bec,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 973dbf55,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c1fd10705c644580fdef2fc3075d7c9349c8e4b44d3899910dd41e40c87e2ce,PodSandboxId:66f4ad3477a6cc6a00655cc193d28db01097870bd3585db50c33f9e7cc96f8cf,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726850860245098258,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-wzvr2,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 9688d654-b2f1-4e67-b21f-737c57cb6d4f,},Annota
tions:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3789e1deba3b5c9ce3ea828aadfae5635edc167c040fa930464707e91be53341,PodSandboxId:a00df88c1f82fc3492928da8501518bce8b0f2ccb5d6274f59769e673a724852,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1726850801062921486,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-
hglqr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aeb8bcc-1f9f-40f6-8aa1-4822a64115f2,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2170a5568649b01c67765f29e8fdff73695d347ea33e872cffc2746fb679bb35,PodSandboxId:a00df88c1f82fc3492928da8501518bce8b0f2ccb5d6274f59769e673a724852,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1726850799105793588,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kube
rnetes.pod.name: csi-hostpathplugin-hglqr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aeb8bcc-1f9f-40f6-8aa1-4822a64115f2,},Annotations:map[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d20afd16f541ca333f7bd36f8da7782ea9a69ae24093ca113e872faea4de2b70,PodSandboxId:a00df88c1f82fc3492928da8501518bce8b0f2ccb5d6274f59769e673a724852,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1726850796453830227,Labels:map[string]string{io.kubernetes.contain
er.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-hglqr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aeb8bcc-1f9f-40f6-8aa1-4822a64115f2,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a74c2b2a3ee355c5e919249c37a775e1de74552c52cbd119a7bcde2f5ef8ff6,PodSandboxId:a00df88c1f82fc3492928da8501518bce8b0f2ccb5d6274f59769e673a724852,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1726850794957838931,Labels:map[string]s
tring{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-hglqr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aeb8bcc-1f9f-40f6-8aa1-4822a64115f2,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29c24274c3f958be71bf70e73d568bc6a4bb1bb6c65a5881e3fc34fefcc9fcf2,PodSandboxId:a115fb5bcdd70dd9eaddc86f186e4f6e55036b28dc0a72cf68edf7dae1530096,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38
d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1726850793113655585,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-79mpt,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f93f931b-28ea-417f-9956-b9dce76ebe38,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:4141de5542403c5675e25ca0d8c438d502a45b49559475f
46261d4f34feaa611,PodSandboxId:a00df88c1f82fc3492928da8501518bce8b0f2ccb5d6274f59769e673a724852,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1726850784957523000,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-hglqr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aeb8bcc-1f9f-40f6-8aa1-4822a64115f2,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Co
ntainer{Id:0580426e8d27f7c106f5d251425428617a0b35941fbdbeb0cef1280abf386f6c,PodSandboxId:7a34dc197c7221a0f7968767406de9d9088af78de12ad00aa7c9e7602d006f7e,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1726850782617829339,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8fc733e6-4135-418b-a554-490bd25dabe7,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminat
ionGracePeriod: 30,},},&Container{Id:8af8ae710b7bb0d44edc885792516e5b3d3019d460fe9988723ecff6c6361291,PodSandboxId:91d588b9442b7a5883fc1e6ec70b3073b793fe9fa8e955c8f9ca0da9ba64c130,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1726850780862773926,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85755d16-e8fa-4878-9184-45658ba8d8ac,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c378811c5ad20a87fbb4de0cc32b2c86dc1e847531f104f45e8945f74db49ebf,PodSandboxId:a00df88c1f82fc3492928da8501518bce8b0f2ccb5d6274f59769e673a724852,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1726850778878102572,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-hglqr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aeb8bcc-1f9f-40f6-8aa1-4822a64115f2,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container
.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5e85742448a79b5c1857677ad2a134c6852e9495035cbf9c25e3a7521dd6bb2,PodSandboxId:61840f6d138dd81b5b65efdfcdb4db6fc37465b1ee033b0bee2142714f07f4ae,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726850777385583505,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-b6mtt,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: fd711ef0-0010-45af-a950-49c84a55c942,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a9b75a453cd62d16bb90bc10f20dd616029cfd1dbb3300fdd9d3b272d5c1367,PodSandboxId:85dbe34d0b929d7356ea58dd7954b02069f214007674d69cc2313ed32dff2fc1,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726850776662845564,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-h7lw7,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 52fba05c-46c5-4916-b5e4-386dadb0ae61,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a52dd73b64e18e4b092d4faca5a851b5873993d6017437104456eb51f3e1465a,PodSandboxId:1c6c09297d2606b08525d2ccba830943316f2d00ad82e5c753cf47556db96a02,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1726850774360560772,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-4l9hv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eccfc252-ad9c-4b70-bb1c-d81a71214556,},Annotations:map[string]string{io.kubernetes
.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6aa4419694f45aa6bf3df3b53e48fb2c8f23061822c55c47d7081f7e546a623,PodSandboxId:57c48f1670b2a1a06e6ff7871e9504d83d44e44f6b4cc4c9e901990d02cd4cd3,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1726850774210107694,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-2hz6g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d531a52
-cced-4b3d-adfd-5d62357591e8,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0690e87ddb4f9357eefd739e00c9cec1ec022eda1379279b535ba4678c33b26,PodSandboxId:36aedadeb2582fe5df950ad8776d82e03104ab51a608a29ba00e9113b19e678e,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1726850772239877059,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-rhmqb,io.kuberne
tes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: e5f1d3f8-1767-4ad2-b5b8-eb5bf18bc163,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a0d036505e72ed6b62c09226aa4d219c30e6e162e73ebffc595f568b216931d,PodSandboxId:1ae7bada2f668e2292fc48d3426bfa34e41215d4336864f62a4f90b4ee95709f,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726850737987301783,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.po
d.name: metrics-server-84c5f94fbc-txlrn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6d2625e-ba6e-44e1-b245-0edc2adaa243,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09902a512e79f577d8d4f2a8784f5484d2134b53bde422c6322893204f62b00a,PodSandboxId:3843f8105dc892830a295ca6b48e8f9f1e0a84e15e2eab1bd63dabf67e0567e1,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc2
9,State:CONTAINER_RUNNING,CreatedAt:1726850735299111720,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f722d5e-9dee-4b0e-8661-9c4181ea4f9b,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a981c68e927108571692d174ebc0cf47e600882543d6dd401c23cbcd805d49d,PodSandboxId:11b2a45f795d401fe4c78cf74478d3d702eff22fef4bdd814d8198ee5072d604,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,Ru
ntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726850713649115674,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e04b7e0-a0fe-4e65-9ba5-63be2690da1d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70c74f4f1e0bde75fc553a034aa664a515c218d2b72725850921f92314b6ec06,PodSandboxId:cfda686abf7f1fef69de6a34f633f44ac3c87637d6ec92d05dc4a45a4d5652b1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef
:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726850710125894103,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nqbzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 734f1782-975a-486b-adf3-32f60c376a9a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c60a90d5ed294c1a5015ea6f6b5c5259e8d437a6b5dd0f9dd758bb62d91c7b7,PodSandboxId:b53a284c395cfb6bdea6622b664327da6733b43d0375a7570cfa3dac443563e5,Metadata:&ContainerMetad
ata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726850707153952752,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xr4bt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a20cb9e-3e82-4bda-9529-7e024f9681a4,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44c347dc4cb2326d9ce7eef959abf86dcaee69ecf824e59fbe44600500e8a0f4,PodSandboxId:0ccdde3d3e8e30fec62b1f315de346cf5989b81e93276bfcf9792ae014efb9d5,Metadata:&ContainerMetadata{Name:kube-controller-manager,
Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726850695786767603,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-489802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8db84d1368c7024c014f2f2f0d973aae,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79fb233450407c6fdf879eb55124a5840bf49aaa572c10f7add06512d38df264,PodSandboxId:b3c515c903cd8c54cc3829530f8702fa82f07287a4bcae50433ffb0e6100c34b,Metadata:&ContainerMetadata{Name:kube-apiserver
,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726850695761844946,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-489802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de814a9694fb61ae23ac46f9b9deb6e7,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ebda0675cfbe9e7b3e6c1ca40351339db78cf3954608a12cc779850ee452a23,PodSandboxId:ce3e5a61bc6e6a8044b701e61a79b033d814fb58851347acc4b4eaab63045047,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSp
ec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726850695741523021,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-489802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 016cfe34770e4cbd59f73407149e44ff,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53631bbb5fc199153283953dffaf83c3d2a2b4cdbda98ab81770b42af5dfe30e,PodSandboxId:c9a4930506bbb11794aa02ab9a68cfe8370b91453dd7ab2cce5eac61a155cacf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e
4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726850695699198150,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-489802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50faea81a2001503e00d2a0be1ceba9e,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=76012d66-539a-43b6-9bd3-5f55aff72e04 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	b3b98df31c510       docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56                                              22 seconds ago      Running             nginx                                    0                   ddccd18e28f19       nginx
	7855191edf9da       a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824                                                                             26 seconds ago      Exited              helper-pod                               0                   94183eceae299       helper-pod-delete-pvc-b8225ab7-cae8-4ab5-8ca1-5e74b7712f98
	88e1bcda36f90       docker.io/library/busybox@sha256:9186e638ccc30c5d1a2efd5a2cd632f49bb5013f164f6f85c48ed6fce90fe38f                                            33 seconds ago      Exited              busybox                                  0                   e98930c60622f       test-local-path
	e5038dfb91d9e       docker.io/library/busybox@sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee                                            39 seconds ago      Exited              helper-pod                               0                   21621b7034dfd       helper-pod-create-pvc-b8225ab7-cae8-4ab5-8ca1-5e74b7712f98
	1c1fd10705c64       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                                 9 minutes ago       Running             gcp-auth                                 0                   66f4ad3477a6c       gcp-auth-89d5ffd79-wzvr2
	3789e1deba3b5       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          10 minutes ago      Running             csi-snapshotter                          0                   a00df88c1f82f       csi-hostpathplugin-hglqr
	2170a5568649b       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          10 minutes ago      Running             csi-provisioner                          0                   a00df88c1f82f       csi-hostpathplugin-hglqr
	d20afd16f541c       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            10 minutes ago      Running             liveness-probe                           0                   a00df88c1f82f       csi-hostpathplugin-hglqr
	2a74c2b2a3ee3       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           10 minutes ago      Running             hostpath                                 0                   a00df88c1f82f       csi-hostpathplugin-hglqr
	29c24274c3f95       registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6                             10 minutes ago      Running             controller                               0                   a115fb5bcdd70       ingress-nginx-controller-bc57996ff-79mpt
	4141de5542403       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                10 minutes ago      Running             node-driver-registrar                    0                   a00df88c1f82f       csi-hostpathplugin-hglqr
	0580426e8d27f       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             10 minutes ago      Running             csi-attacher                             0                   7a34dc197c722       csi-hostpath-attacher-0
	8af8ae710b7bb       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              10 minutes ago      Running             csi-resizer                              0                   91d588b9442b7       csi-hostpath-resizer-0
	c378811c5ad20       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   10 minutes ago      Running             csi-external-health-monitor-controller   0                   a00df88c1f82f       csi-hostpathplugin-hglqr
	a5e85742448a7       ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242                                                                             10 minutes ago      Exited              patch                                    1                   61840f6d138dd       ingress-nginx-admission-patch-b6mtt
	5a9b75a453cd6       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012                   10 minutes ago      Exited              create                                   0                   85dbe34d0b929       ingress-nginx-admission-create-h7lw7
	a52dd73b64e18       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      10 minutes ago      Running             volume-snapshot-controller               0                   1c6c09297d260       snapshot-controller-56fcc65765-4l9hv
	c6aa4419694f4       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      10 minutes ago      Running             volume-snapshot-controller               0                   57c48f1670b2a       snapshot-controller-56fcc65765-2hz6g
	b0690e87ddb4f       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             10 minutes ago      Running             local-path-provisioner                   0                   36aedadeb2582       local-path-provisioner-86d989889c-rhmqb
	3a0d036505e72       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a                        11 minutes ago      Running             metrics-server                           0                   1ae7bada2f668       metrics-server-84c5f94fbc-txlrn
	09902a512e79f       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab                             11 minutes ago      Running             minikube-ingress-dns                     0                   3843f8105dc89       kube-ingress-dns-minikube
	5a981c68e9271       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             11 minutes ago      Running             storage-provisioner                      0                   11b2a45f795d4       storage-provisioner
	70c74f4f1e0bd       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                                             11 minutes ago      Running             coredns                                  0                   cfda686abf7f1       coredns-7c65d6cfc9-nqbzq
	7c60a90d5ed29       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                                             11 minutes ago      Running             kube-proxy                               0                   b53a284c395cf       kube-proxy-xr4bt
	44c347dc4cb23       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                                             12 minutes ago      Running             kube-controller-manager                  0                   0ccdde3d3e8e3       kube-controller-manager-addons-489802
	79fb233450407       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                                             12 minutes ago      Running             kube-apiserver                           0                   b3c515c903cd8       kube-apiserver-addons-489802
	5ebda0675cfbe       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                                             12 minutes ago      Running             etcd                                     0                   ce3e5a61bc6e6       etcd-addons-489802
	53631bbb5fc19       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                                             12 minutes ago      Running             kube-scheduler                           0                   c9a4930506bbb       kube-scheduler-addons-489802
	
	
	==> coredns [70c74f4f1e0bde75fc553a034aa664a515c218d2b72725850921f92314b6ec06] <==
	[INFO] 127.0.0.1:51784 - 8829 "HINFO IN 5160120906343044549.4812313304468353436. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012102619s
	[INFO] 10.244.0.7:49904 - 44683 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000739291s
	[INFO] 10.244.0.7:49904 - 13446 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000838879s
	[INFO] 10.244.0.7:37182 - 17696 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000137198s
	[INFO] 10.244.0.7:37182 - 29725 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000120771s
	[INFO] 10.244.0.7:40785 - 12767 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00012186s
	[INFO] 10.244.0.7:40785 - 24273 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000223065s
	[INFO] 10.244.0.7:54049 - 5032 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000122634s
	[INFO] 10.244.0.7:54049 - 51625 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000075286s
	[INFO] 10.244.0.7:57416 - 8811 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000080693s
	[INFO] 10.244.0.7:57416 - 56406 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000038363s
	[INFO] 10.244.0.7:59797 - 29819 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000040968s
	[INFO] 10.244.0.7:59797 - 16249 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000038791s
	[INFO] 10.244.0.7:39368 - 3897 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000045812s
	[INFO] 10.244.0.7:39368 - 53818 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000034439s
	[INFO] 10.244.0.7:57499 - 43541 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000049958s
	[INFO] 10.244.0.7:57499 - 15379 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000036533s
	[INFO] 10.244.0.21:51858 - 31367 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000847603s
	[INFO] 10.244.0.21:33579 - 64948 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000139841s
	[INFO] 10.244.0.21:48527 - 40604 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000280976s
	[INFO] 10.244.0.21:52717 - 13930 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000169344s
	[INFO] 10.244.0.21:58755 - 3796 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000147676s
	[INFO] 10.244.0.21:51813 - 12818 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000082135s
	[INFO] 10.244.0.21:51795 - 17985 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.004530788s
	[INFO] 10.244.0.21:47998 - 23926 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002659458s
	
	
	==> describe nodes <==
	Name:               addons-489802
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-489802
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0626f22cf0d915d75e291a5bce701f94395056e1
	                    minikube.k8s.io/name=addons-489802
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_20T16_45_01_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-489802
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-489802"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 16:44:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-489802
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 16:56:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 16:56:43 +0000   Fri, 20 Sep 2024 16:44:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 16:56:43 +0000   Fri, 20 Sep 2024 16:44:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 16:56:43 +0000   Fri, 20 Sep 2024 16:44:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 16:56:43 +0000   Fri, 20 Sep 2024 16:45:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.89
	  Hostname:    addons-489802
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 fd813db21ac84502aef251a6893e0027
	  System UUID:                fd813db2-1ac8-4502-aef2-51a6893e0027
	  Boot ID:                    ed0a3698-272d-483a-ba56-acac4def529a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (19 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m16s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  gcp-auth                    gcp-auth-89d5ffd79-wzvr2                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  ingress-nginx               ingress-nginx-controller-bc57996ff-79mpt    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         11m
	  kube-system                 coredns-7c65d6cfc9-nqbzq                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     11m
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 csi-hostpathplugin-hglqr                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-addons-489802                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         11m
	  kube-system                 kube-apiserver-addons-489802                250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-addons-489802       200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-xr4bt                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-addons-489802                100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 metrics-server-84c5f94fbc-txlrn             100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         11m
	  kube-system                 snapshot-controller-56fcc65765-2hz6g        0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 snapshot-controller-56fcc65765-4l9hv        0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  local-path-storage          local-path-provisioner-86d989889c-rhmqb     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             460Mi (12%)  170Mi (4%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 11m                kube-proxy       
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  11m (x2 over 11m)  kubelet          Node addons-489802 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x2 over 11m)  kubelet          Node addons-489802 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m (x2 over 11m)  kubelet          Node addons-489802 status is now: NodeHasSufficientPID
	  Normal  NodeReady                11m                kubelet          Node addons-489802 status is now: NodeReady
	  Normal  RegisteredNode           11m                node-controller  Node addons-489802 event: Registered Node addons-489802 in Controller
	
	
	==> dmesg <==
	[  +0.174126] systemd-fstab-generator[1328]: Ignoring "noauto" option for root device
	[  +4.891583] kauditd_printk_skb: 98 callbacks suppressed
	[  +5.138076] kauditd_printk_skb: 145 callbacks suppressed
	[ +10.203071] kauditd_printk_skb: 70 callbacks suppressed
	[ +17.983286] kauditd_printk_skb: 2 callbacks suppressed
	[Sep20 16:46] kauditd_printk_skb: 4 callbacks suppressed
	[ +13.042505] kauditd_printk_skb: 42 callbacks suppressed
	[  +6.124032] kauditd_printk_skb: 45 callbacks suppressed
	[ +10.494816] kauditd_printk_skb: 64 callbacks suppressed
	[  +5.981422] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.234675] kauditd_printk_skb: 34 callbacks suppressed
	[Sep20 16:47] kauditd_printk_skb: 28 callbacks suppressed
	[  +7.543099] kauditd_printk_skb: 9 callbacks suppressed
	[Sep20 16:48] kauditd_printk_skb: 2 callbacks suppressed
	[Sep20 16:49] kauditd_printk_skb: 28 callbacks suppressed
	[Sep20 16:52] kauditd_printk_skb: 28 callbacks suppressed
	[Sep20 16:55] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.170883] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.280461] kauditd_printk_skb: 17 callbacks suppressed
	[Sep20 16:56] kauditd_printk_skb: 24 callbacks suppressed
	[ +10.067719] kauditd_printk_skb: 13 callbacks suppressed
	[  +5.043461] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.256575] kauditd_printk_skb: 14 callbacks suppressed
	[  +9.179843] kauditd_printk_skb: 27 callbacks suppressed
	[ +15.573697] kauditd_printk_skb: 7 callbacks suppressed
	
	
	==> etcd [5ebda0675cfbe9e7b3e6c1ca40351339db78cf3954608a12cc779850ee452a23] <==
	{"level":"warn","ts":"2024-09-20T16:46:31.521799Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"298.668395ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T16:46:31.522054Z","caller":"traceutil/trace.go:171","msg":"trace[655563733] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1072; }","duration":"298.968755ms","start":"2024-09-20T16:46:31.223072Z","end":"2024-09-20T16:46:31.522041Z","steps":["trace[655563733] 'agreement among raft nodes before linearized reading'  (duration: 298.302775ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T16:46:31.522572Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"285.514745ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T16:46:31.522662Z","caller":"traceutil/trace.go:171","msg":"trace[397127513] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1072; }","duration":"285.60775ms","start":"2024-09-20T16:46:31.237046Z","end":"2024-09-20T16:46:31.522653Z","steps":["trace[397127513] 'agreement among raft nodes before linearized reading'  (duration: 285.506056ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T16:46:31.523097Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"389.094744ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T16:46:31.521069Z","caller":"traceutil/trace.go:171","msg":"trace[1366548052] transaction","detail":"{read_only:false; response_revision:1072; number_of_response:1; }","duration":"451.994343ms","start":"2024-09-20T16:46:31.069059Z","end":"2024-09-20T16:46:31.521053Z","steps":["trace[1366548052] 'process raft request'  (duration: 450.539479ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T16:46:31.523185Z","caller":"traceutil/trace.go:171","msg":"trace[1958014936] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1072; }","duration":"389.189661ms","start":"2024-09-20T16:46:31.133988Z","end":"2024-09-20T16:46:31.523178Z","steps":["trace[1958014936] 'agreement among raft nodes before linearized reading'  (duration: 388.742689ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T16:46:31.523315Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-20T16:46:31.133949Z","time spent":"389.346336ms","remote":"127.0.0.1:44644","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2024-09-20T16:46:31.523518Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-20T16:46:31.069043Z","time spent":"454.199637ms","remote":"127.0.0.1:44626","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1066 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-09-20T16:46:34.697548Z","caller":"traceutil/trace.go:171","msg":"trace[1773063632] transaction","detail":"{read_only:false; response_revision:1087; number_of_response:1; }","duration":"138.671352ms","start":"2024-09-20T16:46:34.558854Z","end":"2024-09-20T16:46:34.697526Z","steps":["trace[1773063632] 'process raft request'  (duration: 138.455302ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T16:47:09.828412Z","caller":"traceutil/trace.go:171","msg":"trace[1350480991] linearizableReadLoop","detail":"{readStateIndex:1234; appliedIndex:1233; }","duration":"107.953401ms","start":"2024-09-20T16:47:09.720376Z","end":"2024-09-20T16:47:09.828329Z","steps":["trace[1350480991] 'read index received'  (duration: 107.782449ms)","trace[1350480991] 'applied index is now lower than readState.Index'  (duration: 170.357µs)"],"step_count":2}
	{"level":"info","ts":"2024-09-20T16:47:09.828591Z","caller":"traceutil/trace.go:171","msg":"trace[1677279500] transaction","detail":"{read_only:false; response_revision:1192; number_of_response:1; }","duration":"108.710691ms","start":"2024-09-20T16:47:09.719867Z","end":"2024-09-20T16:47:09.828578Z","steps":["trace[1677279500] 'process raft request'  (duration: 108.343763ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T16:47:09.828834Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"108.468877ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T16:47:09.828877Z","caller":"traceutil/trace.go:171","msg":"trace[823583891] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1192; }","duration":"108.573167ms","start":"2024-09-20T16:47:09.720295Z","end":"2024-09-20T16:47:09.828868Z","steps":["trace[823583891] 'agreement among raft nodes before linearized reading'  (duration: 108.427543ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T16:54:56.686206Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1494}
	{"level":"info","ts":"2024-09-20T16:54:56.732913Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1494,"took":"45.95642ms","hash":3143060453,"current-db-size-bytes":6316032,"current-db-size":"6.3 MB","current-db-size-in-use-bytes":3231744,"current-db-size-in-use":"3.2 MB"}
	{"level":"info","ts":"2024-09-20T16:54:56.733061Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3143060453,"revision":1494,"compact-revision":-1}
	{"level":"info","ts":"2024-09-20T16:55:52.021318Z","caller":"traceutil/trace.go:171","msg":"trace[2100115174] transaction","detail":"{read_only:false; response_revision:2018; number_of_response:1; }","duration":"379.66185ms","start":"2024-09-20T16:55:51.641590Z","end":"2024-09-20T16:55:52.021252Z","steps":["trace[2100115174] 'process raft request'  (duration: 379.545504ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T16:55:52.021786Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-20T16:55:51.641574Z","time spent":"380.006071ms","remote":"127.0.0.1:44742","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":538,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" mod_revision:1986 > success:<request_put:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" value_size:451 >> failure:<request_range:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" > >"}
	{"level":"info","ts":"2024-09-20T16:55:52.022293Z","caller":"traceutil/trace.go:171","msg":"trace[35214985] linearizableReadLoop","detail":"{readStateIndex:2175; appliedIndex:2174; }","duration":"196.804789ms","start":"2024-09-20T16:55:51.825473Z","end":"2024-09-20T16:55:52.022278Z","steps":["trace[35214985] 'read index received'  (duration: 196.433504ms)","trace[35214985] 'applied index is now lower than readState.Index'  (duration: 370.887µs)"],"step_count":2}
	{"level":"info","ts":"2024-09-20T16:55:52.022475Z","caller":"traceutil/trace.go:171","msg":"trace[1790896376] transaction","detail":"{read_only:false; response_revision:2019; number_of_response:1; }","duration":"211.987025ms","start":"2024-09-20T16:55:51.810476Z","end":"2024-09-20T16:55:52.022463Z","steps":["trace[1790896376] 'process raft request'  (duration: 211.729812ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T16:55:52.022604Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"197.118957ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T16:55:52.022641Z","caller":"traceutil/trace.go:171","msg":"trace[1794876456] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2019; }","duration":"197.165972ms","start":"2024-09-20T16:55:51.825467Z","end":"2024-09-20T16:55:52.022633Z","steps":["trace[1794876456] 'agreement among raft nodes before linearized reading'  (duration: 197.096047ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T16:56:32.273552Z","caller":"traceutil/trace.go:171","msg":"trace[1806753974] transaction","detail":"{read_only:false; response_revision:2278; number_of_response:1; }","duration":"138.283014ms","start":"2024-09-20T16:56:32.135255Z","end":"2024-09-20T16:56:32.273538Z","steps":["trace[1806753974] 'process raft request'  (duration: 137.851209ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T16:56:36.295953Z","caller":"traceutil/trace.go:171","msg":"trace[1488171244] transaction","detail":"{read_only:false; response_revision:2301; number_of_response:1; }","duration":"162.589325ms","start":"2024-09-20T16:56:36.131622Z","end":"2024-09-20T16:56:36.294211Z","steps":["trace[1488171244] 'process raft request'  (duration: 162.248073ms)"],"step_count":1}
	
	
	==> gcp-auth [1c1fd10705c644580fdef2fc3075d7c9349c8e4b44d3899910dd41e40c87e2ce] <==
	2024/09/20 16:47:40 Ready to write response ...
	2024/09/20 16:47:43 Ready to marshal response ...
	2024/09/20 16:47:43 Ready to write response ...
	2024/09/20 16:47:43 Ready to marshal response ...
	2024/09/20 16:47:43 Ready to write response ...
	2024/09/20 16:55:47 Ready to marshal response ...
	2024/09/20 16:55:47 Ready to write response ...
	2024/09/20 16:55:47 Ready to marshal response ...
	2024/09/20 16:55:47 Ready to write response ...
	2024/09/20 16:55:47 Ready to marshal response ...
	2024/09/20 16:55:47 Ready to write response ...
	2024/09/20 16:55:57 Ready to marshal response ...
	2024/09/20 16:55:57 Ready to write response ...
	2024/09/20 16:56:16 Ready to marshal response ...
	2024/09/20 16:56:16 Ready to write response ...
	2024/09/20 16:56:16 Ready to marshal response ...
	2024/09/20 16:56:16 Ready to write response ...
	2024/09/20 16:56:23 Ready to marshal response ...
	2024/09/20 16:56:23 Ready to write response ...
	2024/09/20 16:56:28 Ready to marshal response ...
	2024/09/20 16:56:28 Ready to write response ...
	2024/09/20 16:56:29 Ready to marshal response ...
	2024/09/20 16:56:29 Ready to write response ...
	2024/09/20 16:56:50 Ready to marshal response ...
	2024/09/20 16:56:50 Ready to write response ...
	
	
	==> kernel <==
	 16:56:59 up 12 min,  0 users,  load average: 0.84, 0.55, 0.42
	Linux addons-489802 5.10.207 #1 SMP Fri Sep 20 03:13:51 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [79fb233450407c6fdf879eb55124a5840bf49aaa572c10f7add06512d38df264] <==
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0920 16:46:13.039223       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0920 16:46:13.040532       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0920 16:46:42.591244       1 handler_proxy.go:99] no RequestInfo found in the context
	E0920 16:46:42.591968       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.82.249:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.82.249:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.106.82.249:443: connect: connection refused" logger="UnhandledError"
	E0920 16:46:42.592145       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0920 16:46:42.594035       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.82.249:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.82.249:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.106.82.249:443: connect: connection refused" logger="UnhandledError"
	E0920 16:46:42.599675       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.82.249:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.82.249:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.106.82.249:443: connect: connection refused" logger="UnhandledError"
	I0920 16:46:42.683134       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0920 16:47:19.399216       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"context canceled\"}: context canceled" logger="UnhandledError"
	E0920 16:47:19.400809       1 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E0920 16:47:19.402902       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E0920 16:47:19.404104       1 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E0920 16:47:19.412494       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="13.458229ms" method="GET" path="/apis/apps/v1/namespaces/yakd-dashboard/replicasets/yakd-dashboard-67d98fc6b" result=null
	I0920 16:55:47.034722       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.110.200.88"}
	I0920 16:56:11.192249       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0920 16:56:12.228711       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0920 16:56:29.568621       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0920 16:56:29.873321       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.104.88.195"}
	I0920 16:56:40.306913       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	
	
	==> kube-controller-manager [44c347dc4cb2326d9ce7eef959abf86dcaee69ecf824e59fbe44600500e8a0f4] <==
	I0920 16:55:52.978413       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7b5c95b59d" duration="125.342µs"
	I0920 16:55:58.670799       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7b5c95b59d" duration="4.445µs"
	I0920 16:55:59.411189       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="yakd-dashboard/yakd-dashboard-67d98fc6b" duration="4.296µs"
	I0920 16:56:08.808071       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="headlamp"
	I0920 16:56:09.550858       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="yakd-dashboard"
	E0920 16:56:12.230847       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0920 16:56:12.972141       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-489802"
	W0920 16:56:13.260946       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 16:56:13.261098       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 16:56:16.343580       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 16:56:16.343635       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 16:56:20.746066       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 16:56:20.746185       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0920 16:56:21.328553       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gadget"
	W0920 16:56:32.273073       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 16:56:32.273146       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0920 16:56:35.266525       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0920 16:56:35.266621       1 shared_informer.go:320] Caches are synced for resource quota
	I0920 16:56:35.842831       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0920 16:56:35.842926       1 shared_informer.go:320] Caches are synced for garbage collector
	I0920 16:56:43.368488       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-489802"
	W0920 16:56:54.448783       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 16:56:54.448846       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0920 16:56:57.871425       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="9.548µs"
	I0920 16:57:00.043111       1 stateful_set.go:466] "StatefulSet has been deleted" logger="statefulset-controller" key="kube-system/csi-hostpath-attacher"
	
	
	==> kube-proxy [7c60a90d5ed294c1a5015ea6f6b5c5259e8d437a6b5dd0f9dd758bb62d91c7b7] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0920 16:45:07.927443       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0920 16:45:07.961049       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.89"]
	E0920 16:45:07.961134       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 16:45:08.130722       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0920 16:45:08.130762       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0920 16:45:08.130790       1 server_linux.go:169] "Using iptables Proxier"
	I0920 16:45:08.135726       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 16:45:08.136036       1 server.go:483] "Version info" version="v1.31.1"
	I0920 16:45:08.136059       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 16:45:08.137263       1 config.go:199] "Starting service config controller"
	I0920 16:45:08.137318       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 16:45:08.137400       1 config.go:105] "Starting endpoint slice config controller"
	I0920 16:45:08.137405       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 16:45:08.137933       1 config.go:328] "Starting node config controller"
	I0920 16:45:08.137953       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 16:45:08.237708       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0920 16:45:08.237750       1 shared_informer.go:320] Caches are synced for service config
	I0920 16:45:08.239006       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [53631bbb5fc199153283953dffaf83c3d2a2b4cdbda98ab81770b42af5dfe30e] <==
	W0920 16:44:58.228924       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0920 16:44:58.230396       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 16:44:58.228968       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0920 16:44:58.230429       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 16:44:59.045447       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0920 16:44:59.045496       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 16:44:59.126233       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0920 16:44:59.126435       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 16:44:59.147240       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0920 16:44:59.147292       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 16:44:59.277135       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0920 16:44:59.278460       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 16:44:59.296223       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0920 16:44:59.296273       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0920 16:44:59.348771       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0920 16:44:59.348828       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 16:44:59.368238       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0920 16:44:59.368290       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 16:44:59.411207       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0920 16:44:59.411256       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 16:44:59.475030       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0920 16:44:59.475087       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 16:44:59.605643       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0920 16:44:59.605806       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0920 16:45:02.104787       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 20 16:56:58 addons-489802 kubelet[1210]: I0920 16:56:58.556538    1210 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wdfr8\" (UniqueName: \"kubernetes.io/projected/a467b141-5827-4440-b11f-9203739b4a10-kube-api-access-wdfr8\") pod \"a467b141-5827-4440-b11f-9203739b4a10\" (UID: \"a467b141-5827-4440-b11f-9203739b4a10\") "
	Sep 20 16:56:58 addons-489802 kubelet[1210]: I0920 16:56:58.556656    1210 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-792t6\" (UniqueName: \"kubernetes.io/projected/5f951af3-0fc4-4606-9f2e-556adaa494f1-kube-api-access-792t6\") on node \"addons-489802\" DevicePath \"\""
	Sep 20 16:56:58 addons-489802 kubelet[1210]: I0920 16:56:58.556683    1210 reconciler_common.go:281] "operationExecutor.UnmountDevice started for volume \"pvc-67f5ab3b-e6d7-4c60-8631-6b4e746602db\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^54ac60df-7771-11ef-b51a-7ae5a69c722f\") on node \"addons-489802\" "
	Sep 20 16:56:58 addons-489802 kubelet[1210]: I0920 16:56:58.556693    1210 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/5f951af3-0fc4-4606-9f2e-556adaa494f1-gcp-creds\") on node \"addons-489802\" DevicePath \"\""
	Sep 20 16:56:58 addons-489802 kubelet[1210]: I0920 16:56:58.561300    1210 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-67f5ab3b-e6d7-4c60-8631-6b4e746602db" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^54ac60df-7771-11ef-b51a-7ae5a69c722f") on node "addons-489802"
	Sep 20 16:56:58 addons-489802 kubelet[1210]: I0920 16:56:58.561928    1210 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a467b141-5827-4440-b11f-9203739b4a10-kube-api-access-wdfr8" (OuterVolumeSpecName: "kube-api-access-wdfr8") pod "a467b141-5827-4440-b11f-9203739b4a10" (UID: "a467b141-5827-4440-b11f-9203739b4a10"). InnerVolumeSpecName "kube-api-access-wdfr8". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 20 16:56:58 addons-489802 kubelet[1210]: I0920 16:56:58.575555    1210 scope.go:117] "RemoveContainer" containerID="f231def9a616ea3d01057454c6d7cb6dd46e76a66f252dde0e7010d593debfb9"
	Sep 20 16:56:58 addons-489802 kubelet[1210]: I0920 16:56:58.632689    1210 scope.go:117] "RemoveContainer" containerID="f231def9a616ea3d01057454c6d7cb6dd46e76a66f252dde0e7010d593debfb9"
	Sep 20 16:56:58 addons-489802 kubelet[1210]: E0920 16:56:58.633438    1210 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f231def9a616ea3d01057454c6d7cb6dd46e76a66f252dde0e7010d593debfb9\": container with ID starting with f231def9a616ea3d01057454c6d7cb6dd46e76a66f252dde0e7010d593debfb9 not found: ID does not exist" containerID="f231def9a616ea3d01057454c6d7cb6dd46e76a66f252dde0e7010d593debfb9"
	Sep 20 16:56:58 addons-489802 kubelet[1210]: I0920 16:56:58.633470    1210 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f231def9a616ea3d01057454c6d7cb6dd46e76a66f252dde0e7010d593debfb9"} err="failed to get container status \"f231def9a616ea3d01057454c6d7cb6dd46e76a66f252dde0e7010d593debfb9\": rpc error: code = NotFound desc = could not find container \"f231def9a616ea3d01057454c6d7cb6dd46e76a66f252dde0e7010d593debfb9\": container with ID starting with f231def9a616ea3d01057454c6d7cb6dd46e76a66f252dde0e7010d593debfb9 not found: ID does not exist"
	Sep 20 16:56:58 addons-489802 kubelet[1210]: I0920 16:56:58.633493    1210 scope.go:117] "RemoveContainer" containerID="feb881ba69c1a13e19d50fc82b036cba32648069f91f79b6395af21eecc1840c"
	Sep 20 16:56:58 addons-489802 kubelet[1210]: I0920 16:56:58.660255    1210 reconciler_common.go:288] "Volume detached for volume \"pvc-67f5ab3b-e6d7-4c60-8631-6b4e746602db\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^54ac60df-7771-11ef-b51a-7ae5a69c722f\") on node \"addons-489802\" DevicePath \"\""
	Sep 20 16:56:58 addons-489802 kubelet[1210]: I0920 16:56:58.660320    1210 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-wdfr8\" (UniqueName: \"kubernetes.io/projected/a467b141-5827-4440-b11f-9203739b4a10-kube-api-access-wdfr8\") on node \"addons-489802\" DevicePath \"\""
	Sep 20 16:56:58 addons-489802 kubelet[1210]: I0920 16:56:58.669016    1210 scope.go:117] "RemoveContainer" containerID="feb881ba69c1a13e19d50fc82b036cba32648069f91f79b6395af21eecc1840c"
	Sep 20 16:56:58 addons-489802 kubelet[1210]: E0920 16:56:58.669701    1210 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"feb881ba69c1a13e19d50fc82b036cba32648069f91f79b6395af21eecc1840c\": container with ID starting with feb881ba69c1a13e19d50fc82b036cba32648069f91f79b6395af21eecc1840c not found: ID does not exist" containerID="feb881ba69c1a13e19d50fc82b036cba32648069f91f79b6395af21eecc1840c"
	Sep 20 16:56:58 addons-489802 kubelet[1210]: I0920 16:56:58.669745    1210 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"feb881ba69c1a13e19d50fc82b036cba32648069f91f79b6395af21eecc1840c"} err="failed to get container status \"feb881ba69c1a13e19d50fc82b036cba32648069f91f79b6395af21eecc1840c\": rpc error: code = NotFound desc = could not find container \"feb881ba69c1a13e19d50fc82b036cba32648069f91f79b6395af21eecc1840c\": container with ID starting with feb881ba69c1a13e19d50fc82b036cba32648069f91f79b6395af21eecc1840c not found: ID does not exist"
	Sep 20 16:56:58 addons-489802 kubelet[1210]: I0920 16:56:58.669771    1210 scope.go:117] "RemoveContainer" containerID="9151dfde6abf4371e401cb370de3d1093860959a7db56ed4562c36fda613a355"
	Sep 20 16:56:58 addons-489802 kubelet[1210]: I0920 16:56:58.705007    1210 scope.go:117] "RemoveContainer" containerID="9151dfde6abf4371e401cb370de3d1093860959a7db56ed4562c36fda613a355"
	Sep 20 16:56:58 addons-489802 kubelet[1210]: E0920 16:56:58.705684    1210 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9151dfde6abf4371e401cb370de3d1093860959a7db56ed4562c36fda613a355\": container with ID starting with 9151dfde6abf4371e401cb370de3d1093860959a7db56ed4562c36fda613a355 not found: ID does not exist" containerID="9151dfde6abf4371e401cb370de3d1093860959a7db56ed4562c36fda613a355"
	Sep 20 16:56:58 addons-489802 kubelet[1210]: I0920 16:56:58.705756    1210 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9151dfde6abf4371e401cb370de3d1093860959a7db56ed4562c36fda613a355"} err="failed to get container status \"9151dfde6abf4371e401cb370de3d1093860959a7db56ed4562c36fda613a355\": rpc error: code = NotFound desc = could not find container \"9151dfde6abf4371e401cb370de3d1093860959a7db56ed4562c36fda613a355\": container with ID starting with 9151dfde6abf4371e401cb370de3d1093860959a7db56ed4562c36fda613a355 not found: ID does not exist"
	Sep 20 16:56:58 addons-489802 kubelet[1210]: I0920 16:56:58.887046    1210 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e3cfba8-c77f-46f3-b6b1-46c7a36ae3a4" path="/var/lib/kubelet/pods/1e3cfba8-c77f-46f3-b6b1-46c7a36ae3a4/volumes"
	Sep 20 16:56:58 addons-489802 kubelet[1210]: I0920 16:56:58.887814    1210 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f951af3-0fc4-4606-9f2e-556adaa494f1" path="/var/lib/kubelet/pods/5f951af3-0fc4-4606-9f2e-556adaa494f1/volumes"
	Sep 20 16:56:58 addons-489802 kubelet[1210]: I0920 16:56:58.888567    1210 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a467b141-5827-4440-b11f-9203739b4a10" path="/var/lib/kubelet/pods/a467b141-5827-4440-b11f-9203739b4a10/volumes"
	Sep 20 16:56:58 addons-489802 kubelet[1210]: I0920 16:56:58.889706    1210 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5fb0889-ead9-403c-b47a-ce4e44c73c83" path="/var/lib/kubelet/pods/c5fb0889-ead9-403c-b47a-ce4e44c73c83/volumes"
	Sep 20 16:57:00 addons-489802 kubelet[1210]: I0920 16:57:00.388190    1210 csi_plugin.go:191] kubernetes.io/csi: registrationHandler.DeRegisterPlugin request for plugin hostpath.csi.k8s.io
	
	
	==> storage-provisioner [5a981c68e927108571692d174ebc0cf47e600882543d6dd401c23cbcd805d49d] <==
	I0920 16:45:14.933598       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0920 16:45:15.129203       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0920 16:45:15.129288       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0920 16:45:15.469563       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0920 16:45:15.471781       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-489802_9f119035-fb3e-4caa-852b-e718c04f6499!
	I0920 16:45:15.471465       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"47834956-e67b-4561-9f20-a2c3f45edc3a", APIVersion:"v1", ResourceVersion:"708", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-489802_9f119035-fb3e-4caa-852b-e718c04f6499 became leader
	I0920 16:45:15.594691       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-489802_9f119035-fb3e-4caa-852b-e718c04f6499!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-489802 -n addons-489802
helpers_test.go:261: (dbg) Run:  kubectl --context addons-489802 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox ingress-nginx-admission-create-h7lw7 ingress-nginx-admission-patch-b6mtt csi-hostpath-attacher-0 csi-hostpath-resizer-0 csi-hostpathplugin-hglqr
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-489802 describe pod busybox ingress-nginx-admission-create-h7lw7 ingress-nginx-admission-patch-b6mtt csi-hostpath-attacher-0 csi-hostpath-resizer-0 csi-hostpathplugin-hglqr
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-489802 describe pod busybox ingress-nginx-admission-create-h7lw7 ingress-nginx-admission-patch-b6mtt csi-hostpath-attacher-0 csi-hostpath-resizer-0 csi-hostpathplugin-hglqr: exit status 1 (82.2534ms)

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-489802/192.168.39.89
	Start Time:       Fri, 20 Sep 2024 16:47:43 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.22
	IPs:
	  IP:  10.244.0.22
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lh4vn (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-lh4vn:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m18s                   default-scheduler  Successfully assigned default/busybox to addons-489802
	  Normal   Pulling    7m45s (x4 over 9m18s)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m44s (x4 over 9m18s)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     7m44s (x4 over 9m18s)   kubelet            Error: ErrImagePull
	  Warning  Failed     7m30s (x6 over 9m17s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m12s (x20 over 9m17s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-h7lw7" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-b6mtt" not found
	Error from server (NotFound): pods "csi-hostpath-attacher-0" not found
	Error from server (NotFound): pods "csi-hostpath-resizer-0" not found
	Error from server (NotFound): pods "csi-hostpathplugin-hglqr" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-489802 describe pod busybox ingress-nginx-admission-create-h7lw7 ingress-nginx-admission-patch-b6mtt csi-hostpath-attacher-0 csi-hostpath-resizer-0 csi-hostpathplugin-hglqr: exit status 1
--- FAIL: TestAddons/parallel/Registry (75.32s)

                                                
                                    
TestAddons/parallel/Ingress (156.02s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:205: (dbg) Run:  kubectl --context addons-489802 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:230: (dbg) Run:  kubectl --context addons-489802 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:243: (dbg) Run:  kubectl --context addons-489802 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [4c34572d-1118-4bb3-8265-b67b3104bc59] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [4c34572d-1118-4bb3-8265-b67b3104bc59] Running
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 13.009518109s
I0920 16:56:42.930879   15973 kapi.go:150] Service nginx in namespace default found.
addons_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p addons-489802 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:260: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-489802 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.460604533s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 28

                                                
                                                
** /stderr **
addons_test.go:276: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:284: (dbg) Run:  kubectl --context addons-489802 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:289: (dbg) Run:  out/minikube-linux-amd64 -p addons-489802 ip
addons_test.go:295: (dbg) Run:  nslookup hello-john.test 192.168.39.89
addons_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p addons-489802 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:304: (dbg) Done: out/minikube-linux-amd64 -p addons-489802 addons disable ingress-dns --alsologtostderr -v=1: (1.259578736s)
addons_test.go:309: (dbg) Run:  out/minikube-linux-amd64 -p addons-489802 addons disable ingress --alsologtostderr -v=1
addons_test.go:309: (dbg) Done: out/minikube-linux-amd64 -p addons-489802 addons disable ingress --alsologtostderr -v=1: (7.83233739s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-489802 -n addons-489802
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-489802 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-489802 logs -n 25: (1.350183747s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube             | jenkins | v1.34.0 | 20 Sep 24 16:44 UTC | 20 Sep 24 16:44 UTC |
	| delete  | -p download-only-349545                                                                     | download-only-349545 | jenkins | v1.34.0 | 20 Sep 24 16:44 UTC | 20 Sep 24 16:44 UTC |
	| delete  | -p download-only-858543                                                                     | download-only-858543 | jenkins | v1.34.0 | 20 Sep 24 16:44 UTC | 20 Sep 24 16:44 UTC |
	| delete  | -p download-only-349545                                                                     | download-only-349545 | jenkins | v1.34.0 | 20 Sep 24 16:44 UTC | 20 Sep 24 16:44 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-811854 | jenkins | v1.34.0 | 20 Sep 24 16:44 UTC |                     |
	|         | binary-mirror-811854                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:34057                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-811854                                                                     | binary-mirror-811854 | jenkins | v1.34.0 | 20 Sep 24 16:44 UTC | 20 Sep 24 16:44 UTC |
	| addons  | disable dashboard -p                                                                        | addons-489802        | jenkins | v1.34.0 | 20 Sep 24 16:44 UTC |                     |
	|         | addons-489802                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-489802        | jenkins | v1.34.0 | 20 Sep 24 16:44 UTC |                     |
	|         | addons-489802                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-489802 --wait=true                                                                | addons-489802        | jenkins | v1.34.0 | 20 Sep 24 16:44 UTC | 20 Sep 24 16:47 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-489802        | jenkins | v1.34.0 | 20 Sep 24 16:55 UTC | 20 Sep 24 16:55 UTC |
	|         | -p addons-489802                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-489802        | jenkins | v1.34.0 | 20 Sep 24 16:55 UTC | 20 Sep 24 16:55 UTC |
	|         | addons-489802                                                                               |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-489802        | jenkins | v1.34.0 | 20 Sep 24 16:55 UTC | 20 Sep 24 16:55 UTC |
	|         | -p addons-489802                                                                            |                      |         |         |                     |                     |
	| addons  | addons-489802 addons disable                                                                | addons-489802        | jenkins | v1.34.0 | 20 Sep 24 16:55 UTC | 20 Sep 24 16:56 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-489802 addons disable                                                                | addons-489802        | jenkins | v1.34.0 | 20 Sep 24 16:55 UTC | 20 Sep 24 16:56 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-489802        | jenkins | v1.34.0 | 20 Sep 24 16:56 UTC | 20 Sep 24 16:56 UTC |
	|         | addons-489802                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-489802 ssh cat                                                                       | addons-489802        | jenkins | v1.34.0 | 20 Sep 24 16:56 UTC | 20 Sep 24 16:56 UTC |
	|         | /opt/local-path-provisioner/pvc-b8225ab7-cae8-4ab5-8ca1-5e74b7712f98_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-489802 addons disable                                                                | addons-489802        | jenkins | v1.34.0 | 20 Sep 24 16:56 UTC | 20 Sep 24 16:56 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-489802 ssh curl -s                                                                   | addons-489802        | jenkins | v1.34.0 | 20 Sep 24 16:56 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ip      | addons-489802 ip                                                                            | addons-489802        | jenkins | v1.34.0 | 20 Sep 24 16:56 UTC | 20 Sep 24 16:56 UTC |
	| addons  | addons-489802 addons disable                                                                | addons-489802        | jenkins | v1.34.0 | 20 Sep 24 16:56 UTC | 20 Sep 24 16:56 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-489802 addons                                                                        | addons-489802        | jenkins | v1.34.0 | 20 Sep 24 16:56 UTC | 20 Sep 24 16:57 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-489802 addons                                                                        | addons-489802        | jenkins | v1.34.0 | 20 Sep 24 16:57 UTC | 20 Sep 24 16:57 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-489802 ip                                                                            | addons-489802        | jenkins | v1.34.0 | 20 Sep 24 16:58 UTC | 20 Sep 24 16:58 UTC |
	| addons  | addons-489802 addons disable                                                                | addons-489802        | jenkins | v1.34.0 | 20 Sep 24 16:58 UTC | 20 Sep 24 16:58 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-489802 addons disable                                                                | addons-489802        | jenkins | v1.34.0 | 20 Sep 24 16:58 UTC | 20 Sep 24 16:59 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 16:44:18
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 16:44:18.178711   16686 out.go:345] Setting OutFile to fd 1 ...
	I0920 16:44:18.178820   16686 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 16:44:18.178830   16686 out.go:358] Setting ErrFile to fd 2...
	I0920 16:44:18.178837   16686 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 16:44:18.179018   16686 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-8777/.minikube/bin
	I0920 16:44:18.179615   16686 out.go:352] Setting JSON to false
	I0920 16:44:18.180405   16686 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1601,"bootTime":1726849057,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 16:44:18.180501   16686 start.go:139] virtualization: kvm guest
	I0920 16:44:18.182896   16686 out.go:177] * [addons-489802] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 16:44:18.184216   16686 notify.go:220] Checking for updates...
	I0920 16:44:18.184222   16686 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 16:44:18.185469   16686 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 16:44:18.186874   16686 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19672-8777/kubeconfig
	I0920 16:44:18.188324   16686 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-8777/.minikube
	I0920 16:44:18.190351   16686 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 16:44:18.191922   16686 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 16:44:18.193502   16686 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 16:44:18.225366   16686 out.go:177] * Using the kvm2 driver based on user configuration
	I0920 16:44:18.226431   16686 start.go:297] selected driver: kvm2
	I0920 16:44:18.226443   16686 start.go:901] validating driver "kvm2" against <nil>
	I0920 16:44:18.226453   16686 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 16:44:18.227135   16686 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 16:44:18.227230   16686 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19672-8777/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 16:44:18.242065   16686 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 16:44:18.242112   16686 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 16:44:18.242404   16686 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 16:44:18.242437   16686 cni.go:84] Creating CNI manager for ""
	I0920 16:44:18.242490   16686 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 16:44:18.242500   16686 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0920 16:44:18.242555   16686 start.go:340] cluster config:
	{Name:addons-489802 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-489802 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 16:44:18.242664   16686 iso.go:125] acquiring lock: {Name:mkba95ef0488e46f622333e9f317f43def93040b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 16:44:18.244379   16686 out.go:177] * Starting "addons-489802" primary control-plane node in "addons-489802" cluster
	I0920 16:44:18.245561   16686 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 16:44:18.245610   16686 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0920 16:44:18.245618   16686 cache.go:56] Caching tarball of preloaded images
	I0920 16:44:18.245687   16686 preload.go:172] Found /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 16:44:18.245698   16686 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0920 16:44:18.246011   16686 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/config.json ...
	I0920 16:44:18.246032   16686 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/config.json: {Name:mka75e2e382f021a76fc6885b0195d64c12ed744 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 16:44:18.246164   16686 start.go:360] acquireMachinesLock for addons-489802: {Name:mkfeedb385cf08b5d2aa00913e85815d02a180c2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 16:44:18.246208   16686 start.go:364] duration metric: took 31.448µs to acquireMachinesLock for "addons-489802"
	I0920 16:44:18.246223   16686 start.go:93] Provisioning new machine with config: &{Name:addons-489802 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-489802 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 16:44:18.246282   16686 start.go:125] createHost starting for "" (driver="kvm2")
	I0920 16:44:18.247940   16686 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0920 16:44:18.248080   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:44:18.248117   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:44:18.262329   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33039
	I0920 16:44:18.262809   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:44:18.263337   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:44:18.263357   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:44:18.263710   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:44:18.263878   16686 main.go:141] libmachine: (addons-489802) Calling .GetMachineName
	I0920 16:44:18.263996   16686 main.go:141] libmachine: (addons-489802) Calling .DriverName
	I0920 16:44:18.264148   16686 start.go:159] libmachine.API.Create for "addons-489802" (driver="kvm2")
	I0920 16:44:18.264173   16686 client.go:168] LocalClient.Create starting
	I0920 16:44:18.264205   16686 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem
	I0920 16:44:18.669459   16686 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem
	I0920 16:44:18.951878   16686 main.go:141] libmachine: Running pre-create checks...
	I0920 16:44:18.951905   16686 main.go:141] libmachine: (addons-489802) Calling .PreCreateCheck
	I0920 16:44:18.952422   16686 main.go:141] libmachine: (addons-489802) Calling .GetConfigRaw
	I0920 16:44:18.952871   16686 main.go:141] libmachine: Creating machine...
	I0920 16:44:18.952893   16686 main.go:141] libmachine: (addons-489802) Calling .Create
	I0920 16:44:18.953060   16686 main.go:141] libmachine: (addons-489802) Creating KVM machine...
	I0920 16:44:18.954192   16686 main.go:141] libmachine: (addons-489802) DBG | found existing default KVM network
	I0920 16:44:18.954932   16686 main.go:141] libmachine: (addons-489802) DBG | I0920 16:44:18.954771   16708 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002211f0}
	I0920 16:44:18.954987   16686 main.go:141] libmachine: (addons-489802) DBG | created network xml: 
	I0920 16:44:18.955015   16686 main.go:141] libmachine: (addons-489802) DBG | <network>
	I0920 16:44:18.955034   16686 main.go:141] libmachine: (addons-489802) DBG |   <name>mk-addons-489802</name>
	I0920 16:44:18.955053   16686 main.go:141] libmachine: (addons-489802) DBG |   <dns enable='no'/>
	I0920 16:44:18.955078   16686 main.go:141] libmachine: (addons-489802) DBG |   
	I0920 16:44:18.955099   16686 main.go:141] libmachine: (addons-489802) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0920 16:44:18.955108   16686 main.go:141] libmachine: (addons-489802) DBG |     <dhcp>
	I0920 16:44:18.955115   16686 main.go:141] libmachine: (addons-489802) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0920 16:44:18.955126   16686 main.go:141] libmachine: (addons-489802) DBG |     </dhcp>
	I0920 16:44:18.955132   16686 main.go:141] libmachine: (addons-489802) DBG |   </ip>
	I0920 16:44:18.955142   16686 main.go:141] libmachine: (addons-489802) DBG |   
	I0920 16:44:18.955152   16686 main.go:141] libmachine: (addons-489802) DBG | </network>
	I0920 16:44:18.955180   16686 main.go:141] libmachine: (addons-489802) DBG | 
	I0920 16:44:18.961544   16686 main.go:141] libmachine: (addons-489802) DBG | trying to create private KVM network mk-addons-489802 192.168.39.0/24...
	I0920 16:44:19.029008   16686 main.go:141] libmachine: (addons-489802) Setting up store path in /home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802 ...
	I0920 16:44:19.029031   16686 main.go:141] libmachine: (addons-489802) DBG | private KVM network mk-addons-489802 192.168.39.0/24 created
	I0920 16:44:19.029050   16686 main.go:141] libmachine: (addons-489802) Building disk image from file:///home/jenkins/minikube-integration/19672-8777/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso
	I0920 16:44:19.029076   16686 main.go:141] libmachine: (addons-489802) Downloading /home/jenkins/minikube-integration/19672-8777/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19672-8777/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso...
	I0920 16:44:19.029097   16686 main.go:141] libmachine: (addons-489802) DBG | I0920 16:44:19.028953   16708 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19672-8777/.minikube
	I0920 16:44:19.344578   16686 main.go:141] libmachine: (addons-489802) DBG | I0920 16:44:19.344398   16708 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802/id_rsa...
	I0920 16:44:19.462008   16686 main.go:141] libmachine: (addons-489802) DBG | I0920 16:44:19.461879   16708 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802/addons-489802.rawdisk...
	I0920 16:44:19.462055   16686 main.go:141] libmachine: (addons-489802) DBG | Writing magic tar header
	I0920 16:44:19.462065   16686 main.go:141] libmachine: (addons-489802) DBG | Writing SSH key tar header
	I0920 16:44:19.462072   16686 main.go:141] libmachine: (addons-489802) DBG | I0920 16:44:19.462027   16708 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802 ...
	I0920 16:44:19.462210   16686 main.go:141] libmachine: (addons-489802) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802
	I0920 16:44:19.462252   16686 main.go:141] libmachine: (addons-489802) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-8777/.minikube/machines
	I0920 16:44:19.462263   16686 main.go:141] libmachine: (addons-489802) Setting executable bit set on /home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802 (perms=drwx------)
	I0920 16:44:19.462287   16686 main.go:141] libmachine: (addons-489802) Setting executable bit set on /home/jenkins/minikube-integration/19672-8777/.minikube/machines (perms=drwxr-xr-x)
	I0920 16:44:19.462302   16686 main.go:141] libmachine: (addons-489802) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-8777/.minikube
	I0920 16:44:19.462312   16686 main.go:141] libmachine: (addons-489802) Setting executable bit set on /home/jenkins/minikube-integration/19672-8777/.minikube (perms=drwxr-xr-x)
	I0920 16:44:19.462324   16686 main.go:141] libmachine: (addons-489802) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-8777
	I0920 16:44:19.462340   16686 main.go:141] libmachine: (addons-489802) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0920 16:44:19.462350   16686 main.go:141] libmachine: (addons-489802) DBG | Checking permissions on dir: /home/jenkins
	I0920 16:44:19.462361   16686 main.go:141] libmachine: (addons-489802) DBG | Checking permissions on dir: /home
	I0920 16:44:19.462374   16686 main.go:141] libmachine: (addons-489802) Setting executable bit set on /home/jenkins/minikube-integration/19672-8777 (perms=drwxrwxr-x)
	I0920 16:44:19.462383   16686 main.go:141] libmachine: (addons-489802) DBG | Skipping /home - not owner
	I0920 16:44:19.462409   16686 main.go:141] libmachine: (addons-489802) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0920 16:44:19.462428   16686 main.go:141] libmachine: (addons-489802) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0920 16:44:19.462441   16686 main.go:141] libmachine: (addons-489802) Creating domain...
	I0920 16:44:19.463291   16686 main.go:141] libmachine: (addons-489802) define libvirt domain using xml: 
	I0920 16:44:19.463308   16686 main.go:141] libmachine: (addons-489802) <domain type='kvm'>
	I0920 16:44:19.463315   16686 main.go:141] libmachine: (addons-489802)   <name>addons-489802</name>
	I0920 16:44:19.463321   16686 main.go:141] libmachine: (addons-489802)   <memory unit='MiB'>4000</memory>
	I0920 16:44:19.463328   16686 main.go:141] libmachine: (addons-489802)   <vcpu>2</vcpu>
	I0920 16:44:19.463335   16686 main.go:141] libmachine: (addons-489802)   <features>
	I0920 16:44:19.463346   16686 main.go:141] libmachine: (addons-489802)     <acpi/>
	I0920 16:44:19.463360   16686 main.go:141] libmachine: (addons-489802)     <apic/>
	I0920 16:44:19.463368   16686 main.go:141] libmachine: (addons-489802)     <pae/>
	I0920 16:44:19.463375   16686 main.go:141] libmachine: (addons-489802)     
	I0920 16:44:19.463386   16686 main.go:141] libmachine: (addons-489802)   </features>
	I0920 16:44:19.463393   16686 main.go:141] libmachine: (addons-489802)   <cpu mode='host-passthrough'>
	I0920 16:44:19.463402   16686 main.go:141] libmachine: (addons-489802)   
	I0920 16:44:19.463408   16686 main.go:141] libmachine: (addons-489802)   </cpu>
	I0920 16:44:19.463415   16686 main.go:141] libmachine: (addons-489802)   <os>
	I0920 16:44:19.463424   16686 main.go:141] libmachine: (addons-489802)     <type>hvm</type>
	I0920 16:44:19.463435   16686 main.go:141] libmachine: (addons-489802)     <boot dev='cdrom'/>
	I0920 16:44:19.463445   16686 main.go:141] libmachine: (addons-489802)     <boot dev='hd'/>
	I0920 16:44:19.463472   16686 main.go:141] libmachine: (addons-489802)     <bootmenu enable='no'/>
	I0920 16:44:19.463497   16686 main.go:141] libmachine: (addons-489802)   </os>
	I0920 16:44:19.463520   16686 main.go:141] libmachine: (addons-489802)   <devices>
	I0920 16:44:19.463534   16686 main.go:141] libmachine: (addons-489802)     <disk type='file' device='cdrom'>
	I0920 16:44:19.463547   16686 main.go:141] libmachine: (addons-489802)       <source file='/home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802/boot2docker.iso'/>
	I0920 16:44:19.463558   16686 main.go:141] libmachine: (addons-489802)       <target dev='hdc' bus='scsi'/>
	I0920 16:44:19.463570   16686 main.go:141] libmachine: (addons-489802)       <readonly/>
	I0920 16:44:19.463577   16686 main.go:141] libmachine: (addons-489802)     </disk>
	I0920 16:44:19.463584   16686 main.go:141] libmachine: (addons-489802)     <disk type='file' device='disk'>
	I0920 16:44:19.463592   16686 main.go:141] libmachine: (addons-489802)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0920 16:44:19.463600   16686 main.go:141] libmachine: (addons-489802)       <source file='/home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802/addons-489802.rawdisk'/>
	I0920 16:44:19.463608   16686 main.go:141] libmachine: (addons-489802)       <target dev='hda' bus='virtio'/>
	I0920 16:44:19.463614   16686 main.go:141] libmachine: (addons-489802)     </disk>
	I0920 16:44:19.463623   16686 main.go:141] libmachine: (addons-489802)     <interface type='network'>
	I0920 16:44:19.463633   16686 main.go:141] libmachine: (addons-489802)       <source network='mk-addons-489802'/>
	I0920 16:44:19.463643   16686 main.go:141] libmachine: (addons-489802)       <model type='virtio'/>
	I0920 16:44:19.463651   16686 main.go:141] libmachine: (addons-489802)     </interface>
	I0920 16:44:19.463660   16686 main.go:141] libmachine: (addons-489802)     <interface type='network'>
	I0920 16:44:19.463672   16686 main.go:141] libmachine: (addons-489802)       <source network='default'/>
	I0920 16:44:19.463681   16686 main.go:141] libmachine: (addons-489802)       <model type='virtio'/>
	I0920 16:44:19.463703   16686 main.go:141] libmachine: (addons-489802)     </interface>
	I0920 16:44:19.463722   16686 main.go:141] libmachine: (addons-489802)     <serial type='pty'>
	I0920 16:44:19.463732   16686 main.go:141] libmachine: (addons-489802)       <target port='0'/>
	I0920 16:44:19.463738   16686 main.go:141] libmachine: (addons-489802)     </serial>
	I0920 16:44:19.463745   16686 main.go:141] libmachine: (addons-489802)     <console type='pty'>
	I0920 16:44:19.463755   16686 main.go:141] libmachine: (addons-489802)       <target type='serial' port='0'/>
	I0920 16:44:19.463762   16686 main.go:141] libmachine: (addons-489802)     </console>
	I0920 16:44:19.463767   16686 main.go:141] libmachine: (addons-489802)     <rng model='virtio'>
	I0920 16:44:19.463776   16686 main.go:141] libmachine: (addons-489802)       <backend model='random'>/dev/random</backend>
	I0920 16:44:19.463784   16686 main.go:141] libmachine: (addons-489802)     </rng>
	I0920 16:44:19.463793   16686 main.go:141] libmachine: (addons-489802)     
	I0920 16:44:19.463807   16686 main.go:141] libmachine: (addons-489802)     
	I0920 16:44:19.463822   16686 main.go:141] libmachine: (addons-489802)   </devices>
	I0920 16:44:19.463837   16686 main.go:141] libmachine: (addons-489802) </domain>
	I0920 16:44:19.463852   16686 main.go:141] libmachine: (addons-489802) 
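The lines above are the libvirt domain definition the KVM driver renders for the new machine (ISO attached as a CD-ROM, the raw disk on virtio, plus the private and default networks). The following is only a minimal sketch, not minikube's actual code: it rebuilds a comparable domain XML from a Go text/template; the domainSpec type and its field names are hypothetical.

// Sketch: render a libvirt domain definition similar to the one logged above.
// The resulting XML would then be passed to libvirt (e.g. via "virsh define").
package main

import (
	"os"
	"text/template"
)

type domainSpec struct {
	Name     string
	MemoryMB int
	VCPUs    int
	ISOPath  string // boot2docker.iso attached as a CD-ROM
	DiskPath string // raw disk image created earlier
	Network  string // private network, e.g. mk-addons-489802
}

const domainXML = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMB}}</memory>
  <vcpu>{{.VCPUs}}</vcpu>
  <os>
    <type>hvm</type>
    <boot dev='cdrom'/>
    <boot dev='hd'/>
  </os>
  <devices>
    <disk type='file' device='cdrom'>
      <source file='{{.ISOPath}}'/>
      <target dev='hdc' bus='scsi'/>
      <readonly/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='default' io='threads'/>
      <source file='{{.DiskPath}}'/>
      <target dev='hda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='{{.Network}}'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
`

func main() {
	spec := domainSpec{
		Name:     "addons-489802",
		MemoryMB: 4000,
		VCPUs:    2,
		ISOPath:  "/path/to/boot2docker.iso",
		DiskPath: "/path/to/addons-489802.rawdisk",
		Network:  "mk-addons-489802",
	}
	tmpl := template.Must(template.New("domain").Parse(domainXML))
	if err := tmpl.Execute(os.Stdout, spec); err != nil {
		panic(err)
	}
}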
	I0920 16:44:19.470320   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:86:10:bf in network default
	I0920 16:44:19.470900   16686 main.go:141] libmachine: (addons-489802) Ensuring networks are active...
	I0920 16:44:19.470920   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:19.471767   16686 main.go:141] libmachine: (addons-489802) Ensuring network default is active
	I0920 16:44:19.472031   16686 main.go:141] libmachine: (addons-489802) Ensuring network mk-addons-489802 is active
	I0920 16:44:19.472810   16686 main.go:141] libmachine: (addons-489802) Getting domain xml...
	I0920 16:44:19.473428   16686 main.go:141] libmachine: (addons-489802) Creating domain...
	I0920 16:44:20.958983   16686 main.go:141] libmachine: (addons-489802) Waiting to get IP...
	I0920 16:44:20.959942   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:20.960292   16686 main.go:141] libmachine: (addons-489802) DBG | unable to find current IP address of domain addons-489802 in network mk-addons-489802
	I0920 16:44:20.960332   16686 main.go:141] libmachine: (addons-489802) DBG | I0920 16:44:20.960280   16708 retry.go:31] will retry after 218.466528ms: waiting for machine to come up
	I0920 16:44:21.180891   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:21.181202   16686 main.go:141] libmachine: (addons-489802) DBG | unable to find current IP address of domain addons-489802 in network mk-addons-489802
	I0920 16:44:21.181228   16686 main.go:141] libmachine: (addons-489802) DBG | I0920 16:44:21.181159   16708 retry.go:31] will retry after 269.124789ms: waiting for machine to come up
	I0920 16:44:21.451562   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:21.451985   16686 main.go:141] libmachine: (addons-489802) DBG | unable to find current IP address of domain addons-489802 in network mk-addons-489802
	I0920 16:44:21.452021   16686 main.go:141] libmachine: (addons-489802) DBG | I0920 16:44:21.451946   16708 retry.go:31] will retry after 418.879425ms: waiting for machine to come up
	I0920 16:44:21.872595   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:21.873035   16686 main.go:141] libmachine: (addons-489802) DBG | unable to find current IP address of domain addons-489802 in network mk-addons-489802
	I0920 16:44:21.873056   16686 main.go:141] libmachine: (addons-489802) DBG | I0920 16:44:21.873002   16708 retry.go:31] will retry after 379.463169ms: waiting for machine to come up
	I0920 16:44:22.254754   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:22.255179   16686 main.go:141] libmachine: (addons-489802) DBG | unable to find current IP address of domain addons-489802 in network mk-addons-489802
	I0920 16:44:22.255208   16686 main.go:141] libmachine: (addons-489802) DBG | I0920 16:44:22.255151   16708 retry.go:31] will retry after 621.089592ms: waiting for machine to come up
	I0920 16:44:22.877890   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:22.878236   16686 main.go:141] libmachine: (addons-489802) DBG | unable to find current IP address of domain addons-489802 in network mk-addons-489802
	I0920 16:44:22.878254   16686 main.go:141] libmachine: (addons-489802) DBG | I0920 16:44:22.878215   16708 retry.go:31] will retry after 896.419124ms: waiting for machine to come up
	I0920 16:44:23.776119   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:23.776531   16686 main.go:141] libmachine: (addons-489802) DBG | unable to find current IP address of domain addons-489802 in network mk-addons-489802
	I0920 16:44:23.776580   16686 main.go:141] libmachine: (addons-489802) DBG | I0920 16:44:23.776503   16708 retry.go:31] will retry after 792.329452ms: waiting for machine to come up
	I0920 16:44:24.570579   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:24.571007   16686 main.go:141] libmachine: (addons-489802) DBG | unable to find current IP address of domain addons-489802 in network mk-addons-489802
	I0920 16:44:24.571032   16686 main.go:141] libmachine: (addons-489802) DBG | I0920 16:44:24.570964   16708 retry.go:31] will retry after 1.123730634s: waiting for machine to come up
	I0920 16:44:25.695981   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:25.696433   16686 main.go:141] libmachine: (addons-489802) DBG | unable to find current IP address of domain addons-489802 in network mk-addons-489802
	I0920 16:44:25.696455   16686 main.go:141] libmachine: (addons-489802) DBG | I0920 16:44:25.696382   16708 retry.go:31] will retry after 1.437323391s: waiting for machine to come up
	I0920 16:44:27.136109   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:27.136681   16686 main.go:141] libmachine: (addons-489802) DBG | unable to find current IP address of domain addons-489802 in network mk-addons-489802
	I0920 16:44:27.136706   16686 main.go:141] libmachine: (addons-489802) DBG | I0920 16:44:27.136631   16708 retry.go:31] will retry after 2.286987635s: waiting for machine to come up
	I0920 16:44:29.425015   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:29.425554   16686 main.go:141] libmachine: (addons-489802) DBG | unable to find current IP address of domain addons-489802 in network mk-addons-489802
	I0920 16:44:29.425597   16686 main.go:141] libmachine: (addons-489802) DBG | I0920 16:44:29.425518   16708 retry.go:31] will retry after 1.976852311s: waiting for machine to come up
	I0920 16:44:31.404712   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:31.405218   16686 main.go:141] libmachine: (addons-489802) DBG | unable to find current IP address of domain addons-489802 in network mk-addons-489802
	I0920 16:44:31.405240   16686 main.go:141] libmachine: (addons-489802) DBG | I0920 16:44:31.405170   16708 retry.go:31] will retry after 3.060545694s: waiting for machine to come up
	I0920 16:44:34.467106   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:34.467532   16686 main.go:141] libmachine: (addons-489802) DBG | unable to find current IP address of domain addons-489802 in network mk-addons-489802
	I0920 16:44:34.467559   16686 main.go:141] libmachine: (addons-489802) DBG | I0920 16:44:34.467474   16708 retry.go:31] will retry after 3.246517198s: waiting for machine to come up
	I0920 16:44:37.717806   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:37.718239   16686 main.go:141] libmachine: (addons-489802) DBG | unable to find current IP address of domain addons-489802 in network mk-addons-489802
	I0920 16:44:37.718274   16686 main.go:141] libmachine: (addons-489802) DBG | I0920 16:44:37.718168   16708 retry.go:31] will retry after 4.118490306s: waiting for machine to come up
	I0920 16:44:41.841226   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:41.841726   16686 main.go:141] libmachine: (addons-489802) Found IP for machine: 192.168.39.89
	I0920 16:44:41.841743   16686 main.go:141] libmachine: (addons-489802) Reserving static IP address...
	I0920 16:44:41.841755   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has current primary IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:41.842160   16686 main.go:141] libmachine: (addons-489802) DBG | unable to find host DHCP lease matching {name: "addons-489802", mac: "52:54:00:bf:85:db", ip: "192.168.39.89"} in network mk-addons-489802
	I0920 16:44:41.913230   16686 main.go:141] libmachine: (addons-489802) Reserved static IP address: 192.168.39.89
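The "will retry after ..." lines above come from polling the DHCP leases of the private network with a growing delay until the freshly booted domain reports an address. A minimal sketch of that retry pattern follows; lookupIP is a hypothetical stand-in for the lease query, and the backoff constants only roughly match the 218ms, 269ms, 418ms, ... sequence in the log.

// Sketch: retry with a growing, jittered delay until the machine has an IP.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP stands in for querying the libvirt DHCP leases for the
// domain's MAC address; it is an assumption, not minikube's API.
func lookupIP() (string, error) {
	return "", errors.New("unable to find current IP address")
}

func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		// Add jitter and grow the delay between attempts.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		delay *= 2
	}
	return "", errors.New("timed out waiting for IP")
}

func main() {
	if ip, err := waitForIP(2 * time.Second); err != nil {
		fmt.Println("error:", err)
	} else {
		fmt.Println("Found IP for machine:", ip)
	}
}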
	I0920 16:44:41.913257   16686 main.go:141] libmachine: (addons-489802) Waiting for SSH to be available...
	I0920 16:44:41.913265   16686 main.go:141] libmachine: (addons-489802) DBG | Getting to WaitForSSH function...
	I0920 16:44:41.915767   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:41.916236   16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:minikube Clientid:01:52:54:00:bf:85:db}
	I0920 16:44:41.916267   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:41.916422   16686 main.go:141] libmachine: (addons-489802) DBG | Using SSH client type: external
	I0920 16:44:41.916446   16686 main.go:141] libmachine: (addons-489802) DBG | Using SSH private key: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802/id_rsa (-rw-------)
	I0920 16:44:41.916467   16686 main.go:141] libmachine: (addons-489802) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.89 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 16:44:41.916475   16686 main.go:141] libmachine: (addons-489802) DBG | About to run SSH command:
	I0920 16:44:41.916485   16686 main.go:141] libmachine: (addons-489802) DBG | exit 0
	I0920 16:44:42.045938   16686 main.go:141] libmachine: (addons-489802) DBG | SSH cmd err, output: <nil>: 
	I0920 16:44:42.046220   16686 main.go:141] libmachine: (addons-489802) KVM machine creation complete!
	I0920 16:44:42.046564   16686 main.go:141] libmachine: (addons-489802) Calling .GetConfigRaw
	I0920 16:44:42.047127   16686 main.go:141] libmachine: (addons-489802) Calling .DriverName
	I0920 16:44:42.047334   16686 main.go:141] libmachine: (addons-489802) Calling .DriverName
	I0920 16:44:42.047475   16686 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0920 16:44:42.047490   16686 main.go:141] libmachine: (addons-489802) Calling .GetState
	I0920 16:44:42.049083   16686 main.go:141] libmachine: Detecting operating system of created instance...
	I0920 16:44:42.049109   16686 main.go:141] libmachine: Waiting for SSH to be available...
	I0920 16:44:42.049116   16686 main.go:141] libmachine: Getting to WaitForSSH function...
	I0920 16:44:42.049122   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
	I0920 16:44:42.051309   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:42.051675   16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
	I0920 16:44:42.051731   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:42.051767   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHPort
	I0920 16:44:42.051947   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
	I0920 16:44:42.052082   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
	I0920 16:44:42.052201   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHUsername
	I0920 16:44:42.052358   16686 main.go:141] libmachine: Using SSH client type: native
	I0920 16:44:42.052546   16686 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0920 16:44:42.052561   16686 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0920 16:44:42.153288   16686 main.go:141] libmachine: SSH cmd err, output: <nil>: 
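Both SSH probes above (the external ssh client and the native client) simply run "exit 0" against the new machine to confirm it is reachable. Below is a sketch of such a reachability check using golang.org/x/crypto/ssh; the address, user, and key path are copied from the log and would normally come from the machine config.

// Sketch: SSH "exit 0" reachability probe.
package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func sshAvailable(addr, user, keyPath string) error {
	keyBytes, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
		Timeout:         10 * time.Second,
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	return sess.Run("exit 0") // nil error means the command exited with status 0
}

func main() {
	err := sshAvailable("192.168.39.89:22", "docker",
		"/home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802/id_rsa")
	fmt.Println("SSH cmd err:", err)
}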
	I0920 16:44:42.153332   16686 main.go:141] libmachine: Detecting the provisioner...
	I0920 16:44:42.153344   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
	I0920 16:44:42.156232   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:42.156583   16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
	I0920 16:44:42.156612   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:42.156760   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHPort
	I0920 16:44:42.156968   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
	I0920 16:44:42.157119   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
	I0920 16:44:42.157234   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHUsername
	I0920 16:44:42.157410   16686 main.go:141] libmachine: Using SSH client type: native
	I0920 16:44:42.157610   16686 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0920 16:44:42.157626   16686 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0920 16:44:42.254380   16686 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0920 16:44:42.254438   16686 main.go:141] libmachine: found compatible host: buildroot
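Provisioner detection reads /etc/os-release over SSH (the "cat /etc/os-release" output above) and matches on the ID field. A small stdlib sketch of that parsing, using the exact output from the log as input:

// Sketch: parse /etc/os-release output and detect a Buildroot guest.
package main

import (
	"bufio"
	"fmt"
	"strings"
)

func parseOSRelease(out string) map[string]string {
	fields := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(out))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || !strings.Contains(line, "=") {
			continue
		}
		kv := strings.SplitN(line, "=", 2)
		fields[kv[0]] = strings.Trim(kv[1], `"`)
	}
	return fields
}

func main() {
	out := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
	osr := parseOSRelease(out)
	if osr["ID"] == "buildroot" {
		fmt.Println("found compatible host: buildroot")
	}
}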
	I0920 16:44:42.254444   16686 main.go:141] libmachine: Provisioning with buildroot...
	I0920 16:44:42.254451   16686 main.go:141] libmachine: (addons-489802) Calling .GetMachineName
	I0920 16:44:42.254703   16686 buildroot.go:166] provisioning hostname "addons-489802"
	I0920 16:44:42.254734   16686 main.go:141] libmachine: (addons-489802) Calling .GetMachineName
	I0920 16:44:42.254884   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
	I0920 16:44:42.257868   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:42.258311   16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
	I0920 16:44:42.258354   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:42.258809   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHPort
	I0920 16:44:42.259005   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
	I0920 16:44:42.259172   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
	I0920 16:44:42.259323   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHUsername
	I0920 16:44:42.259521   16686 main.go:141] libmachine: Using SSH client type: native
	I0920 16:44:42.259670   16686 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0920 16:44:42.259683   16686 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-489802 && echo "addons-489802" | sudo tee /etc/hostname
	I0920 16:44:42.370953   16686 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-489802
	
	I0920 16:44:42.370980   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
	I0920 16:44:42.373616   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:42.373970   16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
	I0920 16:44:42.374002   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:42.374153   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHPort
	I0920 16:44:42.374357   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
	I0920 16:44:42.374531   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
	I0920 16:44:42.374634   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHUsername
	I0920 16:44:42.374808   16686 main.go:141] libmachine: Using SSH client type: native
	I0920 16:44:42.374994   16686 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0920 16:44:42.375012   16686 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-489802' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-489802/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-489802' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 16:44:42.482921   16686 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 16:44:42.482949   16686 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19672-8777/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-8777/.minikube}
	I0920 16:44:42.482989   16686 buildroot.go:174] setting up certificates
	I0920 16:44:42.482998   16686 provision.go:84] configureAuth start
	I0920 16:44:42.483007   16686 main.go:141] libmachine: (addons-489802) Calling .GetMachineName
	I0920 16:44:42.483254   16686 main.go:141] libmachine: (addons-489802) Calling .GetIP
	I0920 16:44:42.486082   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:42.486435   16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
	I0920 16:44:42.486458   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:42.486591   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
	I0920 16:44:42.489005   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:42.489385   16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
	I0920 16:44:42.489412   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:42.489530   16686 provision.go:143] copyHostCerts
	I0920 16:44:42.489599   16686 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem (1082 bytes)
	I0920 16:44:42.489774   16686 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem (1123 bytes)
	I0920 16:44:42.489920   16686 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem (1675 bytes)
	I0920 16:44:42.490019   16686 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem org=jenkins.addons-489802 san=[127.0.0.1 192.168.39.89 addons-489802 localhost minikube]
	I0920 16:44:42.556359   16686 provision.go:177] copyRemoteCerts
	I0920 16:44:42.556423   16686 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 16:44:42.556446   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
	I0920 16:44:42.559402   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:42.559884   16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
	I0920 16:44:42.559911   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:42.560233   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHPort
	I0920 16:44:42.560402   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
	I0920 16:44:42.560524   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHUsername
	I0920 16:44:42.560649   16686 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802/id_rsa Username:docker}
	I0920 16:44:42.640095   16686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0920 16:44:42.664291   16686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0920 16:44:42.687271   16686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0920 16:44:42.709976   16686 provision.go:87] duration metric: took 226.963662ms to configureAuth
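configureAuth above generates a server certificate signed by the local CA with the SANs listed in the log (127.0.0.1, 192.168.39.89, addons-489802, localhost, minikube) and copies it to /etc/docker on the guest. The following is a condensed sketch of issuing such a certificate with crypto/x509, not minikube's implementation; for the sake of a runnable example it creates a throwaway CA instead of loading ca.pem/ca-key.pem.

// Sketch: issue a server certificate with the SANs shown in the log.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

// signServerCert assumes caCert/caKey were parsed from ca.pem / ca-key.pem.
func signServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-489802"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"addons-489802", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.89")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), key, nil
}

func main() {
	// Throwaway CA for the sketch only; minikube reuses its existing CA files.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	certPEM, _, err := signServerCert(caCert, caKey)
	if err != nil {
		panic(err)
	}
	os.Stdout.Write(certPEM)
}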
	I0920 16:44:42.710011   16686 buildroot.go:189] setting minikube options for container-runtime
	I0920 16:44:42.710210   16686 config.go:182] Loaded profile config "addons-489802": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 16:44:42.710288   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
	I0920 16:44:42.713157   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:42.713576   16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
	I0920 16:44:42.713605   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:42.713861   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHPort
	I0920 16:44:42.714050   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
	I0920 16:44:42.714198   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
	I0920 16:44:42.714335   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHUsername
	I0920 16:44:42.714575   16686 main.go:141] libmachine: Using SSH client type: native
	I0920 16:44:42.714732   16686 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0920 16:44:42.714746   16686 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 16:44:42.936196   16686 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 16:44:42.936230   16686 main.go:141] libmachine: Checking connection to Docker...
	I0920 16:44:42.936255   16686 main.go:141] libmachine: (addons-489802) Calling .GetURL
	I0920 16:44:42.937633   16686 main.go:141] libmachine: (addons-489802) DBG | Using libvirt version 6000000
	I0920 16:44:42.940023   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:42.940360   16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
	I0920 16:44:42.940383   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:42.940608   16686 main.go:141] libmachine: Docker is up and running!
	I0920 16:44:42.940623   16686 main.go:141] libmachine: Reticulating splines...
	I0920 16:44:42.940629   16686 client.go:171] duration metric: took 24.676449957s to LocalClient.Create
	I0920 16:44:42.940649   16686 start.go:167] duration metric: took 24.676502405s to libmachine.API.Create "addons-489802"
	I0920 16:44:42.940665   16686 start.go:293] postStartSetup for "addons-489802" (driver="kvm2")
	I0920 16:44:42.940675   16686 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 16:44:42.940691   16686 main.go:141] libmachine: (addons-489802) Calling .DriverName
	I0920 16:44:42.940982   16686 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 16:44:42.941005   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
	I0920 16:44:42.943365   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:42.943725   16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
	I0920 16:44:42.943749   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:42.943950   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHPort
	I0920 16:44:42.944124   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
	I0920 16:44:42.944283   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHUsername
	I0920 16:44:42.944440   16686 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802/id_rsa Username:docker}
	I0920 16:44:43.023999   16686 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 16:44:43.028231   16686 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 16:44:43.028271   16686 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8777/.minikube/addons for local assets ...
	I0920 16:44:43.028362   16686 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8777/.minikube/files for local assets ...
	I0920 16:44:43.028391   16686 start.go:296] duration metric: took 87.721087ms for postStartSetup
	I0920 16:44:43.028430   16686 main.go:141] libmachine: (addons-489802) Calling .GetConfigRaw
	I0920 16:44:43.029004   16686 main.go:141] libmachine: (addons-489802) Calling .GetIP
	I0920 16:44:43.032101   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:43.032392   16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
	I0920 16:44:43.032420   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:43.032651   16686 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/config.json ...
	I0920 16:44:43.032872   16686 start.go:128] duration metric: took 24.786580765s to createHost
	I0920 16:44:43.032897   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
	I0920 16:44:43.035034   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:43.035343   16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
	I0920 16:44:43.035377   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:43.035500   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHPort
	I0920 16:44:43.035665   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
	I0920 16:44:43.035848   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
	I0920 16:44:43.035974   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHUsername
	I0920 16:44:43.036134   16686 main.go:141] libmachine: Using SSH client type: native
	I0920 16:44:43.036283   16686 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0920 16:44:43.036293   16686 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 16:44:43.134258   16686 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726850683.106297733
	
	I0920 16:44:43.134281   16686 fix.go:216] guest clock: 1726850683.106297733
	I0920 16:44:43.134318   16686 fix.go:229] Guest: 2024-09-20 16:44:43.106297733 +0000 UTC Remote: 2024-09-20 16:44:43.032884764 +0000 UTC m=+24.887429631 (delta=73.412969ms)
	I0920 16:44:43.134347   16686 fix.go:200] guest clock delta is within tolerance: 73.412969ms
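The clock check above reads the guest time with "date +%s.%N" over SSH and compares it to the host clock. A tiny sketch of parsing that output and checking the delta; the 2-second tolerance is an assumption for illustration, not minikube's setting.

// Sketch: parse "date +%s.%N" output and compare the guest clock to the host.
package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

func guestTime(out string) (time.Time, error) {
	// Expects the seconds.nanoseconds form produced by "date +%s.%N".
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := guestTime("1726850683.106297733")
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	tolerance := 2 * time.Second // hypothetical tolerance
	if math.Abs(delta.Seconds()) <= tolerance.Seconds() {
		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, clocks would need syncing\n", delta)
	}
}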
	I0920 16:44:43.134354   16686 start.go:83] releasing machines lock for "addons-489802", held for 24.88813735s
	I0920 16:44:43.134375   16686 main.go:141] libmachine: (addons-489802) Calling .DriverName
	I0920 16:44:43.134602   16686 main.go:141] libmachine: (addons-489802) Calling .GetIP
	I0920 16:44:43.137503   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:43.137857   16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
	I0920 16:44:43.137885   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:43.138022   16686 main.go:141] libmachine: (addons-489802) Calling .DriverName
	I0920 16:44:43.138471   16686 main.go:141] libmachine: (addons-489802) Calling .DriverName
	I0920 16:44:43.138655   16686 main.go:141] libmachine: (addons-489802) Calling .DriverName
	I0920 16:44:43.138740   16686 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 16:44:43.138784   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
	I0920 16:44:43.138890   16686 ssh_runner.go:195] Run: cat /version.json
	I0920 16:44:43.138911   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
	I0920 16:44:43.141496   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:43.141700   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:43.141814   16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
	I0920 16:44:43.141848   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:43.141984   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHPort
	I0920 16:44:43.142122   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
	I0920 16:44:43.142207   16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
	I0920 16:44:43.142233   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:43.142240   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHUsername
	I0920 16:44:43.142382   16686 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802/id_rsa Username:docker}
	I0920 16:44:43.142400   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHPort
	I0920 16:44:43.142527   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
	I0920 16:44:43.142639   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHUsername
	I0920 16:44:43.142738   16686 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802/id_rsa Username:docker}
	I0920 16:44:43.214377   16686 ssh_runner.go:195] Run: systemctl --version
	I0920 16:44:43.255061   16686 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 16:44:43.407471   16686 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 16:44:43.413920   16686 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 16:44:43.413984   16686 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 16:44:43.430049   16686 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 16:44:43.430083   16686 start.go:495] detecting cgroup driver to use...
	I0920 16:44:43.430165   16686 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 16:44:43.445755   16686 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 16:44:43.460072   16686 docker.go:217] disabling cri-docker service (if available) ...
	I0920 16:44:43.460130   16686 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 16:44:43.473445   16686 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 16:44:43.486406   16686 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 16:44:43.599287   16686 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 16:44:43.771188   16686 docker.go:233] disabling docker service ...
	I0920 16:44:43.771285   16686 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 16:44:43.786254   16686 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 16:44:43.799345   16686 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 16:44:43.929040   16686 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 16:44:44.054620   16686 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 16:44:44.068879   16686 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 16:44:44.087412   16686 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 16:44:44.087482   16686 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 16:44:44.098030   16686 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 16:44:44.098093   16686 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 16:44:44.108462   16686 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 16:44:44.119209   16686 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 16:44:44.130359   16686 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 16:44:44.141802   16686 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 16:44:44.152585   16686 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 16:44:44.169299   16686 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 16:44:44.179293   16686 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 16:44:44.188257   16686 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 16:44:44.188326   16686 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 16:44:44.200400   16686 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
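The netfilter probe above fails with status 255 because /proc/sys/net/bridge/bridge-nf-call-iptables does not exist until br_netfilter is loaded, which the log notes "might be okay"; the driver then falls back to modprobe and enables IPv4 forwarding. A sketch of that check-and-fallback with os/exec (run locally here; minikube runs these commands over SSH on the guest):

// Sketch: probe the bridge netfilter sysctl, load br_netfilter on failure,
// then enable IP forwarding.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	if out, err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").CombinedOutput(); err != nil {
		fmt.Printf("couldn't verify netfilter (might be okay): %v\n%s", err, out)
		// Fall back to loading the kernel module, as in the log.
		if out, err := exec.Command("sudo", "modprobe", "br_netfilter").CombinedOutput(); err != nil {
			fmt.Printf("modprobe br_netfilter failed: %v\n%s", err, out)
		}
	}
	// Equivalent of: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	if out, err := exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").CombinedOutput(); err != nil {
		fmt.Printf("enabling ip_forward failed: %v\n%s", err, out)
	}
}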
	I0920 16:44:44.210617   16686 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 16:44:44.322851   16686 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 16:44:44.414303   16686 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 16:44:44.414398   16686 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 16:44:44.418774   16686 start.go:563] Will wait 60s for crictl version
	I0920 16:44:44.418851   16686 ssh_runner.go:195] Run: which crictl
	I0920 16:44:44.422352   16686 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 16:44:44.464229   16686 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 16:44:44.464345   16686 ssh_runner.go:195] Run: crio --version
	I0920 16:44:44.492112   16686 ssh_runner.go:195] Run: crio --version
	I0920 16:44:44.519927   16686 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 16:44:44.520939   16686 main.go:141] libmachine: (addons-489802) Calling .GetIP
	I0920 16:44:44.523216   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:44.523500   16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
	I0920 16:44:44.523521   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:44.523769   16686 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0920 16:44:44.527526   16686 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 16:44:44.539346   16686 kubeadm.go:883] updating cluster {Name:addons-489802 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-489802 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.89 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 16:44:44.539450   16686 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 16:44:44.539491   16686 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 16:44:44.570607   16686 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0920 16:44:44.570672   16686 ssh_runner.go:195] Run: which lz4
	I0920 16:44:44.574305   16686 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 16:44:44.578003   16686 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 16:44:44.578036   16686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0920 16:44:45.832824   16686 crio.go:462] duration metric: took 1.258544501s to copy over tarball
	I0920 16:44:45.832907   16686 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0920 16:44:49.851668   16686 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (4.018714604s)
	I0920 16:44:49.851726   16686 crio.go:469] duration metric: took 4.01886728s to extract the tarball
	I0920 16:44:49.851737   16686 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0920 16:44:49.896630   16686 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 16:44:49.944783   16686 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 16:44:49.944818   16686 cache_images.go:84] Images are preloaded, skipping loading
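The preload check above runs "sudo crictl images --output json" before and after extracting the tarball and looks for the expected image tags (e.g. registry.k8s.io/kube-apiserver:v1.31.1). A sketch of parsing that output follows; the struct tags assume crictl's JSON layout ({"images":[{"repoTags":[...]}]}) rather than quoting minikube's code.

// Sketch: decide whether the expected image is already present in CRI-O.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func hasImage(want string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		return false, err
	}
	for _, img := range imgs.Images {
		for _, tag := range img.RepoTags {
			if tag == want {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.31.1")
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	if ok {
		fmt.Println("all images are preloaded for cri-o runtime.")
	} else {
		fmt.Println("couldn't find preloaded image; assuming images are not preloaded.")
	}
}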
	I0920 16:44:49.944827   16686 kubeadm.go:934] updating node { 192.168.39.89 8443 v1.31.1 crio true true} ...
	I0920 16:44:49.944968   16686 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-489802 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.89
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-489802 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 16:44:49.945079   16686 ssh_runner.go:195] Run: crio config
	I0920 16:44:50.001938   16686 cni.go:84] Creating CNI manager for ""
	I0920 16:44:50.001967   16686 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 16:44:50.001981   16686 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 16:44:50.002006   16686 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.89 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-489802 NodeName:addons-489802 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.89"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.89 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 16:44:50.002170   16686 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.89
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-489802"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.89
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.89"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
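The generated kubeadm.yaml above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A minimal Go sketch for sanity-checking such a file outside of minikube; the file path and the expected pod CIDR are taken from this log, and the check itself is illustrative rather than part of minikube:

package main

import (
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	// Path as written by minikube later in this run (kubeadm.yaml.new).
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err != nil {
			if err == io.EOF {
				break
			}
			log.Fatal(err)
		}
		fmt.Println("found document kind:", doc["kind"])

		// For the ClusterConfiguration, confirm the pod CIDR matches the
		// value minikube selected in this run (10.244.0.0/16).
		if doc["kind"] == "ClusterConfiguration" {
			if net, ok := doc["networking"].(map[string]interface{}); ok {
				if net["podSubnet"] != "10.244.0.0/16" {
					log.Fatalf("unexpected podSubnet: %v", net["podSubnet"])
				}
			}
		}
	}
}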
	
	I0920 16:44:50.002231   16686 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 16:44:50.013339   16686 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 16:44:50.013411   16686 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 16:44:50.024767   16686 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0920 16:44:50.045363   16686 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 16:44:50.062898   16686 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0920 16:44:50.080572   16686 ssh_runner.go:195] Run: grep 192.168.39.89	control-plane.minikube.internal$ /etc/hosts
	I0920 16:44:50.085773   16686 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.89	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
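The /etc/hosts rewrite above is the usual filter-and-append pattern: drop any stale control-plane.minikube.internal line, then append the current mapping. The same logic sketched in Go for clarity; the path and the 192.168.39.89 address come from this log, and this is an illustration, not minikube's implementation:

package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	const host = "control-plane.minikube.internal"
	const entry = "192.168.39.89\t" + host

	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		log.Fatal(err)
	}

	// Keep every line that does not already end with the control-plane
	// hostname (the grep -v step), then append the fresh entry.
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)

	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		log.Fatal(err)
	}
}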
	I0920 16:44:50.098757   16686 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 16:44:50.240556   16686 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 16:44:50.258141   16686 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802 for IP: 192.168.39.89
	I0920 16:44:50.258209   16686 certs.go:194] generating shared ca certs ...
	I0920 16:44:50.258255   16686 certs.go:226] acquiring lock for ca certs: {Name:mkc7ef6c737c6bdc3fdd9dcff8f57029c020d8f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 16:44:50.258438   16686 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key
	I0920 16:44:50.381564   16686 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt ...
	I0920 16:44:50.381596   16686 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt: {Name:mkba49b4d048d5af44df48f4edd690a694a33473 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 16:44:50.381797   16686 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key ...
	I0920 16:44:50.381808   16686 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key: {Name:mk653576ff784ce50de2dfa9e3a0facde1d60271 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 16:44:50.381907   16686 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key
	I0920 16:44:50.546530   16686 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.crt ...
	I0920 16:44:50.546555   16686 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.crt: {Name:mk67c6a6b77428ba0cdac9b9e34d49fcf308bb8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 16:44:50.546726   16686 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key ...
	I0920 16:44:50.546738   16686 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key: {Name:mkd7ae4f2d01ceba146c4dc9b43c4a1a5ab41e93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 16:44:50.546824   16686 certs.go:256] generating profile certs ...
	I0920 16:44:50.546886   16686 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/client.key
	I0920 16:44:50.546900   16686 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/client.crt with IP's: []
	I0920 16:44:50.626758   16686 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/client.crt ...
	I0920 16:44:50.626785   16686 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/client.crt: {Name:mkc5f095f711647000f5605c19ca0db353359e2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 16:44:50.626972   16686 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/client.key ...
	I0920 16:44:50.626986   16686 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/client.key: {Name:mk3f0c684e304c5dc541f54b7034757bf95d7fbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 16:44:50.627082   16686 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/apiserver.key.1bac25cc
	I0920 16:44:50.627100   16686 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/apiserver.crt.1bac25cc with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.89]
	I0920 16:44:50.846521   16686 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/apiserver.crt.1bac25cc ...
	I0920 16:44:50.846553   16686 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/apiserver.crt.1bac25cc: {Name:mkb99a44e1af5a4a578b6ff7445cbfc9f6d1c4b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 16:44:50.846716   16686 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/apiserver.key.1bac25cc ...
	I0920 16:44:50.846729   16686 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/apiserver.key.1bac25cc: {Name:mk1ce5fd024a94836fd45952b6c3038de9bbeaae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 16:44:50.846799   16686 certs.go:381] copying /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/apiserver.crt.1bac25cc -> /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/apiserver.crt
	I0920 16:44:50.846874   16686 certs.go:385] copying /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/apiserver.key.1bac25cc -> /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/apiserver.key
	I0920 16:44:50.846919   16686 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/proxy-client.key
	I0920 16:44:50.846934   16686 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/proxy-client.crt with IP's: []
	I0920 16:44:51.074511   16686 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/proxy-client.crt ...
	I0920 16:44:51.074548   16686 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/proxy-client.crt: {Name:mk593c697632b0437e75154f622f66ff162758f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 16:44:51.074697   16686 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/proxy-client.key ...
	I0920 16:44:51.074708   16686 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/proxy-client.key: {Name:mkd7afdfda0e263fcdc4ad0882491ad3726f4657 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 16:44:51.074875   16686 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 16:44:51.074907   16686 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem (1082 bytes)
	I0920 16:44:51.074929   16686 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem (1123 bytes)
	I0920 16:44:51.074950   16686 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem (1675 bytes)
	I0920 16:44:51.075572   16686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 16:44:51.104195   16686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 16:44:51.128646   16686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 16:44:51.153291   16686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0920 16:44:51.177482   16686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0920 16:44:51.202143   16686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 16:44:51.226168   16686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 16:44:51.251069   16686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 16:44:51.274951   16686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 16:44:51.298272   16686 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
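The apiserver certificate copied above was generated a few lines earlier as a plain x509 leaf signed by minikubeCA, with IP SANs 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.39.89. A rough Go sketch of issuing such a cert from a CA key pair; the file names, the PKCS#1 RSA key format and the validity period are assumptions for the example, not minikube's actual crypto code:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Load the CA created earlier in this run; the key is assumed to be
	// PKCS#1-encoded RSA for this sketch.
	caPEM, err := os.ReadFile("ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	caKeyPEM, err := os.ReadFile("ca.key")
	if err != nil {
		log.Fatal(err)
	}
	caBlock, _ := pem.Decode(caPEM)
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	keyBlock, _ := pem.Decode(caKeyPEM)
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
	if err != nil {
		log.Fatal(err)
	}

	// Leaf key and template carrying the IP SANs listed in the log.
	leafKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.39.89"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &leafKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}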
	I0920 16:44:51.314508   16686 ssh_runner.go:195] Run: openssl version
	I0920 16:44:51.320418   16686 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 16:44:51.331616   16686 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 16:44:51.336211   16686 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0920 16:44:51.336270   16686 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 16:44:51.341681   16686 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
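The two commands above wire the minikube CA into the system trust store the way OpenSSL expects: compute the subject hash of the PEM, then symlink /etc/ssl/certs/<hash>.0 to it. An equivalent Go sketch of the same two steps, shelling out to the same openssl binary; this is an illustration, not the code minikube runs:

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Step 1: openssl x509 -hash -noout -in minikubeCA.pem
	out, err := exec.Command("openssl", "x509", "-hash", "-noout",
		"-in", "/usr/share/ca-certificates/minikubeCA.pem").Output()
	if err != nil {
		log.Fatal(err)
	}
	hash := strings.TrimSpace(string(out)) // b5213941 in this run

	// Step 2: ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/<hash>.0
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	_ = os.Remove(link) // -f semantics: replace an existing link if present
	if err := os.Symlink("/etc/ssl/certs/minikubeCA.pem", link); err != nil {
		log.Fatal(err)
	}
}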
	I0920 16:44:51.351994   16686 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 16:44:51.356403   16686 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0920 16:44:51.356470   16686 kubeadm.go:392] StartCluster: {Name:addons-489802 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-489802 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.89 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 16:44:51.356584   16686 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 16:44:51.356645   16686 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 16:44:51.396773   16686 cri.go:89] found id: ""
	I0920 16:44:51.396839   16686 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 16:44:51.407827   16686 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 16:44:51.417398   16686 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 16:44:51.426423   16686 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 16:44:51.426443   16686 kubeadm.go:157] found existing configuration files:
	
	I0920 16:44:51.426481   16686 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 16:44:51.435274   16686 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 16:44:51.435338   16686 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 16:44:51.444427   16686 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 16:44:51.453046   16686 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 16:44:51.453111   16686 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 16:44:51.462277   16686 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 16:44:51.470882   16686 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 16:44:51.470938   16686 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 16:44:51.480053   16686 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 16:44:51.488382   16686 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 16:44:51.488450   16686 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 16:44:51.497406   16686 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 16:44:51.541221   16686 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0920 16:44:51.541351   16686 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 16:44:51.633000   16686 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 16:44:51.633106   16686 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 16:44:51.633217   16686 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0920 16:44:51.641465   16686 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 16:44:51.643561   16686 out.go:235]   - Generating certificates and keys ...
	I0920 16:44:51.643637   16686 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 16:44:51.643707   16686 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 16:44:51.974976   16686 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0920 16:44:52.212429   16686 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0920 16:44:52.725412   16686 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0920 16:44:52.824449   16686 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0920 16:44:52.884139   16686 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0920 16:44:52.884436   16686 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-489802 localhost] and IPs [192.168.39.89 127.0.0.1 ::1]
	I0920 16:44:53.064017   16686 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0920 16:44:53.064225   16686 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-489802 localhost] and IPs [192.168.39.89 127.0.0.1 ::1]
	I0920 16:44:53.110684   16686 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0920 16:44:53.439405   16686 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0920 16:44:53.523372   16686 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0920 16:44:53.523450   16686 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 16:44:53.894835   16686 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 16:44:54.063405   16686 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0920 16:44:54.134012   16686 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 16:44:54.252802   16686 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 16:44:54.496063   16686 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 16:44:54.498352   16686 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 16:44:54.501105   16686 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 16:44:54.502882   16686 out.go:235]   - Booting up control plane ...
	I0920 16:44:54.503004   16686 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 16:44:54.503113   16686 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 16:44:54.503192   16686 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 16:44:54.517820   16686 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 16:44:54.525307   16686 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 16:44:54.525359   16686 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 16:44:54.642832   16686 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0920 16:44:54.642977   16686 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0920 16:44:55.143793   16686 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.346631ms
	I0920 16:44:55.143884   16686 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0920 16:45:00.142510   16686 kubeadm.go:310] [api-check] The API server is healthy after 5.001658723s
	I0920 16:45:00.161952   16686 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0920 16:45:00.199831   16686 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0920 16:45:00.237142   16686 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0920 16:45:00.237431   16686 kubeadm.go:310] [mark-control-plane] Marking the node addons-489802 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0920 16:45:00.267465   16686 kubeadm.go:310] [bootstrap-token] Using token: pxuown.8491ndv1zucibr8t
	I0920 16:45:00.269321   16686 out.go:235]   - Configuring RBAC rules ...
	I0920 16:45:00.269445   16686 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0920 16:45:00.277244   16686 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0920 16:45:00.297062   16686 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0920 16:45:00.303392   16686 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0920 16:45:00.310726   16686 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0920 16:45:00.317990   16686 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0920 16:45:00.550067   16686 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0920 16:45:00.983547   16686 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0920 16:45:01.549916   16686 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0920 16:45:01.549943   16686 kubeadm.go:310] 
	I0920 16:45:01.550082   16686 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0920 16:45:01.550165   16686 kubeadm.go:310] 
	I0920 16:45:01.550391   16686 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0920 16:45:01.550403   16686 kubeadm.go:310] 
	I0920 16:45:01.550435   16686 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0920 16:45:01.550520   16686 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0920 16:45:01.550590   16686 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0920 16:45:01.550601   16686 kubeadm.go:310] 
	I0920 16:45:01.550668   16686 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0920 16:45:01.550680   16686 kubeadm.go:310] 
	I0920 16:45:01.550751   16686 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0920 16:45:01.550761   16686 kubeadm.go:310] 
	I0920 16:45:01.550847   16686 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0920 16:45:01.550942   16686 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0920 16:45:01.551031   16686 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0920 16:45:01.551040   16686 kubeadm.go:310] 
	I0920 16:45:01.551130   16686 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0920 16:45:01.551241   16686 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0920 16:45:01.551252   16686 kubeadm.go:310] 
	I0920 16:45:01.551332   16686 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token pxuown.8491ndv1zucibr8t \
	I0920 16:45:01.551422   16686 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:736f3fb521855813decb4a7214f2b8beff5f81c3971b60decfa20a8807626a2d \
	I0920 16:45:01.551443   16686 kubeadm.go:310] 	--control-plane 
	I0920 16:45:01.551456   16686 kubeadm.go:310] 
	I0920 16:45:01.551575   16686 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0920 16:45:01.551586   16686 kubeadm.go:310] 
	I0920 16:45:01.551676   16686 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token pxuown.8491ndv1zucibr8t \
	I0920 16:45:01.551784   16686 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:736f3fb521855813decb4a7214f2b8beff5f81c3971b60decfa20a8807626a2d 
	I0920 16:45:01.552616   16686 kubeadm.go:310] W0920 16:44:51.520638     818 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 16:45:01.553045   16686 kubeadm.go:310] W0920 16:44:51.522103     818 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 16:45:01.553171   16686 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
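The --discovery-token-ca-cert-hash value printed by kubeadm above is the SHA-256 of the DER-encoded Subject Public Key Info of the cluster CA certificate. A short Go sketch that recomputes it from the ca.crt used in this run (path from the log; illustrative only):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatal("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// kubeadm hashes the SubjectPublicKeyInfo, not the whole certificate.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}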
	I0920 16:45:01.553193   16686 cni.go:84] Creating CNI manager for ""
	I0920 16:45:01.553204   16686 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 16:45:01.554912   16686 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 16:45:01.556375   16686 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 16:45:01.567185   16686 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0920 16:45:01.590373   16686 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 16:45:01.590503   16686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 16:45:01.590518   16686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-489802 minikube.k8s.io/updated_at=2024_09_20T16_45_01_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=0626f22cf0d915d75e291a5bce701f94395056e1 minikube.k8s.io/name=addons-489802 minikube.k8s.io/primary=true
	I0920 16:45:01.611693   16686 ops.go:34] apiserver oom_adj: -16
	I0920 16:45:01.740445   16686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 16:45:02.241564   16686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 16:45:02.740509   16686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 16:45:03.241160   16686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 16:45:03.740876   16686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 16:45:04.241125   16686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 16:45:04.740796   16686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 16:45:05.241433   16686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 16:45:05.740524   16686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 16:45:05.862361   16686 kubeadm.go:1113] duration metric: took 4.271922428s to wait for elevateKubeSystemPrivileges
	I0920 16:45:05.862397   16686 kubeadm.go:394] duration metric: took 14.505940675s to StartCluster
	I0920 16:45:05.862414   16686 settings.go:142] acquiring lock: {Name:mk2a13c58cdc0faf8cddca5d6716175d45db9bfd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 16:45:05.862558   16686 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19672-8777/kubeconfig
	I0920 16:45:05.862903   16686 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/kubeconfig: {Name:mkf32a4c736808e023459b2f0e40188618a38db1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 16:45:05.863101   16686 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.89 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 16:45:05.863138   16686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0920 16:45:05.863158   16686 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0920 16:45:05.863290   16686 addons.go:69] Setting yakd=true in profile "addons-489802"
	I0920 16:45:05.863282   16686 addons.go:69] Setting default-storageclass=true in profile "addons-489802"
	I0920 16:45:05.863308   16686 addons.go:234] Setting addon yakd=true in "addons-489802"
	I0920 16:45:05.863317   16686 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-489802"
	I0920 16:45:05.863312   16686 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-489802"
	I0920 16:45:05.863314   16686 addons.go:69] Setting cloud-spanner=true in profile "addons-489802"
	I0920 16:45:05.863340   16686 host.go:66] Checking if "addons-489802" exists ...
	I0920 16:45:05.863341   16686 addons.go:234] Setting addon cloud-spanner=true in "addons-489802"
	I0920 16:45:05.863342   16686 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-489802"
	I0920 16:45:05.863361   16686 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-489802"
	I0920 16:45:05.863363   16686 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-489802"
	I0920 16:45:05.863375   16686 host.go:66] Checking if "addons-489802" exists ...
	I0920 16:45:05.863390   16686 host.go:66] Checking if "addons-489802" exists ...
	I0920 16:45:05.863391   16686 host.go:66] Checking if "addons-489802" exists ...
	I0920 16:45:05.863391   16686 config.go:182] Loaded profile config "addons-489802": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 16:45:05.863448   16686 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-489802"
	I0920 16:45:05.863461   16686 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-489802"
	I0920 16:45:05.863793   16686 addons.go:69] Setting gcp-auth=true in profile "addons-489802"
	I0920 16:45:05.863800   16686 addons.go:69] Setting ingress-dns=true in profile "addons-489802"
	I0920 16:45:05.863804   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:45:05.863808   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:45:05.863821   16686 addons.go:69] Setting ingress=true in profile "addons-489802"
	I0920 16:45:05.863824   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:45:05.863831   16686 addons.go:69] Setting metrics-server=true in profile "addons-489802"
	I0920 16:45:05.863821   16686 addons.go:69] Setting inspektor-gadget=true in profile "addons-489802"
	I0920 16:45:05.863839   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:45:05.863843   16686 addons.go:234] Setting addon metrics-server=true in "addons-489802"
	I0920 16:45:05.863845   16686 addons.go:69] Setting volcano=true in profile "addons-489802"
	I0920 16:45:05.863812   16686 mustload.go:65] Loading cluster: addons-489802
	I0920 16:45:05.863852   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:45:05.863856   16686 addons.go:234] Setting addon volcano=true in "addons-489802"
	I0920 16:45:05.863865   16686 host.go:66] Checking if "addons-489802" exists ...
	I0920 16:45:05.863881   16686 host.go:66] Checking if "addons-489802" exists ...
	I0920 16:45:05.863918   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:45:05.863925   16686 addons.go:69] Setting registry=true in profile "addons-489802"
	I0920 16:45:05.863943   16686 addons.go:234] Setting addon registry=true in "addons-489802"
	I0920 16:45:05.863943   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:45:05.863955   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:45:05.863978   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:45:05.864003   16686 config.go:182] Loaded profile config "addons-489802": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 16:45:05.864008   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:45:05.864067   16686 addons.go:69] Setting storage-provisioner=true in profile "addons-489802"
	I0920 16:45:05.864077   16686 addons.go:234] Setting addon storage-provisioner=true in "addons-489802"
	I0920 16:45:05.864162   16686 addons.go:69] Setting volumesnapshots=true in profile "addons-489802"
	I0920 16:45:05.864180   16686 addons.go:234] Setting addon volumesnapshots=true in "addons-489802"
	I0920 16:45:05.864214   16686 host.go:66] Checking if "addons-489802" exists ...
	I0920 16:45:05.864241   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:45:05.864270   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:45:05.863833   16686 addons.go:234] Setting addon ingress=true in "addons-489802"
	I0920 16:45:05.864312   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:45:05.864337   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:45:05.863812   16686 addons.go:234] Setting addon ingress-dns=true in "addons-489802"
	I0920 16:45:05.864407   16686 host.go:66] Checking if "addons-489802" exists ...
	I0920 16:45:05.863847   16686 addons.go:234] Setting addon inspektor-gadget=true in "addons-489802"
	I0920 16:45:05.863810   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:45:05.864596   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:45:05.864641   16686 host.go:66] Checking if "addons-489802" exists ...
	I0920 16:45:05.864662   16686 host.go:66] Checking if "addons-489802" exists ...
	I0920 16:45:05.864741   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:45:05.864770   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:45:05.864799   16686 host.go:66] Checking if "addons-489802" exists ...
	I0920 16:45:05.864991   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:45:05.864993   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:45:05.865016   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:45:05.865021   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:45:05.865128   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:45:05.865158   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:45:05.865250   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:45:05.865287   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:45:05.865605   16686 host.go:66] Checking if "addons-489802" exists ...
	I0920 16:45:05.873149   16686 out.go:177] * Verifying Kubernetes components...
	I0920 16:45:05.875354   16686 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 16:45:05.886351   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:45:05.886408   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:45:05.886439   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:45:05.886493   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:45:05.886542   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36167
	I0920 16:45:05.886778   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45461
	I0920 16:45:05.886908   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40277
	I0920 16:45:05.887721   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:45:05.887867   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:45:05.887935   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:45:05.888511   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:45:05.888539   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:45:05.888665   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:45:05.888682   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:45:05.889051   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:45:05.889074   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:45:05.889168   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39407
	I0920 16:45:05.889340   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:45:05.889387   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:45:05.889430   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:45:05.889990   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:45:05.890030   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:45:05.890136   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:45:05.890165   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:45:05.894535   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:45:05.895113   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:45:05.895154   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:45:05.904311   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:45:05.904341   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:45:05.905034   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:45:05.905227   16686 main.go:141] libmachine: (addons-489802) Calling .GetState
	I0920 16:45:05.910612   16686 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-489802"
	I0920 16:45:05.910663   16686 host.go:66] Checking if "addons-489802" exists ...
	I0920 16:45:05.911040   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:45:05.911095   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:45:05.911196   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43009
	I0920 16:45:05.912127   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36389
	I0920 16:45:05.912633   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:45:05.913296   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:45:05.913317   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:45:05.913620   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43891
	I0920 16:45:05.913784   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41005
	I0920 16:45:05.913785   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:45:05.914527   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:45:05.914569   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:45:05.914814   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:45:05.914815   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:45:05.915345   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:45:05.915366   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:45:05.915470   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:45:05.915488   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:45:05.916370   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:45:05.916574   16686 main.go:141] libmachine: (addons-489802) Calling .GetState
	I0920 16:45:05.916621   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:45:05.917159   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:45:05.917200   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:45:05.917629   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:45:05.918192   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:45:05.918213   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:45:05.918613   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:45:05.918669   16686 host.go:66] Checking if "addons-489802" exists ...
	I0920 16:45:05.919045   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:45:05.919074   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:45:05.922095   16686 main.go:141] libmachine: (addons-489802) Calling .GetState
	I0920 16:45:05.925413   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39609
	I0920 16:45:05.926161   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:45:05.926895   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:45:05.926919   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:45:05.927445   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:45:05.928038   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:45:05.928083   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:45:05.930652   16686 addons.go:234] Setting addon default-storageclass=true in "addons-489802"
	I0920 16:45:05.930702   16686 host.go:66] Checking if "addons-489802" exists ...
	I0920 16:45:05.931084   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:45:05.931143   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:45:05.932706   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34593
	I0920 16:45:05.933363   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:45:05.934073   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:45:05.934093   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:45:05.934558   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:45:05.935171   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:45:05.935210   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:45:05.941706   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43133
	I0920 16:45:05.942347   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:45:05.943149   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:45:05.943173   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:45:05.943717   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:45:05.949811   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36123
	I0920 16:45:05.950710   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:45:05.950769   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:45:05.951083   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:45:05.951845   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:45:05.951868   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:45:05.952349   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:45:05.952538   16686 main.go:141] libmachine: (addons-489802) Calling .DriverName
	I0920 16:45:05.953123   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33577
	I0920 16:45:05.954739   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:45:05.955577   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38365
	I0920 16:45:05.956118   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46489
	I0920 16:45:05.956311   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:45:05.956877   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:45:05.956902   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:45:05.957263   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:45:05.957283   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:45:05.958119   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:45:05.958195   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41117
	I0920 16:45:05.958880   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:45:05.958921   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:45:05.959186   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:45:05.959739   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:45:05.959761   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:45:05.959785   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:45:05.960399   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:45:05.960985   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:45:05.961025   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:45:05.961535   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:45:05.961729   16686 main.go:141] libmachine: (addons-489802) Calling .GetState
	I0920 16:45:05.961940   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:45:05.961958   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:45:05.962782   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:45:05.963365   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:45:05.963414   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:45:05.963800   16686 main.go:141] libmachine: (addons-489802) Calling .DriverName
	I0920 16:45:05.966313   16686 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0920 16:45:05.967714   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36999
	I0920 16:45:05.967733   16686 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0920 16:45:05.967750   16686 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0920 16:45:05.967775   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
	I0920 16:45:05.971362   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42171
	I0920 16:45:05.972858   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33479
	I0920 16:45:05.974844   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:45:05.975487   16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
	I0920 16:45:05.975517   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:45:05.975763   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHPort
	I0920 16:45:05.975965   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
	I0920 16:45:05.976140   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHUsername
	I0920 16:45:05.976363   16686 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802/id_rsa Username:docker}
	I0920 16:45:05.977671   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42025
	I0920 16:45:05.978187   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:45:05.981448   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46501
	I0920 16:45:05.981604   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44741
	I0920 16:45:05.982424   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:45:05.982550   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:45:05.982830   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:45:05.982881   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:45:05.983467   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:45:05.983492   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:45:05.983551   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:45:05.983961   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:45:05.983979   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:45:05.984042   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:45:05.984224   16686 main.go:141] libmachine: (addons-489802) Calling .GetState
	I0920 16:45:05.984715   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35263
	I0920 16:45:05.984871   16686 main.go:141] libmachine: (addons-489802) Calling .GetState
	I0920 16:45:05.984923   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:45:05.985197   16686 main.go:141] libmachine: (addons-489802) Calling .GetState
	I0920 16:45:05.986711   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:45:05.987367   16686 main.go:141] libmachine: (addons-489802) Calling .DriverName
	I0920 16:45:05.987635   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:45:05.987654   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:45:05.987994   16686 main.go:141] libmachine: (addons-489802) Calling .DriverName
	I0920 16:45:05.988156   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:45:05.988566   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42959
	I0920 16:45:05.989594   16686 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0920 16:45:05.990395   16686 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0920 16:45:05.991212   16686 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0920 16:45:05.991233   16686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0920 16:45:05.991257   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
	I0920 16:45:05.991416   16686 main.go:141] libmachine: (addons-489802) Calling .DriverName
	I0920 16:45:05.992716   16686 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0920 16:45:05.992737   16686 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0920 16:45:05.992760   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
	I0920 16:45:05.992873   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39923
	I0920 16:45:05.993699   16686 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0920 16:45:05.995293   16686 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0920 16:45:05.995314   16686 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0920 16:45:05.995337   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
	I0920 16:45:05.995421   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHPort
	I0920 16:45:05.995474   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:45:05.995494   16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
	I0920 16:45:05.995520   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:45:05.995539   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39693
	I0920 16:45:06.002124   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:45:06.002163   16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
	I0920 16:45:06.002180   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:45:06.002186   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHPort
	I0920 16:45:06.002226   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
	I0920 16:45:06.002256   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHPort
	I0920 16:45:06.002186   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:45:06.002304   16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
	I0920 16:45:06.002330   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:45:06.002392   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:45:06.002441   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:45:06.002794   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:45:06.002895   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:45:06.003001   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:45:06.003084   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:45:06.003168   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHUsername
	I0920 16:45:06.003348   16686 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802/id_rsa Username:docker}
	I0920 16:45:06.003599   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:45:06.003651   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:45:06.003661   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:45:06.003693   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
	I0920 16:45:06.003693   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:45:06.003708   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:45:06.003715   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
	I0920 16:45:06.003952   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:45:06.003969   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:45:06.004102   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:45:06.004235   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:45:06.004248   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:45:06.004312   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:45:06.004332   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:45:06.004348   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:45:06.004535   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:45:06.004574   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHUsername
	I0920 16:45:06.004738   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHUsername
	I0920 16:45:06.004727   16686 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802/id_rsa Username:docker}
	I0920 16:45:06.004793   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:45:06.005068   16686 main.go:141] libmachine: (addons-489802) Calling .GetState
	I0920 16:45:06.005104   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:45:06.005120   16686 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802/id_rsa Username:docker}
	I0920 16:45:06.005134   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:45:06.005135   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:45:06.005145   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:45:06.006374   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:45:06.006382   16686 main.go:141] libmachine: (addons-489802) Calling .GetState
	I0920 16:45:06.006398   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:45:06.006377   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:45:06.007189   16686 main.go:141] libmachine: (addons-489802) Calling .GetState
	I0920 16:45:06.007202   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36911
	I0920 16:45:06.007213   16686 main.go:141] libmachine: (addons-489802) Calling .GetState
	I0920 16:45:06.007251   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46439
	I0920 16:45:06.007358   16686 main.go:141] libmachine: (addons-489802) Calling .DriverName
	I0920 16:45:06.007582   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:45:06.007618   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:45:06.008305   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:45:06.009013   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:45:06.009036   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:45:06.009097   16686 main.go:141] libmachine: (addons-489802) Calling .DriverName
	I0920 16:45:06.009454   16686 main.go:141] libmachine: Making call to close driver server
	I0920 16:45:06.009483   16686 main.go:141] libmachine: (addons-489802) Calling .Close
	I0920 16:45:06.011482   16686 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0920 16:45:06.011667   16686 main.go:141] libmachine: (addons-489802) DBG | Closing plugin on server side
	I0920 16:45:06.011700   16686 main.go:141] libmachine: Successfully made call to close driver server
	I0920 16:45:06.011718   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:45:06.011719   16686 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 16:45:06.011730   16686 main.go:141] libmachine: Making call to close driver server
	I0920 16:45:06.011738   16686 main.go:141] libmachine: (addons-489802) Calling .Close
	I0920 16:45:06.011780   16686 main.go:141] libmachine: (addons-489802) Calling .DriverName
	I0920 16:45:06.012083   16686 main.go:141] libmachine: (addons-489802) DBG | Closing plugin on server side
	I0920 16:45:06.012119   16686 main.go:141] libmachine: Successfully made call to close driver server
	I0920 16:45:06.012127   16686 main.go:141] libmachine: Making call to close connection to plugin binary
	W0920 16:45:06.012215   16686 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0920 16:45:06.013040   16686 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0920 16:45:06.013057   16686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0920 16:45:06.013076   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
	I0920 16:45:06.013854   16686 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0920 16:45:06.013875   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:45:06.014222   16686 main.go:141] libmachine: (addons-489802) Calling .DriverName
	I0920 16:45:06.014278   16686 main.go:141] libmachine: (addons-489802) Calling .GetState
	I0920 16:45:06.015566   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:45:06.015585   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:45:06.016191   16686 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0920 16:45:06.016298   16686 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0920 16:45:06.016476   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:45:06.016889   16686 main.go:141] libmachine: (addons-489802) Calling .DriverName
	I0920 16:45:06.017494   16686 main.go:141] libmachine: (addons-489802) Calling .GetState
	I0920 16:45:06.018839   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:45:06.019261   16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
	I0920 16:45:06.019283   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:45:06.019485   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHPort
	I0920 16:45:06.019664   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
	I0920 16:45:06.019716   16686 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0920 16:45:06.019816   16686 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 16:45:06.019996   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHUsername
	I0920 16:45:06.020051   16686 main.go:141] libmachine: (addons-489802) Calling .DriverName
	I0920 16:45:06.020211   16686 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802/id_rsa Username:docker}
	I0920 16:45:06.020731   16686 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 16:45:06.021987   16686 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0920 16:45:06.022029   16686 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0920 16:45:06.022093   16686 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 16:45:06.022300   16686 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 16:45:06.022755   16686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 16:45:06.022776   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
	I0920 16:45:06.023143   16686 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0920 16:45:06.023160   16686 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0920 16:45:06.023177   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
	I0920 16:45:06.024174   16686 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0920 16:45:06.024191   16686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0920 16:45:06.024275   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
	I0920 16:45:06.024664   16686 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0920 16:45:06.025980   16686 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0920 16:45:06.027309   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:45:06.027785   16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
	I0920 16:45:06.027815   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:45:06.027929   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:45:06.028009   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHPort
	I0920 16:45:06.028181   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
	I0920 16:45:06.028474   16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
	I0920 16:45:06.028495   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:45:06.028615   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHUsername
	I0920 16:45:06.028701   16686 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0920 16:45:06.028891   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:45:06.028889   16686 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802/id_rsa Username:docker}
	I0920 16:45:06.028923   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHPort
	I0920 16:45:06.029196   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
	I0920 16:45:06.029192   16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
	I0920 16:45:06.029222   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:45:06.029483   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHPort
	I0920 16:45:06.029709   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHUsername
	I0920 16:45:06.029887   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
	I0920 16:45:06.029906   16686 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802/id_rsa Username:docker}
	I0920 16:45:06.030033   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHUsername
	I0920 16:45:06.030190   16686 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802/id_rsa Username:docker}
	I0920 16:45:06.031196   16686 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0920 16:45:06.032725   16686 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0920 16:45:06.032746   16686 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0920 16:45:06.032780   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
	I0920 16:45:06.034644   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42341
	I0920 16:45:06.035197   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37823
	I0920 16:45:06.035340   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:45:06.036022   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:45:06.036041   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:45:06.036112   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:45:06.036407   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:45:06.036475   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:45:06.036695   16686 main.go:141] libmachine: (addons-489802) Calling .GetState
	I0920 16:45:06.036796   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:45:06.036813   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:45:06.037369   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHPort
	I0920 16:45:06.037379   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:45:06.037431   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46467
	I0920 16:45:06.037435   16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
	I0920 16:45:06.037447   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:45:06.037568   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
	I0920 16:45:06.037633   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36745
	I0920 16:45:06.037767   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:45:06.037792   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHUsername
	I0920 16:45:06.037889   16686 main.go:141] libmachine: (addons-489802) Calling .GetState
	I0920 16:45:06.037985   16686 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802/id_rsa Username:docker}
	I0920 16:45:06.038291   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:45:06.038315   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:45:06.038531   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:45:06.038620   16686 main.go:141] libmachine: (addons-489802) Calling .DriverName
	I0920 16:45:06.038675   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:45:06.038861   16686 main.go:141] libmachine: (addons-489802) Calling .GetState
	I0920 16:45:06.039491   16686 main.go:141] libmachine: (addons-489802) Calling .DriverName
	I0920 16:45:06.039654   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:45:06.039669   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:45:06.040233   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:45:06.040465   16686 main.go:141] libmachine: (addons-489802) Calling .GetState
	I0920 16:45:06.040605   16686 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0920 16:45:06.040832   16686 main.go:141] libmachine: (addons-489802) Calling .DriverName
	I0920 16:45:06.041303   16686 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 16:45:06.041318   16686 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 16:45:06.041334   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
	I0920 16:45:06.041615   16686 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0920 16:45:06.042140   16686 main.go:141] libmachine: (addons-489802) Calling .DriverName
	I0920 16:45:06.043269   16686 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0920 16:45:06.043289   16686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0920 16:45:06.043306   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
	I0920 16:45:06.044349   16686 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0920 16:45:06.044617   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:45:06.044625   16686 out.go:177]   - Using image docker.io/busybox:stable
	I0920 16:45:06.045036   16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
	I0920 16:45:06.045057   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:45:06.045261   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHPort
	I0920 16:45:06.045420   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
	I0920 16:45:06.045924   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHUsername
	I0920 16:45:06.046045   16686 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0920 16:45:06.046062   16686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0920 16:45:06.046076   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
	I0920 16:45:06.046233   16686 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802/id_rsa Username:docker}
	I0920 16:45:06.046927   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:45:06.047431   16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
	I0920 16:45:06.047463   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:45:06.047597   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHPort
	I0920 16:45:06.047765   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
	I0920 16:45:06.047891   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHUsername
	I0920 16:45:06.048008   16686 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802/id_rsa Username:docker}
	I0920 16:45:06.048154   16686 out.go:177]   - Using image docker.io/registry:2.8.3
	I0920 16:45:06.049631   16686 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0920 16:45:06.049649   16686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0920 16:45:06.049663   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:45:06.049676   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
	I0920 16:45:06.050129   16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
	I0920 16:45:06.050156   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:45:06.050430   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHPort
	I0920 16:45:06.050586   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
	I0920 16:45:06.050750   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHUsername
	I0920 16:45:06.050868   16686 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802/id_rsa Username:docker}
	I0920 16:45:06.052498   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:45:06.052871   16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
	I0920 16:45:06.052900   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:45:06.053033   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHPort
	I0920 16:45:06.053170   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
	I0920 16:45:06.053326   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHUsername
	I0920 16:45:06.053496   16686 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802/id_rsa Username:docker}
	I0920 16:45:06.353051   16686 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0920 16:45:06.353074   16686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0920 16:45:06.375750   16686 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 16:45:06.375808   16686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0920 16:45:06.391326   16686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0920 16:45:06.493613   16686 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0920 16:45:06.493638   16686 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0920 16:45:06.505773   16686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0920 16:45:06.532977   16686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0920 16:45:06.533515   16686 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0920 16:45:06.533534   16686 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0920 16:45:06.540683   16686 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0920 16:45:06.540708   16686 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0920 16:45:06.543084   16686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 16:45:06.544984   16686 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0920 16:45:06.545000   16686 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0920 16:45:06.551458   16686 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0920 16:45:06.551479   16686 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0920 16:45:06.556172   16686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0920 16:45:06.557507   16686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0920 16:45:06.566682   16686 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0920 16:45:06.566703   16686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0920 16:45:06.627313   16686 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0920 16:45:06.627340   16686 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0920 16:45:06.640927   16686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 16:45:06.670548   16686 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0920 16:45:06.670574   16686 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0920 16:45:06.763522   16686 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0920 16:45:06.763549   16686 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0920 16:45:06.783481   16686 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0920 16:45:06.783521   16686 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0920 16:45:06.819177   16686 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0920 16:45:06.819204   16686 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0920 16:45:06.839272   16686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0920 16:45:06.896200   16686 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0920 16:45:06.896230   16686 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0920 16:45:06.910579   16686 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0920 16:45:06.910614   16686 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0920 16:45:06.930437   16686 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0920 16:45:06.930463   16686 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0920 16:45:06.940831   16686 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 16:45:06.940867   16686 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0920 16:45:07.047035   16686 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0920 16:45:07.047062   16686 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0920 16:45:07.215806   16686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 16:45:07.218901   16686 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0920 16:45:07.218932   16686 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0920 16:45:07.223882   16686 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0920 16:45:07.223905   16686 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0920 16:45:07.227082   16686 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0920 16:45:07.227103   16686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0920 16:45:07.256340   16686 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0920 16:45:07.256375   16686 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0920 16:45:07.464044   16686 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0920 16:45:07.464078   16686 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0920 16:45:07.493814   16686 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0920 16:45:07.493851   16686 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0920 16:45:07.582458   16686 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 16:45:07.582479   16686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0920 16:45:07.603848   16686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0920 16:45:07.828047   16686 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0920 16:45:07.828070   16686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0920 16:45:07.844298   16686 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0920 16:45:07.844335   16686 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0920 16:45:08.029971   16686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 16:45:08.174001   16686 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0920 16:45:08.174023   16686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0920 16:45:08.192445   16686 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0920 16:45:08.192475   16686 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0920 16:45:08.510930   16686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0920 16:45:08.524911   16686 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0920 16:45:08.524942   16686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0920 16:45:08.726846   16686 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0920 16:45:08.726879   16686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0920 16:45:09.009410   16686 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0920 16:45:09.009447   16686 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0920 16:45:09.024627   16686 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.648835712s)
	I0920 16:45:09.024679   16686 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.648847664s)
	I0920 16:45:09.024704   16686 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0920 16:45:09.024765   16686 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.633411979s)
	I0920 16:45:09.024811   16686 main.go:141] libmachine: Making call to close driver server
	I0920 16:45:09.024825   16686 main.go:141] libmachine: (addons-489802) Calling .Close
	I0920 16:45:09.025119   16686 main.go:141] libmachine: Successfully made call to close driver server
	I0920 16:45:09.025144   16686 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 16:45:09.025153   16686 main.go:141] libmachine: Making call to close driver server
	I0920 16:45:09.025161   16686 main.go:141] libmachine: (addons-489802) Calling .Close
	I0920 16:45:09.025404   16686 main.go:141] libmachine: Successfully made call to close driver server
	I0920 16:45:09.025445   16686 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 16:45:09.025920   16686 node_ready.go:35] waiting up to 6m0s for node "addons-489802" to be "Ready" ...
	I0920 16:45:09.035518   16686 node_ready.go:49] node "addons-489802" has status "Ready":"True"
	I0920 16:45:09.035609   16686 node_ready.go:38] duration metric: took 9.661904ms for node "addons-489802" to be "Ready" ...
	I0920 16:45:09.035637   16686 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 16:45:09.051148   16686 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-nqbzq" in "kube-system" namespace to be "Ready" ...
	I0920 16:45:09.322288   16686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0920 16:45:09.534546   16686 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-489802" context rescaled to 1 replicas
	I0920 16:45:11.158586   16686 pod_ready.go:103] pod "coredns-7c65d6cfc9-nqbzq" in "kube-system" namespace has status "Ready":"False"
	I0920 16:45:12.692545   16686 pod_ready.go:93] pod "coredns-7c65d6cfc9-nqbzq" in "kube-system" namespace has status "Ready":"True"
	I0920 16:45:12.692574   16686 pod_ready.go:82] duration metric: took 3.641395186s for pod "coredns-7c65d6cfc9-nqbzq" in "kube-system" namespace to be "Ready" ...
	I0920 16:45:12.692587   16686 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-tm9vr" in "kube-system" namespace to be "Ready" ...
	I0920 16:45:12.993726   16686 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0920 16:45:12.993782   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
	I0920 16:45:12.997095   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:45:12.997468   16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
	I0920 16:45:12.997509   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:45:12.997646   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHPort
	I0920 16:45:12.997868   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
	I0920 16:45:12.998029   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHUsername
	I0920 16:45:12.998260   16686 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802/id_rsa Username:docker}
	I0920 16:45:13.539202   16686 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0920 16:45:13.682847   16686 addons.go:234] Setting addon gcp-auth=true in "addons-489802"
	I0920 16:45:13.682906   16686 host.go:66] Checking if "addons-489802" exists ...
	I0920 16:45:13.683199   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:45:13.683239   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:45:13.702441   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33011
	I0920 16:45:13.702905   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:45:13.703420   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:45:13.703442   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:45:13.703814   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:45:13.704438   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:45:13.704485   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:45:13.722380   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33875
	I0920 16:45:13.723033   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:45:13.723749   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:45:13.723776   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:45:13.724178   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:45:13.724416   16686 main.go:141] libmachine: (addons-489802) Calling .GetState
	I0920 16:45:13.726164   16686 main.go:141] libmachine: (addons-489802) Calling .DriverName
	I0920 16:45:13.726406   16686 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0920 16:45:13.726432   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
	I0920 16:45:13.729255   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:45:13.729760   16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
	I0920 16:45:13.729791   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:45:13.729945   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHPort
	I0920 16:45:13.730109   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
	I0920 16:45:13.730294   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHUsername
	I0920 16:45:13.730440   16686 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802/id_rsa Username:docker}
	I0920 16:45:13.776226   16686 pod_ready.go:98] pod "coredns-7c65d6cfc9-tm9vr" in "kube-system" namespace has status phase "Failed" (skipping!): {Phase:Failed Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 16:45:13 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 16:45:06 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 16:45:06 +0000 UTC Reason:PodFailed Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 16:45:06 +0000 UTC Reason:PodFailed Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 16:45:06 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.89 HostIPs:[{IP:192.168.39.89}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-09-20 16:45:06 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2024-09-20 16:45:10 +0000 UTC,FinishedAt:2024-09-20 16:45:11 +0000 UTC,ContainerID:cri-o://918bc5fe873828ba31e8b226c084835ff9648d49d56fd967df98c04026fcd9c4,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://918bc5fe873828ba31e8b226c084835ff9648d49d56fd967df98c04026fcd9c4 Started:0xc00179695c AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc001a1e830} {Name:kube-api-access-l4wh8 MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc001a1e840}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0920 16:45:13.776273   16686 pod_ready.go:82] duration metric: took 1.083676607s for pod "coredns-7c65d6cfc9-tm9vr" in "kube-system" namespace to be "Ready" ...
	E0920 16:45:13.776285   16686 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-7c65d6cfc9-tm9vr" in "kube-system" namespace has status phase "Failed" (skipping!): {Phase:Failed Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 16:45:13 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 16:45:06 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 16:45:06 +0000 UTC Reason:PodFailed Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 16:45:06 +0000 UTC Reason:PodFailed Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 16:45:06 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.89 HostIPs:[{IP:192.168.39.89}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-09-20 16:45:06 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2024-09-20 16:45:10 +0000 UTC,FinishedAt:2024-09-20 16:45:11 +0000 UTC,ContainerID:cri-o://918bc5fe873828ba31e8b226c084835ff9648d49d56fd967df98c04026fcd9c4,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://918bc5fe873828ba31e8b226c084835ff9648d49d56fd967df98c04026fcd9c4 Started:0xc00179695c AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc001a1e830} {Name:kube-api-access-l4wh8 MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc001a1e840}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0920 16:45:13.776297   16686 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-489802" in "kube-system" namespace to be "Ready" ...
	I0920 16:45:13.895071   16686 pod_ready.go:93] pod "etcd-addons-489802" in "kube-system" namespace has status "Ready":"True"
	I0920 16:45:13.895098   16686 pod_ready.go:82] duration metric: took 118.793361ms for pod "etcd-addons-489802" in "kube-system" namespace to be "Ready" ...
	I0920 16:45:13.895111   16686 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-489802" in "kube-system" namespace to be "Ready" ...
	I0920 16:45:14.014764   16686 pod_ready.go:93] pod "kube-apiserver-addons-489802" in "kube-system" namespace has status "Ready":"True"
	I0920 16:45:14.014787   16686 pod_ready.go:82] duration metric: took 119.668585ms for pod "kube-apiserver-addons-489802" in "kube-system" namespace to be "Ready" ...
	I0920 16:45:14.014841   16686 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-489802" in "kube-system" namespace to be "Ready" ...
	I0920 16:45:14.127671   16686 pod_ready.go:93] pod "kube-controller-manager-addons-489802" in "kube-system" namespace has status "Ready":"True"
	I0920 16:45:14.127694   16686 pod_ready.go:82] duration metric: took 112.838527ms for pod "kube-controller-manager-addons-489802" in "kube-system" namespace to be "Ready" ...
	I0920 16:45:14.127705   16686 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xr4bt" in "kube-system" namespace to be "Ready" ...
	I0920 16:45:14.150341   16686 pod_ready.go:93] pod "kube-proxy-xr4bt" in "kube-system" namespace has status "Ready":"True"
	I0920 16:45:14.150367   16686 pod_ready.go:82] duration metric: took 22.655966ms for pod "kube-proxy-xr4bt" in "kube-system" namespace to be "Ready" ...
	I0920 16:45:14.150376   16686 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-489802" in "kube-system" namespace to be "Ready" ...
	I0920 16:45:14.206202   16686 pod_ready.go:93] pod "kube-scheduler-addons-489802" in "kube-system" namespace has status "Ready":"True"
	I0920 16:45:14.206226   16686 pod_ready.go:82] duration metric: took 55.843139ms for pod "kube-scheduler-addons-489802" in "kube-system" namespace to be "Ready" ...
	I0920 16:45:14.206238   16686 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-54hhx" in "kube-system" namespace to be "Ready" ...
	I0920 16:45:15.135704   16686 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.629885928s)
	I0920 16:45:15.135777   16686 main.go:141] libmachine: Making call to close driver server
	I0920 16:45:15.135782   16686 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.602774066s)
	I0920 16:45:15.135815   16686 main.go:141] libmachine: Making call to close driver server
	I0920 16:45:15.135832   16686 main.go:141] libmachine: (addons-489802) Calling .Close
	I0920 16:45:15.135837   16686 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.592733845s)
	I0920 16:45:15.135860   16686 main.go:141] libmachine: Making call to close driver server
	I0920 16:45:15.135874   16686 main.go:141] libmachine: (addons-489802) Calling .Close
	I0920 16:45:15.135791   16686 main.go:141] libmachine: (addons-489802) Calling .Close
	I0920 16:45:15.135976   16686 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.579777747s)
	I0920 16:45:15.136071   16686 main.go:141] libmachine: Making call to close driver server
	I0920 16:45:15.136137   16686 main.go:141] libmachine: Successfully made call to close driver server
	I0920 16:45:15.136165   16686 main.go:141] libmachine: (addons-489802) DBG | Closing plugin on server side
	I0920 16:45:15.136165   16686 main.go:141] libmachine: Successfully made call to close driver server
	I0920 16:45:15.136176   16686 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 16:45:15.136187   16686 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 16:45:15.136191   16686 main.go:141] libmachine: Making call to close driver server
	I0920 16:45:15.136195   16686 main.go:141] libmachine: Making call to close driver server
	I0920 16:45:15.136195   16686 main.go:141] libmachine: (addons-489802) DBG | Closing plugin on server side
	I0920 16:45:15.136202   16686 main.go:141] libmachine: (addons-489802) Calling .Close
	I0920 16:45:15.136241   16686 main.go:141] libmachine: (addons-489802) Calling .Close
	I0920 16:45:15.136199   16686 main.go:141] libmachine: (addons-489802) Calling .Close
	I0920 16:45:15.136269   16686 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.578731979s)
	I0920 16:45:15.136290   16686 main.go:141] libmachine: Successfully made call to close driver server
	I0920 16:45:15.136196   16686 main.go:141] libmachine: (addons-489802) DBG | Closing plugin on server side
	I0920 16:45:15.136312   16686 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 16:45:15.136322   16686 main.go:141] libmachine: Making call to close driver server
	I0920 16:45:15.136299   16686 main.go:141] libmachine: Making call to close driver server
	I0920 16:45:15.136332   16686 main.go:141] libmachine: (addons-489802) Calling .Close
	I0920 16:45:15.136345   16686 main.go:141] libmachine: (addons-489802) Calling .Close
	I0920 16:45:15.136388   16686 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.297083849s)
	I0920 16:45:15.136410   16686 main.go:141] libmachine: Making call to close driver server
	I0920 16:45:15.136420   16686 main.go:141] libmachine: (addons-489802) Calling .Close
	I0920 16:45:15.136467   16686 main.go:141] libmachine: (addons-489802) DBG | Closing plugin on server side
	I0920 16:45:15.136492   16686 main.go:141] libmachine: Successfully made call to close driver server
	I0920 16:45:15.136499   16686 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 16:45:15.136506   16686 main.go:141] libmachine: Making call to close driver server
	I0920 16:45:15.136513   16686 main.go:141] libmachine: (addons-489802) Calling .Close
	I0920 16:45:15.136540   16686 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.920700025s)
	I0920 16:45:15.136560   16686 main.go:141] libmachine: Making call to close driver server
	I0920 16:45:15.136569   16686 main.go:141] libmachine: (addons-489802) Calling .Close
	I0920 16:45:15.136342   16686 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.495383696s)
	I0920 16:45:15.136654   16686 main.go:141] libmachine: Making call to close driver server
	I0920 16:45:15.136666   16686 main.go:141] libmachine: (addons-489802) Calling .Close
	I0920 16:45:15.136665   16686 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.532769315s)
	I0920 16:45:15.136718   16686 main.go:141] libmachine: Making call to close driver server
	I0920 16:45:15.136726   16686 main.go:141] libmachine: (addons-489802) Calling .Close
	I0920 16:45:15.136765   16686 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.106759371s)
	I0920 16:45:15.136781   16686 main.go:141] libmachine: (addons-489802) DBG | Closing plugin on server side
	W0920 16:45:15.136792   16686 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0920 16:45:15.136807   16686 main.go:141] libmachine: Successfully made call to close driver server
	I0920 16:45:15.136815   16686 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 16:45:15.136815   16686 retry.go:31] will retry after 374.579066ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0920 16:45:15.136939   16686 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.625889401s)
	I0920 16:45:15.136963   16686 main.go:141] libmachine: Making call to close driver server
	I0920 16:45:15.136976   16686 main.go:141] libmachine: (addons-489802) Calling .Close
	I0920 16:45:15.137039   16686 main.go:141] libmachine: (addons-489802) DBG | Closing plugin on server side
	I0920 16:45:15.137050   16686 main.go:141] libmachine: (addons-489802) DBG | Closing plugin on server side
	I0920 16:45:15.137071   16686 main.go:141] libmachine: Successfully made call to close driver server
	I0920 16:45:15.137102   16686 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 16:45:15.137131   16686 main.go:141] libmachine: Successfully made call to close driver server
	I0920 16:45:15.137137   16686 main.go:141] libmachine: Making call to close driver server
	I0920 16:45:15.137152   16686 main.go:141] libmachine: (addons-489802) Calling .Close
	I0920 16:45:15.137158   16686 main.go:141] libmachine: (addons-489802) DBG | Closing plugin on server side
	I0920 16:45:15.137108   16686 main.go:141] libmachine: Successfully made call to close driver server
	I0920 16:45:15.137170   16686 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 16:45:15.137178   16686 main.go:141] libmachine: Making call to close driver server
	I0920 16:45:15.137186   16686 main.go:141] libmachine: (addons-489802) Calling .Close
	I0920 16:45:15.137875   16686 main.go:141] libmachine: (addons-489802) DBG | Closing plugin on server side
	I0920 16:45:15.137908   16686 main.go:141] libmachine: Successfully made call to close driver server
	I0920 16:45:15.137915   16686 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 16:45:15.137922   16686 main.go:141] libmachine: Making call to close driver server
	I0920 16:45:15.137929   16686 main.go:141] libmachine: (addons-489802) Calling .Close
	I0920 16:45:15.137975   16686 main.go:141] libmachine: (addons-489802) DBG | Closing plugin on server side
	I0920 16:45:15.137994   16686 main.go:141] libmachine: Successfully made call to close driver server
	I0920 16:45:15.137999   16686 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 16:45:15.138006   16686 main.go:141] libmachine: Making call to close driver server
	I0920 16:45:15.138013   16686 main.go:141] libmachine: (addons-489802) Calling .Close
	I0920 16:45:15.138047   16686 main.go:141] libmachine: (addons-489802) DBG | Closing plugin on server side
	I0920 16:45:15.138061   16686 main.go:141] libmachine: (addons-489802) DBG | Closing plugin on server side
	I0920 16:45:15.138078   16686 main.go:141] libmachine: Successfully made call to close driver server
	I0920 16:45:15.138084   16686 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 16:45:15.138093   16686 addons.go:475] Verifying addon registry=true in "addons-489802"
	I0920 16:45:15.138895   16686 main.go:141] libmachine: Successfully made call to close driver server
	I0920 16:45:15.138916   16686 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 16:45:15.138927   16686 main.go:141] libmachine: Making call to close driver server
	I0920 16:45:15.138936   16686 main.go:141] libmachine: (addons-489802) Calling .Close
	I0920 16:45:15.139035   16686 main.go:141] libmachine: Successfully made call to close driver server
	I0920 16:45:15.139050   16686 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 16:45:15.137073   16686 main.go:141] libmachine: Successfully made call to close driver server
	I0920 16:45:15.139271   16686 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 16:45:15.137144   16686 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 16:45:15.139348   16686 addons.go:475] Verifying addon ingress=true in "addons-489802"
	I0920 16:45:15.139477   16686 main.go:141] libmachine: (addons-489802) DBG | Closing plugin on server side
	I0920 16:45:15.137089   16686 main.go:141] libmachine: (addons-489802) DBG | Closing plugin on server side
	I0920 16:45:15.139526   16686 main.go:141] libmachine: (addons-489802) DBG | Closing plugin on server side
	I0920 16:45:15.139550   16686 main.go:141] libmachine: Successfully made call to close driver server
	I0920 16:45:15.139564   16686 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 16:45:15.139719   16686 main.go:141] libmachine: Successfully made call to close driver server
	I0920 16:45:15.139735   16686 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 16:45:15.139509   16686 main.go:141] libmachine: Successfully made call to close driver server
	I0920 16:45:15.139873   16686 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 16:45:15.139884   16686 main.go:141] libmachine: Making call to close driver server
	I0920 16:45:15.139894   16686 main.go:141] libmachine: (addons-489802) Calling .Close
	I0920 16:45:15.140278   16686 main.go:141] libmachine: (addons-489802) DBG | Closing plugin on server side
	I0920 16:45:15.140316   16686 main.go:141] libmachine: Successfully made call to close driver server
	I0920 16:45:15.140328   16686 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 16:45:15.141359   16686 main.go:141] libmachine: Successfully made call to close driver server
	I0920 16:45:15.141378   16686 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 16:45:15.141387   16686 addons.go:475] Verifying addon metrics-server=true in "addons-489802"
	I0920 16:45:15.141742   16686 out.go:177] * Verifying ingress addon...
	I0920 16:45:15.141861   16686 out.go:177] * Verifying registry addon...
	I0920 16:45:15.142395   16686 main.go:141] libmachine: Successfully made call to close driver server
	I0920 16:45:15.142416   16686 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 16:45:15.142438   16686 main.go:141] libmachine: (addons-489802) DBG | Closing plugin on server side
	I0920 16:45:15.144272   16686 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-489802 service yakd-dashboard -n yakd-dashboard
	
	I0920 16:45:15.144625   16686 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0920 16:45:15.144652   16686 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0920 16:45:15.182676   16686 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0920 16:45:15.182707   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:15.183762   16686 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0920 16:45:15.183790   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:15.473454   16686 main.go:141] libmachine: Making call to close driver server
	I0920 16:45:15.473474   16686 main.go:141] libmachine: (addons-489802) Calling .Close
	I0920 16:45:15.473959   16686 main.go:141] libmachine: Successfully made call to close driver server
	I0920 16:45:15.473976   16686 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 16:45:15.479442   16686 main.go:141] libmachine: Making call to close driver server
	I0920 16:45:15.479466   16686 main.go:141] libmachine: (addons-489802) Calling .Close
	I0920 16:45:15.479704   16686 main.go:141] libmachine: Successfully made call to close driver server
	I0920 16:45:15.479721   16686 main.go:141] libmachine: Making call to close connection to plugin binary
	W0920 16:45:15.479879   16686 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0920 16:45:15.512325   16686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 16:45:15.658712   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:15.659607   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:16.155622   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:16.160001   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:16.241480   16686 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-54hhx" in "kube-system" namespace has status "Ready":"False"
	I0920 16:45:16.517442   16686 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.195100107s)
	I0920 16:45:16.517489   16686 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.791061379s)
	I0920 16:45:16.517497   16686 main.go:141] libmachine: Making call to close driver server
	I0920 16:45:16.517513   16686 main.go:141] libmachine: (addons-489802) Calling .Close
	I0920 16:45:16.517795   16686 main.go:141] libmachine: (addons-489802) DBG | Closing plugin on server side
	I0920 16:45:16.517795   16686 main.go:141] libmachine: Successfully made call to close driver server
	I0920 16:45:16.517817   16686 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 16:45:16.517843   16686 main.go:141] libmachine: Making call to close driver server
	I0920 16:45:16.517851   16686 main.go:141] libmachine: (addons-489802) Calling .Close
	I0920 16:45:16.518062   16686 main.go:141] libmachine: Successfully made call to close driver server
	I0920 16:45:16.518079   16686 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 16:45:16.518089   16686 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-489802"
	I0920 16:45:16.519716   16686 out.go:177] * Verifying csi-hostpath-driver addon...
	I0920 16:45:16.519723   16686 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 16:45:16.521078   16686 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0920 16:45:16.521713   16686 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0920 16:45:16.522238   16686 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0920 16:45:16.522258   16686 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0920 16:45:16.561413   16686 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0920 16:45:16.561441   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:16.652853   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:16.654932   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:16.670493   16686 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0920 16:45:16.670518   16686 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0920 16:45:16.788959   16686 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0920 16:45:16.788986   16686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0920 16:45:16.869081   16686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0920 16:45:17.027599   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:17.156633   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:17.157163   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:17.527462   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:17.650521   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:17.650643   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:17.734897   16686 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.222504857s)
	I0920 16:45:17.734961   16686 main.go:141] libmachine: Making call to close driver server
	I0920 16:45:17.734978   16686 main.go:141] libmachine: (addons-489802) Calling .Close
	I0920 16:45:17.735373   16686 main.go:141] libmachine: Successfully made call to close driver server
	I0920 16:45:17.735395   16686 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 16:45:17.735414   16686 main.go:141] libmachine: Making call to close driver server
	I0920 16:45:17.735423   16686 main.go:141] libmachine: (addons-489802) Calling .Close
	I0920 16:45:17.735676   16686 main.go:141] libmachine: (addons-489802) DBG | Closing plugin on server side
	I0920 16:45:17.735715   16686 main.go:141] libmachine: Successfully made call to close driver server
	I0920 16:45:17.735732   16686 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 16:45:18.039389   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:18.191248   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:18.192032   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:18.226929   16686 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.357782077s)
	I0920 16:45:18.227006   16686 main.go:141] libmachine: Making call to close driver server
	I0920 16:45:18.227027   16686 main.go:141] libmachine: (addons-489802) Calling .Close
	I0920 16:45:18.227352   16686 main.go:141] libmachine: Successfully made call to close driver server
	I0920 16:45:18.227371   16686 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 16:45:18.227380   16686 main.go:141] libmachine: Making call to close driver server
	I0920 16:45:18.227388   16686 main.go:141] libmachine: (addons-489802) Calling .Close
	I0920 16:45:18.227596   16686 main.go:141] libmachine: Successfully made call to close driver server
	I0920 16:45:18.227608   16686 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 16:45:18.229117   16686 addons.go:475] Verifying addon gcp-auth=true in "addons-489802"
	I0920 16:45:18.230928   16686 out.go:177] * Verifying gcp-auth addon...
	I0920 16:45:18.233132   16686 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0920 16:45:18.302814   16686 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-54hhx" in "kube-system" namespace has status "Ready":"False"
	I0920 16:45:18.303833   16686 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0920 16:45:18.303849   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:18.526206   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:18.650162   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:18.650906   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:18.737130   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:19.027359   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:19.151083   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:19.152167   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:19.237097   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:19.530489   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:19.651552   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:19.651799   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:19.737916   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:20.027552   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:20.150028   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:20.150617   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:20.237634   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:20.527445   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:20.651604   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:20.652378   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:20.712902   16686 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-54hhx" in "kube-system" namespace has status "Ready":"False"
	I0920 16:45:20.736944   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:21.029114   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:21.149408   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:21.150699   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:21.236999   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:21.527442   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:21.967907   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:21.968174   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:22.070927   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:22.072675   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:22.149613   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:22.150237   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:22.237824   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:22.531579   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:22.650997   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:22.651735   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:22.714124   16686 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-54hhx" in "kube-system" namespace has status "Ready":"False"
	I0920 16:45:22.738003   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:23.036430   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:23.154161   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:23.155271   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:23.274914   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:23.528959   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:23.662172   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:23.665690   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:23.747609   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:24.028698   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:24.163651   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:24.164456   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:24.248826   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:24.526972   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:24.652716   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:24.653397   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:24.715653   16686 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-54hhx" in "kube-system" namespace has status "Ready":"False"
	I0920 16:45:24.740107   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:25.028341   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:25.150991   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:25.153743   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:25.634814   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:25.635566   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:25.651776   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:25.652748   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:25.736431   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:26.032193   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:26.150517   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:26.150967   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:26.238433   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:26.527250   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:26.650016   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:26.650451   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:26.737952   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:27.027290   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:27.150220   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:27.150405   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:27.213074   16686 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-54hhx" in "kube-system" namespace has status "Ready":"True"
	I0920 16:45:27.213099   16686 pod_ready.go:82] duration metric: took 13.006853784s for pod "nvidia-device-plugin-daemonset-54hhx" in "kube-system" namespace to be "Ready" ...
	I0920 16:45:27.213106   16686 pod_ready.go:39] duration metric: took 18.177423912s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 16:45:27.213122   16686 api_server.go:52] waiting for apiserver process to appear ...
	I0920 16:45:27.213169   16686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 16:45:27.236400   16686 api_server.go:72] duration metric: took 21.373270823s to wait for apiserver process to appear ...
	I0920 16:45:27.236426   16686 api_server.go:88] waiting for apiserver healthz status ...
	I0920 16:45:27.236445   16686 api_server.go:253] Checking apiserver healthz at https://192.168.39.89:8443/healthz ...
	I0920 16:45:27.239701   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:27.242110   16686 api_server.go:279] https://192.168.39.89:8443/healthz returned 200:
	ok
	I0920 16:45:27.243105   16686 api_server.go:141] control plane version: v1.31.1
	I0920 16:45:27.243132   16686 api_server.go:131] duration metric: took 6.699495ms to wait for apiserver health ...
	I0920 16:45:27.243142   16686 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 16:45:27.251414   16686 system_pods.go:59] 17 kube-system pods found
	I0920 16:45:27.251443   16686 system_pods.go:61] "coredns-7c65d6cfc9-nqbzq" [734f1782-975a-486b-adf3-32f60c376a9a] Running
	I0920 16:45:27.251451   16686 system_pods.go:61] "csi-hostpath-attacher-0" [8fc733e6-4135-418b-a554-490bd25dabe7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0920 16:45:27.251458   16686 system_pods.go:61] "csi-hostpath-resizer-0" [85755d16-e8fa-4878-9184-45658ba8d8ac] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0920 16:45:27.251465   16686 system_pods.go:61] "csi-hostpathplugin-hglqr" [0aeb8bcc-1f9f-40f6-8aa1-4822a64115f2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0920 16:45:27.251469   16686 system_pods.go:61] "etcd-addons-489802" [7f35387b-c7f1-4436-8369-77849b9c2383] Running
	I0920 16:45:27.251475   16686 system_pods.go:61] "kube-apiserver-addons-489802" [9c4029f4-8e01-4d5d-a866-518e553ac713] Running
	I0920 16:45:27.251481   16686 system_pods.go:61] "kube-controller-manager-addons-489802" [30219691-4d43-476d-8720-80aa4f2b6b54] Running
	I0920 16:45:27.251488   16686 system_pods.go:61] "kube-ingress-dns-minikube" [1f722d5e-9dee-4b0e-8661-9c4181ea4f9b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0920 16:45:27.251495   16686 system_pods.go:61] "kube-proxy-xr4bt" [7a20cb9e-3e82-4bda-9529-7e024f9681a4] Running
	I0920 16:45:27.251504   16686 system_pods.go:61] "kube-scheduler-addons-489802" [8b17a764-82bc-4003-8b0c-9d46c614e15d] Running
	I0920 16:45:27.251512   16686 system_pods.go:61] "metrics-server-84c5f94fbc-txlrn" [b6d2625e-ba6e-44e1-b245-0edc2adaa243] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 16:45:27.251518   16686 system_pods.go:61] "nvidia-device-plugin-daemonset-54hhx" [b022f644-f7de-4d74-aed4-63ad47ef0b71] Running
	I0920 16:45:27.251526   16686 system_pods.go:61] "registry-66c9cd494c-7swkh" [1e3cfba8-c77f-46f3-b6b1-46c7a36ae3a4] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0920 16:45:27.251534   16686 system_pods.go:61] "registry-proxy-ggl6q" [a467b141-5827-4440-b11f-9203739b4a10] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0920 16:45:27.251542   16686 system_pods.go:61] "snapshot-controller-56fcc65765-2hz6g" [0d531a52-cced-4b3d-adfd-5d62357591e8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 16:45:27.251549   16686 system_pods.go:61] "snapshot-controller-56fcc65765-4l9hv" [eccfc252-ad9c-4b70-bb1c-d81a71214556] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 16:45:27.251553   16686 system_pods.go:61] "storage-provisioner" [1e04b7e0-a0fe-4e65-9ba5-63be2690da1d] Running
	I0920 16:45:27.251561   16686 system_pods.go:74] duration metric: took 8.412514ms to wait for pod list to return data ...
	I0920 16:45:27.251568   16686 default_sa.go:34] waiting for default service account to be created ...
	I0920 16:45:27.254735   16686 default_sa.go:45] found service account: "default"
	I0920 16:45:27.254760   16686 default_sa.go:55] duration metric: took 3.185589ms for default service account to be created ...
	I0920 16:45:27.254770   16686 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 16:45:27.261725   16686 system_pods.go:86] 17 kube-system pods found
	I0920 16:45:27.261752   16686 system_pods.go:89] "coredns-7c65d6cfc9-nqbzq" [734f1782-975a-486b-adf3-32f60c376a9a] Running
	I0920 16:45:27.261759   16686 system_pods.go:89] "csi-hostpath-attacher-0" [8fc733e6-4135-418b-a554-490bd25dabe7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0920 16:45:27.261766   16686 system_pods.go:89] "csi-hostpath-resizer-0" [85755d16-e8fa-4878-9184-45658ba8d8ac] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0920 16:45:27.261772   16686 system_pods.go:89] "csi-hostpathplugin-hglqr" [0aeb8bcc-1f9f-40f6-8aa1-4822a64115f2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0920 16:45:27.261776   16686 system_pods.go:89] "etcd-addons-489802" [7f35387b-c7f1-4436-8369-77849b9c2383] Running
	I0920 16:45:27.261780   16686 system_pods.go:89] "kube-apiserver-addons-489802" [9c4029f4-8e01-4d5d-a866-518e553ac713] Running
	I0920 16:45:27.261784   16686 system_pods.go:89] "kube-controller-manager-addons-489802" [30219691-4d43-476d-8720-80aa4f2b6b54] Running
	I0920 16:45:27.261791   16686 system_pods.go:89] "kube-ingress-dns-minikube" [1f722d5e-9dee-4b0e-8661-9c4181ea4f9b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0920 16:45:27.261795   16686 system_pods.go:89] "kube-proxy-xr4bt" [7a20cb9e-3e82-4bda-9529-7e024f9681a4] Running
	I0920 16:45:27.261799   16686 system_pods.go:89] "kube-scheduler-addons-489802" [8b17a764-82bc-4003-8b0c-9d46c614e15d] Running
	I0920 16:45:27.261805   16686 system_pods.go:89] "metrics-server-84c5f94fbc-txlrn" [b6d2625e-ba6e-44e1-b245-0edc2adaa243] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 16:45:27.261809   16686 system_pods.go:89] "nvidia-device-plugin-daemonset-54hhx" [b022f644-f7de-4d74-aed4-63ad47ef0b71] Running
	I0920 16:45:27.261815   16686 system_pods.go:89] "registry-66c9cd494c-7swkh" [1e3cfba8-c77f-46f3-b6b1-46c7a36ae3a4] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0920 16:45:27.261820   16686 system_pods.go:89] "registry-proxy-ggl6q" [a467b141-5827-4440-b11f-9203739b4a10] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0920 16:45:27.261828   16686 system_pods.go:89] "snapshot-controller-56fcc65765-2hz6g" [0d531a52-cced-4b3d-adfd-5d62357591e8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 16:45:27.261858   16686 system_pods.go:89] "snapshot-controller-56fcc65765-4l9hv" [eccfc252-ad9c-4b70-bb1c-d81a71214556] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 16:45:27.261868   16686 system_pods.go:89] "storage-provisioner" [1e04b7e0-a0fe-4e65-9ba5-63be2690da1d] Running
	I0920 16:45:27.261877   16686 system_pods.go:126] duration metric: took 7.099706ms to wait for k8s-apps to be running ...
	I0920 16:45:27.261887   16686 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 16:45:27.261932   16686 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 16:45:27.276406   16686 system_svc.go:56] duration metric: took 14.508978ms WaitForService to wait for kubelet
	I0920 16:45:27.276438   16686 kubeadm.go:582] duration metric: took 21.413312681s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 16:45:27.276460   16686 node_conditions.go:102] verifying NodePressure condition ...
	I0920 16:45:27.280248   16686 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 16:45:27.280278   16686 node_conditions.go:123] node cpu capacity is 2
	I0920 16:45:27.280291   16686 node_conditions.go:105] duration metric: took 3.825237ms to run NodePressure ...
	I0920 16:45:27.280304   16686 start.go:241] waiting for startup goroutines ...
	I0920 16:45:27.526718   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:27.649095   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:27.649421   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:27.737354   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:28.027233   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:28.150225   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:28.150730   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:28.236702   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:28.528434   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:28.650325   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:28.650405   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:28.740070   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:29.026096   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:29.149445   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:29.150058   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:29.237452   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:29.527135   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:29.649902   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:29.649932   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:29.737559   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:30.026698   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:30.150115   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:30.150769   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:30.238484   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:30.527374   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:30.648850   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:30.649272   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:30.738810   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:31.028473   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:31.150589   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:31.156282   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:31.237373   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:31.527393   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:31.649166   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:31.650780   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:31.736824   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:32.027837   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:32.152463   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:32.153143   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:32.237068   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:32.528272   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:32.649079   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:32.650818   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:32.738352   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:33.026553   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:33.149902   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:33.150275   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:33.237524   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:33.537491   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:33.649781   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:33.650261   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:33.737265   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:34.028817   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:34.150791   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:34.152125   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:34.237490   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:34.526864   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:34.649685   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:34.650181   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:34.736977   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:35.029888   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:35.150945   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:35.155795   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:35.240335   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:35.527786   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:35.654336   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:35.655062   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:35.737485   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:36.027635   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:36.151566   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:36.152493   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:36.238231   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:36.527246   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:36.655057   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:36.655723   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:36.738138   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:37.030365   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:37.150592   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:37.150821   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:37.236830   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:37.526749   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:37.650962   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:37.652318   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:37.738164   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:38.031402   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:38.155846   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:38.156510   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:38.252531   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:38.528674   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:38.655016   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:38.658754   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:38.739024   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:39.026715   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:39.151013   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:39.154202   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:39.238586   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:39.527713   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:39.649075   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:39.649203   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:39.737480   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:40.027567   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:40.150474   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:40.151696   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:40.250888   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:40.526616   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:40.652188   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:40.652389   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:40.736985   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:41.026770   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:41.150827   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:41.151842   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:41.237101   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:41.526185   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:41.650288   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:41.650519   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:41.737186   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:42.027683   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:42.149240   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:42.150504   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:42.491904   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:42.592635   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:42.650756   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:42.651320   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:42.737069   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:43.029825   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:43.149551   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:43.149935   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:43.237114   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:43.528788   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:43.650325   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:43.650461   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:43.739219   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:44.027085   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:44.150296   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:44.150650   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:44.238279   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:44.527675   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:44.649728   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:44.650268   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:44.737823   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:45.028181   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:45.150501   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:45.151145   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:45.237285   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:45.527586   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:45.649593   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:45.650452   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:45.738407   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:46.030564   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:46.150486   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:46.150734   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:46.237087   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:46.551259   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:46.651342   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:46.653245   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:46.737384   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:47.029654   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:47.150343   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:47.150347   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:47.238187   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:47.535430   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:47.650178   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:47.651863   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:47.739041   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:48.029210   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:48.150091   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:48.154252   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:48.240363   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:48.529142   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:48.653143   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:48.655833   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:48.738746   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:49.027666   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:49.150751   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:49.151834   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:49.236647   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:49.530861   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:49.651140   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:49.651675   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:49.740617   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:50.026729   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:50.159867   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:50.160090   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:50.239757   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:50.527622   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:50.654766   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:50.655361   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:50.737483   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:51.027995   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:51.149643   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:51.149801   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:51.237905   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:51.526411   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:51.649489   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:51.650326   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:51.738210   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:52.036253   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:52.149599   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:52.151253   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:52.237057   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:52.527569   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:52.648975   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:52.650153   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:52.737191   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:53.027592   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:53.150060   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:53.150479   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:53.236403   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:53.526504   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:53.649297   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:53.651436   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:53.737405   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:54.028487   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:54.150980   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:54.151321   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:54.237711   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:54.527354   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:54.650301   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:54.650677   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:54.737955   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:55.031032   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:55.149243   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:55.150181   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:55.238167   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:55.528915   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:55.649892   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:55.650313   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:55.738797   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:56.028783   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:56.151114   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:56.151294   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:56.237410   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:56.527498   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:56.650436   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:56.650776   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:56.736898   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:57.026952   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:57.149669   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:57.150915   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:57.237031   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:57.526939   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:57.648982   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:57.650547   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:57.737696   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:58.026729   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:58.150041   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:58.150968   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:58.237146   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:58.527288   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:58.651780   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:58.652013   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:58.738908   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:59.026605   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:59.149437   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:59.149648   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:59.237722   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:59.527090   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:59.650035   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:59.651041   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:59.737351   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:00.027912   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:00.558370   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:46:00.561620   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:00.563942   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:00.565779   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:00.661977   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:46:00.662874   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:00.739219   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:01.029865   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:01.154749   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:46:01.155165   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:01.237401   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:01.530045   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:01.649221   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:46:01.649554   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:01.740003   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:02.026763   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:02.150502   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:02.150590   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:46:02.236863   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:02.529068   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:02.650888   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:46:02.651000   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:02.750263   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:03.026716   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:03.149149   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:46:03.149545   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:03.237369   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:03.534553   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:03.650442   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:46:03.650862   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:03.737614   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:04.026913   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:04.149387   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:46:04.149593   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:04.243360   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:04.527336   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:04.650842   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:46:04.651139   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:04.739255   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:05.027878   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:05.150204   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:46:05.150545   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:05.244231   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:05.529349   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:05.652867   16686 kapi.go:107] duration metric: took 50.508229978s to wait for kubernetes.io/minikube-addons=registry ...
	I0920 16:46:05.652925   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:05.739640   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:06.033981   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:06.149185   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:06.237046   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:06.528004   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:06.649435   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:06.895278   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:07.026949   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:07.149429   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:07.237034   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:07.526452   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:07.649544   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:07.737620   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:08.028390   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:08.150933   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:08.237962   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:08.529026   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:08.650034   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:08.737105   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:09.027687   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:09.149020   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:09.239286   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:09.529929   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:09.666377   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:09.746102   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:10.030699   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:10.155669   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:10.239033   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:10.530724   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:10.651556   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:10.737407   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:11.027890   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:11.149069   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:11.236960   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:11.527373   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:11.649887   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:11.737323   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:12.027469   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:12.149540   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:12.237298   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:12.527280   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:12.650565   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:12.750782   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:13.027210   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:13.149266   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:13.236795   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:13.527089   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:13.650076   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:13.739568   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:14.028427   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:14.150142   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:14.238716   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:14.529618   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:14.649719   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:14.737439   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:15.029527   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:15.149916   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:15.236871   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:15.527484   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:15.660993   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:15.737550   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:16.027986   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:16.149414   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:16.237560   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:16.528143   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:16.649180   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:16.749844   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:17.027012   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:17.149822   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:17.237094   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:17.527302   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:17.650815   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:17.737697   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:18.027958   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:18.151414   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:18.237081   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:18.755707   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:18.756298   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:18.756334   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:19.027579   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:19.149746   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:19.237870   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:19.532636   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:19.649362   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:19.743684   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:20.029394   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:20.152735   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:20.238771   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:20.528220   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:20.650381   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:20.739497   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:21.028952   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:21.149828   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:21.238039   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:21.532796   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:21.648825   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:21.736739   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:22.025994   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:22.149742   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:22.237902   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:22.526869   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:22.651053   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:22.754073   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:23.029507   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:23.150844   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:23.236975   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:23.530954   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:23.649940   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:23.737663   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:24.027816   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:24.149027   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:24.236905   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:24.528126   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:24.649610   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:24.737256   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:25.029079   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:25.168465   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:25.279560   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:25.529941   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:25.649862   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:25.738675   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:26.031710   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:26.149047   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:26.237178   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:26.527079   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:26.649467   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:26.737219   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:27.027260   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:27.150392   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:27.237951   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:27.526593   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:27.649815   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:27.738065   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:28.026169   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:28.150226   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:28.237640   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:28.526680   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:28.649544   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:28.737407   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:29.027688   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:29.150021   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:29.236763   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:29.563052   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:29.652576   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:29.739028   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:30.029796   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:30.150520   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:30.240233   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:30.526626   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:30.651044   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:30.739007   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:31.027062   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:31.541329   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:31.546535   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:31.546967   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:31.652149   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:31.736761   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:32.026342   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:32.149699   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:32.238624   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:32.526975   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:32.650436   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:32.740112   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:33.028897   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:33.150155   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:33.250978   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:33.528932   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:33.649886   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:33.743165   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:34.028352   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:34.150042   16686 kapi.go:107] duration metric: took 1m19.005386454s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0920 16:46:34.237404   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:34.526686   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:34.740025   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:35.033014   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:35.241504   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:35.527579   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:35.738045   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:36.034900   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:36.242839   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:36.528649   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:36.738556   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:37.027713   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:37.237641   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:37.527114   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:37.736812   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:38.027753   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:38.240755   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:38.526552   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:38.739220   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:39.027014   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:39.240347   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:39.534783   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:39.739002   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:40.032069   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:40.239670   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:40.527751   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:40.742044   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:41.026894   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:41.237898   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:41.526185   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:41.737861   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:42.026935   16686 kapi.go:107] duration metric: took 1m25.505217334s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0920 16:46:42.236807   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:42.738034   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:43.237393   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:43.739267   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:44.237884   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:44.738051   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:45.236733   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:45.737720   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:46.236788   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:46.739281   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:47.237290   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:47.737521   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:48.237326   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:48.737915   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:49.238707   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:49.738314   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:50.237798   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:50.737959   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:51.237197   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:51.737289   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:52.236949   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:52.737530   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:53.237179   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:53.737635   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:54.237901   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:54.737648   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:55.238274   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:55.738085   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:56.237671   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:56.737704   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:57.237419   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:57.737353   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:58.237702   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:58.737197   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:59.237153   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:59.737583   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:00.238191   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:00.737084   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:01.237072   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:01.737245   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:02.237128   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:02.737215   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:03.237530   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:03.737290   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:04.237086   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:04.737817   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:05.237856   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:05.738321   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:06.237429   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:06.737202   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:07.236740   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:07.738137   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:08.237395   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:08.738090   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:09.237251   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:09.847229   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:10.237467   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:10.737639   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:11.237672   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:11.737856   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:12.237892   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:12.737947   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:13.236851   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:13.737127   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:14.236749   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:14.737645   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:15.240515   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:15.737944   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:16.236760   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:16.737628   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:17.237203   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:17.736930   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:18.237666   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:18.737293   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:19.253355   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:19.738180   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:20.239996   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:20.737102   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:21.239307   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:21.737634   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:22.237896   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:22.738438   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:23.237672   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:23.737184   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:24.239150   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:24.737464   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:25.237351   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:25.737539   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:26.237905   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:26.737559   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:27.237704   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:27.738056   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:28.237766   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:28.737159   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:29.237477   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:29.737337   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:30.238578   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:30.737543   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:31.237419   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:31.737583   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:32.237893   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:32.737619   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:33.237679   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:33.737168   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:34.237268   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:34.737264   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:35.237495   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:35.738039   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:36.238149   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:36.737649   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:37.237524   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:37.737017   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:38.238138   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:38.737568   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:39.237391   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:39.736477   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:40.238059   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:40.738010   16686 kapi.go:107] duration metric: took 2m22.504874191s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0920 16:47:40.740079   16686 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-489802 cluster.
	I0920 16:47:40.741424   16686 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0920 16:47:40.742789   16686 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0920 16:47:40.744449   16686 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, inspektor-gadget, ingress-dns, storage-provisioner, metrics-server, yakd, default-storageclass, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0920 16:47:40.745981   16686 addons.go:510] duration metric: took 2m34.882823136s for enable addons: enabled=[nvidia-device-plugin cloud-spanner inspektor-gadget ingress-dns storage-provisioner metrics-server yakd default-storageclass volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0920 16:47:40.746064   16686 start.go:246] waiting for cluster config update ...
	I0920 16:47:40.746085   16686 start.go:255] writing updated cluster config ...
	I0920 16:47:40.746667   16686 ssh_runner.go:195] Run: rm -f paused
	I0920 16:47:40.832742   16686 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 16:47:40.834777   16686 out.go:177] * Done! kubectl is now configured to use "addons-489802" cluster and "default" namespace by default
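	[editor's note] The gcp-auth messages at 16:47:40 above state that a pod can opt out of the credential mount by carrying a label whose key is `gcp-auth-skip-secret`. As a minimal sketch only (not part of this test run), such a pod object could be built with the Kubernetes client types roughly as follows; the pod name "no-gcp-creds" and the busybox image are placeholders for illustration, and the label value "true" is an assumption (the log only names the key):

	package main

	import (
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	)

	// skipGCPAuthPod builds a pod carrying the label that, per the
	// gcp-auth message above, tells the addon not to mount credentials.
	func skipGCPAuthPod() *corev1.Pod {
		return &corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{
				Name: "no-gcp-creds", // hypothetical name, illustration only
				Labels: map[string]string{
					// key named in the log; "true" is an assumed value
					"gcp-auth-skip-secret": "true",
				},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{
					{Name: "app", Image: "gcr.io/k8s-minikube/busybox"},
				},
			},
		}
	}

	func main() {
		fmt.Println(skipGCPAuthPod().Labels)
	}

	Per the message above, a pod created with that label in the addons-489802 cluster would be left without the mounted GCP credentials; existing pods would need to be recreated (or the addon re-enabled with --refresh) to change behavior.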
	
	
	==> CRI-O <==
	Sep 20 16:59:03 addons-489802 crio[664]: time="2024-09-20 16:59:03.850092127Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726851543850058657,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559237,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=052cee7b-a505-44b1-8502-cc064e91cda7 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 16:59:03 addons-489802 crio[664]: time="2024-09-20 16:59:03.850774696Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1600841a-21d1-4255-875d-852d5f1a8661 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 16:59:03 addons-489802 crio[664]: time="2024-09-20 16:59:03.850836017Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1600841a-21d1-4255-875d-852d5f1a8661 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 16:59:03 addons-489802 crio[664]: time="2024-09-20 16:59:03.851137878Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9e143aa05bf909cb432c7c457640fb3b37fe1ea1e59b7f3bd1b4e187f5d0306b,PodSandboxId:98723fa66f58f86ebafded473a3572b8e63a415dd3812cbd4bab3b75153880fc,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1726851536437119830,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-fcflm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e43058a4-2b7d-46c4-874f-da5ebfcc43a0,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3b98df31c510e2b7d8467304c108770bfabad7ebb2494f12313d8f912b2482c,PodSandboxId:ddccd18e28f19bcd554a80347c0802f4ddf6d7bad08d4b2ac6f27eb3e102b20d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726851396918964948,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4c34572d-1118-4bb3-8265-b67b3104bc59,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c1fd10705c644580fdef2fc3075d7c9349c8e4b44d3899910dd41e40c87e2ce,PodSandboxId:66f4ad3477a6cc6a00655cc193d28db01097870bd3585db50c33f9e7cc96f8cf,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726850860245098258,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-wzvr2,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: 9688d654-b2f1-4e67-b21f-737c57cb6d4f,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5e85742448a79b5c1857677ad2a134c6852e9495035cbf9c25e3a7521dd6bb2,PodSandboxId:61840f6d138dd81b5b65efdfcdb4db6fc37465b1ee033b0bee2142714f07f4ae,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726850777385583505,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-b6mtt,i
o.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: fd711ef0-0010-45af-a950-49c84a55c942,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a9b75a453cd62d16bb90bc10f20dd616029cfd1dbb3300fdd9d3b272d5c1367,PodSandboxId:85dbe34d0b929d7356ea58dd7954b02069f214007674d69cc2313ed32dff2fc1,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726850776662845564,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name:
ingress-nginx-admission-create-h7lw7,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 52fba05c-46c5-4916-b5e4-386dadb0ae61,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0690e87ddb4f9357eefd739e00c9cec1ec022eda1379279b535ba4678c33b26,PodSandboxId:36aedadeb2582fe5df950ad8776d82e03104ab51a608a29ba00e9113b19e678e,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1726850772239877059,Labels:map[string]string{io.kubernetes.co
ntainer.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-rhmqb,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: e5f1d3f8-1767-4ad2-b5b8-eb5bf18bc163,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a0d036505e72ed6b62c09226aa4d219c30e6e162e73ebffc595f568b216931d,PodSandboxId:1ae7bada2f668e2292fc48d3426bfa34e41215d4336864f62a4f90b4ee95709f,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,Creat
edAt:1726850737987301783,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-txlrn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6d2625e-ba6e-44e1-b245-0edc2adaa243,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a981c68e927108571692d174ebc0cf47e600882543d6dd401c23cbcd805d49d,PodSandboxId:11b2a45f795d401fe4c78cf74478d3d702eff22fef4bdd814d8198ee5072d604,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6
e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726850713649115674,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e04b7e0-a0fe-4e65-9ba5-63be2690da1d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70c74f4f1e0bde75fc553a034aa664a515c218d2b72725850921f92314b6ec06,PodSandboxId:cfda686abf7f1fef69de6a34f633f44ac3c87637d6ec92d05dc4a45a4d5652b1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e
956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726850710125894103,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nqbzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 734f1782-975a-486b-adf3-32f60c376a9a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c60a90d5ed294c1a5015ea6f6b5c5259e8d437a6b5dd0f9dd758bb62d91c7b7,PodSandboxId:b53a284c395cfb6bdea6622b664327da6733b43d0375a7570cfa3dac443563e5,Metadata:&ContainerMetadata{Name:kube-proxy,Attemp
t:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726850707153952752,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xr4bt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a20cb9e-3e82-4bda-9529-7e024f9681a4,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44c347dc4cb2326d9ce7eef959abf86dcaee69ecf824e59fbe44600500e8a0f4,PodSandboxId:0ccdde3d3e8e30fec62b1f315de346cf5989b81e93276bfcf9792ae014efb9d5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSp
ec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726850695786767603,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-489802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8db84d1368c7024c014f2f2f0d973aae,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79fb233450407c6fdf879eb55124a5840bf49aaa572c10f7add06512d38df264,PodSandboxId:b3c515c903cd8c54cc3829530f8702fa82f07287a4bcae50433ffb0e6100c34b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageS
pec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726850695761844946,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-489802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de814a9694fb61ae23ac46f9b9deb6e7,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ebda0675cfbe9e7b3e6c1ca40351339db78cf3954608a12cc779850ee452a23,PodSandboxId:ce3e5a61bc6e6a8044b701e61a79b033d814fb58851347acc4b4eaab63045047,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d2
6915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726850695741523021,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-489802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 016cfe34770e4cbd59f73407149e44ff,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53631bbb5fc199153283953dffaf83c3d2a2b4cdbda98ab81770b42af5dfe30e,PodSandboxId:c9a4930506bbb11794aa02ab9a68cfe8370b91453dd7ab2cce5eac61a155cacf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002
289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726850695699198150,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-489802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50faea81a2001503e00d2a0be1ceba9e,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1600841a-21d1-4255-875d-852d5f1a8661 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 16:59:03 addons-489802 crio[664]: time="2024-09-20 16:59:03.890817584Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a809486c-354d-4c5e-a39d-a8d31b408a8a name=/runtime.v1.RuntimeService/Version
	Sep 20 16:59:03 addons-489802 crio[664]: time="2024-09-20 16:59:03.890914381Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a809486c-354d-4c5e-a39d-a8d31b408a8a name=/runtime.v1.RuntimeService/Version
	Sep 20 16:59:03 addons-489802 crio[664]: time="2024-09-20 16:59:03.892232381Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a8fcee92-25ff-4fc1-b293-6c0566f57bbd name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 16:59:03 addons-489802 crio[664]: time="2024-09-20 16:59:03.893449308Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726851543893416205,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559237,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a8fcee92-25ff-4fc1-b293-6c0566f57bbd name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 16:59:03 addons-489802 crio[664]: time="2024-09-20 16:59:03.893948002Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=08d41411-c009-463c-999b-7952cf715fae name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 16:59:03 addons-489802 crio[664]: time="2024-09-20 16:59:03.894003014Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=08d41411-c009-463c-999b-7952cf715fae name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 16:59:03 addons-489802 crio[664]: time="2024-09-20 16:59:03.894300003Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9e143aa05bf909cb432c7c457640fb3b37fe1ea1e59b7f3bd1b4e187f5d0306b,PodSandboxId:98723fa66f58f86ebafded473a3572b8e63a415dd3812cbd4bab3b75153880fc,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1726851536437119830,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-fcflm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e43058a4-2b7d-46c4-874f-da5ebfcc43a0,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3b98df31c510e2b7d8467304c108770bfabad7ebb2494f12313d8f912b2482c,PodSandboxId:ddccd18e28f19bcd554a80347c0802f4ddf6d7bad08d4b2ac6f27eb3e102b20d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726851396918964948,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4c34572d-1118-4bb3-8265-b67b3104bc59,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c1fd10705c644580fdef2fc3075d7c9349c8e4b44d3899910dd41e40c87e2ce,PodSandboxId:66f4ad3477a6cc6a00655cc193d28db01097870bd3585db50c33f9e7cc96f8cf,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726850860245098258,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-wzvr2,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: 9688d654-b2f1-4e67-b21f-737c57cb6d4f,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5e85742448a79b5c1857677ad2a134c6852e9495035cbf9c25e3a7521dd6bb2,PodSandboxId:61840f6d138dd81b5b65efdfcdb4db6fc37465b1ee033b0bee2142714f07f4ae,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726850777385583505,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-b6mtt,i
o.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: fd711ef0-0010-45af-a950-49c84a55c942,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a9b75a453cd62d16bb90bc10f20dd616029cfd1dbb3300fdd9d3b272d5c1367,PodSandboxId:85dbe34d0b929d7356ea58dd7954b02069f214007674d69cc2313ed32dff2fc1,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726850776662845564,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name:
ingress-nginx-admission-create-h7lw7,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 52fba05c-46c5-4916-b5e4-386dadb0ae61,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0690e87ddb4f9357eefd739e00c9cec1ec022eda1379279b535ba4678c33b26,PodSandboxId:36aedadeb2582fe5df950ad8776d82e03104ab51a608a29ba00e9113b19e678e,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1726850772239877059,Labels:map[string]string{io.kubernetes.co
ntainer.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-rhmqb,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: e5f1d3f8-1767-4ad2-b5b8-eb5bf18bc163,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a0d036505e72ed6b62c09226aa4d219c30e6e162e73ebffc595f568b216931d,PodSandboxId:1ae7bada2f668e2292fc48d3426bfa34e41215d4336864f62a4f90b4ee95709f,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,Creat
edAt:1726850737987301783,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-txlrn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6d2625e-ba6e-44e1-b245-0edc2adaa243,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a981c68e927108571692d174ebc0cf47e600882543d6dd401c23cbcd805d49d,PodSandboxId:11b2a45f795d401fe4c78cf74478d3d702eff22fef4bdd814d8198ee5072d604,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6
e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726850713649115674,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e04b7e0-a0fe-4e65-9ba5-63be2690da1d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70c74f4f1e0bde75fc553a034aa664a515c218d2b72725850921f92314b6ec06,PodSandboxId:cfda686abf7f1fef69de6a34f633f44ac3c87637d6ec92d05dc4a45a4d5652b1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e
956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726850710125894103,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nqbzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 734f1782-975a-486b-adf3-32f60c376a9a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c60a90d5ed294c1a5015ea6f6b5c5259e8d437a6b5dd0f9dd758bb62d91c7b7,PodSandboxId:b53a284c395cfb6bdea6622b664327da6733b43d0375a7570cfa3dac443563e5,Metadata:&ContainerMetadata{Name:kube-proxy,Attemp
t:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726850707153952752,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xr4bt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a20cb9e-3e82-4bda-9529-7e024f9681a4,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44c347dc4cb2326d9ce7eef959abf86dcaee69ecf824e59fbe44600500e8a0f4,PodSandboxId:0ccdde3d3e8e30fec62b1f315de346cf5989b81e93276bfcf9792ae014efb9d5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSp
ec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726850695786767603,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-489802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8db84d1368c7024c014f2f2f0d973aae,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79fb233450407c6fdf879eb55124a5840bf49aaa572c10f7add06512d38df264,PodSandboxId:b3c515c903cd8c54cc3829530f8702fa82f07287a4bcae50433ffb0e6100c34b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageS
pec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726850695761844946,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-489802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de814a9694fb61ae23ac46f9b9deb6e7,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ebda0675cfbe9e7b3e6c1ca40351339db78cf3954608a12cc779850ee452a23,PodSandboxId:ce3e5a61bc6e6a8044b701e61a79b033d814fb58851347acc4b4eaab63045047,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d2
6915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726850695741523021,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-489802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 016cfe34770e4cbd59f73407149e44ff,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53631bbb5fc199153283953dffaf83c3d2a2b4cdbda98ab81770b42af5dfe30e,PodSandboxId:c9a4930506bbb11794aa02ab9a68cfe8370b91453dd7ab2cce5eac61a155cacf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002
289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726850695699198150,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-489802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50faea81a2001503e00d2a0be1ceba9e,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=08d41411-c009-463c-999b-7952cf715fae name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 16:59:03 addons-489802 crio[664]: time="2024-09-20 16:59:03.929260588Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dccde48f-8f6c-47be-bc9c-4bba59cb24f1 name=/runtime.v1.RuntimeService/Version
	Sep 20 16:59:03 addons-489802 crio[664]: time="2024-09-20 16:59:03.929401241Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dccde48f-8f6c-47be-bc9c-4bba59cb24f1 name=/runtime.v1.RuntimeService/Version
	Sep 20 16:59:03 addons-489802 crio[664]: time="2024-09-20 16:59:03.930901743Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6a3e7f33-c1f5-4086-9bf2-3c14b9b0e000 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 16:59:03 addons-489802 crio[664]: time="2024-09-20 16:59:03.932239994Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726851543932210470,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559237,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6a3e7f33-c1f5-4086-9bf2-3c14b9b0e000 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 16:59:03 addons-489802 crio[664]: time="2024-09-20 16:59:03.932836273Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a71ee3fd-6d6b-4798-88fc-2edce4586217 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 16:59:03 addons-489802 crio[664]: time="2024-09-20 16:59:03.932914898Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a71ee3fd-6d6b-4798-88fc-2edce4586217 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 16:59:03 addons-489802 crio[664]: time="2024-09-20 16:59:03.933261108Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9e143aa05bf909cb432c7c457640fb3b37fe1ea1e59b7f3bd1b4e187f5d0306b,PodSandboxId:98723fa66f58f86ebafded473a3572b8e63a415dd3812cbd4bab3b75153880fc,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1726851536437119830,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-fcflm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e43058a4-2b7d-46c4-874f-da5ebfcc43a0,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3b98df31c510e2b7d8467304c108770bfabad7ebb2494f12313d8f912b2482c,PodSandboxId:ddccd18e28f19bcd554a80347c0802f4ddf6d7bad08d4b2ac6f27eb3e102b20d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726851396918964948,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4c34572d-1118-4bb3-8265-b67b3104bc59,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c1fd10705c644580fdef2fc3075d7c9349c8e4b44d3899910dd41e40c87e2ce,PodSandboxId:66f4ad3477a6cc6a00655cc193d28db01097870bd3585db50c33f9e7cc96f8cf,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726850860245098258,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-wzvr2,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: 9688d654-b2f1-4e67-b21f-737c57cb6d4f,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5e85742448a79b5c1857677ad2a134c6852e9495035cbf9c25e3a7521dd6bb2,PodSandboxId:61840f6d138dd81b5b65efdfcdb4db6fc37465b1ee033b0bee2142714f07f4ae,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726850777385583505,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-b6mtt,i
o.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: fd711ef0-0010-45af-a950-49c84a55c942,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a9b75a453cd62d16bb90bc10f20dd616029cfd1dbb3300fdd9d3b272d5c1367,PodSandboxId:85dbe34d0b929d7356ea58dd7954b02069f214007674d69cc2313ed32dff2fc1,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726850776662845564,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name:
ingress-nginx-admission-create-h7lw7,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 52fba05c-46c5-4916-b5e4-386dadb0ae61,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0690e87ddb4f9357eefd739e00c9cec1ec022eda1379279b535ba4678c33b26,PodSandboxId:36aedadeb2582fe5df950ad8776d82e03104ab51a608a29ba00e9113b19e678e,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1726850772239877059,Labels:map[string]string{io.kubernetes.co
ntainer.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-rhmqb,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: e5f1d3f8-1767-4ad2-b5b8-eb5bf18bc163,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a0d036505e72ed6b62c09226aa4d219c30e6e162e73ebffc595f568b216931d,PodSandboxId:1ae7bada2f668e2292fc48d3426bfa34e41215d4336864f62a4f90b4ee95709f,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,Creat
edAt:1726850737987301783,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-txlrn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6d2625e-ba6e-44e1-b245-0edc2adaa243,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a981c68e927108571692d174ebc0cf47e600882543d6dd401c23cbcd805d49d,PodSandboxId:11b2a45f795d401fe4c78cf74478d3d702eff22fef4bdd814d8198ee5072d604,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6
e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726850713649115674,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e04b7e0-a0fe-4e65-9ba5-63be2690da1d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70c74f4f1e0bde75fc553a034aa664a515c218d2b72725850921f92314b6ec06,PodSandboxId:cfda686abf7f1fef69de6a34f633f44ac3c87637d6ec92d05dc4a45a4d5652b1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e
956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726850710125894103,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nqbzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 734f1782-975a-486b-adf3-32f60c376a9a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c60a90d5ed294c1a5015ea6f6b5c5259e8d437a6b5dd0f9dd758bb62d91c7b7,PodSandboxId:b53a284c395cfb6bdea6622b664327da6733b43d0375a7570cfa3dac443563e5,Metadata:&ContainerMetadata{Name:kube-proxy,Attemp
t:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726850707153952752,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xr4bt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a20cb9e-3e82-4bda-9529-7e024f9681a4,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44c347dc4cb2326d9ce7eef959abf86dcaee69ecf824e59fbe44600500e8a0f4,PodSandboxId:0ccdde3d3e8e30fec62b1f315de346cf5989b81e93276bfcf9792ae014efb9d5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSp
ec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726850695786767603,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-489802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8db84d1368c7024c014f2f2f0d973aae,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79fb233450407c6fdf879eb55124a5840bf49aaa572c10f7add06512d38df264,PodSandboxId:b3c515c903cd8c54cc3829530f8702fa82f07287a4bcae50433ffb0e6100c34b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageS
pec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726850695761844946,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-489802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de814a9694fb61ae23ac46f9b9deb6e7,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ebda0675cfbe9e7b3e6c1ca40351339db78cf3954608a12cc779850ee452a23,PodSandboxId:ce3e5a61bc6e6a8044b701e61a79b033d814fb58851347acc4b4eaab63045047,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d2
6915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726850695741523021,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-489802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 016cfe34770e4cbd59f73407149e44ff,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53631bbb5fc199153283953dffaf83c3d2a2b4cdbda98ab81770b42af5dfe30e,PodSandboxId:c9a4930506bbb11794aa02ab9a68cfe8370b91453dd7ab2cce5eac61a155cacf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002
289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726850695699198150,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-489802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50faea81a2001503e00d2a0be1ceba9e,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a71ee3fd-6d6b-4798-88fc-2edce4586217 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 16:59:03 addons-489802 crio[664]: time="2024-09-20 16:59:03.973545575Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a3961c4b-83a1-42f8-833c-39e467c7c910 name=/runtime.v1.RuntimeService/Version
	Sep 20 16:59:03 addons-489802 crio[664]: time="2024-09-20 16:59:03.973634627Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a3961c4b-83a1-42f8-833c-39e467c7c910 name=/runtime.v1.RuntimeService/Version
	Sep 20 16:59:03 addons-489802 crio[664]: time="2024-09-20 16:59:03.974823584Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1571a176-67f7-4c7e-a826-ec839a120120 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 16:59:03 addons-489802 crio[664]: time="2024-09-20 16:59:03.976180900Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726851543976147658,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559237,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1571a176-67f7-4c7e-a826-ec839a120120 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 16:59:03 addons-489802 crio[664]: time="2024-09-20 16:59:03.976785452Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=05d82587-5012-45a7-8071-a8b02fc387b0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 16:59:03 addons-489802 crio[664]: time="2024-09-20 16:59:03.976854897Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=05d82587-5012-45a7-8071-a8b02fc387b0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 16:59:03 addons-489802 crio[664]: time="2024-09-20 16:59:03.977186910Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9e143aa05bf909cb432c7c457640fb3b37fe1ea1e59b7f3bd1b4e187f5d0306b,PodSandboxId:98723fa66f58f86ebafded473a3572b8e63a415dd3812cbd4bab3b75153880fc,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1726851536437119830,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-fcflm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e43058a4-2b7d-46c4-874f-da5ebfcc43a0,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3b98df31c510e2b7d8467304c108770bfabad7ebb2494f12313d8f912b2482c,PodSandboxId:ddccd18e28f19bcd554a80347c0802f4ddf6d7bad08d4b2ac6f27eb3e102b20d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726851396918964948,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4c34572d-1118-4bb3-8265-b67b3104bc59,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c1fd10705c644580fdef2fc3075d7c9349c8e4b44d3899910dd41e40c87e2ce,PodSandboxId:66f4ad3477a6cc6a00655cc193d28db01097870bd3585db50c33f9e7cc96f8cf,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726850860245098258,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-wzvr2,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: 9688d654-b2f1-4e67-b21f-737c57cb6d4f,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5e85742448a79b5c1857677ad2a134c6852e9495035cbf9c25e3a7521dd6bb2,PodSandboxId:61840f6d138dd81b5b65efdfcdb4db6fc37465b1ee033b0bee2142714f07f4ae,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726850777385583505,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-b6mtt,i
o.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: fd711ef0-0010-45af-a950-49c84a55c942,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a9b75a453cd62d16bb90bc10f20dd616029cfd1dbb3300fdd9d3b272d5c1367,PodSandboxId:85dbe34d0b929d7356ea58dd7954b02069f214007674d69cc2313ed32dff2fc1,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726850776662845564,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name:
ingress-nginx-admission-create-h7lw7,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 52fba05c-46c5-4916-b5e4-386dadb0ae61,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0690e87ddb4f9357eefd739e00c9cec1ec022eda1379279b535ba4678c33b26,PodSandboxId:36aedadeb2582fe5df950ad8776d82e03104ab51a608a29ba00e9113b19e678e,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1726850772239877059,Labels:map[string]string{io.kubernetes.co
ntainer.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-rhmqb,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: e5f1d3f8-1767-4ad2-b5b8-eb5bf18bc163,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a0d036505e72ed6b62c09226aa4d219c30e6e162e73ebffc595f568b216931d,PodSandboxId:1ae7bada2f668e2292fc48d3426bfa34e41215d4336864f62a4f90b4ee95709f,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,Creat
edAt:1726850737987301783,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-txlrn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6d2625e-ba6e-44e1-b245-0edc2adaa243,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a981c68e927108571692d174ebc0cf47e600882543d6dd401c23cbcd805d49d,PodSandboxId:11b2a45f795d401fe4c78cf74478d3d702eff22fef4bdd814d8198ee5072d604,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6
e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726850713649115674,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e04b7e0-a0fe-4e65-9ba5-63be2690da1d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70c74f4f1e0bde75fc553a034aa664a515c218d2b72725850921f92314b6ec06,PodSandboxId:cfda686abf7f1fef69de6a34f633f44ac3c87637d6ec92d05dc4a45a4d5652b1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e
956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726850710125894103,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nqbzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 734f1782-975a-486b-adf3-32f60c376a9a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c60a90d5ed294c1a5015ea6f6b5c5259e8d437a6b5dd0f9dd758bb62d91c7b7,PodSandboxId:b53a284c395cfb6bdea6622b664327da6733b43d0375a7570cfa3dac443563e5,Metadata:&ContainerMetadata{Name:kube-proxy,Attemp
t:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726850707153952752,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xr4bt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a20cb9e-3e82-4bda-9529-7e024f9681a4,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44c347dc4cb2326d9ce7eef959abf86dcaee69ecf824e59fbe44600500e8a0f4,PodSandboxId:0ccdde3d3e8e30fec62b1f315de346cf5989b81e93276bfcf9792ae014efb9d5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSp
ec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726850695786767603,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-489802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8db84d1368c7024c014f2f2f0d973aae,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79fb233450407c6fdf879eb55124a5840bf49aaa572c10f7add06512d38df264,PodSandboxId:b3c515c903cd8c54cc3829530f8702fa82f07287a4bcae50433ffb0e6100c34b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageS
pec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726850695761844946,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-489802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de814a9694fb61ae23ac46f9b9deb6e7,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ebda0675cfbe9e7b3e6c1ca40351339db78cf3954608a12cc779850ee452a23,PodSandboxId:ce3e5a61bc6e6a8044b701e61a79b033d814fb58851347acc4b4eaab63045047,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d2
6915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726850695741523021,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-489802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 016cfe34770e4cbd59f73407149e44ff,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53631bbb5fc199153283953dffaf83c3d2a2b4cdbda98ab81770b42af5dfe30e,PodSandboxId:c9a4930506bbb11794aa02ab9a68cfe8370b91453dd7ab2cce5eac61a155cacf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002
289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726850695699198150,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-489802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50faea81a2001503e00d2a0be1ceba9e,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=05d82587-5012-45a7-8071-a8b02fc387b0 name=/runtime.v1.RuntimeService/ListContainers
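	The ListContainers request/response pairs captured above are ordinary CRI gRPC calls, logged by crio's otel-collector interceptors against the runtime socket named in the node annotations (unix:///var/run/crio/crio.sock). For reference, the following is a minimal Go sketch of issuing the same call with an empty filter (matching the "No filters were applied" debug lines); it assumes the k8s.io/cri-api v1 client and grpc-go are available and is not part of the test suite.

	// list_containers.go — illustrative sketch only, not project code.
	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// Dial the CRI runtime endpoint over the crio unix socket.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()

		client := runtimeapi.NewRuntimeServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// An empty filter returns the full container list, as in the log above.
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{
			Filter: &runtimeapi.ContainerFilter{},
		})
		if err != nil {
			log.Fatal(err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%s\t%s\t%s\n", c.Id[:13], c.Metadata.Name, c.State)
		}
	}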
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	9e143aa05bf90       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        7 seconds ago       Running             hello-world-app           0                   98723fa66f58f       hello-world-app-55bf9c44b4-fcflm
	b3b98df31c510       docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56                              2 minutes ago       Running             nginx                     0                   ddccd18e28f19       nginx
	1c1fd10705c64       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                 11 minutes ago      Running             gcp-auth                  0                   66f4ad3477a6c       gcp-auth-89d5ffd79-wzvr2
	a5e85742448a7       ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242                                                             12 minutes ago      Exited              patch                     1                   61840f6d138dd       ingress-nginx-admission-patch-b6mtt
	5a9b75a453cd6       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012   12 minutes ago      Exited              create                    0                   85dbe34d0b929       ingress-nginx-admission-create-h7lw7
	b0690e87ddb4f       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             12 minutes ago      Running             local-path-provisioner    0                   36aedadeb2582       local-path-provisioner-86d989889c-rhmqb
	3a0d036505e72       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a        13 minutes ago      Running             metrics-server            0                   1ae7bada2f668       metrics-server-84c5f94fbc-txlrn
	5a981c68e9271       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             13 minutes ago      Running             storage-provisioner       0                   11b2a45f795d4       storage-provisioner
	70c74f4f1e0bd       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             13 minutes ago      Running             coredns                   0                   cfda686abf7f1       coredns-7c65d6cfc9-nqbzq
	7c60a90d5ed29       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                             13 minutes ago      Running             kube-proxy                0                   b53a284c395cf       kube-proxy-xr4bt
	44c347dc4cb23       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                             14 minutes ago      Running             kube-controller-manager   0                   0ccdde3d3e8e3       kube-controller-manager-addons-489802
	79fb233450407       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                             14 minutes ago      Running             kube-apiserver            0                   b3c515c903cd8       kube-apiserver-addons-489802
	5ebda0675cfbe       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                             14 minutes ago      Running             etcd                      0                   ce3e5a61bc6e6       etcd-addons-489802
	53631bbb5fc19       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                             14 minutes ago      Running             kube-scheduler            0                   c9a4930506bbb       kube-scheduler-addons-489802
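	The CREATED column in the table above is derived from the nanosecond CreatedAt values in the raw ListContainers responses. As a quick sanity check (a standalone sketch, not test code), the kube-proxy timestamp from the log converts as follows; the snapshot time is taken from the 16:59:03 log entries.

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		// CreatedAt for kube-proxy-xr4bt from the ListContainers response above,
		// in Unix nanoseconds.
		created := time.Unix(0, 1726850707153952752)

		// The crio log snapshot above was taken around 2024-09-20 16:59:03 UTC.
		snapshot := time.Date(2024, 9, 20, 16, 59, 3, 0, time.UTC)

		fmt.Println(created.UTC())                                            // 2024-09-20 16:45:07 UTC
		fmt.Printf("%s ago\n", snapshot.Sub(created).Truncate(time.Minute)) // 13m0s ago, matching "13 minutes ago"
	}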
	
	
	==> coredns [70c74f4f1e0bde75fc553a034aa664a515c218d2b72725850921f92314b6ec06] <==
	[INFO] 127.0.0.1:51784 - 8829 "HINFO IN 5160120906343044549.4812313304468353436. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012102619s
	[INFO] 10.244.0.7:49904 - 44683 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000739291s
	[INFO] 10.244.0.7:49904 - 13446 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000838879s
	[INFO] 10.244.0.7:37182 - 17696 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000137198s
	[INFO] 10.244.0.7:37182 - 29725 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000120771s
	[INFO] 10.244.0.7:40785 - 12767 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00012186s
	[INFO] 10.244.0.7:40785 - 24273 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000223065s
	[INFO] 10.244.0.7:54049 - 5032 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000122634s
	[INFO] 10.244.0.7:54049 - 51625 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000075286s
	[INFO] 10.244.0.7:57416 - 8811 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000080693s
	[INFO] 10.244.0.7:57416 - 56406 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000038363s
	[INFO] 10.244.0.7:59797 - 29819 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000040968s
	[INFO] 10.244.0.7:59797 - 16249 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000038791s
	[INFO] 10.244.0.7:39368 - 3897 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000045812s
	[INFO] 10.244.0.7:39368 - 53818 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000034439s
	[INFO] 10.244.0.7:57499 - 43541 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000049958s
	[INFO] 10.244.0.7:57499 - 15379 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000036533s
	[INFO] 10.244.0.21:51858 - 31367 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000847603s
	[INFO] 10.244.0.21:33579 - 64948 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000139841s
	[INFO] 10.244.0.21:48527 - 40604 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000280976s
	[INFO] 10.244.0.21:52717 - 13930 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000169344s
	[INFO] 10.244.0.21:58755 - 3796 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000147676s
	[INFO] 10.244.0.21:51813 - 12818 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000082135s
	[INFO] 10.244.0.21:51795 - 17985 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.004530788s
	[INFO] 10.244.0.21:47998 - 23926 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002659458s
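	The runs of NXDOMAIN answers above are normal resolv.conf search-path expansion: a relative lookup such as registry.kube-system.svc.cluster.local is first tried against each search suffix before the absolute name resolves with NOERROR. A small illustrative sketch follows; the search list is inferred from the query names in the log, not read from a pod.

	package main

	import "fmt"

	func main() {
		// Search domains inferred from the coredns queries above.
		search := []string{
			"kube-system.svc.cluster.local",
			"svc.cluster.local",
			"cluster.local",
		}
		name := "registry.kube-system.svc.cluster.local"

		// With ndots:5 (the kubelet default for cluster-first DNS), a name with
		// fewer than five dots is expanded through every search suffix first;
		// each expansion fails with NXDOMAIN before the bare name succeeds.
		for _, s := range search {
			fmt.Printf("%s.%s -> NXDOMAIN\n", name, s)
		}
		fmt.Printf("%s -> NOERROR\n", name)
	}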
	
	
	==> describe nodes <==
	Name:               addons-489802
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-489802
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0626f22cf0d915d75e291a5bce701f94395056e1
	                    minikube.k8s.io/name=addons-489802
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_20T16_45_01_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-489802
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 16:44:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-489802
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 16:58:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 16:57:14 +0000   Fri, 20 Sep 2024 16:44:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 16:57:14 +0000   Fri, 20 Sep 2024 16:44:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 16:57:14 +0000   Fri, 20 Sep 2024 16:44:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 16:57:14 +0000   Fri, 20 Sep 2024 16:45:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.89
	  Hostname:    addons-489802
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 fd813db21ac84502aef251a6893e0027
	  System UUID:                fd813db2-1ac8-4502-aef2-51a6893e0027
	  Boot ID:                    ed0a3698-272d-483a-ba56-acac4def529a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  default                     hello-world-app-55bf9c44b4-fcflm           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m35s
	  gcp-auth                    gcp-auth-89d5ffd79-wzvr2                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-7c65d6cfc9-nqbzq                   100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     13m
	  kube-system                 etcd-addons-489802                         100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         14m
	  kube-system                 kube-apiserver-addons-489802               250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-addons-489802      200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-xr4bt                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-addons-489802               100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 metrics-server-84c5f94fbc-txlrn            100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         13m
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  local-path-storage          local-path-provisioner-86d989889c-rhmqb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  Starting                 14m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m (x2 over 14m)  kubelet          Node addons-489802 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x2 over 14m)  kubelet          Node addons-489802 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x2 over 14m)  kubelet          Node addons-489802 status is now: NodeHasSufficientPID
	  Normal  NodeReady                14m                kubelet          Node addons-489802 status is now: NodeReady
	  Normal  RegisteredNode           13m                node-controller  Node addons-489802 event: Registered Node addons-489802 in Controller
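	The Allocated resources percentages above follow directly from the node's allocatable figures: 850m of CPU requests against 2 CPUs and 370Mi of memory requests against 3912780Ki. A quick check of that arithmetic (a standalone sketch, using integer division to match the truncated percentages printed above):

	package main

	import "fmt"

	func main() {
		// Allocatable, from the node description above.
		cpuMilli := int64(2000) // 2 CPUs expressed in millicores
		memKi := int64(3912780) // 3912780Ki

		// Summed requests across the 13 non-terminated pods.
		cpuReqMilli := int64(850)     // 850m
		memReqKi := int64(370) * 1024 // 370Mi in Ki

		fmt.Printf("cpu:    %d%%\n", cpuReqMilli*100/cpuMilli) // 42%
		fmt.Printf("memory: %d%%\n", memReqKi*100/memKi)       // 9%
	}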
	
	
	==> dmesg <==
	[ +10.203071] kauditd_printk_skb: 70 callbacks suppressed
	[ +17.983286] kauditd_printk_skb: 2 callbacks suppressed
	[Sep20 16:46] kauditd_printk_skb: 4 callbacks suppressed
	[ +13.042505] kauditd_printk_skb: 42 callbacks suppressed
	[  +6.124032] kauditd_printk_skb: 45 callbacks suppressed
	[ +10.494816] kauditd_printk_skb: 64 callbacks suppressed
	[  +5.981422] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.234675] kauditd_printk_skb: 34 callbacks suppressed
	[Sep20 16:47] kauditd_printk_skb: 28 callbacks suppressed
	[  +7.543099] kauditd_printk_skb: 9 callbacks suppressed
	[Sep20 16:48] kauditd_printk_skb: 2 callbacks suppressed
	[Sep20 16:49] kauditd_printk_skb: 28 callbacks suppressed
	[Sep20 16:52] kauditd_printk_skb: 28 callbacks suppressed
	[Sep20 16:55] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.170883] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.280461] kauditd_printk_skb: 17 callbacks suppressed
	[Sep20 16:56] kauditd_printk_skb: 24 callbacks suppressed
	[ +10.067719] kauditd_printk_skb: 13 callbacks suppressed
	[  +5.043461] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.256575] kauditd_printk_skb: 14 callbacks suppressed
	[  +9.179843] kauditd_printk_skb: 27 callbacks suppressed
	[ +15.573697] kauditd_printk_skb: 7 callbacks suppressed
	[Sep20 16:57] kauditd_printk_skb: 61 callbacks suppressed
	[Sep20 16:58] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.225825] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [5ebda0675cfbe9e7b3e6c1ca40351339db78cf3954608a12cc779850ee452a23] <==
	{"level":"warn","ts":"2024-09-20T16:46:31.521799Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"298.668395ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T16:46:31.522054Z","caller":"traceutil/trace.go:171","msg":"trace[655563733] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1072; }","duration":"298.968755ms","start":"2024-09-20T16:46:31.223072Z","end":"2024-09-20T16:46:31.522041Z","steps":["trace[655563733] 'agreement among raft nodes before linearized reading'  (duration: 298.302775ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T16:46:31.522572Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"285.514745ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T16:46:31.522662Z","caller":"traceutil/trace.go:171","msg":"trace[397127513] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1072; }","duration":"285.60775ms","start":"2024-09-20T16:46:31.237046Z","end":"2024-09-20T16:46:31.522653Z","steps":["trace[397127513] 'agreement among raft nodes before linearized reading'  (duration: 285.506056ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T16:46:31.523097Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"389.094744ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T16:46:31.521069Z","caller":"traceutil/trace.go:171","msg":"trace[1366548052] transaction","detail":"{read_only:false; response_revision:1072; number_of_response:1; }","duration":"451.994343ms","start":"2024-09-20T16:46:31.069059Z","end":"2024-09-20T16:46:31.521053Z","steps":["trace[1366548052] 'process raft request'  (duration: 450.539479ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T16:46:31.523185Z","caller":"traceutil/trace.go:171","msg":"trace[1958014936] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1072; }","duration":"389.189661ms","start":"2024-09-20T16:46:31.133988Z","end":"2024-09-20T16:46:31.523178Z","steps":["trace[1958014936] 'agreement among raft nodes before linearized reading'  (duration: 388.742689ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T16:46:31.523315Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-20T16:46:31.133949Z","time spent":"389.346336ms","remote":"127.0.0.1:44644","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2024-09-20T16:46:31.523518Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-20T16:46:31.069043Z","time spent":"454.199637ms","remote":"127.0.0.1:44626","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1066 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-09-20T16:46:34.697548Z","caller":"traceutil/trace.go:171","msg":"trace[1773063632] transaction","detail":"{read_only:false; response_revision:1087; number_of_response:1; }","duration":"138.671352ms","start":"2024-09-20T16:46:34.558854Z","end":"2024-09-20T16:46:34.697526Z","steps":["trace[1773063632] 'process raft request'  (duration: 138.455302ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T16:47:09.828412Z","caller":"traceutil/trace.go:171","msg":"trace[1350480991] linearizableReadLoop","detail":"{readStateIndex:1234; appliedIndex:1233; }","duration":"107.953401ms","start":"2024-09-20T16:47:09.720376Z","end":"2024-09-20T16:47:09.828329Z","steps":["trace[1350480991] 'read index received'  (duration: 107.782449ms)","trace[1350480991] 'applied index is now lower than readState.Index'  (duration: 170.357µs)"],"step_count":2}
	{"level":"info","ts":"2024-09-20T16:47:09.828591Z","caller":"traceutil/trace.go:171","msg":"trace[1677279500] transaction","detail":"{read_only:false; response_revision:1192; number_of_response:1; }","duration":"108.710691ms","start":"2024-09-20T16:47:09.719867Z","end":"2024-09-20T16:47:09.828578Z","steps":["trace[1677279500] 'process raft request'  (duration: 108.343763ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T16:47:09.828834Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"108.468877ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T16:47:09.828877Z","caller":"traceutil/trace.go:171","msg":"trace[823583891] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1192; }","duration":"108.573167ms","start":"2024-09-20T16:47:09.720295Z","end":"2024-09-20T16:47:09.828868Z","steps":["trace[823583891] 'agreement among raft nodes before linearized reading'  (duration: 108.427543ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T16:54:56.686206Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1494}
	{"level":"info","ts":"2024-09-20T16:54:56.732913Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1494,"took":"45.95642ms","hash":3143060453,"current-db-size-bytes":6316032,"current-db-size":"6.3 MB","current-db-size-in-use-bytes":3231744,"current-db-size-in-use":"3.2 MB"}
	{"level":"info","ts":"2024-09-20T16:54:56.733061Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3143060453,"revision":1494,"compact-revision":-1}
	{"level":"info","ts":"2024-09-20T16:55:52.021318Z","caller":"traceutil/trace.go:171","msg":"trace[2100115174] transaction","detail":"{read_only:false; response_revision:2018; number_of_response:1; }","duration":"379.66185ms","start":"2024-09-20T16:55:51.641590Z","end":"2024-09-20T16:55:52.021252Z","steps":["trace[2100115174] 'process raft request'  (duration: 379.545504ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T16:55:52.021786Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-20T16:55:51.641574Z","time spent":"380.006071ms","remote":"127.0.0.1:44742","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":538,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" mod_revision:1986 > success:<request_put:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" value_size:451 >> failure:<request_range:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" > >"}
	{"level":"info","ts":"2024-09-20T16:55:52.022293Z","caller":"traceutil/trace.go:171","msg":"trace[35214985] linearizableReadLoop","detail":"{readStateIndex:2175; appliedIndex:2174; }","duration":"196.804789ms","start":"2024-09-20T16:55:51.825473Z","end":"2024-09-20T16:55:52.022278Z","steps":["trace[35214985] 'read index received'  (duration: 196.433504ms)","trace[35214985] 'applied index is now lower than readState.Index'  (duration: 370.887µs)"],"step_count":2}
	{"level":"info","ts":"2024-09-20T16:55:52.022475Z","caller":"traceutil/trace.go:171","msg":"trace[1790896376] transaction","detail":"{read_only:false; response_revision:2019; number_of_response:1; }","duration":"211.987025ms","start":"2024-09-20T16:55:51.810476Z","end":"2024-09-20T16:55:52.022463Z","steps":["trace[1790896376] 'process raft request'  (duration: 211.729812ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T16:55:52.022604Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"197.118957ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T16:55:52.022641Z","caller":"traceutil/trace.go:171","msg":"trace[1794876456] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2019; }","duration":"197.165972ms","start":"2024-09-20T16:55:51.825467Z","end":"2024-09-20T16:55:52.022633Z","steps":["trace[1794876456] 'agreement among raft nodes before linearized reading'  (duration: 197.096047ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T16:56:32.273552Z","caller":"traceutil/trace.go:171","msg":"trace[1806753974] transaction","detail":"{read_only:false; response_revision:2278; number_of_response:1; }","duration":"138.283014ms","start":"2024-09-20T16:56:32.135255Z","end":"2024-09-20T16:56:32.273538Z","steps":["trace[1806753974] 'process raft request'  (duration: 137.851209ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T16:56:36.295953Z","caller":"traceutil/trace.go:171","msg":"trace[1488171244] transaction","detail":"{read_only:false; response_revision:2301; number_of_response:1; }","duration":"162.589325ms","start":"2024-09-20T16:56:36.131622Z","end":"2024-09-20T16:56:36.294211Z","steps":["trace[1488171244] 'process raft request'  (duration: 162.248073ms)"],"step_count":1}
	
	
	==> gcp-auth [1c1fd10705c644580fdef2fc3075d7c9349c8e4b44d3899910dd41e40c87e2ce] <==
	2024/09/20 16:47:43 Ready to write response ...
	2024/09/20 16:47:43 Ready to marshal response ...
	2024/09/20 16:47:43 Ready to write response ...
	2024/09/20 16:55:47 Ready to marshal response ...
	2024/09/20 16:55:47 Ready to write response ...
	2024/09/20 16:55:47 Ready to marshal response ...
	2024/09/20 16:55:47 Ready to write response ...
	2024/09/20 16:55:47 Ready to marshal response ...
	2024/09/20 16:55:47 Ready to write response ...
	2024/09/20 16:55:57 Ready to marshal response ...
	2024/09/20 16:55:57 Ready to write response ...
	2024/09/20 16:56:16 Ready to marshal response ...
	2024/09/20 16:56:16 Ready to write response ...
	2024/09/20 16:56:16 Ready to marshal response ...
	2024/09/20 16:56:16 Ready to write response ...
	2024/09/20 16:56:23 Ready to marshal response ...
	2024/09/20 16:56:23 Ready to write response ...
	2024/09/20 16:56:28 Ready to marshal response ...
	2024/09/20 16:56:28 Ready to write response ...
	2024/09/20 16:56:29 Ready to marshal response ...
	2024/09/20 16:56:29 Ready to write response ...
	2024/09/20 16:56:50 Ready to marshal response ...
	2024/09/20 16:56:50 Ready to write response ...
	2024/09/20 16:58:53 Ready to marshal response ...
	2024/09/20 16:58:53 Ready to write response ...
	
	
	==> kernel <==
	 16:59:04 up 14 min,  0 users,  load average: 0.59, 0.54, 0.43
	Linux addons-489802 5.10.207 #1 SMP Fri Sep 20 03:13:51 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [79fb233450407c6fdf879eb55124a5840bf49aaa572c10f7add06512d38df264] <==
	E0920 16:47:19.399216       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"context canceled\"}: context canceled" logger="UnhandledError"
	E0920 16:47:19.400809       1 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E0920 16:47:19.402902       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E0920 16:47:19.404104       1 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E0920 16:47:19.412494       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="13.458229ms" method="GET" path="/apis/apps/v1/namespaces/yakd-dashboard/replicasets/yakd-dashboard-67d98fc6b" result=null
	I0920 16:55:47.034722       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.110.200.88"}
	I0920 16:56:11.192249       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0920 16:56:12.228711       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0920 16:56:29.568621       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0920 16:56:29.873321       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.104.88.195"}
	I0920 16:56:40.306913       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0920 16:57:06.651926       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 16:57:06.652138       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0920 16:57:06.689330       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 16:57:06.689633       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0920 16:57:06.726410       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 16:57:06.732004       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0920 16:57:06.763090       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 16:57:06.763215       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0920 16:57:06.917264       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 16:57:06.917824       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0920 16:57:07.763897       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0920 16:57:07.917564       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0920 16:57:07.952201       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I0920 16:58:53.706479       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.96.236.222"}
	
	
	==> kube-controller-manager [44c347dc4cb2326d9ce7eef959abf86dcaee69ecf824e59fbe44600500e8a0f4] <==
	E0920 16:57:39.898830       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 16:57:41.340321       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 16:57:41.340427       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 16:57:54.425049       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 16:57:54.425316       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 16:58:06.126036       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 16:58:06.126129       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 16:58:15.621699       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 16:58:15.621777       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 16:58:32.194556       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 16:58:32.194738       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 16:58:34.534915       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 16:58:34.535028       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 16:58:46.734473       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 16:58:46.734604       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 16:58:49.813801       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 16:58:49.813964       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0920 16:58:53.541798       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="56.050645ms"
	I0920 16:58:53.554785       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="12.879914ms"
	I0920 16:58:53.555691       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="37.516µs"
	I0920 16:58:55.890225       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" delay="0s"
	I0920 16:58:55.896739       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="391.822µs"
	I0920 16:58:55.899067       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="0s"
	I0920 16:58:57.242030       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="9.416938ms"
	I0920 16:58:57.243267       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="60.155µs"
	
	
	==> kube-proxy [7c60a90d5ed294c1a5015ea6f6b5c5259e8d437a6b5dd0f9dd758bb62d91c7b7] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0920 16:45:07.927443       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0920 16:45:07.961049       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.89"]
	E0920 16:45:07.961134       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 16:45:08.130722       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0920 16:45:08.130762       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0920 16:45:08.130790       1 server_linux.go:169] "Using iptables Proxier"
	I0920 16:45:08.135726       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 16:45:08.136036       1 server.go:483] "Version info" version="v1.31.1"
	I0920 16:45:08.136059       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 16:45:08.137263       1 config.go:199] "Starting service config controller"
	I0920 16:45:08.137318       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 16:45:08.137400       1 config.go:105] "Starting endpoint slice config controller"
	I0920 16:45:08.137405       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 16:45:08.137933       1 config.go:328] "Starting node config controller"
	I0920 16:45:08.137953       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 16:45:08.237708       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0920 16:45:08.237750       1 shared_informer.go:320] Caches are synced for service config
	I0920 16:45:08.239006       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [53631bbb5fc199153283953dffaf83c3d2a2b4cdbda98ab81770b42af5dfe30e] <==
	W0920 16:44:58.228924       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0920 16:44:58.230396       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 16:44:58.228968       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0920 16:44:58.230429       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 16:44:59.045447       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0920 16:44:59.045496       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 16:44:59.126233       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0920 16:44:59.126435       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 16:44:59.147240       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0920 16:44:59.147292       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 16:44:59.277135       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0920 16:44:59.278460       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 16:44:59.296223       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0920 16:44:59.296273       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0920 16:44:59.348771       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0920 16:44:59.348828       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 16:44:59.368238       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0920 16:44:59.368290       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 16:44:59.411207       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0920 16:44:59.411256       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 16:44:59.475030       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0920 16:44:59.475087       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 16:44:59.605643       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0920 16:44:59.605806       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0920 16:45:02.104787       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 20 16:58:56 addons-489802 kubelet[1210]: I0920 16:58:56.884691    1210 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f722d5e-9dee-4b0e-8661-9c4181ea4f9b" path="/var/lib/kubelet/pods/1f722d5e-9dee-4b0e-8661-9c4181ea4f9b/volumes"
	Sep 20 16:58:56 addons-489802 kubelet[1210]: I0920 16:58:56.885309    1210 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52fba05c-46c5-4916-b5e4-386dadb0ae61" path="/var/lib/kubelet/pods/52fba05c-46c5-4916-b5e4-386dadb0ae61/volumes"
	Sep 20 16:58:56 addons-489802 kubelet[1210]: I0920 16:58:56.885841    1210 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd711ef0-0010-45af-a950-49c84a55c942" path="/var/lib/kubelet/pods/fd711ef0-0010-45af-a950-49c84a55c942/volumes"
	Sep 20 16:58:57 addons-489802 kubelet[1210]: I0920 16:58:57.231739    1210 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-55bf9c44b4-fcflm" podStartSLOduration=1.947285524 podStartE2EDuration="4.231701512s" podCreationTimestamp="2024-09-20 16:58:53 +0000 UTC" firstStartedPulling="2024-09-20 16:58:54.137666298 +0000 UTC m=+833.397844370" lastFinishedPulling="2024-09-20 16:58:56.422082297 +0000 UTC m=+835.682260358" observedRunningTime="2024-09-20 16:58:57.23074269 +0000 UTC m=+836.490920751" watchObservedRunningTime="2024-09-20 16:58:57.231701512 +0000 UTC m=+836.491879592"
	Sep 20 16:58:59 addons-489802 kubelet[1210]: I0920 16:58:59.113932    1210 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f93f931b-28ea-417f-9956-b9dce76ebe38-webhook-cert\") pod \"f93f931b-28ea-417f-9956-b9dce76ebe38\" (UID: \"f93f931b-28ea-417f-9956-b9dce76ebe38\") "
	Sep 20 16:58:59 addons-489802 kubelet[1210]: I0920 16:58:59.113988    1210 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9zdx\" (UniqueName: \"kubernetes.io/projected/f93f931b-28ea-417f-9956-b9dce76ebe38-kube-api-access-f9zdx\") pod \"f93f931b-28ea-417f-9956-b9dce76ebe38\" (UID: \"f93f931b-28ea-417f-9956-b9dce76ebe38\") "
	Sep 20 16:58:59 addons-489802 kubelet[1210]: I0920 16:58:59.116965    1210 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f93f931b-28ea-417f-9956-b9dce76ebe38-kube-api-access-f9zdx" (OuterVolumeSpecName: "kube-api-access-f9zdx") pod "f93f931b-28ea-417f-9956-b9dce76ebe38" (UID: "f93f931b-28ea-417f-9956-b9dce76ebe38"). InnerVolumeSpecName "kube-api-access-f9zdx". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 20 16:58:59 addons-489802 kubelet[1210]: I0920 16:58:59.117996    1210 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f93f931b-28ea-417f-9956-b9dce76ebe38-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "f93f931b-28ea-417f-9956-b9dce76ebe38" (UID: "f93f931b-28ea-417f-9956-b9dce76ebe38"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 20 16:58:59 addons-489802 kubelet[1210]: I0920 16:58:59.214784    1210 reconciler_common.go:288] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f93f931b-28ea-417f-9956-b9dce76ebe38-webhook-cert\") on node \"addons-489802\" DevicePath \"\""
	Sep 20 16:58:59 addons-489802 kubelet[1210]: I0920 16:58:59.214824    1210 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-f9zdx\" (UniqueName: \"kubernetes.io/projected/f93f931b-28ea-417f-9956-b9dce76ebe38-kube-api-access-f9zdx\") on node \"addons-489802\" DevicePath \"\""
	Sep 20 16:58:59 addons-489802 kubelet[1210]: I0920 16:58:59.230231    1210 scope.go:117] "RemoveContainer" containerID="29c24274c3f958be71bf70e73d568bc6a4bb1bb6c65a5881e3fc34fefcc9fcf2"
	Sep 20 16:58:59 addons-489802 kubelet[1210]: I0920 16:58:59.249228    1210 scope.go:117] "RemoveContainer" containerID="29c24274c3f958be71bf70e73d568bc6a4bb1bb6c65a5881e3fc34fefcc9fcf2"
	Sep 20 16:58:59 addons-489802 kubelet[1210]: E0920 16:58:59.249797    1210 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29c24274c3f958be71bf70e73d568bc6a4bb1bb6c65a5881e3fc34fefcc9fcf2\": container with ID starting with 29c24274c3f958be71bf70e73d568bc6a4bb1bb6c65a5881e3fc34fefcc9fcf2 not found: ID does not exist" containerID="29c24274c3f958be71bf70e73d568bc6a4bb1bb6c65a5881e3fc34fefcc9fcf2"
	Sep 20 16:58:59 addons-489802 kubelet[1210]: I0920 16:58:59.249837    1210 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29c24274c3f958be71bf70e73d568bc6a4bb1bb6c65a5881e3fc34fefcc9fcf2"} err="failed to get container status \"29c24274c3f958be71bf70e73d568bc6a4bb1bb6c65a5881e3fc34fefcc9fcf2\": rpc error: code = NotFound desc = could not find container \"29c24274c3f958be71bf70e73d568bc6a4bb1bb6c65a5881e3fc34fefcc9fcf2\": container with ID starting with 29c24274c3f958be71bf70e73d568bc6a4bb1bb6c65a5881e3fc34fefcc9fcf2 not found: ID does not exist"
	Sep 20 16:59:00 addons-489802 kubelet[1210]: E0920 16:59:00.883797    1210 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="9a99a392-d151-4f13-b9fa-105113d19455"
	Sep 20 16:59:00 addons-489802 kubelet[1210]: I0920 16:59:00.885048    1210 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f93f931b-28ea-417f-9956-b9dce76ebe38" path="/var/lib/kubelet/pods/f93f931b-28ea-417f-9956-b9dce76ebe38/volumes"
	Sep 20 16:59:00 addons-489802 kubelet[1210]: E0920 16:59:00.923128    1210 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 20 16:59:00 addons-489802 kubelet[1210]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 20 16:59:00 addons-489802 kubelet[1210]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 20 16:59:00 addons-489802 kubelet[1210]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 20 16:59:00 addons-489802 kubelet[1210]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 20 16:59:01 addons-489802 kubelet[1210]: E0920 16:59:01.501543    1210 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726851541501167915,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559237,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 16:59:01 addons-489802 kubelet[1210]: E0920 16:59:01.501580    1210 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726851541501167915,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559237,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 16:59:04 addons-489802 kubelet[1210]: I0920 16:59:04.542615    1210 scope.go:117] "RemoveContainer" containerID="a5e85742448a79b5c1857677ad2a134c6852e9495035cbf9c25e3a7521dd6bb2"
	Sep 20 16:59:04 addons-489802 kubelet[1210]: I0920 16:59:04.582400    1210 scope.go:117] "RemoveContainer" containerID="5a9b75a453cd62d16bb90bc10f20dd616029cfd1dbb3300fdd9d3b272d5c1367"
	
	
	==> storage-provisioner [5a981c68e927108571692d174ebc0cf47e600882543d6dd401c23cbcd805d49d] <==
	I0920 16:45:14.933598       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0920 16:45:15.129203       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0920 16:45:15.129288       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0920 16:45:15.469563       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0920 16:45:15.471781       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-489802_9f119035-fb3e-4caa-852b-e718c04f6499!
	I0920 16:45:15.471465       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"47834956-e67b-4561-9f20-a2c3f45edc3a", APIVersion:"v1", ResourceVersion:"708", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-489802_9f119035-fb3e-4caa-852b-e718c04f6499 became leader
	I0920 16:45:15.594691       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-489802_9f119035-fb3e-4caa-852b-e718c04f6499!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-489802 -n addons-489802
helpers_test.go:261: (dbg) Run:  kubectl --context addons-489802 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-489802 describe pod busybox
helpers_test.go:282: (dbg) kubectl --context addons-489802 describe pod busybox:

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-489802/192.168.39.89
	Start Time:       Fri, 20 Sep 2024 16:47:43 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.22
	IPs:
	  IP:  10.244.0.22
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lh4vn (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-lh4vn:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  11m                  default-scheduler  Successfully assigned default/busybox to addons-489802
	  Normal   Pulling    9m49s (x4 over 11m)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     9m48s (x4 over 11m)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     9m48s (x4 over 11m)  kubelet            Error: ErrImagePull
	  Warning  Failed     9m34s (x6 over 11m)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    82s (x41 over 11m)   kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (156.02s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (359.18s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:405: metrics-server stabilized in 10.190473ms
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-txlrn" [b6d2625e-ba6e-44e1-b245-0edc2adaa243] Running
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.005374191s
addons_test.go:413: (dbg) Run:  kubectl --context addons-489802 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-489802 top pods -n kube-system: exit status 1 (78.101576ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-nqbzq, age: 10m52.91430401s

                                                
                                                
** /stderr **
I0920 16:55:58.915803   15973 retry.go:31] will retry after 3.420514123s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-489802 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-489802 top pods -n kube-system: exit status 1 (72.257422ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-nqbzq, age: 10m56.410215623s

                                                
                                                
** /stderr **
I0920 16:56:02.411646   15973 retry.go:31] will retry after 2.849126887s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-489802 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-489802 top pods -n kube-system: exit status 1 (67.937209ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-nqbzq, age: 10m59.327362074s

                                                
                                                
** /stderr **
I0920 16:56:05.329362   15973 retry.go:31] will retry after 10.098435643s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-489802 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-489802 top pods -n kube-system: exit status 1 (67.407422ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-nqbzq, age: 11m9.493894169s

                                                
                                                
** /stderr **
I0920 16:56:15.495587   15973 retry.go:31] will retry after 8.071590018s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-489802 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-489802 top pods -n kube-system: exit status 1 (68.701877ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-nqbzq, age: 11m17.634984164s

                                                
                                                
** /stderr **
I0920 16:56:23.636588   15973 retry.go:31] will retry after 22.04767722s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-489802 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-489802 top pods -n kube-system: exit status 1 (93.444801ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-nqbzq, age: 11m39.776103559s

                                                
                                                
** /stderr **
I0920 16:56:45.778069   15973 retry.go:31] will retry after 31.821331795s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-489802 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-489802 top pods -n kube-system: exit status 1 (68.410331ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-nqbzq, age: 12m11.669767861s

                                                
                                                
** /stderr **
I0920 16:57:17.671817   15973 retry.go:31] will retry after 33.735645652s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-489802 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-489802 top pods -n kube-system: exit status 1 (68.728028ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-nqbzq, age: 12m45.475374324s

                                                
                                                
** /stderr **
I0920 16:57:51.477396   15973 retry.go:31] will retry after 1m5.927901974s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-489802 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-489802 top pods -n kube-system: exit status 1 (64.469572ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-nqbzq, age: 13m51.469269455s

                                                
                                                
** /stderr **
I0920 16:58:57.471004   15973 retry.go:31] will retry after 55.741387514s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-489802 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-489802 top pods -n kube-system: exit status 1 (65.200117ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-nqbzq, age: 14m47.276999974s

                                                
                                                
** /stderr **
I0920 16:59:53.278866   15973 retry.go:31] will retry after 46.892913041s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-489802 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-489802 top pods -n kube-system: exit status 1 (71.579493ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-nqbzq, age: 15m34.242691512s

                                                
                                                
** /stderr **
I0920 17:00:40.244646   15973 retry.go:31] will retry after 1m8.75499726s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-489802 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-489802 top pods -n kube-system: exit status 1 (67.216259ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-nqbzq, age: 16m43.065020698s

                                                
                                                
** /stderr **
addons_test.go:427: failed checking metric server: exit status 1
addons_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p addons-489802 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-489802 -n addons-489802
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-489802 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-489802 logs -n 25: (1.431218427s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-349545                                                                     | download-only-349545 | jenkins | v1.34.0 | 20 Sep 24 16:44 UTC | 20 Sep 24 16:44 UTC |
	| delete  | -p download-only-858543                                                                     | download-only-858543 | jenkins | v1.34.0 | 20 Sep 24 16:44 UTC | 20 Sep 24 16:44 UTC |
	| delete  | -p download-only-349545                                                                     | download-only-349545 | jenkins | v1.34.0 | 20 Sep 24 16:44 UTC | 20 Sep 24 16:44 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-811854 | jenkins | v1.34.0 | 20 Sep 24 16:44 UTC |                     |
	|         | binary-mirror-811854                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:34057                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-811854                                                                     | binary-mirror-811854 | jenkins | v1.34.0 | 20 Sep 24 16:44 UTC | 20 Sep 24 16:44 UTC |
	| addons  | disable dashboard -p                                                                        | addons-489802        | jenkins | v1.34.0 | 20 Sep 24 16:44 UTC |                     |
	|         | addons-489802                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-489802        | jenkins | v1.34.0 | 20 Sep 24 16:44 UTC |                     |
	|         | addons-489802                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-489802 --wait=true                                                                | addons-489802        | jenkins | v1.34.0 | 20 Sep 24 16:44 UTC | 20 Sep 24 16:47 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-489802        | jenkins | v1.34.0 | 20 Sep 24 16:55 UTC | 20 Sep 24 16:55 UTC |
	|         | -p addons-489802                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-489802        | jenkins | v1.34.0 | 20 Sep 24 16:55 UTC | 20 Sep 24 16:55 UTC |
	|         | addons-489802                                                                               |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-489802        | jenkins | v1.34.0 | 20 Sep 24 16:55 UTC | 20 Sep 24 16:55 UTC |
	|         | -p addons-489802                                                                            |                      |         |         |                     |                     |
	| addons  | addons-489802 addons disable                                                                | addons-489802        | jenkins | v1.34.0 | 20 Sep 24 16:55 UTC | 20 Sep 24 16:56 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-489802 addons disable                                                                | addons-489802        | jenkins | v1.34.0 | 20 Sep 24 16:55 UTC | 20 Sep 24 16:56 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-489802        | jenkins | v1.34.0 | 20 Sep 24 16:56 UTC | 20 Sep 24 16:56 UTC |
	|         | addons-489802                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-489802 ssh cat                                                                       | addons-489802        | jenkins | v1.34.0 | 20 Sep 24 16:56 UTC | 20 Sep 24 16:56 UTC |
	|         | /opt/local-path-provisioner/pvc-b8225ab7-cae8-4ab5-8ca1-5e74b7712f98_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-489802 addons disable                                                                | addons-489802        | jenkins | v1.34.0 | 20 Sep 24 16:56 UTC | 20 Sep 24 16:56 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-489802 ssh curl -s                                                                   | addons-489802        | jenkins | v1.34.0 | 20 Sep 24 16:56 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ip      | addons-489802 ip                                                                            | addons-489802        | jenkins | v1.34.0 | 20 Sep 24 16:56 UTC | 20 Sep 24 16:56 UTC |
	| addons  | addons-489802 addons disable                                                                | addons-489802        | jenkins | v1.34.0 | 20 Sep 24 16:56 UTC | 20 Sep 24 16:56 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-489802 addons                                                                        | addons-489802        | jenkins | v1.34.0 | 20 Sep 24 16:56 UTC | 20 Sep 24 16:57 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-489802 addons                                                                        | addons-489802        | jenkins | v1.34.0 | 20 Sep 24 16:57 UTC | 20 Sep 24 16:57 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-489802 ip                                                                            | addons-489802        | jenkins | v1.34.0 | 20 Sep 24 16:58 UTC | 20 Sep 24 16:58 UTC |
	| addons  | addons-489802 addons disable                                                                | addons-489802        | jenkins | v1.34.0 | 20 Sep 24 16:58 UTC | 20 Sep 24 16:58 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-489802 addons disable                                                                | addons-489802        | jenkins | v1.34.0 | 20 Sep 24 16:58 UTC | 20 Sep 24 16:59 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-489802 addons                                                                        | addons-489802        | jenkins | v1.34.0 | 20 Sep 24 17:01 UTC | 20 Sep 24 17:01 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
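The audit table above records the addon enable/disable commands issued against the addons-489802 profile during the parallel addon tests. For readers reproducing a run by hand, a minimal Go sketch of driving the same CLI via os/exec is shown below; the profile name, addon name, and the assumption that "minikube" is on PATH are illustrative, not taken from the report.

package main

import (
	"fmt"
	"os/exec"
)

// disableAddon shells out to the minikube CLI the same way a test harness might;
// "minikube" must be on PATH and the profile must already exist.
func disableAddon(profile, addon string) error {
	cmd := exec.Command("minikube", "-p", profile, "addons", "disable", addon, "--alsologtostderr", "-v=1")
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("disable %s: %v\n%s", addon, err, out)
	}
	return nil
}

func main() {
	// Example values only; substitute your own profile and addon.
	if err := disableAddon("addons-489802", "metrics-server"); err != nil {
		fmt.Println(err)
	}
}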
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 16:44:18
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 16:44:18.178711   16686 out.go:345] Setting OutFile to fd 1 ...
	I0920 16:44:18.178820   16686 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 16:44:18.178830   16686 out.go:358] Setting ErrFile to fd 2...
	I0920 16:44:18.178837   16686 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 16:44:18.179018   16686 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-8777/.minikube/bin
	I0920 16:44:18.179615   16686 out.go:352] Setting JSON to false
	I0920 16:44:18.180405   16686 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1601,"bootTime":1726849057,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 16:44:18.180501   16686 start.go:139] virtualization: kvm guest
	I0920 16:44:18.182896   16686 out.go:177] * [addons-489802] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 16:44:18.184216   16686 notify.go:220] Checking for updates...
	I0920 16:44:18.184222   16686 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 16:44:18.185469   16686 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 16:44:18.186874   16686 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19672-8777/kubeconfig
	I0920 16:44:18.188324   16686 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-8777/.minikube
	I0920 16:44:18.190351   16686 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 16:44:18.191922   16686 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 16:44:18.193502   16686 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 16:44:18.225366   16686 out.go:177] * Using the kvm2 driver based on user configuration
	I0920 16:44:18.226431   16686 start.go:297] selected driver: kvm2
	I0920 16:44:18.226443   16686 start.go:901] validating driver "kvm2" against <nil>
	I0920 16:44:18.226453   16686 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 16:44:18.227135   16686 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 16:44:18.227230   16686 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19672-8777/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 16:44:18.242065   16686 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 16:44:18.242112   16686 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 16:44:18.242404   16686 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 16:44:18.242437   16686 cni.go:84] Creating CNI manager for ""
	I0920 16:44:18.242490   16686 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 16:44:18.242500   16686 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0920 16:44:18.242555   16686 start.go:340] cluster config:
	{Name:addons-489802 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-489802 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 16:44:18.242664   16686 iso.go:125] acquiring lock: {Name:mkba95ef0488e46f622333e9f317f43def93040b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 16:44:18.244379   16686 out.go:177] * Starting "addons-489802" primary control-plane node in "addons-489802" cluster
	I0920 16:44:18.245561   16686 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 16:44:18.245610   16686 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0920 16:44:18.245618   16686 cache.go:56] Caching tarball of preloaded images
	I0920 16:44:18.245687   16686 preload.go:172] Found /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 16:44:18.245698   16686 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
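The preload step above only verifies that the v1.31.1 cri-o preload tarball already exists in the local cache and then skips the download. A rough sketch of that existence check; the cache path is hard-coded as an example, whereas minikube derives it from MINIKUBE_HOME, the Kubernetes version, and the container runtime.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Example path mirroring the log layout; not how minikube resolves it internally.
	home, _ := os.UserHomeDir()
	tarball := filepath.Join(home, ".minikube", "cache", "preloaded-tarball",
		"preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4")

	if _, err := os.Stat(tarball); err == nil {
		fmt.Println("found local preload, skipping download:", tarball)
	} else {
		fmt.Println("preload missing, would download:", tarball)
	}
}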
	I0920 16:44:18.246011   16686 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/config.json ...
	I0920 16:44:18.246032   16686 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/config.json: {Name:mka75e2e382f021a76fc6885b0195d64c12ed744 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 16:44:18.246164   16686 start.go:360] acquireMachinesLock for addons-489802: {Name:mkfeedb385cf08b5d2aa00913e85815d02a180c2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 16:44:18.246208   16686 start.go:364] duration metric: took 31.448µs to acquireMachinesLock for "addons-489802"
	I0920 16:44:18.246223   16686 start.go:93] Provisioning new machine with config: &{Name:addons-489802 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-489802 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 16:44:18.246282   16686 start.go:125] createHost starting for "" (driver="kvm2")
	I0920 16:44:18.247940   16686 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0920 16:44:18.248080   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:44:18.248117   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:44:18.262329   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33039
	I0920 16:44:18.262809   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:44:18.263337   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:44:18.263357   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:44:18.263710   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:44:18.263878   16686 main.go:141] libmachine: (addons-489802) Calling .GetMachineName
	I0920 16:44:18.263996   16686 main.go:141] libmachine: (addons-489802) Calling .DriverName
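The "Launching plugin server ... Plugin server listening at address 127.0.0.1:33039" lines reflect libmachine's driver-plugin model: minikube execs docker-machine-driver-kvm2 as a child process, the plugin listens on an ephemeral localhost TCP port, and each .GetVersion / .Create / .GetState call in this log is a remote call into that process. The sketch below only illustrates that pattern with Go's net/rpc and a hypothetical method name; it is not the actual libmachine wire protocol.

package main

import (
	"fmt"
	"log"
	"net"
	"net/rpc"
)

// Driver stands in for a machine driver plugin; the method is illustrative only.
type Driver struct{}

func (d *Driver) GetVersion(args int, reply *int) error { *reply = 1; return nil }

func main() {
	// Plugin side: listen on an ephemeral localhost port, as the kvm2 plugin does.
	srv := rpc.NewServer()
	if err := srv.Register(&Driver{}); err != nil {
		log.Fatal(err)
	}
	ln, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		log.Fatal(err)
	}
	go srv.Accept(ln)

	// Client side: the main binary dials the advertised address and calls methods.
	client, err := rpc.Dial("tcp", ln.Addr().String())
	if err != nil {
		log.Fatal(err)
	}
	var version int
	if err := client.Call("Driver.GetVersion", 0, &version); err != nil {
		log.Fatal(err)
	}
	fmt.Println("plugin API version:", version)
}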
	I0920 16:44:18.264148   16686 start.go:159] libmachine.API.Create for "addons-489802" (driver="kvm2")
	I0920 16:44:18.264173   16686 client.go:168] LocalClient.Create starting
	I0920 16:44:18.264205   16686 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem
	I0920 16:44:18.669459   16686 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem
	I0920 16:44:18.951878   16686 main.go:141] libmachine: Running pre-create checks...
	I0920 16:44:18.951905   16686 main.go:141] libmachine: (addons-489802) Calling .PreCreateCheck
	I0920 16:44:18.952422   16686 main.go:141] libmachine: (addons-489802) Calling .GetConfigRaw
	I0920 16:44:18.952871   16686 main.go:141] libmachine: Creating machine...
	I0920 16:44:18.952893   16686 main.go:141] libmachine: (addons-489802) Calling .Create
	I0920 16:44:18.953060   16686 main.go:141] libmachine: (addons-489802) Creating KVM machine...
	I0920 16:44:18.954192   16686 main.go:141] libmachine: (addons-489802) DBG | found existing default KVM network
	I0920 16:44:18.954932   16686 main.go:141] libmachine: (addons-489802) DBG | I0920 16:44:18.954771   16708 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002211f0}
	I0920 16:44:18.954987   16686 main.go:141] libmachine: (addons-489802) DBG | created network xml: 
	I0920 16:44:18.955015   16686 main.go:141] libmachine: (addons-489802) DBG | <network>
	I0920 16:44:18.955034   16686 main.go:141] libmachine: (addons-489802) DBG |   <name>mk-addons-489802</name>
	I0920 16:44:18.955053   16686 main.go:141] libmachine: (addons-489802) DBG |   <dns enable='no'/>
	I0920 16:44:18.955078   16686 main.go:141] libmachine: (addons-489802) DBG |   
	I0920 16:44:18.955099   16686 main.go:141] libmachine: (addons-489802) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0920 16:44:18.955108   16686 main.go:141] libmachine: (addons-489802) DBG |     <dhcp>
	I0920 16:44:18.955115   16686 main.go:141] libmachine: (addons-489802) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0920 16:44:18.955126   16686 main.go:141] libmachine: (addons-489802) DBG |     </dhcp>
	I0920 16:44:18.955132   16686 main.go:141] libmachine: (addons-489802) DBG |   </ip>
	I0920 16:44:18.955142   16686 main.go:141] libmachine: (addons-489802) DBG |   
	I0920 16:44:18.955152   16686 main.go:141] libmachine: (addons-489802) DBG | </network>
	I0920 16:44:18.955180   16686 main.go:141] libmachine: (addons-489802) DBG | 
	I0920 16:44:18.961544   16686 main.go:141] libmachine: (addons-489802) DBG | trying to create private KVM network mk-addons-489802 192.168.39.0/24...
	I0920 16:44:19.029008   16686 main.go:141] libmachine: (addons-489802) Setting up store path in /home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802 ...
	I0920 16:44:19.029031   16686 main.go:141] libmachine: (addons-489802) DBG | private KVM network mk-addons-489802 192.168.39.0/24 created
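The XML above is the private bridge network minikube defines for the cluster: 192.168.39.0/24 with DHCP and libvirt's DNS disabled. The kvm2 driver talks to libvirt programmatically; purely for illustration, the sketch below performs the equivalent steps by shelling out to virsh, writing the XML to a temp file and then defining, starting, and autostarting the network. The network name and addresses are copied from the log as example values.

package main

import (
	"log"
	"os"
	"os/exec"
)

const networkXML = `<network>
  <name>mk-addons-489802</name>
  <dns enable='no'/>
  <ip address='192.168.39.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.39.2' end='192.168.39.253'/>
    </dhcp>
  </ip>
</network>`

// run executes one virsh subcommand and aborts on failure.
func run(args ...string) {
	out, err := exec.Command("virsh", args...).CombinedOutput()
	if err != nil {
		log.Fatalf("virsh %v: %v\n%s", args, err, out)
	}
}

func main() {
	f, err := os.CreateTemp("", "net-*.xml")
	if err != nil {
		log.Fatal(err)
	}
	defer os.Remove(f.Name())
	f.WriteString(networkXML)
	f.Close()

	run("net-define", f.Name()) // register the network with libvirt
	run("net-start", "mk-addons-489802")
	run("net-autostart", "mk-addons-489802")
}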
	I0920 16:44:19.029050   16686 main.go:141] libmachine: (addons-489802) Building disk image from file:///home/jenkins/minikube-integration/19672-8777/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso
	I0920 16:44:19.029076   16686 main.go:141] libmachine: (addons-489802) Downloading /home/jenkins/minikube-integration/19672-8777/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19672-8777/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso...
	I0920 16:44:19.029097   16686 main.go:141] libmachine: (addons-489802) DBG | I0920 16:44:19.028953   16708 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19672-8777/.minikube
	I0920 16:44:19.344578   16686 main.go:141] libmachine: (addons-489802) DBG | I0920 16:44:19.344398   16708 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802/id_rsa...
	I0920 16:44:19.462008   16686 main.go:141] libmachine: (addons-489802) DBG | I0920 16:44:19.461879   16708 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802/addons-489802.rawdisk...
	I0920 16:44:19.462055   16686 main.go:141] libmachine: (addons-489802) DBG | Writing magic tar header
	I0920 16:44:19.462065   16686 main.go:141] libmachine: (addons-489802) DBG | Writing SSH key tar header
	I0920 16:44:19.462072   16686 main.go:141] libmachine: (addons-489802) DBG | I0920 16:44:19.462027   16708 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802 ...
	I0920 16:44:19.462210   16686 main.go:141] libmachine: (addons-489802) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802
	I0920 16:44:19.462252   16686 main.go:141] libmachine: (addons-489802) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-8777/.minikube/machines
	I0920 16:44:19.462263   16686 main.go:141] libmachine: (addons-489802) Setting executable bit set on /home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802 (perms=drwx------)
	I0920 16:44:19.462287   16686 main.go:141] libmachine: (addons-489802) Setting executable bit set on /home/jenkins/minikube-integration/19672-8777/.minikube/machines (perms=drwxr-xr-x)
	I0920 16:44:19.462302   16686 main.go:141] libmachine: (addons-489802) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-8777/.minikube
	I0920 16:44:19.462312   16686 main.go:141] libmachine: (addons-489802) Setting executable bit set on /home/jenkins/minikube-integration/19672-8777/.minikube (perms=drwxr-xr-x)
	I0920 16:44:19.462324   16686 main.go:141] libmachine: (addons-489802) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-8777
	I0920 16:44:19.462340   16686 main.go:141] libmachine: (addons-489802) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0920 16:44:19.462350   16686 main.go:141] libmachine: (addons-489802) DBG | Checking permissions on dir: /home/jenkins
	I0920 16:44:19.462361   16686 main.go:141] libmachine: (addons-489802) DBG | Checking permissions on dir: /home
	I0920 16:44:19.462374   16686 main.go:141] libmachine: (addons-489802) Setting executable bit set on /home/jenkins/minikube-integration/19672-8777 (perms=drwxrwxr-x)
	I0920 16:44:19.462383   16686 main.go:141] libmachine: (addons-489802) DBG | Skipping /home - not owner
	I0920 16:44:19.462409   16686 main.go:141] libmachine: (addons-489802) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0920 16:44:19.462428   16686 main.go:141] libmachine: (addons-489802) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0920 16:44:19.462441   16686 main.go:141] libmachine: (addons-489802) Creating domain...
	I0920 16:44:19.463291   16686 main.go:141] libmachine: (addons-489802) define libvirt domain using xml: 
	I0920 16:44:19.463308   16686 main.go:141] libmachine: (addons-489802) <domain type='kvm'>
	I0920 16:44:19.463315   16686 main.go:141] libmachine: (addons-489802)   <name>addons-489802</name>
	I0920 16:44:19.463321   16686 main.go:141] libmachine: (addons-489802)   <memory unit='MiB'>4000</memory>
	I0920 16:44:19.463328   16686 main.go:141] libmachine: (addons-489802)   <vcpu>2</vcpu>
	I0920 16:44:19.463335   16686 main.go:141] libmachine: (addons-489802)   <features>
	I0920 16:44:19.463346   16686 main.go:141] libmachine: (addons-489802)     <acpi/>
	I0920 16:44:19.463360   16686 main.go:141] libmachine: (addons-489802)     <apic/>
	I0920 16:44:19.463368   16686 main.go:141] libmachine: (addons-489802)     <pae/>
	I0920 16:44:19.463375   16686 main.go:141] libmachine: (addons-489802)     
	I0920 16:44:19.463386   16686 main.go:141] libmachine: (addons-489802)   </features>
	I0920 16:44:19.463393   16686 main.go:141] libmachine: (addons-489802)   <cpu mode='host-passthrough'>
	I0920 16:44:19.463402   16686 main.go:141] libmachine: (addons-489802)   
	I0920 16:44:19.463408   16686 main.go:141] libmachine: (addons-489802)   </cpu>
	I0920 16:44:19.463415   16686 main.go:141] libmachine: (addons-489802)   <os>
	I0920 16:44:19.463424   16686 main.go:141] libmachine: (addons-489802)     <type>hvm</type>
	I0920 16:44:19.463435   16686 main.go:141] libmachine: (addons-489802)     <boot dev='cdrom'/>
	I0920 16:44:19.463445   16686 main.go:141] libmachine: (addons-489802)     <boot dev='hd'/>
	I0920 16:44:19.463472   16686 main.go:141] libmachine: (addons-489802)     <bootmenu enable='no'/>
	I0920 16:44:19.463497   16686 main.go:141] libmachine: (addons-489802)   </os>
	I0920 16:44:19.463520   16686 main.go:141] libmachine: (addons-489802)   <devices>
	I0920 16:44:19.463534   16686 main.go:141] libmachine: (addons-489802)     <disk type='file' device='cdrom'>
	I0920 16:44:19.463547   16686 main.go:141] libmachine: (addons-489802)       <source file='/home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802/boot2docker.iso'/>
	I0920 16:44:19.463558   16686 main.go:141] libmachine: (addons-489802)       <target dev='hdc' bus='scsi'/>
	I0920 16:44:19.463570   16686 main.go:141] libmachine: (addons-489802)       <readonly/>
	I0920 16:44:19.463577   16686 main.go:141] libmachine: (addons-489802)     </disk>
	I0920 16:44:19.463584   16686 main.go:141] libmachine: (addons-489802)     <disk type='file' device='disk'>
	I0920 16:44:19.463592   16686 main.go:141] libmachine: (addons-489802)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0920 16:44:19.463600   16686 main.go:141] libmachine: (addons-489802)       <source file='/home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802/addons-489802.rawdisk'/>
	I0920 16:44:19.463608   16686 main.go:141] libmachine: (addons-489802)       <target dev='hda' bus='virtio'/>
	I0920 16:44:19.463614   16686 main.go:141] libmachine: (addons-489802)     </disk>
	I0920 16:44:19.463623   16686 main.go:141] libmachine: (addons-489802)     <interface type='network'>
	I0920 16:44:19.463633   16686 main.go:141] libmachine: (addons-489802)       <source network='mk-addons-489802'/>
	I0920 16:44:19.463643   16686 main.go:141] libmachine: (addons-489802)       <model type='virtio'/>
	I0920 16:44:19.463651   16686 main.go:141] libmachine: (addons-489802)     </interface>
	I0920 16:44:19.463660   16686 main.go:141] libmachine: (addons-489802)     <interface type='network'>
	I0920 16:44:19.463672   16686 main.go:141] libmachine: (addons-489802)       <source network='default'/>
	I0920 16:44:19.463681   16686 main.go:141] libmachine: (addons-489802)       <model type='virtio'/>
	I0920 16:44:19.463703   16686 main.go:141] libmachine: (addons-489802)     </interface>
	I0920 16:44:19.463722   16686 main.go:141] libmachine: (addons-489802)     <serial type='pty'>
	I0920 16:44:19.463732   16686 main.go:141] libmachine: (addons-489802)       <target port='0'/>
	I0920 16:44:19.463738   16686 main.go:141] libmachine: (addons-489802)     </serial>
	I0920 16:44:19.463745   16686 main.go:141] libmachine: (addons-489802)     <console type='pty'>
	I0920 16:44:19.463755   16686 main.go:141] libmachine: (addons-489802)       <target type='serial' port='0'/>
	I0920 16:44:19.463762   16686 main.go:141] libmachine: (addons-489802)     </console>
	I0920 16:44:19.463767   16686 main.go:141] libmachine: (addons-489802)     <rng model='virtio'>
	I0920 16:44:19.463776   16686 main.go:141] libmachine: (addons-489802)       <backend model='random'>/dev/random</backend>
	I0920 16:44:19.463784   16686 main.go:141] libmachine: (addons-489802)     </rng>
	I0920 16:44:19.463793   16686 main.go:141] libmachine: (addons-489802)     
	I0920 16:44:19.463807   16686 main.go:141] libmachine: (addons-489802)     
	I0920 16:44:19.463822   16686 main.go:141] libmachine: (addons-489802)   </devices>
	I0920 16:44:19.463837   16686 main.go:141] libmachine: (addons-489802) </domain>
	I0920 16:44:19.463852   16686 main.go:141] libmachine: (addons-489802) 
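The domain XML dumped above describes the VM itself: 2 vCPUs, 4000 MiB of RAM, the boot2docker ISO attached as a bootable CD-ROM, the raw disk image, and two virtio NICs (one on the private mk-addons-489802 network, one on libvirt's default NAT network). Again the driver does this through the libvirt API; the virsh-based sketch below is only an equivalent way to see the define-and-start sequence, assuming the printed XML has been saved to a local domain.xml file.

package main

import (
	"log"
	"os/exec"
)

// virsh runs a single virsh subcommand against the system libvirt daemon,
// mirroring the qemu:///system URI used in the log.
func virsh(args ...string) {
	cmd := exec.Command("virsh", append([]string{"-c", "qemu:///system"}, args...)...)
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("virsh %v: %v\n%s", args, err, out)
	}
}

func main() {
	// domain.xml would hold the <domain type='kvm'> document printed in the log.
	virsh("define", "domain.xml")   // register the domain
	virsh("start", "addons-489802") // boot it; DHCP assigns the IP afterwards
}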
	I0920 16:44:19.470320   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:86:10:bf in network default
	I0920 16:44:19.470900   16686 main.go:141] libmachine: (addons-489802) Ensuring networks are active...
	I0920 16:44:19.470920   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:19.471767   16686 main.go:141] libmachine: (addons-489802) Ensuring network default is active
	I0920 16:44:19.472031   16686 main.go:141] libmachine: (addons-489802) Ensuring network mk-addons-489802 is active
	I0920 16:44:19.472810   16686 main.go:141] libmachine: (addons-489802) Getting domain xml...
	I0920 16:44:19.473428   16686 main.go:141] libmachine: (addons-489802) Creating domain...
	I0920 16:44:20.958983   16686 main.go:141] libmachine: (addons-489802) Waiting to get IP...
	I0920 16:44:20.959942   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:20.960292   16686 main.go:141] libmachine: (addons-489802) DBG | unable to find current IP address of domain addons-489802 in network mk-addons-489802
	I0920 16:44:20.960332   16686 main.go:141] libmachine: (addons-489802) DBG | I0920 16:44:20.960280   16708 retry.go:31] will retry after 218.466528ms: waiting for machine to come up
	I0920 16:44:21.180891   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:21.181202   16686 main.go:141] libmachine: (addons-489802) DBG | unable to find current IP address of domain addons-489802 in network mk-addons-489802
	I0920 16:44:21.181228   16686 main.go:141] libmachine: (addons-489802) DBG | I0920 16:44:21.181159   16708 retry.go:31] will retry after 269.124789ms: waiting for machine to come up
	I0920 16:44:21.451562   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:21.451985   16686 main.go:141] libmachine: (addons-489802) DBG | unable to find current IP address of domain addons-489802 in network mk-addons-489802
	I0920 16:44:21.452021   16686 main.go:141] libmachine: (addons-489802) DBG | I0920 16:44:21.451946   16708 retry.go:31] will retry after 418.879425ms: waiting for machine to come up
	I0920 16:44:21.872595   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:21.873035   16686 main.go:141] libmachine: (addons-489802) DBG | unable to find current IP address of domain addons-489802 in network mk-addons-489802
	I0920 16:44:21.873056   16686 main.go:141] libmachine: (addons-489802) DBG | I0920 16:44:21.873002   16708 retry.go:31] will retry after 379.463169ms: waiting for machine to come up
	I0920 16:44:22.254754   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:22.255179   16686 main.go:141] libmachine: (addons-489802) DBG | unable to find current IP address of domain addons-489802 in network mk-addons-489802
	I0920 16:44:22.255208   16686 main.go:141] libmachine: (addons-489802) DBG | I0920 16:44:22.255151   16708 retry.go:31] will retry after 621.089592ms: waiting for machine to come up
	I0920 16:44:22.877890   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:22.878236   16686 main.go:141] libmachine: (addons-489802) DBG | unable to find current IP address of domain addons-489802 in network mk-addons-489802
	I0920 16:44:22.878254   16686 main.go:141] libmachine: (addons-489802) DBG | I0920 16:44:22.878215   16708 retry.go:31] will retry after 896.419124ms: waiting for machine to come up
	I0920 16:44:23.776119   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:23.776531   16686 main.go:141] libmachine: (addons-489802) DBG | unable to find current IP address of domain addons-489802 in network mk-addons-489802
	I0920 16:44:23.776580   16686 main.go:141] libmachine: (addons-489802) DBG | I0920 16:44:23.776503   16708 retry.go:31] will retry after 792.329452ms: waiting for machine to come up
	I0920 16:44:24.570579   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:24.571007   16686 main.go:141] libmachine: (addons-489802) DBG | unable to find current IP address of domain addons-489802 in network mk-addons-489802
	I0920 16:44:24.571032   16686 main.go:141] libmachine: (addons-489802) DBG | I0920 16:44:24.570964   16708 retry.go:31] will retry after 1.123730634s: waiting for machine to come up
	I0920 16:44:25.695981   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:25.696433   16686 main.go:141] libmachine: (addons-489802) DBG | unable to find current IP address of domain addons-489802 in network mk-addons-489802
	I0920 16:44:25.696455   16686 main.go:141] libmachine: (addons-489802) DBG | I0920 16:44:25.696382   16708 retry.go:31] will retry after 1.437323391s: waiting for machine to come up
	I0920 16:44:27.136109   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:27.136681   16686 main.go:141] libmachine: (addons-489802) DBG | unable to find current IP address of domain addons-489802 in network mk-addons-489802
	I0920 16:44:27.136706   16686 main.go:141] libmachine: (addons-489802) DBG | I0920 16:44:27.136631   16708 retry.go:31] will retry after 2.286987635s: waiting for machine to come up
	I0920 16:44:29.425015   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:29.425554   16686 main.go:141] libmachine: (addons-489802) DBG | unable to find current IP address of domain addons-489802 in network mk-addons-489802
	I0920 16:44:29.425597   16686 main.go:141] libmachine: (addons-489802) DBG | I0920 16:44:29.425518   16708 retry.go:31] will retry after 1.976852311s: waiting for machine to come up
	I0920 16:44:31.404712   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:31.405218   16686 main.go:141] libmachine: (addons-489802) DBG | unable to find current IP address of domain addons-489802 in network mk-addons-489802
	I0920 16:44:31.405240   16686 main.go:141] libmachine: (addons-489802) DBG | I0920 16:44:31.405170   16708 retry.go:31] will retry after 3.060545694s: waiting for machine to come up
	I0920 16:44:34.467106   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:34.467532   16686 main.go:141] libmachine: (addons-489802) DBG | unable to find current IP address of domain addons-489802 in network mk-addons-489802
	I0920 16:44:34.467559   16686 main.go:141] libmachine: (addons-489802) DBG | I0920 16:44:34.467474   16708 retry.go:31] will retry after 3.246517198s: waiting for machine to come up
	I0920 16:44:37.717806   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:37.718239   16686 main.go:141] libmachine: (addons-489802) DBG | unable to find current IP address of domain addons-489802 in network mk-addons-489802
	I0920 16:44:37.718274   16686 main.go:141] libmachine: (addons-489802) DBG | I0920 16:44:37.718168   16708 retry.go:31] will retry after 4.118490306s: waiting for machine to come up
	I0920 16:44:41.841226   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:41.841726   16686 main.go:141] libmachine: (addons-489802) Found IP for machine: 192.168.39.89
	I0920 16:44:41.841743   16686 main.go:141] libmachine: (addons-489802) Reserving static IP address...
	I0920 16:44:41.841755   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has current primary IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:41.842160   16686 main.go:141] libmachine: (addons-489802) DBG | unable to find host DHCP lease matching {name: "addons-489802", mac: "52:54:00:bf:85:db", ip: "192.168.39.89"} in network mk-addons-489802
	I0920 16:44:41.913230   16686 main.go:141] libmachine: (addons-489802) Reserved static IP address: 192.168.39.89
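The run of "unable to find current IP address ... will retry after ..." lines is the driver polling libvirt's DHCP leases for the VM's MAC address with a growing, jittered backoff until a lease appears (here roughly 23 seconds). A stripped-down version of that wait loop is sketched below; lookupLease is a hypothetical stand-in for the real lease query, and the delays are illustrative.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupLease is a stand-in for querying libvirt's DHCP leases by MAC address.
func lookupLease(mac string) (string, error) {
	return "", errors.New("no lease yet")
}

// waitForIP polls until a lease shows up or the deadline passes, sleeping a
// little longer (plus jitter) after each failed attempt, as in the log above.
func waitForIP(mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupLease(mac); err == nil {
			return ip, nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		time.Sleep(delay + jitter)
		if delay < 4*time.Second {
			delay *= 2
		}
	}
	return "", fmt.Errorf("no DHCP lease for %s within %s", mac, timeout)
}

func main() {
	// Short timeout so the example terminates quickly; the driver waits minutes.
	ip, err := waitForIP("52:54:00:bf:85:db", 5*time.Second)
	fmt.Println(ip, err)
}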
	I0920 16:44:41.913257   16686 main.go:141] libmachine: (addons-489802) Waiting for SSH to be available...
	I0920 16:44:41.913265   16686 main.go:141] libmachine: (addons-489802) DBG | Getting to WaitForSSH function...
	I0920 16:44:41.915767   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:41.916236   16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:minikube Clientid:01:52:54:00:bf:85:db}
	I0920 16:44:41.916267   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:41.916422   16686 main.go:141] libmachine: (addons-489802) DBG | Using SSH client type: external
	I0920 16:44:41.916446   16686 main.go:141] libmachine: (addons-489802) DBG | Using SSH private key: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802/id_rsa (-rw-------)
	I0920 16:44:41.916467   16686 main.go:141] libmachine: (addons-489802) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.89 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 16:44:41.916475   16686 main.go:141] libmachine: (addons-489802) DBG | About to run SSH command:
	I0920 16:44:41.916485   16686 main.go:141] libmachine: (addons-489802) DBG | exit 0
	I0920 16:44:42.045938   16686 main.go:141] libmachine: (addons-489802) DBG | SSH cmd err, output: <nil>: 
	I0920 16:44:42.046220   16686 main.go:141] libmachine: (addons-489802) KVM machine creation complete!
	I0920 16:44:42.046564   16686 main.go:141] libmachine: (addons-489802) Calling .GetConfigRaw
	I0920 16:44:42.047127   16686 main.go:141] libmachine: (addons-489802) Calling .DriverName
	I0920 16:44:42.047334   16686 main.go:141] libmachine: (addons-489802) Calling .DriverName
	I0920 16:44:42.047475   16686 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0920 16:44:42.047490   16686 main.go:141] libmachine: (addons-489802) Calling .GetState
	I0920 16:44:42.049083   16686 main.go:141] libmachine: Detecting operating system of created instance...
	I0920 16:44:42.049109   16686 main.go:141] libmachine: Waiting for SSH to be available...
	I0920 16:44:42.049116   16686 main.go:141] libmachine: Getting to WaitForSSH function...
	I0920 16:44:42.049122   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
	I0920 16:44:42.051309   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:42.051675   16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
	I0920 16:44:42.051731   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:42.051767   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHPort
	I0920 16:44:42.051947   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
	I0920 16:44:42.052082   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
	I0920 16:44:42.052201   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHUsername
	I0920 16:44:42.052358   16686 main.go:141] libmachine: Using SSH client type: native
	I0920 16:44:42.052546   16686 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0920 16:44:42.052561   16686 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0920 16:44:42.153288   16686 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 16:44:42.153332   16686 main.go:141] libmachine: Detecting the provisioner...
	I0920 16:44:42.153344   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
	I0920 16:44:42.156232   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:42.156583   16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
	I0920 16:44:42.156612   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:42.156760   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHPort
	I0920 16:44:42.156968   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
	I0920 16:44:42.157119   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
	I0920 16:44:42.157234   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHUsername
	I0920 16:44:42.157410   16686 main.go:141] libmachine: Using SSH client type: native
	I0920 16:44:42.157610   16686 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0920 16:44:42.157626   16686 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0920 16:44:42.254380   16686 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0920 16:44:42.254438   16686 main.go:141] libmachine: found compatible host: buildroot
	I0920 16:44:42.254444   16686 main.go:141] libmachine: Provisioning with buildroot...
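Provisioner detection is simply "cat /etc/os-release" over SSH: the output above identifies the guest as Buildroot 2023.02.9, which selects the buildroot provisioning path. A small sketch of parsing that file into key/value pairs (quote stripping kept deliberately simple):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/etc/os-release")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer f.Close()

	info := map[string]string{}
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		k, v, ok := strings.Cut(line, "=")
		if !ok {
			continue
		}
		info[k] = strings.Trim(v, `"`)
	}
	// On the minikube ISO this would print "buildroot 2023.02.9".
	fmt.Println(info["ID"], info["VERSION_ID"])
}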
	I0920 16:44:42.254451   16686 main.go:141] libmachine: (addons-489802) Calling .GetMachineName
	I0920 16:44:42.254703   16686 buildroot.go:166] provisioning hostname "addons-489802"
	I0920 16:44:42.254734   16686 main.go:141] libmachine: (addons-489802) Calling .GetMachineName
	I0920 16:44:42.254884   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
	I0920 16:44:42.257868   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:42.258311   16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
	I0920 16:44:42.258354   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:42.258809   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHPort
	I0920 16:44:42.259005   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
	I0920 16:44:42.259172   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
	I0920 16:44:42.259323   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHUsername
	I0920 16:44:42.259521   16686 main.go:141] libmachine: Using SSH client type: native
	I0920 16:44:42.259670   16686 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0920 16:44:42.259683   16686 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-489802 && echo "addons-489802" | sudo tee /etc/hostname
	I0920 16:44:42.370953   16686 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-489802
	
	I0920 16:44:42.370980   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
	I0920 16:44:42.373616   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:42.373970   16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
	I0920 16:44:42.374002   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:42.374153   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHPort
	I0920 16:44:42.374357   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
	I0920 16:44:42.374531   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
	I0920 16:44:42.374634   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHUsername
	I0920 16:44:42.374808   16686 main.go:141] libmachine: Using SSH client type: native
	I0920 16:44:42.374994   16686 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0920 16:44:42.375012   16686 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-489802' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-489802/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-489802' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 16:44:42.482921   16686 main.go:141] libmachine: SSH cmd err, output: <nil>: 
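Each "About to run SSH command" / "SSH cmd err, output" pair in this log is one command executed on the guest as the docker user with the generated id_rsa key, such as the hostname and /etc/hosts edit just above. A bare-bones equivalent using golang.org/x/crypto/ssh is sketched below; minikube's own ssh_runner layers retries, timeouts, and file copying on top of this, and the address, user, and key path are placeholders.

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runSSH executes one command on the guest and returns its combined output.
// Host key checking is disabled only because the VM is local and freshly created.
func runSSH(addr, user, keyPath, command string) (string, error) {
	keyBytes, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer session.Close()
	out, err := session.CombinedOutput(command)
	return string(out), err
}

func main() {
	// Placeholder values; the log uses the machine's id_rsa under .minikube/machines.
	out, err := runSSH("192.168.39.89:22", "docker", "/path/to/id_rsa",
		"sudo hostname addons-489802 && hostname")
	fmt.Println(out, err)
}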
	I0920 16:44:42.482949   16686 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19672-8777/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-8777/.minikube}
	I0920 16:44:42.482989   16686 buildroot.go:174] setting up certificates
	I0920 16:44:42.482998   16686 provision.go:84] configureAuth start
	I0920 16:44:42.483007   16686 main.go:141] libmachine: (addons-489802) Calling .GetMachineName
	I0920 16:44:42.483254   16686 main.go:141] libmachine: (addons-489802) Calling .GetIP
	I0920 16:44:42.486082   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:42.486435   16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
	I0920 16:44:42.486458   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:42.486591   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
	I0920 16:44:42.489005   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:42.489385   16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
	I0920 16:44:42.489412   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:42.489530   16686 provision.go:143] copyHostCerts
	I0920 16:44:42.489599   16686 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem (1082 bytes)
	I0920 16:44:42.489774   16686 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem (1123 bytes)
	I0920 16:44:42.489920   16686 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem (1675 bytes)
	I0920 16:44:42.490019   16686 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem org=jenkins.addons-489802 san=[127.0.0.1 192.168.39.89 addons-489802 localhost minikube]
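configureAuth generates a per-machine server certificate signed by the local minikube CA, with the SANs listed above (127.0.0.1, the VM IP, the machine name, localhost, minikube) so the TLS endpoint can be reached under any of those names. Below is a compact crypto/x509 sketch of issuing such a SAN certificate from a CA key pair; the throwaway CA, serial numbers, and lifetimes are simplifying assumptions, not minikube's exact code, which reuses the ca.pem/ca-key.pem files copied earlier.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// A throwaway CA generated on the spot; minikube loads an existing one instead.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate carrying the SANs seen in the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-489802"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"addons-489802", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.89")},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}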
	I0920 16:44:42.556359   16686 provision.go:177] copyRemoteCerts
	I0920 16:44:42.556423   16686 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 16:44:42.556446   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
	I0920 16:44:42.559402   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:42.559884   16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
	I0920 16:44:42.559911   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:42.560233   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHPort
	I0920 16:44:42.560402   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
	I0920 16:44:42.560524   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHUsername
	I0920 16:44:42.560649   16686 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802/id_rsa Username:docker}
	I0920 16:44:42.640095   16686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0920 16:44:42.664291   16686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0920 16:44:42.687271   16686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0920 16:44:42.709976   16686 provision.go:87] duration metric: took 226.963662ms to configureAuth
	I0920 16:44:42.710011   16686 buildroot.go:189] setting minikube options for container-runtime
	I0920 16:44:42.710210   16686 config.go:182] Loaded profile config "addons-489802": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 16:44:42.710288   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
	I0920 16:44:42.713157   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:42.713576   16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
	I0920 16:44:42.713605   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:42.713861   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHPort
	I0920 16:44:42.714050   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
	I0920 16:44:42.714198   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
	I0920 16:44:42.714335   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHUsername
	I0920 16:44:42.714575   16686 main.go:141] libmachine: Using SSH client type: native
	I0920 16:44:42.714732   16686 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0920 16:44:42.714746   16686 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 16:44:42.936196   16686 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 16:44:42.936230   16686 main.go:141] libmachine: Checking connection to Docker...
	I0920 16:44:42.936255   16686 main.go:141] libmachine: (addons-489802) Calling .GetURL
	I0920 16:44:42.937633   16686 main.go:141] libmachine: (addons-489802) DBG | Using libvirt version 6000000
	I0920 16:44:42.940023   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:42.940360   16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
	I0920 16:44:42.940383   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:42.940608   16686 main.go:141] libmachine: Docker is up and running!
	I0920 16:44:42.940623   16686 main.go:141] libmachine: Reticulating splines...
	I0920 16:44:42.940629   16686 client.go:171] duration metric: took 24.676449957s to LocalClient.Create
	I0920 16:44:42.940649   16686 start.go:167] duration metric: took 24.676502405s to libmachine.API.Create "addons-489802"
	I0920 16:44:42.940665   16686 start.go:293] postStartSetup for "addons-489802" (driver="kvm2")
	I0920 16:44:42.940675   16686 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 16:44:42.940691   16686 main.go:141] libmachine: (addons-489802) Calling .DriverName
	I0920 16:44:42.940982   16686 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 16:44:42.941005   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
	I0920 16:44:42.943365   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:42.943725   16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
	I0920 16:44:42.943749   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:42.943950   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHPort
	I0920 16:44:42.944124   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
	I0920 16:44:42.944283   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHUsername
	I0920 16:44:42.944440   16686 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802/id_rsa Username:docker}
	I0920 16:44:43.023999   16686 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 16:44:43.028231   16686 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 16:44:43.028271   16686 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8777/.minikube/addons for local assets ...
	I0920 16:44:43.028362   16686 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8777/.minikube/files for local assets ...
	I0920 16:44:43.028391   16686 start.go:296] duration metric: took 87.721087ms for postStartSetup
	I0920 16:44:43.028430   16686 main.go:141] libmachine: (addons-489802) Calling .GetConfigRaw
	I0920 16:44:43.029004   16686 main.go:141] libmachine: (addons-489802) Calling .GetIP
	I0920 16:44:43.032101   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:43.032392   16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
	I0920 16:44:43.032420   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:43.032651   16686 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/config.json ...
	I0920 16:44:43.032872   16686 start.go:128] duration metric: took 24.786580765s to createHost
	I0920 16:44:43.032897   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
	I0920 16:44:43.035034   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:43.035343   16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
	I0920 16:44:43.035377   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:43.035500   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHPort
	I0920 16:44:43.035665   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
	I0920 16:44:43.035848   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
	I0920 16:44:43.035974   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHUsername
	I0920 16:44:43.036134   16686 main.go:141] libmachine: Using SSH client type: native
	I0920 16:44:43.036283   16686 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0920 16:44:43.036293   16686 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 16:44:43.134258   16686 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726850683.106297733
	
	I0920 16:44:43.134281   16686 fix.go:216] guest clock: 1726850683.106297733
	I0920 16:44:43.134318   16686 fix.go:229] Guest: 2024-09-20 16:44:43.106297733 +0000 UTC Remote: 2024-09-20 16:44:43.032884764 +0000 UTC m=+24.887429631 (delta=73.412969ms)
	I0920 16:44:43.134347   16686 fix.go:200] guest clock delta is within tolerance: 73.412969ms
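For reference, a minimal Go sketch of the clock-skew check logged above: parse the guest's 'date +%s.%N' output, subtract the host time, and compare against a tolerance. The function names and the tolerance value are assumptions for illustration, not minikube's fix.go code.

// Hypothetical sketch of the guest-clock delta check; not minikube's implementation.
package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

// guestClockDelta parses the guest's `date +%s.%N` output and returns guest minus host skew.
func guestClockDelta(guestOut string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		return 0, fmt.Errorf("parse guest clock %q: %w", guestOut, err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(host), nil
}

func main() {
	// values taken from the log lines above
	delta, err := guestClockDelta("1726850683.106297733", time.Unix(0, 1726850683032884764))
	if err != nil {
		panic(err)
	}
	const tolerance = 2 * time.Second // tolerance assumed for this sketch
	if math.Abs(float64(delta)) <= float64(tolerance) {
		fmt.Printf("guest clock delta %v is within tolerance %v\n", delta, tolerance)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance %v; a time sync would be needed\n", delta, tolerance)
	}
}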
	I0920 16:44:43.134354   16686 start.go:83] releasing machines lock for "addons-489802", held for 24.88813735s
	I0920 16:44:43.134375   16686 main.go:141] libmachine: (addons-489802) Calling .DriverName
	I0920 16:44:43.134602   16686 main.go:141] libmachine: (addons-489802) Calling .GetIP
	I0920 16:44:43.137503   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:43.137857   16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
	I0920 16:44:43.137885   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:43.138022   16686 main.go:141] libmachine: (addons-489802) Calling .DriverName
	I0920 16:44:43.138471   16686 main.go:141] libmachine: (addons-489802) Calling .DriverName
	I0920 16:44:43.138655   16686 main.go:141] libmachine: (addons-489802) Calling .DriverName
	I0920 16:44:43.138740   16686 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 16:44:43.138784   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
	I0920 16:44:43.138890   16686 ssh_runner.go:195] Run: cat /version.json
	I0920 16:44:43.138911   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
	I0920 16:44:43.141496   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:43.141700   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:43.141814   16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
	I0920 16:44:43.141848   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:43.141984   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHPort
	I0920 16:44:43.142122   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
	I0920 16:44:43.142207   16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
	I0920 16:44:43.142233   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:43.142240   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHUsername
	I0920 16:44:43.142382   16686 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802/id_rsa Username:docker}
	I0920 16:44:43.142400   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHPort
	I0920 16:44:43.142527   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
	I0920 16:44:43.142639   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHUsername
	I0920 16:44:43.142738   16686 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802/id_rsa Username:docker}
	I0920 16:44:43.214377   16686 ssh_runner.go:195] Run: systemctl --version
	I0920 16:44:43.255061   16686 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 16:44:43.407471   16686 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 16:44:43.413920   16686 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 16:44:43.413984   16686 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 16:44:43.430049   16686 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 16:44:43.430083   16686 start.go:495] detecting cgroup driver to use...
	I0920 16:44:43.430165   16686 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 16:44:43.445755   16686 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 16:44:43.460072   16686 docker.go:217] disabling cri-docker service (if available) ...
	I0920 16:44:43.460130   16686 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 16:44:43.473445   16686 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 16:44:43.486406   16686 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 16:44:43.599287   16686 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 16:44:43.771188   16686 docker.go:233] disabling docker service ...
	I0920 16:44:43.771285   16686 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 16:44:43.786254   16686 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 16:44:43.799345   16686 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 16:44:43.929040   16686 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 16:44:44.054620   16686 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 16:44:44.068879   16686 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 16:44:44.087412   16686 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 16:44:44.087482   16686 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 16:44:44.098030   16686 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 16:44:44.098093   16686 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 16:44:44.108462   16686 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 16:44:44.119209   16686 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 16:44:44.130359   16686 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 16:44:44.141802   16686 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 16:44:44.152585   16686 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 16:44:44.169299   16686 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 16:44:44.179293   16686 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 16:44:44.188257   16686 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 16:44:44.188326   16686 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 16:44:44.200400   16686 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 16:44:44.210617   16686 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 16:44:44.322851   16686 ssh_runner.go:195] Run: sudo systemctl restart crio
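The CRI-O tweaks applied above can be read as a short ordered script: point crictl at the CRI-O socket, pin the pause image, switch to the cgroupfs driver, open unprivileged low ports, enable bridge netfilter and IP forwarding, then restart the runtime. A dry-run sketch that only prints the commands (it assumes nothing about minikube's internal SSH runner):

// Dry-run sketch of the CRI-O configuration steps shown in the log above.
package main

import "fmt"

func main() {
	steps := []string{
		// point crictl at the CRI-O socket
		`printf '%s\n' 'runtime-endpoint: unix:///var/run/crio/crio.sock' | sudo tee /etc/crictl.yaml`,
		// pin the pause image and switch CRI-O to the cgroupfs driver
		`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
		// allow unprivileged low ports inside pods
		`sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf`,
		// make sure bridge traffic hits iptables and forwarding is on
		`sudo modprobe br_netfilter`,
		`sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'`,
		// reload units and restart the runtime
		`sudo systemctl daemon-reload`,
		`sudo systemctl restart crio`,
	}
	for i, cmd := range steps {
		fmt.Printf("step %d: %s\n", i+1, cmd)
	}
}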
	I0920 16:44:44.414303   16686 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 16:44:44.414398   16686 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 16:44:44.418774   16686 start.go:563] Will wait 60s for crictl version
	I0920 16:44:44.418851   16686 ssh_runner.go:195] Run: which crictl
	I0920 16:44:44.422352   16686 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 16:44:44.464229   16686 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 16:44:44.464345   16686 ssh_runner.go:195] Run: crio --version
	I0920 16:44:44.492112   16686 ssh_runner.go:195] Run: crio --version
	I0920 16:44:44.519927   16686 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 16:44:44.520939   16686 main.go:141] libmachine: (addons-489802) Calling .GetIP
	I0920 16:44:44.523216   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:44.523500   16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
	I0920 16:44:44.523521   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:44:44.523769   16686 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0920 16:44:44.527526   16686 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 16:44:44.539346   16686 kubeadm.go:883] updating cluster {Name:addons-489802 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-489802 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.89 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 16:44:44.539450   16686 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 16:44:44.539491   16686 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 16:44:44.570607   16686 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0920 16:44:44.570672   16686 ssh_runner.go:195] Run: which lz4
	I0920 16:44:44.574305   16686 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 16:44:44.578003   16686 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 16:44:44.578036   16686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0920 16:44:45.832824   16686 crio.go:462] duration metric: took 1.258544501s to copy over tarball
	I0920 16:44:45.832907   16686 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0920 16:44:49.851668   16686 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (4.018714604s)
	I0920 16:44:49.851726   16686 crio.go:469] duration metric: took 4.01886728s to extract the tarball
	I0920 16:44:49.851737   16686 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0920 16:44:49.896630   16686 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 16:44:49.944783   16686 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 16:44:49.944818   16686 cache_images.go:84] Images are preloaded, skipping loading
	I0920 16:44:49.944827   16686 kubeadm.go:934] updating node { 192.168.39.89 8443 v1.31.1 crio true true} ...
	I0920 16:44:49.944968   16686 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-489802 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.89
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-489802 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 16:44:49.945079   16686 ssh_runner.go:195] Run: crio config
	I0920 16:44:50.001938   16686 cni.go:84] Creating CNI manager for ""
	I0920 16:44:50.001967   16686 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 16:44:50.001981   16686 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 16:44:50.002006   16686 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.89 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-489802 NodeName:addons-489802 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.89"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.89 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 16:44:50.002170   16686 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.89
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-489802"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.89
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.89"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 16:44:50.002231   16686 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 16:44:50.013339   16686 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 16:44:50.013411   16686 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 16:44:50.024767   16686 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0920 16:44:50.045363   16686 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 16:44:50.062898   16686 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
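The config rendered above is written as a multi-document YAML (kubeadm.yaml.new, later copied to /var/tmp/minikube/kubeadm.yaml). A quick hand-written sanity check could split the file on document separators and list each document's kind, which should come out as InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration; the path below is taken from the log, the rest is illustrative:

// Sketch: enumerate the kinds declared in the generated kubeadm multi-document YAML.
package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

func main() {
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml") // path as used later in the log
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	kindRe := regexp.MustCompile(`(?m)^kind:\s*(\S+)`)
	for i, doc := range strings.Split(string(data), "\n---") {
		if m := kindRe.FindStringSubmatch(doc); m != nil {
			fmt.Printf("document %d: kind=%s\n", i+1, m[1])
		}
	}
}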
	I0920 16:44:50.080572   16686 ssh_runner.go:195] Run: grep 192.168.39.89	control-plane.minikube.internal$ /etc/hosts
	I0920 16:44:50.085773   16686 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.89	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
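The one-liner above rewrites /etc/hosts idempotently: drop any stale line ending in the host name, then append the fresh mapping. A rough Go equivalent, purely illustrative (needs root to actually write /etc/hosts):

// Sketch of an idempotent /etc/hosts update, mirroring the grep + rewrite shown above.
package main

import (
	"fmt"
	"os"
	"strings"
)

func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(strings.TrimRight(line, " \t"), "\t"+name) {
			continue // drop the old entry for this name
		}
		kept = append(kept, line)
	}
	out := strings.TrimRight(strings.Join(kept, "\n"), "\n") + fmt.Sprintf("\n%s\t%s\n", ip, name)
	return os.WriteFile(path, []byte(out), 0o644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.39.89", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}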
	I0920 16:44:50.098757   16686 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 16:44:50.240556   16686 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 16:44:50.258141   16686 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802 for IP: 192.168.39.89
	I0920 16:44:50.258209   16686 certs.go:194] generating shared ca certs ...
	I0920 16:44:50.258255   16686 certs.go:226] acquiring lock for ca certs: {Name:mkc7ef6c737c6bdc3fdd9dcff8f57029c020d8f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 16:44:50.258438   16686 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key
	I0920 16:44:50.381564   16686 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt ...
	I0920 16:44:50.381596   16686 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt: {Name:mkba49b4d048d5af44df48f4edd690a694a33473 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 16:44:50.381797   16686 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key ...
	I0920 16:44:50.381808   16686 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key: {Name:mk653576ff784ce50de2dfa9e3a0facde1d60271 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 16:44:50.381907   16686 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key
	I0920 16:44:50.546530   16686 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.crt ...
	I0920 16:44:50.546555   16686 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.crt: {Name:mk67c6a6b77428ba0cdac9b9e34d49fcf308bb8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 16:44:50.546726   16686 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key ...
	I0920 16:44:50.546738   16686 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key: {Name:mkd7ae4f2d01ceba146c4dc9b43c4a1a5ab41e93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 16:44:50.546824   16686 certs.go:256] generating profile certs ...
	I0920 16:44:50.546886   16686 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/client.key
	I0920 16:44:50.546900   16686 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/client.crt with IP's: []
	I0920 16:44:50.626758   16686 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/client.crt ...
	I0920 16:44:50.626785   16686 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/client.crt: {Name:mkc5f095f711647000f5605c19ca0db353359e2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 16:44:50.626972   16686 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/client.key ...
	I0920 16:44:50.626986   16686 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/client.key: {Name:mk3f0c684e304c5dc541f54b7034757bf95d7fbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 16:44:50.627082   16686 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/apiserver.key.1bac25cc
	I0920 16:44:50.627100   16686 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/apiserver.crt.1bac25cc with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.89]
	I0920 16:44:50.846521   16686 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/apiserver.crt.1bac25cc ...
	I0920 16:44:50.846553   16686 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/apiserver.crt.1bac25cc: {Name:mkb99a44e1af5a4a578b6ff7445cbfc9f6d1c4b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 16:44:50.846716   16686 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/apiserver.key.1bac25cc ...
	I0920 16:44:50.846729   16686 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/apiserver.key.1bac25cc: {Name:mk1ce5fd024a94836fd45952b6c3038de9bbeaae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 16:44:50.846799   16686 certs.go:381] copying /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/apiserver.crt.1bac25cc -> /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/apiserver.crt
	I0920 16:44:50.846874   16686 certs.go:385] copying /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/apiserver.key.1bac25cc -> /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/apiserver.key
	I0920 16:44:50.846919   16686 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/proxy-client.key
	I0920 16:44:50.846934   16686 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/proxy-client.crt with IP's: []
	I0920 16:44:51.074511   16686 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/proxy-client.crt ...
	I0920 16:44:51.074548   16686 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/proxy-client.crt: {Name:mk593c697632b0437e75154f622f66ff162758f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 16:44:51.074697   16686 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/proxy-client.key ...
	I0920 16:44:51.074708   16686 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/proxy-client.key: {Name:mkd7afdfda0e263fcdc4ad0882491ad3726f4657 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 16:44:51.074875   16686 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 16:44:51.074907   16686 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem (1082 bytes)
	I0920 16:44:51.074929   16686 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem (1123 bytes)
	I0920 16:44:51.074950   16686 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem (1675 bytes)
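For context, "generating signed profile cert ... with IP's: [...]" boils down to issuing an x509 certificate whose subject alternative names carry those IPs, signed by the minikubeCA generated earlier. A self-contained standard-library sketch (not minikube's crypto.go; key sizes, lifetimes and subjects are placeholders):

// Sketch: self-signed CA plus a server certificate carrying the IP SANs from the log.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// CA key and certificate (stands in for ca.key / ca.crt)
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// apiserver-style certificate with the IP SANs seen in the log above
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.39.89"),
		},
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}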
	I0920 16:44:51.075572   16686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 16:44:51.104195   16686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 16:44:51.128646   16686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 16:44:51.153291   16686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0920 16:44:51.177482   16686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0920 16:44:51.202143   16686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 16:44:51.226168   16686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 16:44:51.251069   16686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 16:44:51.274951   16686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 16:44:51.298272   16686 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 16:44:51.314508   16686 ssh_runner.go:195] Run: openssl version
	I0920 16:44:51.320418   16686 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 16:44:51.331616   16686 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 16:44:51.336211   16686 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0920 16:44:51.336270   16686 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 16:44:51.341681   16686 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
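The two commands above make OpenSSL-based clients trust minikubeCA: compute the certificate's subject hash, then expose the cert as /etc/ssl/certs/<hash>.0. A small sketch of the same two steps (assumes openssl on PATH and root privileges):

// Sketch: hash the CA cert with openssl and create the /etc/ssl/certs/<hash>.0 symlink.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941", as in the log
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // replace a stale link if present
	if err := os.Symlink(cert, link); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("linked", link, "->", cert)
}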
	I0920 16:44:51.351994   16686 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 16:44:51.356403   16686 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0920 16:44:51.356470   16686 kubeadm.go:392] StartCluster: {Name:addons-489802 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-489802 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.89 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 16:44:51.356584   16686 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 16:44:51.356645   16686 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 16:44:51.396773   16686 cri.go:89] found id: ""
	I0920 16:44:51.396839   16686 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 16:44:51.407827   16686 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 16:44:51.417398   16686 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 16:44:51.426423   16686 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 16:44:51.426443   16686 kubeadm.go:157] found existing configuration files:
	
	I0920 16:44:51.426481   16686 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 16:44:51.435274   16686 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 16:44:51.435338   16686 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 16:44:51.444427   16686 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 16:44:51.453046   16686 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 16:44:51.453111   16686 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 16:44:51.462277   16686 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 16:44:51.470882   16686 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 16:44:51.470938   16686 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 16:44:51.480053   16686 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 16:44:51.488382   16686 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 16:44:51.488450   16686 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 16:44:51.497406   16686 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 16:44:51.541221   16686 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0920 16:44:51.541351   16686 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 16:44:51.633000   16686 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 16:44:51.633106   16686 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 16:44:51.633217   16686 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0920 16:44:51.641465   16686 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 16:44:51.643561   16686 out.go:235]   - Generating certificates and keys ...
	I0920 16:44:51.643637   16686 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 16:44:51.643707   16686 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 16:44:51.974976   16686 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0920 16:44:52.212429   16686 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0920 16:44:52.725412   16686 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0920 16:44:52.824449   16686 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0920 16:44:52.884139   16686 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0920 16:44:52.884436   16686 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-489802 localhost] and IPs [192.168.39.89 127.0.0.1 ::1]
	I0920 16:44:53.064017   16686 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0920 16:44:53.064225   16686 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-489802 localhost] and IPs [192.168.39.89 127.0.0.1 ::1]
	I0920 16:44:53.110684   16686 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0920 16:44:53.439405   16686 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0920 16:44:53.523372   16686 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0920 16:44:53.523450   16686 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 16:44:53.894835   16686 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 16:44:54.063405   16686 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0920 16:44:54.134012   16686 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 16:44:54.252802   16686 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 16:44:54.496063   16686 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 16:44:54.498352   16686 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 16:44:54.501105   16686 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 16:44:54.502882   16686 out.go:235]   - Booting up control plane ...
	I0920 16:44:54.503004   16686 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 16:44:54.503113   16686 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 16:44:54.503192   16686 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 16:44:54.517820   16686 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 16:44:54.525307   16686 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 16:44:54.525359   16686 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 16:44:54.642832   16686 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0920 16:44:54.642977   16686 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0920 16:44:55.143793   16686 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.346631ms
	I0920 16:44:55.143884   16686 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0920 16:45:00.142510   16686 kubeadm.go:310] [api-check] The API server is healthy after 5.001658723s
	I0920 16:45:00.161952   16686 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0920 16:45:00.199831   16686 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0920 16:45:00.237142   16686 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0920 16:45:00.237431   16686 kubeadm.go:310] [mark-control-plane] Marking the node addons-489802 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0920 16:45:00.267465   16686 kubeadm.go:310] [bootstrap-token] Using token: pxuown.8491ndv1zucibr8t
	I0920 16:45:00.269321   16686 out.go:235]   - Configuring RBAC rules ...
	I0920 16:45:00.269445   16686 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0920 16:45:00.277244   16686 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0920 16:45:00.297062   16686 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0920 16:45:00.303392   16686 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0920 16:45:00.310726   16686 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0920 16:45:00.317990   16686 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0920 16:45:00.550067   16686 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0920 16:45:00.983547   16686 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0920 16:45:01.549916   16686 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0920 16:45:01.549943   16686 kubeadm.go:310] 
	I0920 16:45:01.550082   16686 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0920 16:45:01.550165   16686 kubeadm.go:310] 
	I0920 16:45:01.550391   16686 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0920 16:45:01.550403   16686 kubeadm.go:310] 
	I0920 16:45:01.550435   16686 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0920 16:45:01.550520   16686 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0920 16:45:01.550590   16686 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0920 16:45:01.550601   16686 kubeadm.go:310] 
	I0920 16:45:01.550668   16686 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0920 16:45:01.550680   16686 kubeadm.go:310] 
	I0920 16:45:01.550751   16686 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0920 16:45:01.550761   16686 kubeadm.go:310] 
	I0920 16:45:01.550847   16686 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0920 16:45:01.550942   16686 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0920 16:45:01.551031   16686 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0920 16:45:01.551040   16686 kubeadm.go:310] 
	I0920 16:45:01.551130   16686 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0920 16:45:01.551241   16686 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0920 16:45:01.551252   16686 kubeadm.go:310] 
	I0920 16:45:01.551332   16686 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token pxuown.8491ndv1zucibr8t \
	I0920 16:45:01.551422   16686 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:736f3fb521855813decb4a7214f2b8beff5f81c3971b60decfa20a8807626a2d \
	I0920 16:45:01.551443   16686 kubeadm.go:310] 	--control-plane 
	I0920 16:45:01.551456   16686 kubeadm.go:310] 
	I0920 16:45:01.551575   16686 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0920 16:45:01.551586   16686 kubeadm.go:310] 
	I0920 16:45:01.551676   16686 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token pxuown.8491ndv1zucibr8t \
	I0920 16:45:01.551784   16686 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:736f3fb521855813decb4a7214f2b8beff5f81c3971b60decfa20a8807626a2d 
	I0920 16:45:01.552616   16686 kubeadm.go:310] W0920 16:44:51.520638     818 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 16:45:01.553045   16686 kubeadm.go:310] W0920 16:44:51.522103     818 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 16:45:01.553171   16686 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 16:45:01.553193   16686 cni.go:84] Creating CNI manager for ""
	I0920 16:45:01.553204   16686 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 16:45:01.554912   16686 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 16:45:01.556375   16686 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 16:45:01.567185   16686 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0920 16:45:01.590373   16686 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 16:45:01.590503   16686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 16:45:01.590518   16686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-489802 minikube.k8s.io/updated_at=2024_09_20T16_45_01_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=0626f22cf0d915d75e291a5bce701f94395056e1 minikube.k8s.io/name=addons-489802 minikube.k8s.io/primary=true
	I0920 16:45:01.611693   16686 ops.go:34] apiserver oom_adj: -16
	I0920 16:45:01.740445   16686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 16:45:02.241564   16686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 16:45:02.740509   16686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 16:45:03.241160   16686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 16:45:03.740876   16686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 16:45:04.241125   16686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 16:45:04.740796   16686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 16:45:05.241433   16686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 16:45:05.740524   16686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 16:45:05.862361   16686 kubeadm.go:1113] duration metric: took 4.271922428s to wait for elevateKubeSystemPrivileges
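The repeated 'kubectl get sa default' calls above are a wait loop: poll until the default ServiceAccount exists so the minikube-rbac cluster role binding can be created. A hand-rolled version of that loop might look like this (binary and kubeconfig paths from the log; the polling interval and 2-minute deadline are assumptions):

// Sketch of a poll-until-ready loop for the default ServiceAccount.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.31.1/kubectl"
	kubeconfig := "/var/lib/minikube/kubeconfig"
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			fmt.Println("default service account is ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Fprintln(os.Stderr, "timed out waiting for the default service account")
	os.Exit(1)
}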
	I0920 16:45:05.862397   16686 kubeadm.go:394] duration metric: took 14.505940675s to StartCluster
	I0920 16:45:05.862414   16686 settings.go:142] acquiring lock: {Name:mk2a13c58cdc0faf8cddca5d6716175d45db9bfd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 16:45:05.862558   16686 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19672-8777/kubeconfig
	I0920 16:45:05.862903   16686 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/kubeconfig: {Name:mkf32a4c736808e023459b2f0e40188618a38db1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 16:45:05.863101   16686 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.89 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 16:45:05.863138   16686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0920 16:45:05.863158   16686 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0920 16:45:05.863290   16686 addons.go:69] Setting yakd=true in profile "addons-489802"
	I0920 16:45:05.863282   16686 addons.go:69] Setting default-storageclass=true in profile "addons-489802"
	I0920 16:45:05.863308   16686 addons.go:234] Setting addon yakd=true in "addons-489802"
	I0920 16:45:05.863317   16686 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-489802"
	I0920 16:45:05.863312   16686 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-489802"
	I0920 16:45:05.863314   16686 addons.go:69] Setting cloud-spanner=true in profile "addons-489802"
	I0920 16:45:05.863340   16686 host.go:66] Checking if "addons-489802" exists ...
	I0920 16:45:05.863341   16686 addons.go:234] Setting addon cloud-spanner=true in "addons-489802"
	I0920 16:45:05.863342   16686 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-489802"
	I0920 16:45:05.863361   16686 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-489802"
	I0920 16:45:05.863363   16686 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-489802"
	I0920 16:45:05.863375   16686 host.go:66] Checking if "addons-489802" exists ...
	I0920 16:45:05.863390   16686 host.go:66] Checking if "addons-489802" exists ...
	I0920 16:45:05.863391   16686 host.go:66] Checking if "addons-489802" exists ...
	I0920 16:45:05.863391   16686 config.go:182] Loaded profile config "addons-489802": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 16:45:05.863448   16686 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-489802"
	I0920 16:45:05.863461   16686 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-489802"
	I0920 16:45:05.863793   16686 addons.go:69] Setting gcp-auth=true in profile "addons-489802"
	I0920 16:45:05.863800   16686 addons.go:69] Setting ingress-dns=true in profile "addons-489802"
	I0920 16:45:05.863804   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:45:05.863808   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:45:05.863821   16686 addons.go:69] Setting ingress=true in profile "addons-489802"
	I0920 16:45:05.863824   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:45:05.863831   16686 addons.go:69] Setting metrics-server=true in profile "addons-489802"
	I0920 16:45:05.863821   16686 addons.go:69] Setting inspektor-gadget=true in profile "addons-489802"
	I0920 16:45:05.863839   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:45:05.863843   16686 addons.go:234] Setting addon metrics-server=true in "addons-489802"
	I0920 16:45:05.863845   16686 addons.go:69] Setting volcano=true in profile "addons-489802"
	I0920 16:45:05.863812   16686 mustload.go:65] Loading cluster: addons-489802
	I0920 16:45:05.863852   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:45:05.863856   16686 addons.go:234] Setting addon volcano=true in "addons-489802"
	I0920 16:45:05.863865   16686 host.go:66] Checking if "addons-489802" exists ...
	I0920 16:45:05.863881   16686 host.go:66] Checking if "addons-489802" exists ...
	I0920 16:45:05.863918   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:45:05.863925   16686 addons.go:69] Setting registry=true in profile "addons-489802"
	I0920 16:45:05.863943   16686 addons.go:234] Setting addon registry=true in "addons-489802"
	I0920 16:45:05.863943   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:45:05.863955   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:45:05.863978   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:45:05.864003   16686 config.go:182] Loaded profile config "addons-489802": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 16:45:05.864008   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:45:05.864067   16686 addons.go:69] Setting storage-provisioner=true in profile "addons-489802"
	I0920 16:45:05.864077   16686 addons.go:234] Setting addon storage-provisioner=true in "addons-489802"
	I0920 16:45:05.864162   16686 addons.go:69] Setting volumesnapshots=true in profile "addons-489802"
	I0920 16:45:05.864180   16686 addons.go:234] Setting addon volumesnapshots=true in "addons-489802"
	I0920 16:45:05.864214   16686 host.go:66] Checking if "addons-489802" exists ...
	I0920 16:45:05.864241   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:45:05.864270   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:45:05.863833   16686 addons.go:234] Setting addon ingress=true in "addons-489802"
	I0920 16:45:05.864312   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:45:05.864337   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:45:05.863812   16686 addons.go:234] Setting addon ingress-dns=true in "addons-489802"
	I0920 16:45:05.864407   16686 host.go:66] Checking if "addons-489802" exists ...
	I0920 16:45:05.863847   16686 addons.go:234] Setting addon inspektor-gadget=true in "addons-489802"
	I0920 16:45:05.863810   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:45:05.864596   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:45:05.864641   16686 host.go:66] Checking if "addons-489802" exists ...
	I0920 16:45:05.864662   16686 host.go:66] Checking if "addons-489802" exists ...
	I0920 16:45:05.864741   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:45:05.864770   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:45:05.864799   16686 host.go:66] Checking if "addons-489802" exists ...
	I0920 16:45:05.864991   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:45:05.864993   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:45:05.865016   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:45:05.865021   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:45:05.865128   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:45:05.865158   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:45:05.865250   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:45:05.865287   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:45:05.865605   16686 host.go:66] Checking if "addons-489802" exists ...
	I0920 16:45:05.873149   16686 out.go:177] * Verifying Kubernetes components...
	I0920 16:45:05.875354   16686 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 16:45:05.886351   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:45:05.886408   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:45:05.886439   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:45:05.886493   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:45:05.886542   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36167
	I0920 16:45:05.886778   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45461
	I0920 16:45:05.886908   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40277
	I0920 16:45:05.887721   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:45:05.887867   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:45:05.887935   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:45:05.888511   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:45:05.888539   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:45:05.888665   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:45:05.888682   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:45:05.889051   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:45:05.889074   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:45:05.889168   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39407
	I0920 16:45:05.889340   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:45:05.889387   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:45:05.889430   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:45:05.889990   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:45:05.890030   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:45:05.890136   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:45:05.890165   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:45:05.894535   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:45:05.895113   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:45:05.895154   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:45:05.904311   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:45:05.904341   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:45:05.905034   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:45:05.905227   16686 main.go:141] libmachine: (addons-489802) Calling .GetState
	I0920 16:45:05.910612   16686 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-489802"
	I0920 16:45:05.910663   16686 host.go:66] Checking if "addons-489802" exists ...
	I0920 16:45:05.911040   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:45:05.911095   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:45:05.911196   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43009
	I0920 16:45:05.912127   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36389
	I0920 16:45:05.912633   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:45:05.913296   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:45:05.913317   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:45:05.913620   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43891
	I0920 16:45:05.913784   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41005
	I0920 16:45:05.913785   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:45:05.914527   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:45:05.914569   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:45:05.914814   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:45:05.914815   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:45:05.915345   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:45:05.915366   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:45:05.915470   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:45:05.915488   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:45:05.916370   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:45:05.916574   16686 main.go:141] libmachine: (addons-489802) Calling .GetState
	I0920 16:45:05.916621   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:45:05.917159   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:45:05.917200   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:45:05.917629   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:45:05.918192   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:45:05.918213   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:45:05.918613   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:45:05.918669   16686 host.go:66] Checking if "addons-489802" exists ...
	I0920 16:45:05.919045   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:45:05.919074   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:45:05.922095   16686 main.go:141] libmachine: (addons-489802) Calling .GetState
	I0920 16:45:05.925413   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39609
	I0920 16:45:05.926161   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:45:05.926895   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:45:05.926919   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:45:05.927445   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:45:05.928038   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:45:05.928083   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:45:05.930652   16686 addons.go:234] Setting addon default-storageclass=true in "addons-489802"
	I0920 16:45:05.930702   16686 host.go:66] Checking if "addons-489802" exists ...
	I0920 16:45:05.931084   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:45:05.931143   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:45:05.932706   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34593
	I0920 16:45:05.933363   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:45:05.934073   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:45:05.934093   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:45:05.934558   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:45:05.935171   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:45:05.935210   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:45:05.941706   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43133
	I0920 16:45:05.942347   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:45:05.943149   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:45:05.943173   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:45:05.943717   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:45:05.949811   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36123
	I0920 16:45:05.950710   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:45:05.950769   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:45:05.951083   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:45:05.951845   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:45:05.951868   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:45:05.952349   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:45:05.952538   16686 main.go:141] libmachine: (addons-489802) Calling .DriverName
	I0920 16:45:05.953123   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33577
	I0920 16:45:05.954739   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:45:05.955577   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38365
	I0920 16:45:05.956118   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46489
	I0920 16:45:05.956311   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:45:05.956877   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:45:05.956902   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:45:05.957263   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:45:05.957283   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:45:05.958119   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:45:05.958195   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41117
	I0920 16:45:05.958880   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:45:05.958921   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:45:05.959186   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:45:05.959739   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:45:05.959761   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:45:05.959785   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:45:05.960399   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:45:05.960985   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:45:05.961025   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:45:05.961535   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:45:05.961729   16686 main.go:141] libmachine: (addons-489802) Calling .GetState
	I0920 16:45:05.961940   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:45:05.961958   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:45:05.962782   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:45:05.963365   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:45:05.963414   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:45:05.963800   16686 main.go:141] libmachine: (addons-489802) Calling .DriverName
	I0920 16:45:05.966313   16686 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0920 16:45:05.967714   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36999
	I0920 16:45:05.967733   16686 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0920 16:45:05.967750   16686 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0920 16:45:05.967775   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
	I0920 16:45:05.971362   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42171
	I0920 16:45:05.972858   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33479
	I0920 16:45:05.974844   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:45:05.975487   16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
	I0920 16:45:05.975517   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:45:05.975763   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHPort
	I0920 16:45:05.975965   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
	I0920 16:45:05.976140   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHUsername
	I0920 16:45:05.976363   16686 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802/id_rsa Username:docker}
	I0920 16:45:05.977671   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42025
	I0920 16:45:05.978187   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:45:05.981448   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46501
	I0920 16:45:05.981604   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44741
	I0920 16:45:05.982424   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:45:05.982550   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:45:05.982830   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:45:05.982881   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:45:05.983467   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:45:05.983492   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:45:05.983551   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:45:05.983961   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:45:05.983979   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:45:05.984042   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:45:05.984224   16686 main.go:141] libmachine: (addons-489802) Calling .GetState
	I0920 16:45:05.984715   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35263
	I0920 16:45:05.984871   16686 main.go:141] libmachine: (addons-489802) Calling .GetState
	I0920 16:45:05.984923   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:45:05.985197   16686 main.go:141] libmachine: (addons-489802) Calling .GetState
	I0920 16:45:05.986711   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:45:05.987367   16686 main.go:141] libmachine: (addons-489802) Calling .DriverName
	I0920 16:45:05.987635   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:45:05.987654   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:45:05.987994   16686 main.go:141] libmachine: (addons-489802) Calling .DriverName
	I0920 16:45:05.988156   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:45:05.988566   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42959
	I0920 16:45:05.989594   16686 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0920 16:45:05.990395   16686 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0920 16:45:05.991212   16686 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0920 16:45:05.991233   16686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0920 16:45:05.991257   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
	I0920 16:45:05.991416   16686 main.go:141] libmachine: (addons-489802) Calling .DriverName
	I0920 16:45:05.992716   16686 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0920 16:45:05.992737   16686 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0920 16:45:05.992760   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
	I0920 16:45:05.992873   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39923
	I0920 16:45:05.993699   16686 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0920 16:45:05.995293   16686 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0920 16:45:05.995314   16686 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0920 16:45:05.995337   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
	I0920 16:45:05.995421   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHPort
	I0920 16:45:05.995474   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:45:05.995494   16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
	I0920 16:45:05.995520   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:45:05.995539   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39693
	I0920 16:45:06.002124   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:45:06.002163   16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
	I0920 16:45:06.002180   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:45:06.002186   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHPort
	I0920 16:45:06.002226   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
	I0920 16:45:06.002256   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHPort
	I0920 16:45:06.002186   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:45:06.002304   16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
	I0920 16:45:06.002330   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:45:06.002392   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:45:06.002441   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:45:06.002794   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:45:06.002895   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:45:06.003001   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:45:06.003084   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:45:06.003168   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHUsername
	I0920 16:45:06.003348   16686 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802/id_rsa Username:docker}
	I0920 16:45:06.003599   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:45:06.003651   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:45:06.003661   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:45:06.003693   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
	I0920 16:45:06.003693   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:45:06.003708   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:45:06.003715   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
	I0920 16:45:06.003952   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:45:06.003969   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:45:06.004102   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:45:06.004235   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:45:06.004248   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:45:06.004312   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:45:06.004332   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:45:06.004348   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:45:06.004535   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:45:06.004574   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHUsername
	I0920 16:45:06.004738   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHUsername
	I0920 16:45:06.004727   16686 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802/id_rsa Username:docker}
	I0920 16:45:06.004793   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:45:06.005068   16686 main.go:141] libmachine: (addons-489802) Calling .GetState
	I0920 16:45:06.005104   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:45:06.005120   16686 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802/id_rsa Username:docker}
	I0920 16:45:06.005134   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:45:06.005135   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:45:06.005145   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:45:06.006374   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:45:06.006382   16686 main.go:141] libmachine: (addons-489802) Calling .GetState
	I0920 16:45:06.006398   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:45:06.006377   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:45:06.007189   16686 main.go:141] libmachine: (addons-489802) Calling .GetState
	I0920 16:45:06.007202   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36911
	I0920 16:45:06.007213   16686 main.go:141] libmachine: (addons-489802) Calling .GetState
	I0920 16:45:06.007251   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46439
	I0920 16:45:06.007358   16686 main.go:141] libmachine: (addons-489802) Calling .DriverName
	I0920 16:45:06.007582   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:45:06.007618   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:45:06.008305   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:45:06.009013   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:45:06.009036   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:45:06.009097   16686 main.go:141] libmachine: (addons-489802) Calling .DriverName
	I0920 16:45:06.009454   16686 main.go:141] libmachine: Making call to close driver server
	I0920 16:45:06.009483   16686 main.go:141] libmachine: (addons-489802) Calling .Close
	I0920 16:45:06.011482   16686 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0920 16:45:06.011667   16686 main.go:141] libmachine: (addons-489802) DBG | Closing plugin on server side
	I0920 16:45:06.011700   16686 main.go:141] libmachine: Successfully made call to close driver server
	I0920 16:45:06.011718   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:45:06.011719   16686 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 16:45:06.011730   16686 main.go:141] libmachine: Making call to close driver server
	I0920 16:45:06.011738   16686 main.go:141] libmachine: (addons-489802) Calling .Close
	I0920 16:45:06.011780   16686 main.go:141] libmachine: (addons-489802) Calling .DriverName
	I0920 16:45:06.012083   16686 main.go:141] libmachine: (addons-489802) DBG | Closing plugin on server side
	I0920 16:45:06.012119   16686 main.go:141] libmachine: Successfully made call to close driver server
	I0920 16:45:06.012127   16686 main.go:141] libmachine: Making call to close connection to plugin binary
	W0920 16:45:06.012215   16686 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0920 16:45:06.013040   16686 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0920 16:45:06.013057   16686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0920 16:45:06.013076   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
	I0920 16:45:06.013854   16686 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0920 16:45:06.013875   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:45:06.014222   16686 main.go:141] libmachine: (addons-489802) Calling .DriverName
	I0920 16:45:06.014278   16686 main.go:141] libmachine: (addons-489802) Calling .GetState
	I0920 16:45:06.015566   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:45:06.015585   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:45:06.016191   16686 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0920 16:45:06.016298   16686 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0920 16:45:06.016476   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:45:06.016889   16686 main.go:141] libmachine: (addons-489802) Calling .DriverName
	I0920 16:45:06.017494   16686 main.go:141] libmachine: (addons-489802) Calling .GetState
	I0920 16:45:06.018839   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:45:06.019261   16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
	I0920 16:45:06.019283   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:45:06.019485   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHPort
	I0920 16:45:06.019664   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
	I0920 16:45:06.019716   16686 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0920 16:45:06.019816   16686 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 16:45:06.019996   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHUsername
	I0920 16:45:06.020051   16686 main.go:141] libmachine: (addons-489802) Calling .DriverName
	I0920 16:45:06.020211   16686 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802/id_rsa Username:docker}
	I0920 16:45:06.020731   16686 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 16:45:06.021987   16686 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0920 16:45:06.022029   16686 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0920 16:45:06.022093   16686 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 16:45:06.022300   16686 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 16:45:06.022755   16686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 16:45:06.022776   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
	I0920 16:45:06.023143   16686 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0920 16:45:06.023160   16686 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0920 16:45:06.023177   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
	I0920 16:45:06.024174   16686 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0920 16:45:06.024191   16686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0920 16:45:06.024275   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
	I0920 16:45:06.024664   16686 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0920 16:45:06.025980   16686 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0920 16:45:06.027309   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:45:06.027785   16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
	I0920 16:45:06.027815   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:45:06.027929   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:45:06.028009   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHPort
	I0920 16:45:06.028181   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
	I0920 16:45:06.028474   16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
	I0920 16:45:06.028495   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:45:06.028615   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHUsername
	I0920 16:45:06.028701   16686 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0920 16:45:06.028891   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:45:06.028889   16686 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802/id_rsa Username:docker}
	I0920 16:45:06.028923   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHPort
	I0920 16:45:06.029196   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
	I0920 16:45:06.029192   16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
	I0920 16:45:06.029222   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:45:06.029483   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHPort
	I0920 16:45:06.029709   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHUsername
	I0920 16:45:06.029887   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
	I0920 16:45:06.029906   16686 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802/id_rsa Username:docker}
	I0920 16:45:06.030033   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHUsername
	I0920 16:45:06.030190   16686 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802/id_rsa Username:docker}
	I0920 16:45:06.031196   16686 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0920 16:45:06.032725   16686 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0920 16:45:06.032746   16686 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0920 16:45:06.032780   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
	I0920 16:45:06.034644   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42341
	I0920 16:45:06.035197   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37823
	I0920 16:45:06.035340   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:45:06.036022   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:45:06.036041   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:45:06.036112   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:45:06.036407   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:45:06.036475   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:45:06.036695   16686 main.go:141] libmachine: (addons-489802) Calling .GetState
	I0920 16:45:06.036796   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:45:06.036813   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:45:06.037369   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHPort
	I0920 16:45:06.037379   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:45:06.037431   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46467
	I0920 16:45:06.037435   16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
	I0920 16:45:06.037447   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:45:06.037568   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
	I0920 16:45:06.037633   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36745
	I0920 16:45:06.037767   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:45:06.037792   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHUsername
	I0920 16:45:06.037889   16686 main.go:141] libmachine: (addons-489802) Calling .GetState
	I0920 16:45:06.037985   16686 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802/id_rsa Username:docker}
	I0920 16:45:06.038291   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:45:06.038315   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:45:06.038531   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:45:06.038620   16686 main.go:141] libmachine: (addons-489802) Calling .DriverName
	I0920 16:45:06.038675   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:45:06.038861   16686 main.go:141] libmachine: (addons-489802) Calling .GetState
	I0920 16:45:06.039491   16686 main.go:141] libmachine: (addons-489802) Calling .DriverName
	I0920 16:45:06.039654   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:45:06.039669   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:45:06.040233   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:45:06.040465   16686 main.go:141] libmachine: (addons-489802) Calling .GetState
	I0920 16:45:06.040605   16686 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0920 16:45:06.040832   16686 main.go:141] libmachine: (addons-489802) Calling .DriverName
	I0920 16:45:06.041303   16686 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 16:45:06.041318   16686 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 16:45:06.041334   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
	I0920 16:45:06.041615   16686 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0920 16:45:06.042140   16686 main.go:141] libmachine: (addons-489802) Calling .DriverName
	I0920 16:45:06.043269   16686 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0920 16:45:06.043289   16686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0920 16:45:06.043306   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
	I0920 16:45:06.044349   16686 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0920 16:45:06.044617   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:45:06.044625   16686 out.go:177]   - Using image docker.io/busybox:stable
	I0920 16:45:06.045036   16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
	I0920 16:45:06.045057   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:45:06.045261   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHPort
	I0920 16:45:06.045420   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
	I0920 16:45:06.045924   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHUsername
	I0920 16:45:06.046045   16686 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0920 16:45:06.046062   16686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0920 16:45:06.046076   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
	I0920 16:45:06.046233   16686 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802/id_rsa Username:docker}
	I0920 16:45:06.046927   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:45:06.047431   16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
	I0920 16:45:06.047463   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:45:06.047597   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHPort
	I0920 16:45:06.047765   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
	I0920 16:45:06.047891   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHUsername
	I0920 16:45:06.048008   16686 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802/id_rsa Username:docker}
	I0920 16:45:06.048154   16686 out.go:177]   - Using image docker.io/registry:2.8.3
	I0920 16:45:06.049631   16686 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0920 16:45:06.049649   16686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0920 16:45:06.049663   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:45:06.049676   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
	I0920 16:45:06.050129   16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
	I0920 16:45:06.050156   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:45:06.050430   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHPort
	I0920 16:45:06.050586   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
	I0920 16:45:06.050750   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHUsername
	I0920 16:45:06.050868   16686 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802/id_rsa Username:docker}
	I0920 16:45:06.052498   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:45:06.052871   16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
	I0920 16:45:06.052900   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:45:06.053033   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHPort
	I0920 16:45:06.053170   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
	I0920 16:45:06.053326   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHUsername
	I0920 16:45:06.053496   16686 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802/id_rsa Username:docker}
	I0920 16:45:06.353051   16686 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0920 16:45:06.353074   16686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0920 16:45:06.375750   16686 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 16:45:06.375808   16686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
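(The long kubectl command above patches the CoreDNS ConfigMap in place: the sed expression inserts a "log" directive before the existing "errors" line and a hosts block before the "forward . /etc/resolv.conf" line, so pods can resolve host.minikube.internal to the host's IP (192.168.39.1 here). Reconstructed from that sed expression alone, the relevant part of the resulting Corefile should look roughly like this; the other default plugins are elided:

	.:53 {
	    log        # inserted before "errors"
	    errors
	    ...
	    hosts {    # inserted before the forward line
	       192.168.39.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf
	    ...
	}

The "fallthrough" keeps all other names flowing to the kubernetes and forward plugins as before.)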
	I0920 16:45:06.391326   16686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0920 16:45:06.493613   16686 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0920 16:45:06.493638   16686 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0920 16:45:06.505773   16686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0920 16:45:06.532977   16686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0920 16:45:06.533515   16686 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0920 16:45:06.533534   16686 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0920 16:45:06.540683   16686 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0920 16:45:06.540708   16686 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0920 16:45:06.543084   16686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 16:45:06.544984   16686 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0920 16:45:06.545000   16686 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0920 16:45:06.551458   16686 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0920 16:45:06.551479   16686 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0920 16:45:06.556172   16686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0920 16:45:06.557507   16686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0920 16:45:06.566682   16686 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0920 16:45:06.566703   16686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0920 16:45:06.627313   16686 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0920 16:45:06.627340   16686 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0920 16:45:06.640927   16686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 16:45:06.670548   16686 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0920 16:45:06.670574   16686 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0920 16:45:06.763522   16686 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0920 16:45:06.763549   16686 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0920 16:45:06.783481   16686 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0920 16:45:06.783521   16686 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0920 16:45:06.819177   16686 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0920 16:45:06.819204   16686 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0920 16:45:06.839272   16686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0920 16:45:06.896200   16686 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0920 16:45:06.896230   16686 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0920 16:45:06.910579   16686 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0920 16:45:06.910614   16686 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0920 16:45:06.930437   16686 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0920 16:45:06.930463   16686 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0920 16:45:06.940831   16686 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 16:45:06.940867   16686 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0920 16:45:07.047035   16686 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0920 16:45:07.047062   16686 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0920 16:45:07.215806   16686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 16:45:07.218901   16686 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0920 16:45:07.218932   16686 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0920 16:45:07.223882   16686 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0920 16:45:07.223905   16686 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0920 16:45:07.227082   16686 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0920 16:45:07.227103   16686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0920 16:45:07.256340   16686 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0920 16:45:07.256375   16686 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0920 16:45:07.464044   16686 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0920 16:45:07.464078   16686 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0920 16:45:07.493814   16686 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0920 16:45:07.493851   16686 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0920 16:45:07.582458   16686 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 16:45:07.582479   16686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0920 16:45:07.603848   16686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0920 16:45:07.828047   16686 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0920 16:45:07.828070   16686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0920 16:45:07.844298   16686 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0920 16:45:07.844335   16686 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0920 16:45:08.029971   16686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 16:45:08.174001   16686 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0920 16:45:08.174023   16686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0920 16:45:08.192445   16686 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0920 16:45:08.192475   16686 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0920 16:45:08.510930   16686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0920 16:45:08.524911   16686 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0920 16:45:08.524942   16686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0920 16:45:08.726846   16686 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0920 16:45:08.726879   16686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0920 16:45:09.009410   16686 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0920 16:45:09.009447   16686 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0920 16:45:09.024627   16686 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.648835712s)
	I0920 16:45:09.024679   16686 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.648847664s)
	I0920 16:45:09.024704   16686 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0920 16:45:09.024765   16686 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.633411979s)
	I0920 16:45:09.024811   16686 main.go:141] libmachine: Making call to close driver server
	I0920 16:45:09.024825   16686 main.go:141] libmachine: (addons-489802) Calling .Close
	I0920 16:45:09.025119   16686 main.go:141] libmachine: Successfully made call to close driver server
	I0920 16:45:09.025144   16686 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 16:45:09.025153   16686 main.go:141] libmachine: Making call to close driver server
	I0920 16:45:09.025161   16686 main.go:141] libmachine: (addons-489802) Calling .Close
	I0920 16:45:09.025404   16686 main.go:141] libmachine: Successfully made call to close driver server
	I0920 16:45:09.025445   16686 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 16:45:09.025920   16686 node_ready.go:35] waiting up to 6m0s for node "addons-489802" to be "Ready" ...
	I0920 16:45:09.035518   16686 node_ready.go:49] node "addons-489802" has status "Ready":"True"
	I0920 16:45:09.035609   16686 node_ready.go:38] duration metric: took 9.661904ms for node "addons-489802" to be "Ready" ...
	I0920 16:45:09.035637   16686 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 16:45:09.051148   16686 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-nqbzq" in "kube-system" namespace to be "Ready" ...
	I0920 16:45:09.322288   16686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0920 16:45:09.534546   16686 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-489802" context rescaled to 1 replicas
	I0920 16:45:11.158586   16686 pod_ready.go:103] pod "coredns-7c65d6cfc9-nqbzq" in "kube-system" namespace has status "Ready":"False"
	I0920 16:45:12.692545   16686 pod_ready.go:93] pod "coredns-7c65d6cfc9-nqbzq" in "kube-system" namespace has status "Ready":"True"
	I0920 16:45:12.692574   16686 pod_ready.go:82] duration metric: took 3.641395186s for pod "coredns-7c65d6cfc9-nqbzq" in "kube-system" namespace to be "Ready" ...
	I0920 16:45:12.692587   16686 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-tm9vr" in "kube-system" namespace to be "Ready" ...
	I0920 16:45:12.993726   16686 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0920 16:45:12.993782   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
	I0920 16:45:12.997095   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:45:12.997468   16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
	I0920 16:45:12.997509   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:45:12.997646   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHPort
	I0920 16:45:12.997868   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
	I0920 16:45:12.998029   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHUsername
	I0920 16:45:12.998260   16686 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802/id_rsa Username:docker}
	I0920 16:45:13.539202   16686 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0920 16:45:13.682847   16686 addons.go:234] Setting addon gcp-auth=true in "addons-489802"
	I0920 16:45:13.682906   16686 host.go:66] Checking if "addons-489802" exists ...
	I0920 16:45:13.683199   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:45:13.683239   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:45:13.702441   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33011
	I0920 16:45:13.702905   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:45:13.703420   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:45:13.703442   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:45:13.703814   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:45:13.704438   16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 16:45:13.704485   16686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 16:45:13.722380   16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33875
	I0920 16:45:13.723033   16686 main.go:141] libmachine: () Calling .GetVersion
	I0920 16:45:13.723749   16686 main.go:141] libmachine: Using API Version  1
	I0920 16:45:13.723776   16686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 16:45:13.724178   16686 main.go:141] libmachine: () Calling .GetMachineName
	I0920 16:45:13.724416   16686 main.go:141] libmachine: (addons-489802) Calling .GetState
	I0920 16:45:13.726164   16686 main.go:141] libmachine: (addons-489802) Calling .DriverName
	I0920 16:45:13.726406   16686 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0920 16:45:13.726432   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
	I0920 16:45:13.729255   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:45:13.729760   16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
	I0920 16:45:13.729791   16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
	I0920 16:45:13.729945   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHPort
	I0920 16:45:13.730109   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
	I0920 16:45:13.730294   16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHUsername
	I0920 16:45:13.730440   16686 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802/id_rsa Username:docker}
	I0920 16:45:13.776226   16686 pod_ready.go:98] pod "coredns-7c65d6cfc9-tm9vr" in "kube-system" namespace has status phase "Failed" (skipping!): {Phase:Failed Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 16:45:13 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 16:45:06 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 16:45:06 +0000 UTC Reason:PodFailed Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 16:45:06 +0000 UTC Reason:PodFailed Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 16:45:06 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.89 HostIPs:[{IP:192.168.39.89}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-09-20 16:45:06 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2024-09-20 16:45:10 +0000 UTC,FinishedAt:2024-09-20 16:45:11 +0000 UTC,ContainerID:cri-o://918bc5fe873828ba31e8b226c084835ff9648d49d56fd967df98c04026fcd9c4,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://918bc5fe873828ba31e8b226c084835ff9648d49d56fd967df98c04026fcd9c4 Started:0xc00179695c AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc001a1e830} {Name:kube-api-access-l4wh8 MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc001a1e840}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0920 16:45:13.776273   16686 pod_ready.go:82] duration metric: took 1.083676607s for pod "coredns-7c65d6cfc9-tm9vr" in "kube-system" namespace to be "Ready" ...
	E0920 16:45:13.776285   16686 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-7c65d6cfc9-tm9vr" in "kube-system" namespace has status phase "Failed" (skipping!): {Phase:Failed Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 16:45:13 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 16:45:06 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 16:45:06 +0000 UTC Reason:PodFailed Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 16:45:06 +0000 UTC Reason:PodFailed Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 16:45:06 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.89 HostIPs:[{IP:192.168.39.89}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-09-20 16:45:06 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2024-09-20 16:45:10 +0000 UTC,FinishedAt:2024-09-20 16:45:11 +0000 UTC,ContainerID:cri-o://918bc5fe873828ba31e8b226c084835ff9648d49d56fd967df98c04026fcd9c4,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://918bc5fe873828ba31e8b226c084835ff9648d49d56fd967df98c04026fcd9c4 Started:0xc00179695c AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc001a1e830} {Name:kube-api-access-l4wh8 MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc001a1e840}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0920 16:45:13.776297   16686 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-489802" in "kube-system" namespace to be "Ready" ...
	I0920 16:45:13.895071   16686 pod_ready.go:93] pod "etcd-addons-489802" in "kube-system" namespace has status "Ready":"True"
	I0920 16:45:13.895098   16686 pod_ready.go:82] duration metric: took 118.793361ms for pod "etcd-addons-489802" in "kube-system" namespace to be "Ready" ...
	I0920 16:45:13.895111   16686 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-489802" in "kube-system" namespace to be "Ready" ...
	I0920 16:45:14.014764   16686 pod_ready.go:93] pod "kube-apiserver-addons-489802" in "kube-system" namespace has status "Ready":"True"
	I0920 16:45:14.014787   16686 pod_ready.go:82] duration metric: took 119.668585ms for pod "kube-apiserver-addons-489802" in "kube-system" namespace to be "Ready" ...
	I0920 16:45:14.014841   16686 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-489802" in "kube-system" namespace to be "Ready" ...
	I0920 16:45:14.127671   16686 pod_ready.go:93] pod "kube-controller-manager-addons-489802" in "kube-system" namespace has status "Ready":"True"
	I0920 16:45:14.127694   16686 pod_ready.go:82] duration metric: took 112.838527ms for pod "kube-controller-manager-addons-489802" in "kube-system" namespace to be "Ready" ...
	I0920 16:45:14.127705   16686 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xr4bt" in "kube-system" namespace to be "Ready" ...
	I0920 16:45:14.150341   16686 pod_ready.go:93] pod "kube-proxy-xr4bt" in "kube-system" namespace has status "Ready":"True"
	I0920 16:45:14.150367   16686 pod_ready.go:82] duration metric: took 22.655966ms for pod "kube-proxy-xr4bt" in "kube-system" namespace to be "Ready" ...
	I0920 16:45:14.150376   16686 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-489802" in "kube-system" namespace to be "Ready" ...
	I0920 16:45:14.206202   16686 pod_ready.go:93] pod "kube-scheduler-addons-489802" in "kube-system" namespace has status "Ready":"True"
	I0920 16:45:14.206226   16686 pod_ready.go:82] duration metric: took 55.843139ms for pod "kube-scheduler-addons-489802" in "kube-system" namespace to be "Ready" ...
	I0920 16:45:14.206238   16686 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-54hhx" in "kube-system" namespace to be "Ready" ...
	I0920 16:45:15.135704   16686 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.629885928s)
	I0920 16:45:15.135777   16686 main.go:141] libmachine: Making call to close driver server
	I0920 16:45:15.135782   16686 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.602774066s)
	I0920 16:45:15.135815   16686 main.go:141] libmachine: Making call to close driver server
	I0920 16:45:15.135832   16686 main.go:141] libmachine: (addons-489802) Calling .Close
	I0920 16:45:15.135837   16686 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.592733845s)
	I0920 16:45:15.135860   16686 main.go:141] libmachine: Making call to close driver server
	I0920 16:45:15.135874   16686 main.go:141] libmachine: (addons-489802) Calling .Close
	I0920 16:45:15.135791   16686 main.go:141] libmachine: (addons-489802) Calling .Close
	I0920 16:45:15.135976   16686 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.579777747s)
	I0920 16:45:15.136071   16686 main.go:141] libmachine: Making call to close driver server
	I0920 16:45:15.136137   16686 main.go:141] libmachine: Successfully made call to close driver server
	I0920 16:45:15.136165   16686 main.go:141] libmachine: (addons-489802) DBG | Closing plugin on server side
	I0920 16:45:15.136165   16686 main.go:141] libmachine: Successfully made call to close driver server
	I0920 16:45:15.136176   16686 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 16:45:15.136187   16686 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 16:45:15.136191   16686 main.go:141] libmachine: Making call to close driver server
	I0920 16:45:15.136195   16686 main.go:141] libmachine: Making call to close driver server
	I0920 16:45:15.136195   16686 main.go:141] libmachine: (addons-489802) DBG | Closing plugin on server side
	I0920 16:45:15.136202   16686 main.go:141] libmachine: (addons-489802) Calling .Close
	I0920 16:45:15.136241   16686 main.go:141] libmachine: (addons-489802) Calling .Close
	I0920 16:45:15.136199   16686 main.go:141] libmachine: (addons-489802) Calling .Close
	I0920 16:45:15.136269   16686 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.578731979s)
	I0920 16:45:15.136290   16686 main.go:141] libmachine: Successfully made call to close driver server
	I0920 16:45:15.136196   16686 main.go:141] libmachine: (addons-489802) DBG | Closing plugin on server side
	I0920 16:45:15.136312   16686 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 16:45:15.136322   16686 main.go:141] libmachine: Making call to close driver server
	I0920 16:45:15.136299   16686 main.go:141] libmachine: Making call to close driver server
	I0920 16:45:15.136332   16686 main.go:141] libmachine: (addons-489802) Calling .Close
	I0920 16:45:15.136345   16686 main.go:141] libmachine: (addons-489802) Calling .Close
	I0920 16:45:15.136388   16686 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.297083849s)
	I0920 16:45:15.136410   16686 main.go:141] libmachine: Making call to close driver server
	I0920 16:45:15.136420   16686 main.go:141] libmachine: (addons-489802) Calling .Close
	I0920 16:45:15.136467   16686 main.go:141] libmachine: (addons-489802) DBG | Closing plugin on server side
	I0920 16:45:15.136492   16686 main.go:141] libmachine: Successfully made call to close driver server
	I0920 16:45:15.136499   16686 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 16:45:15.136506   16686 main.go:141] libmachine: Making call to close driver server
	I0920 16:45:15.136513   16686 main.go:141] libmachine: (addons-489802) Calling .Close
	I0920 16:45:15.136540   16686 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.920700025s)
	I0920 16:45:15.136560   16686 main.go:141] libmachine: Making call to close driver server
	I0920 16:45:15.136569   16686 main.go:141] libmachine: (addons-489802) Calling .Close
	I0920 16:45:15.136342   16686 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.495383696s)
	I0920 16:45:15.136654   16686 main.go:141] libmachine: Making call to close driver server
	I0920 16:45:15.136666   16686 main.go:141] libmachine: (addons-489802) Calling .Close
	I0920 16:45:15.136665   16686 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.532769315s)
	I0920 16:45:15.136718   16686 main.go:141] libmachine: Making call to close driver server
	I0920 16:45:15.136726   16686 main.go:141] libmachine: (addons-489802) Calling .Close
	I0920 16:45:15.136765   16686 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.106759371s)
	I0920 16:45:15.136781   16686 main.go:141] libmachine: (addons-489802) DBG | Closing plugin on server side
	W0920 16:45:15.136792   16686 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0920 16:45:15.136807   16686 main.go:141] libmachine: Successfully made call to close driver server
	I0920 16:45:15.136815   16686 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 16:45:15.136815   16686 retry.go:31] will retry after 374.579066ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0920 16:45:15.136939   16686 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.625889401s)
	I0920 16:45:15.136963   16686 main.go:141] libmachine: Making call to close driver server
	I0920 16:45:15.136976   16686 main.go:141] libmachine: (addons-489802) Calling .Close
	I0920 16:45:15.137039   16686 main.go:141] libmachine: (addons-489802) DBG | Closing plugin on server side
	I0920 16:45:15.137050   16686 main.go:141] libmachine: (addons-489802) DBG | Closing plugin on server side
	I0920 16:45:15.137071   16686 main.go:141] libmachine: Successfully made call to close driver server
	I0920 16:45:15.137102   16686 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 16:45:15.137131   16686 main.go:141] libmachine: Successfully made call to close driver server
	I0920 16:45:15.137137   16686 main.go:141] libmachine: Making call to close driver server
	I0920 16:45:15.137152   16686 main.go:141] libmachine: (addons-489802) Calling .Close
	I0920 16:45:15.137158   16686 main.go:141] libmachine: (addons-489802) DBG | Closing plugin on server side
	I0920 16:45:15.137108   16686 main.go:141] libmachine: Successfully made call to close driver server
	I0920 16:45:15.137170   16686 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 16:45:15.137178   16686 main.go:141] libmachine: Making call to close driver server
	I0920 16:45:15.137186   16686 main.go:141] libmachine: (addons-489802) Calling .Close
	I0920 16:45:15.137875   16686 main.go:141] libmachine: (addons-489802) DBG | Closing plugin on server side
	I0920 16:45:15.137908   16686 main.go:141] libmachine: Successfully made call to close driver server
	I0920 16:45:15.137915   16686 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 16:45:15.137922   16686 main.go:141] libmachine: Making call to close driver server
	I0920 16:45:15.137929   16686 main.go:141] libmachine: (addons-489802) Calling .Close
	I0920 16:45:15.137975   16686 main.go:141] libmachine: (addons-489802) DBG | Closing plugin on server side
	I0920 16:45:15.137994   16686 main.go:141] libmachine: Successfully made call to close driver server
	I0920 16:45:15.137999   16686 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 16:45:15.138006   16686 main.go:141] libmachine: Making call to close driver server
	I0920 16:45:15.138013   16686 main.go:141] libmachine: (addons-489802) Calling .Close
	I0920 16:45:15.138047   16686 main.go:141] libmachine: (addons-489802) DBG | Closing plugin on server side
	I0920 16:45:15.138061   16686 main.go:141] libmachine: (addons-489802) DBG | Closing plugin on server side
	I0920 16:45:15.138078   16686 main.go:141] libmachine: Successfully made call to close driver server
	I0920 16:45:15.138084   16686 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 16:45:15.138093   16686 addons.go:475] Verifying addon registry=true in "addons-489802"
	I0920 16:45:15.138895   16686 main.go:141] libmachine: Successfully made call to close driver server
	I0920 16:45:15.138916   16686 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 16:45:15.138927   16686 main.go:141] libmachine: Making call to close driver server
	I0920 16:45:15.138936   16686 main.go:141] libmachine: (addons-489802) Calling .Close
	I0920 16:45:15.139035   16686 main.go:141] libmachine: Successfully made call to close driver server
	I0920 16:45:15.139050   16686 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 16:45:15.137073   16686 main.go:141] libmachine: Successfully made call to close driver server
	I0920 16:45:15.139271   16686 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 16:45:15.137144   16686 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 16:45:15.139348   16686 addons.go:475] Verifying addon ingress=true in "addons-489802"
	I0920 16:45:15.139477   16686 main.go:141] libmachine: (addons-489802) DBG | Closing plugin on server side
	I0920 16:45:15.137089   16686 main.go:141] libmachine: (addons-489802) DBG | Closing plugin on server side
	I0920 16:45:15.139526   16686 main.go:141] libmachine: (addons-489802) DBG | Closing plugin on server side
	I0920 16:45:15.139550   16686 main.go:141] libmachine: Successfully made call to close driver server
	I0920 16:45:15.139564   16686 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 16:45:15.139719   16686 main.go:141] libmachine: Successfully made call to close driver server
	I0920 16:45:15.139735   16686 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 16:45:15.139509   16686 main.go:141] libmachine: Successfully made call to close driver server
	I0920 16:45:15.139873   16686 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 16:45:15.139884   16686 main.go:141] libmachine: Making call to close driver server
	I0920 16:45:15.139894   16686 main.go:141] libmachine: (addons-489802) Calling .Close
	I0920 16:45:15.140278   16686 main.go:141] libmachine: (addons-489802) DBG | Closing plugin on server side
	I0920 16:45:15.140316   16686 main.go:141] libmachine: Successfully made call to close driver server
	I0920 16:45:15.140328   16686 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 16:45:15.141359   16686 main.go:141] libmachine: Successfully made call to close driver server
	I0920 16:45:15.141378   16686 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 16:45:15.141387   16686 addons.go:475] Verifying addon metrics-server=true in "addons-489802"
	I0920 16:45:15.141742   16686 out.go:177] * Verifying ingress addon...
	I0920 16:45:15.141861   16686 out.go:177] * Verifying registry addon...
	I0920 16:45:15.142395   16686 main.go:141] libmachine: Successfully made call to close driver server
	I0920 16:45:15.142416   16686 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 16:45:15.142438   16686 main.go:141] libmachine: (addons-489802) DBG | Closing plugin on server side
	I0920 16:45:15.144272   16686 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-489802 service yakd-dashboard -n yakd-dashboard
	
	I0920 16:45:15.144625   16686 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0920 16:45:15.144652   16686 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0920 16:45:15.182676   16686 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0920 16:45:15.182707   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:15.183762   16686 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0920 16:45:15.183790   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:15.473454   16686 main.go:141] libmachine: Making call to close driver server
	I0920 16:45:15.473474   16686 main.go:141] libmachine: (addons-489802) Calling .Close
	I0920 16:45:15.473959   16686 main.go:141] libmachine: Successfully made call to close driver server
	I0920 16:45:15.473976   16686 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 16:45:15.479442   16686 main.go:141] libmachine: Making call to close driver server
	I0920 16:45:15.479466   16686 main.go:141] libmachine: (addons-489802) Calling .Close
	I0920 16:45:15.479704   16686 main.go:141] libmachine: Successfully made call to close driver server
	I0920 16:45:15.479721   16686 main.go:141] libmachine: Making call to close connection to plugin binary
	W0920 16:45:15.479879   16686 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0920 16:45:15.512325   16686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 16:45:15.658712   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:15.659607   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:16.155622   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:16.160001   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:16.241480   16686 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-54hhx" in "kube-system" namespace has status "Ready":"False"
	I0920 16:45:16.517442   16686 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.195100107s)
	I0920 16:45:16.517489   16686 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.791061379s)
	I0920 16:45:16.517497   16686 main.go:141] libmachine: Making call to close driver server
	I0920 16:45:16.517513   16686 main.go:141] libmachine: (addons-489802) Calling .Close
	I0920 16:45:16.517795   16686 main.go:141] libmachine: (addons-489802) DBG | Closing plugin on server side
	I0920 16:45:16.517795   16686 main.go:141] libmachine: Successfully made call to close driver server
	I0920 16:45:16.517817   16686 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 16:45:16.517843   16686 main.go:141] libmachine: Making call to close driver server
	I0920 16:45:16.517851   16686 main.go:141] libmachine: (addons-489802) Calling .Close
	I0920 16:45:16.518062   16686 main.go:141] libmachine: Successfully made call to close driver server
	I0920 16:45:16.518079   16686 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 16:45:16.518089   16686 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-489802"
	I0920 16:45:16.519716   16686 out.go:177] * Verifying csi-hostpath-driver addon...
	I0920 16:45:16.519723   16686 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 16:45:16.521078   16686 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0920 16:45:16.521713   16686 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0920 16:45:16.522238   16686 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0920 16:45:16.522258   16686 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0920 16:45:16.561413   16686 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0920 16:45:16.561441   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:16.652853   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:16.654932   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:16.670493   16686 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0920 16:45:16.670518   16686 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0920 16:45:16.788959   16686 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0920 16:45:16.788986   16686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0920 16:45:16.869081   16686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0920 16:45:17.027599   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:17.156633   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:17.157163   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:17.527462   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:17.650521   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:17.650643   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:17.734897   16686 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.222504857s)
	I0920 16:45:17.734961   16686 main.go:141] libmachine: Making call to close driver server
	I0920 16:45:17.734978   16686 main.go:141] libmachine: (addons-489802) Calling .Close
	I0920 16:45:17.735373   16686 main.go:141] libmachine: Successfully made call to close driver server
	I0920 16:45:17.735395   16686 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 16:45:17.735414   16686 main.go:141] libmachine: Making call to close driver server
	I0920 16:45:17.735423   16686 main.go:141] libmachine: (addons-489802) Calling .Close
	I0920 16:45:17.735676   16686 main.go:141] libmachine: (addons-489802) DBG | Closing plugin on server side
	I0920 16:45:17.735715   16686 main.go:141] libmachine: Successfully made call to close driver server
	I0920 16:45:17.735732   16686 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 16:45:18.039389   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:18.191248   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:18.192032   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:18.226929   16686 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.357782077s)
	I0920 16:45:18.227006   16686 main.go:141] libmachine: Making call to close driver server
	I0920 16:45:18.227027   16686 main.go:141] libmachine: (addons-489802) Calling .Close
	I0920 16:45:18.227352   16686 main.go:141] libmachine: Successfully made call to close driver server
	I0920 16:45:18.227371   16686 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 16:45:18.227380   16686 main.go:141] libmachine: Making call to close driver server
	I0920 16:45:18.227388   16686 main.go:141] libmachine: (addons-489802) Calling .Close
	I0920 16:45:18.227596   16686 main.go:141] libmachine: Successfully made call to close driver server
	I0920 16:45:18.227608   16686 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 16:45:18.229117   16686 addons.go:475] Verifying addon gcp-auth=true in "addons-489802"
	I0920 16:45:18.230928   16686 out.go:177] * Verifying gcp-auth addon...
	I0920 16:45:18.233132   16686 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0920 16:45:18.302814   16686 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-54hhx" in "kube-system" namespace has status "Ready":"False"
	I0920 16:45:18.303833   16686 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0920 16:45:18.303849   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:18.526206   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:18.650162   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:18.650906   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:18.737130   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:19.027359   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:19.151083   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:19.152167   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:19.237097   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:19.530489   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:19.651552   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:19.651799   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:19.737916   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:20.027552   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:20.150028   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:20.150617   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:20.237634   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:20.527445   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:20.651604   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:20.652378   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:20.712902   16686 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-54hhx" in "kube-system" namespace has status "Ready":"False"
	I0920 16:45:20.736944   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:21.029114   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:21.149408   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:21.150699   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:21.236999   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:21.527442   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:21.967907   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:21.968174   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:22.070927   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:22.072675   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:22.149613   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:22.150237   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:22.237824   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:22.531579   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:22.650997   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:22.651735   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:22.714124   16686 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-54hhx" in "kube-system" namespace has status "Ready":"False"
	I0920 16:45:22.738003   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:23.036430   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:23.154161   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:23.155271   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:23.274914   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:23.528959   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:23.662172   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:23.665690   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:23.747609   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:24.028698   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:24.163651   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:24.164456   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:24.248826   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:24.526972   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:24.652716   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:24.653397   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:24.715653   16686 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-54hhx" in "kube-system" namespace has status "Ready":"False"
	I0920 16:45:24.740107   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:25.028341   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:25.150991   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:25.153743   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:25.634814   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:25.635566   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:25.651776   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:25.652748   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:25.736431   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:26.032193   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:26.150517   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:26.150967   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:26.238433   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:26.527250   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:26.650016   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:26.650451   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:26.737952   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:27.027290   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:27.150220   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:27.150405   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:27.213074   16686 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-54hhx" in "kube-system" namespace has status "Ready":"True"
	I0920 16:45:27.213099   16686 pod_ready.go:82] duration metric: took 13.006853784s for pod "nvidia-device-plugin-daemonset-54hhx" in "kube-system" namespace to be "Ready" ...
	I0920 16:45:27.213106   16686 pod_ready.go:39] duration metric: took 18.177423912s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 16:45:27.213122   16686 api_server.go:52] waiting for apiserver process to appear ...
	I0920 16:45:27.213169   16686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 16:45:27.236400   16686 api_server.go:72] duration metric: took 21.373270823s to wait for apiserver process to appear ...
	I0920 16:45:27.236426   16686 api_server.go:88] waiting for apiserver healthz status ...
	I0920 16:45:27.236445   16686 api_server.go:253] Checking apiserver healthz at https://192.168.39.89:8443/healthz ...
	I0920 16:45:27.239701   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:27.242110   16686 api_server.go:279] https://192.168.39.89:8443/healthz returned 200:
	ok
	I0920 16:45:27.243105   16686 api_server.go:141] control plane version: v1.31.1
	I0920 16:45:27.243132   16686 api_server.go:131] duration metric: took 6.699495ms to wait for apiserver health ...
	I0920 16:45:27.243142   16686 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 16:45:27.251414   16686 system_pods.go:59] 17 kube-system pods found
	I0920 16:45:27.251443   16686 system_pods.go:61] "coredns-7c65d6cfc9-nqbzq" [734f1782-975a-486b-adf3-32f60c376a9a] Running
	I0920 16:45:27.251451   16686 system_pods.go:61] "csi-hostpath-attacher-0" [8fc733e6-4135-418b-a554-490bd25dabe7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0920 16:45:27.251458   16686 system_pods.go:61] "csi-hostpath-resizer-0" [85755d16-e8fa-4878-9184-45658ba8d8ac] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0920 16:45:27.251465   16686 system_pods.go:61] "csi-hostpathplugin-hglqr" [0aeb8bcc-1f9f-40f6-8aa1-4822a64115f2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0920 16:45:27.251469   16686 system_pods.go:61] "etcd-addons-489802" [7f35387b-c7f1-4436-8369-77849b9c2383] Running
	I0920 16:45:27.251475   16686 system_pods.go:61] "kube-apiserver-addons-489802" [9c4029f4-8e01-4d5d-a866-518e553ac713] Running
	I0920 16:45:27.251481   16686 system_pods.go:61] "kube-controller-manager-addons-489802" [30219691-4d43-476d-8720-80aa4f2b6b54] Running
	I0920 16:45:27.251488   16686 system_pods.go:61] "kube-ingress-dns-minikube" [1f722d5e-9dee-4b0e-8661-9c4181ea4f9b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0920 16:45:27.251495   16686 system_pods.go:61] "kube-proxy-xr4bt" [7a20cb9e-3e82-4bda-9529-7e024f9681a4] Running
	I0920 16:45:27.251504   16686 system_pods.go:61] "kube-scheduler-addons-489802" [8b17a764-82bc-4003-8b0c-9d46c614e15d] Running
	I0920 16:45:27.251512   16686 system_pods.go:61] "metrics-server-84c5f94fbc-txlrn" [b6d2625e-ba6e-44e1-b245-0edc2adaa243] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 16:45:27.251518   16686 system_pods.go:61] "nvidia-device-plugin-daemonset-54hhx" [b022f644-f7de-4d74-aed4-63ad47ef0b71] Running
	I0920 16:45:27.251526   16686 system_pods.go:61] "registry-66c9cd494c-7swkh" [1e3cfba8-c77f-46f3-b6b1-46c7a36ae3a4] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0920 16:45:27.251534   16686 system_pods.go:61] "registry-proxy-ggl6q" [a467b141-5827-4440-b11f-9203739b4a10] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0920 16:45:27.251542   16686 system_pods.go:61] "snapshot-controller-56fcc65765-2hz6g" [0d531a52-cced-4b3d-adfd-5d62357591e8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 16:45:27.251549   16686 system_pods.go:61] "snapshot-controller-56fcc65765-4l9hv" [eccfc252-ad9c-4b70-bb1c-d81a71214556] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 16:45:27.251553   16686 system_pods.go:61] "storage-provisioner" [1e04b7e0-a0fe-4e65-9ba5-63be2690da1d] Running
	I0920 16:45:27.251561   16686 system_pods.go:74] duration metric: took 8.412514ms to wait for pod list to return data ...
	I0920 16:45:27.251568   16686 default_sa.go:34] waiting for default service account to be created ...
	I0920 16:45:27.254735   16686 default_sa.go:45] found service account: "default"
	I0920 16:45:27.254760   16686 default_sa.go:55] duration metric: took 3.185589ms for default service account to be created ...
	I0920 16:45:27.254770   16686 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 16:45:27.261725   16686 system_pods.go:86] 17 kube-system pods found
	I0920 16:45:27.261752   16686 system_pods.go:89] "coredns-7c65d6cfc9-nqbzq" [734f1782-975a-486b-adf3-32f60c376a9a] Running
	I0920 16:45:27.261759   16686 system_pods.go:89] "csi-hostpath-attacher-0" [8fc733e6-4135-418b-a554-490bd25dabe7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0920 16:45:27.261766   16686 system_pods.go:89] "csi-hostpath-resizer-0" [85755d16-e8fa-4878-9184-45658ba8d8ac] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0920 16:45:27.261772   16686 system_pods.go:89] "csi-hostpathplugin-hglqr" [0aeb8bcc-1f9f-40f6-8aa1-4822a64115f2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0920 16:45:27.261776   16686 system_pods.go:89] "etcd-addons-489802" [7f35387b-c7f1-4436-8369-77849b9c2383] Running
	I0920 16:45:27.261780   16686 system_pods.go:89] "kube-apiserver-addons-489802" [9c4029f4-8e01-4d5d-a866-518e553ac713] Running
	I0920 16:45:27.261784   16686 system_pods.go:89] "kube-controller-manager-addons-489802" [30219691-4d43-476d-8720-80aa4f2b6b54] Running
	I0920 16:45:27.261791   16686 system_pods.go:89] "kube-ingress-dns-minikube" [1f722d5e-9dee-4b0e-8661-9c4181ea4f9b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0920 16:45:27.261795   16686 system_pods.go:89] "kube-proxy-xr4bt" [7a20cb9e-3e82-4bda-9529-7e024f9681a4] Running
	I0920 16:45:27.261799   16686 system_pods.go:89] "kube-scheduler-addons-489802" [8b17a764-82bc-4003-8b0c-9d46c614e15d] Running
	I0920 16:45:27.261805   16686 system_pods.go:89] "metrics-server-84c5f94fbc-txlrn" [b6d2625e-ba6e-44e1-b245-0edc2adaa243] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 16:45:27.261809   16686 system_pods.go:89] "nvidia-device-plugin-daemonset-54hhx" [b022f644-f7de-4d74-aed4-63ad47ef0b71] Running
	I0920 16:45:27.261815   16686 system_pods.go:89] "registry-66c9cd494c-7swkh" [1e3cfba8-c77f-46f3-b6b1-46c7a36ae3a4] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0920 16:45:27.261820   16686 system_pods.go:89] "registry-proxy-ggl6q" [a467b141-5827-4440-b11f-9203739b4a10] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0920 16:45:27.261828   16686 system_pods.go:89] "snapshot-controller-56fcc65765-2hz6g" [0d531a52-cced-4b3d-adfd-5d62357591e8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 16:45:27.261858   16686 system_pods.go:89] "snapshot-controller-56fcc65765-4l9hv" [eccfc252-ad9c-4b70-bb1c-d81a71214556] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 16:45:27.261868   16686 system_pods.go:89] "storage-provisioner" [1e04b7e0-a0fe-4e65-9ba5-63be2690da1d] Running
	I0920 16:45:27.261877   16686 system_pods.go:126] duration metric: took 7.099706ms to wait for k8s-apps to be running ...
	I0920 16:45:27.261887   16686 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 16:45:27.261932   16686 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 16:45:27.276406   16686 system_svc.go:56] duration metric: took 14.508978ms WaitForService to wait for kubelet
	I0920 16:45:27.276438   16686 kubeadm.go:582] duration metric: took 21.413312681s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 16:45:27.276460   16686 node_conditions.go:102] verifying NodePressure condition ...
	I0920 16:45:27.280248   16686 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 16:45:27.280278   16686 node_conditions.go:123] node cpu capacity is 2
	I0920 16:45:27.280291   16686 node_conditions.go:105] duration metric: took 3.825237ms to run NodePressure ...
	I0920 16:45:27.280304   16686 start.go:241] waiting for startup goroutines ...
	I0920 16:45:27.526718   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:27.649095   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:27.649421   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:27.737354   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:28.027233   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:28.150225   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:28.150730   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:28.236702   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:28.528434   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:28.650325   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:28.650405   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:28.740070   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:29.026096   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:29.149445   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:29.150058   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:29.237452   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:29.527135   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:29.649902   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:29.649932   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:29.737559   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:30.026698   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:30.150115   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:30.150769   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:30.238484   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:30.527374   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:30.648850   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:30.649272   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:30.738810   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:31.028473   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:31.150589   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:31.156282   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:31.237373   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:31.527393   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:31.649166   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:31.650780   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:31.736824   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:32.027837   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:32.152463   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:32.153143   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:32.237068   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:32.528272   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:32.649079   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:32.650818   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:32.738352   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:33.026553   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:33.149902   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:33.150275   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:33.237524   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:33.537491   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:33.649781   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:33.650261   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:33.737265   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:34.028817   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:34.150791   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:34.152125   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:34.237490   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:34.526864   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:34.649685   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:34.650181   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:34.736977   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:35.029888   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:35.150945   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:35.155795   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:35.240335   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:35.527786   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:35.654336   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:35.655062   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:35.737485   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:36.027635   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:36.151566   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:36.152493   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:36.238231   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:36.527246   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:36.655057   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:36.655723   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:36.738138   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:37.030365   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:37.150592   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:37.150821   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:37.236830   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:37.526749   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:37.650962   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:37.652318   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:37.738164   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:38.031402   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:38.155846   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:38.156510   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:38.252531   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:38.528674   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:38.655016   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:38.658754   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:38.739024   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:39.026715   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:39.151013   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:39.154202   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:39.238586   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:39.527713   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:39.649075   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:39.649203   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:39.737480   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:40.027567   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:40.150474   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:40.151696   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:40.250888   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:40.526616   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:40.652188   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:40.652389   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:40.736985   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:41.026770   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:41.150827   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:41.151842   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:41.237101   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:41.526185   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:41.650288   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:41.650519   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:41.737186   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:42.027683   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:42.149240   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:42.150504   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:42.491904   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:42.592635   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:42.650756   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:42.651320   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:42.737069   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:43.029825   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:43.149551   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:43.149935   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:43.237114   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:43.528788   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:43.650325   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:43.650461   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:43.739219   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:44.027085   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:44.150296   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:44.150650   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:44.238279   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:44.527675   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:44.649728   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:44.650268   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:44.737823   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:45.028181   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:45.150501   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:45.151145   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:45.237285   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:45.527586   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:45.649593   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:45.650452   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:45.738407   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:46.030564   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:46.150486   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:46.150734   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:46.237087   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:46.551259   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:46.651342   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:46.653245   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:46.737384   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:47.029654   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:47.150343   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:47.150347   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:47.238187   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:47.535430   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:47.650178   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:47.651863   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:47.739041   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:48.029210   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:48.150091   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:48.154252   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:48.240363   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:48.529142   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:48.653143   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:48.655833   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:48.738746   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:49.027666   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:49.150751   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:49.151834   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:49.236647   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:49.530861   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:49.651140   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:49.651675   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:49.740617   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:50.026729   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:50.159867   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:50.160090   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:50.239757   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:50.527622   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:50.654766   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:50.655361   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:50.737483   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:51.027995   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:51.149643   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:51.149801   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:51.237905   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:51.526411   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:51.649489   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:51.650326   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:51.738210   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:52.036253   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:52.149599   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:52.151253   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:52.237057   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:52.527569   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:52.648975   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:52.650153   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:52.737191   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:53.027592   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:53.150060   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:53.150479   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:53.236403   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:53.526504   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:53.649297   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:53.651436   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:53.737405   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:54.028487   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:54.150980   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:54.151321   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:54.237711   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:54.527354   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:54.650301   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:54.650677   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:54.737955   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:55.031032   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:55.149243   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:55.150181   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:55.238167   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:55.528915   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:55.649892   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:55.650313   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:55.738797   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:56.028783   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:56.151114   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:56.151294   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:56.237410   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:56.527498   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:56.650436   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:56.650776   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:56.736898   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:57.026952   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:57.149669   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:57.150915   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:57.237031   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:57.526939   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:57.648982   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:57.650547   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:57.737696   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:58.026729   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:58.150041   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:58.150968   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:58.237146   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:58.527288   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:58.651780   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:58.652013   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:58.738908   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:59.026605   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:59.149437   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:59.149648   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:59.237722   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:59.527090   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:59.650035   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:59.651041   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:59.737351   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:00.027912   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:00.558370   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:46:00.561620   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:00.563942   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:00.565779   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:00.661977   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:46:00.662874   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:00.739219   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:01.029865   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:01.154749   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:46:01.155165   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:01.237401   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:01.530045   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:01.649221   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:46:01.649554   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:01.740003   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:02.026763   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:02.150502   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:02.150590   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:46:02.236863   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:02.529068   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:02.650888   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:46:02.651000   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:02.750263   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:03.026716   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:03.149149   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:46:03.149545   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:03.237369   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:03.534553   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:03.650442   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:46:03.650862   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:03.737614   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:04.026913   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:04.149387   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:46:04.149593   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:04.243360   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:04.527336   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:04.650842   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:46:04.651139   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:04.739255   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:05.027878   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:05.150204   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:46:05.150545   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:05.244231   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:05.529349   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:05.652867   16686 kapi.go:107] duration metric: took 50.508229978s to wait for kubernetes.io/minikube-addons=registry ...
	I0920 16:46:05.652925   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:05.739640   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:06.033981   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:06.149185   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:06.237046   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:06.528004   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:06.649435   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:06.895278   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:07.026949   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:07.149429   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:07.237034   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:07.526452   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:07.649544   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:07.737620   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:08.028390   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:08.150933   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:08.237962   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:08.529026   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:08.650034   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:08.737105   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:09.027687   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:09.149020   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:09.239286   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:09.529929   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:09.666377   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:09.746102   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:10.030699   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:10.155669   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:10.239033   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:10.530724   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:10.651556   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:10.737407   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:11.027890   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:11.149069   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:11.236960   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:11.527373   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:11.649887   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:11.737323   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:12.027469   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:12.149540   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:12.237298   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:12.527280   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:12.650565   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:12.750782   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:13.027210   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:13.149266   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:13.236795   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:13.527089   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:13.650076   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:13.739568   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:14.028427   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:14.150142   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:14.238716   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:14.529618   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:14.649719   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:14.737439   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:15.029527   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:15.149916   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:15.236871   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:15.527484   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:15.660993   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:15.737550   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:16.027986   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:16.149414   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:16.237560   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:16.528143   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:16.649180   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:16.749844   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:17.027012   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:17.149822   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:17.237094   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:17.527302   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:17.650815   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:17.737697   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:18.027958   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:18.151414   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:18.237081   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:18.755707   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:18.756298   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:18.756334   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:19.027579   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:19.149746   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:19.237870   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:19.532636   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:19.649362   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:19.743684   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:20.029394   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:20.152735   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:20.238771   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:20.528220   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:20.650381   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:20.739497   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:21.028952   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:21.149828   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:21.238039   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:21.532796   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:21.648825   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:21.736739   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:22.025994   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:22.149742   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:22.237902   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:22.526869   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:22.651053   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:22.754073   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:23.029507   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:23.150844   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:23.236975   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:23.530954   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:23.649940   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:23.737663   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:24.027816   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:24.149027   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:24.236905   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:24.528126   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:24.649610   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:24.737256   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:25.029079   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:25.168465   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:25.279560   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:25.529941   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:25.649862   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:25.738675   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:26.031710   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:26.149047   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:26.237178   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:26.527079   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:26.649467   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:26.737219   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:27.027260   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:27.150392   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:27.237951   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:27.526593   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:27.649815   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:27.738065   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:28.026169   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:28.150226   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:28.237640   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:28.526680   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:28.649544   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:28.737407   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:29.027688   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:29.150021   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:29.236763   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:29.563052   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:29.652576   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:29.739028   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:30.029796   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:30.150520   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:30.240233   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:30.526626   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:30.651044   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:30.739007   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:31.027062   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:31.541329   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:31.546535   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:31.546967   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:31.652149   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:31.736761   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:32.026342   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:32.149699   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:32.238624   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:32.526975   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:32.650436   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:32.740112   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:33.028897   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:33.150155   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:33.250978   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:33.528932   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:33.649886   16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:33.743165   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:34.028352   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:34.150042   16686 kapi.go:107] duration metric: took 1m19.005386454s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0920 16:46:34.237404   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:34.526686   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:34.740025   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:35.033014   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:35.241504   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:35.527579   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:35.738045   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:36.034900   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:36.242839   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:36.528649   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:36.738556   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:37.027713   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:37.237641   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:37.527114   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:37.736812   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:38.027753   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:38.240755   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:38.526552   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:38.739220   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:39.027014   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:39.240347   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:39.534783   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:39.739002   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:40.032069   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:40.239670   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:40.527751   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:40.742044   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:41.026894   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:41.237898   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:41.526185   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:41.737861   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:42.026935   16686 kapi.go:107] duration metric: took 1m25.505217334s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0920 16:46:42.236807   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:42.738034   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:43.237393   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:43.739267   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:44.237884   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:44.738051   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:45.236733   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:45.737720   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:46.236788   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:46.739281   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:47.237290   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:47.737521   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:48.237326   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:48.737915   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:49.238707   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:49.738314   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:50.237798   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:50.737959   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:51.237197   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:51.737289   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:52.236949   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:52.737530   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:53.237179   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:53.737635   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:54.237901   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:54.737648   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:55.238274   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:55.738085   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:56.237671   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:56.737704   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:57.237419   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:57.737353   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:58.237702   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:58.737197   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:59.237153   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:59.737583   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:00.238191   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:00.737084   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:01.237072   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:01.737245   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:02.237128   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:02.737215   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:03.237530   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:03.737290   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:04.237086   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:04.737817   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:05.237856   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:05.738321   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:06.237429   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:06.737202   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:07.236740   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:07.738137   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:08.237395   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:08.738090   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:09.237251   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:09.847229   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:10.237467   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:10.737639   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:11.237672   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:11.737856   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:12.237892   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:12.737947   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:13.236851   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:13.737127   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:14.236749   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:14.737645   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:15.240515   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:15.737944   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:16.236760   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:16.737628   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:17.237203   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:17.736930   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:18.237666   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:18.737293   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:19.253355   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:19.738180   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:20.239996   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:20.737102   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:21.239307   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:21.737634   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:22.237896   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:22.738438   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:23.237672   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:23.737184   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:24.239150   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:24.737464   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:25.237351   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:25.737539   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:26.237905   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:26.737559   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:27.237704   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:27.738056   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:28.237766   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:28.737159   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:29.237477   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:29.737337   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:30.238578   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:30.737543   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:31.237419   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:31.737583   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:32.237893   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:32.737619   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:33.237679   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:33.737168   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:34.237268   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:34.737264   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:35.237495   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:35.738039   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:36.238149   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:36.737649   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:37.237524   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:37.737017   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:38.238138   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:38.737568   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:39.237391   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:39.736477   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:40.238059   16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:40.738010   16686 kapi.go:107] duration metric: took 2m22.504874191s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0920 16:47:40.740079   16686 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-489802 cluster.
	I0920 16:47:40.741424   16686 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0920 16:47:40.742789   16686 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0920 16:47:40.744449   16686 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, inspektor-gadget, ingress-dns, storage-provisioner, metrics-server, yakd, default-storageclass, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0920 16:47:40.745981   16686 addons.go:510] duration metric: took 2m34.882823136s for enable addons: enabled=[nvidia-device-plugin cloud-spanner inspektor-gadget ingress-dns storage-provisioner metrics-server yakd default-storageclass volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0920 16:47:40.746064   16686 start.go:246] waiting for cluster config update ...
	I0920 16:47:40.746085   16686 start.go:255] writing updated cluster config ...
	I0920 16:47:40.746667   16686 ssh_runner.go:195] Run: rm -f paused
	I0920 16:47:40.832742   16686 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 16:47:40.834777   16686 out.go:177] * Done! kubectl is now configured to use "addons-489802" cluster and "default" namespace by default
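	The gcp-auth note above refers to the `gcp-auth-skip-secret` label. As a minimal, illustrative sketch (not taken from this run): the label has to be present when the pod is created, since gcp-auth mounts credentials through a mutating admission webhook, which is also why the output suggests recreating existing pods. The pod name "demo-pod", the nginx image, and the label value "true" are assumptions here; only the label key comes from the message above.
	
	  # Hypothetical example: create a pod the gcp-auth webhook should skip.
	  # "demo-pod", "nginx", and the value "true" are placeholders/assumptions;
	  # the label key is the one named in the gcp-auth output above.
	  kubectl run demo-pod --image=nginx --labels="gcp-auth-skip-secret=true"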
	
	
	==> CRI-O <==
	Sep 20 17:01:50 addons-489802 crio[664]: time="2024-09-20 17:01:50.510129433Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726851710510101900,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559237,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9eb4a813-5a68-4e54-afa4-74cd5672d365 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 17:01:50 addons-489802 crio[664]: time="2024-09-20 17:01:50.510865079Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6037f358-f6b8-486d-a5da-537b5cedb57e name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:01:50 addons-489802 crio[664]: time="2024-09-20 17:01:50.510920728Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6037f358-f6b8-486d-a5da-537b5cedb57e name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:01:50 addons-489802 crio[664]: time="2024-09-20 17:01:50.511193867Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9e143aa05bf909cb432c7c457640fb3b37fe1ea1e59b7f3bd1b4e187f5d0306b,PodSandboxId:98723fa66f58f86ebafded473a3572b8e63a415dd3812cbd4bab3b75153880fc,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1726851536437119830,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-fcflm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e43058a4-2b7d-46c4-874f-da5ebfcc43a0,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3b98df31c510e2b7d8467304c108770bfabad7ebb2494f12313d8f912b2482c,PodSandboxId:ddccd18e28f19bcd554a80347c0802f4ddf6d7bad08d4b2ac6f27eb3e102b20d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726851396918964948,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4c34572d-1118-4bb3-8265-b67b3104bc59,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c1fd10705c644580fdef2fc3075d7c9349c8e4b44d3899910dd41e40c87e2ce,PodSandboxId:66f4ad3477a6cc6a00655cc193d28db01097870bd3585db50c33f9e7cc96f8cf,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726850860245098258,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-wzvr2,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: 9688d654-b2f1-4e67-b21f-737c57cb6d4f,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0690e87ddb4f9357eefd739e00c9cec1ec022eda1379279b535ba4678c33b26,PodSandboxId:36aedadeb2582fe5df950ad8776d82e03104ab51a608a29ba00e9113b19e678e,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1726850772239877059,Labels:map[string]string{io.kubernetes.container.name: l
ocal-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-rhmqb,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: e5f1d3f8-1767-4ad2-b5b8-eb5bf18bc163,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a0d036505e72ed6b62c09226aa4d219c30e6e162e73ebffc595f568b216931d,PodSandboxId:1ae7bada2f668e2292fc48d3426bfa34e41215d4336864f62a4f90b4ee95709f,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726850737
987301783,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-txlrn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6d2625e-ba6e-44e1-b245-0edc2adaa243,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a981c68e927108571692d174ebc0cf47e600882543d6dd401c23cbcd805d49d,PodSandboxId:11b2a45f795d401fe4c78cf74478d3d702eff22fef4bdd814d8198ee5072d604,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db300
2f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726850713649115674,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e04b7e0-a0fe-4e65-9ba5-63be2690da1d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70c74f4f1e0bde75fc553a034aa664a515c218d2b72725850921f92314b6ec06,PodSandboxId:cfda686abf7f1fef69de6a34f633f44ac3c87637d6ec92d05dc4a45a4d5652b1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f
91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726850710125894103,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nqbzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 734f1782-975a-486b-adf3-32f60c376a9a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c60a90d5ed294c1a5015ea6f6b5c5259e8d437a6b5dd0f9dd758bb62d91c7b7,PodSandboxId:b53a284c395cfb6bdea6622b664327da6733b43d0375a7570cfa3dac443563e5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&Im
ageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726850707153952752,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xr4bt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a20cb9e-3e82-4bda-9529-7e024f9681a4,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44c347dc4cb2326d9ce7eef959abf86dcaee69ecf824e59fbe44600500e8a0f4,PodSandboxId:0ccdde3d3e8e30fec62b1f315de346cf5989b81e93276bfcf9792ae014efb9d5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd
71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726850695786767603,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-489802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8db84d1368c7024c014f2f2f0d973aae,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79fb233450407c6fdf879eb55124a5840bf49aaa572c10f7add06512d38df264,PodSandboxId:b3c515c903cd8c54cc3829530f8702fa82f07287a4bcae50433ffb0e6100c34b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7
719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726850695761844946,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-489802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de814a9694fb61ae23ac46f9b9deb6e7,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ebda0675cfbe9e7b3e6c1ca40351339db78cf3954608a12cc779850ee452a23,PodSandboxId:ce3e5a61bc6e6a8044b701e61a79b033d814fb58851347acc4b4eaab63045047,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca50
48cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726850695741523021,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-489802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 016cfe34770e4cbd59f73407149e44ff,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53631bbb5fc199153283953dffaf83c3d2a2b4cdbda98ab81770b42af5dfe30e,PodSandboxId:c9a4930506bbb11794aa02ab9a68cfe8370b91453dd7ab2cce5eac61a155cacf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Anno
tations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726850695699198150,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-489802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50faea81a2001503e00d2a0be1ceba9e,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6037f358-f6b8-486d-a5da-537b5cedb57e name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:01:50 addons-489802 crio[664]: time="2024-09-20 17:01:50.551253193Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e38f3f87-3d75-4e7e-8058-63eff1cb4d50 name=/runtime.v1.RuntimeService/Version
	Sep 20 17:01:50 addons-489802 crio[664]: time="2024-09-20 17:01:50.551377139Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e38f3f87-3d75-4e7e-8058-63eff1cb4d50 name=/runtime.v1.RuntimeService/Version
	Sep 20 17:01:50 addons-489802 crio[664]: time="2024-09-20 17:01:50.552565056Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=abc8551a-2ec2-45a8-b198-dc9a21904987 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 17:01:50 addons-489802 crio[664]: time="2024-09-20 17:01:50.553700551Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726851710553671055,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559237,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=abc8551a-2ec2-45a8-b198-dc9a21904987 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 17:01:50 addons-489802 crio[664]: time="2024-09-20 17:01:50.554323513Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b770ae0d-720d-4f31-a922-885c13856c24 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:01:50 addons-489802 crio[664]: time="2024-09-20 17:01:50.554432694Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b770ae0d-720d-4f31-a922-885c13856c24 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:01:50 addons-489802 crio[664]: time="2024-09-20 17:01:50.554693137Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9e143aa05bf909cb432c7c457640fb3b37fe1ea1e59b7f3bd1b4e187f5d0306b,PodSandboxId:98723fa66f58f86ebafded473a3572b8e63a415dd3812cbd4bab3b75153880fc,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1726851536437119830,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-fcflm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e43058a4-2b7d-46c4-874f-da5ebfcc43a0,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3b98df31c510e2b7d8467304c108770bfabad7ebb2494f12313d8f912b2482c,PodSandboxId:ddccd18e28f19bcd554a80347c0802f4ddf6d7bad08d4b2ac6f27eb3e102b20d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726851396918964948,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4c34572d-1118-4bb3-8265-b67b3104bc59,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c1fd10705c644580fdef2fc3075d7c9349c8e4b44d3899910dd41e40c87e2ce,PodSandboxId:66f4ad3477a6cc6a00655cc193d28db01097870bd3585db50c33f9e7cc96f8cf,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726850860245098258,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-wzvr2,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: 9688d654-b2f1-4e67-b21f-737c57cb6d4f,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0690e87ddb4f9357eefd739e00c9cec1ec022eda1379279b535ba4678c33b26,PodSandboxId:36aedadeb2582fe5df950ad8776d82e03104ab51a608a29ba00e9113b19e678e,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1726850772239877059,Labels:map[string]string{io.kubernetes.container.name: l
ocal-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-rhmqb,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: e5f1d3f8-1767-4ad2-b5b8-eb5bf18bc163,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a0d036505e72ed6b62c09226aa4d219c30e6e162e73ebffc595f568b216931d,PodSandboxId:1ae7bada2f668e2292fc48d3426bfa34e41215d4336864f62a4f90b4ee95709f,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726850737
987301783,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-txlrn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6d2625e-ba6e-44e1-b245-0edc2adaa243,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a981c68e927108571692d174ebc0cf47e600882543d6dd401c23cbcd805d49d,PodSandboxId:11b2a45f795d401fe4c78cf74478d3d702eff22fef4bdd814d8198ee5072d604,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db300
2f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726850713649115674,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e04b7e0-a0fe-4e65-9ba5-63be2690da1d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70c74f4f1e0bde75fc553a034aa664a515c218d2b72725850921f92314b6ec06,PodSandboxId:cfda686abf7f1fef69de6a34f633f44ac3c87637d6ec92d05dc4a45a4d5652b1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f
91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726850710125894103,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nqbzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 734f1782-975a-486b-adf3-32f60c376a9a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c60a90d5ed294c1a5015ea6f6b5c5259e8d437a6b5dd0f9dd758bb62d91c7b7,PodSandboxId:b53a284c395cfb6bdea6622b664327da6733b43d0375a7570cfa3dac443563e5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&Im
ageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726850707153952752,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xr4bt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a20cb9e-3e82-4bda-9529-7e024f9681a4,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44c347dc4cb2326d9ce7eef959abf86dcaee69ecf824e59fbe44600500e8a0f4,PodSandboxId:0ccdde3d3e8e30fec62b1f315de346cf5989b81e93276bfcf9792ae014efb9d5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd
71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726850695786767603,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-489802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8db84d1368c7024c014f2f2f0d973aae,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79fb233450407c6fdf879eb55124a5840bf49aaa572c10f7add06512d38df264,PodSandboxId:b3c515c903cd8c54cc3829530f8702fa82f07287a4bcae50433ffb0e6100c34b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7
719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726850695761844946,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-489802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de814a9694fb61ae23ac46f9b9deb6e7,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ebda0675cfbe9e7b3e6c1ca40351339db78cf3954608a12cc779850ee452a23,PodSandboxId:ce3e5a61bc6e6a8044b701e61a79b033d814fb58851347acc4b4eaab63045047,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca50
48cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726850695741523021,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-489802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 016cfe34770e4cbd59f73407149e44ff,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53631bbb5fc199153283953dffaf83c3d2a2b4cdbda98ab81770b42af5dfe30e,PodSandboxId:c9a4930506bbb11794aa02ab9a68cfe8370b91453dd7ab2cce5eac61a155cacf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Anno
tations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726850695699198150,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-489802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50faea81a2001503e00d2a0be1ceba9e,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b770ae0d-720d-4f31-a922-885c13856c24 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:01:50 addons-489802 crio[664]: time="2024-09-20 17:01:50.589625183Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=03eabad7-2186-463c-95ad-cd84fe1df7a9 name=/runtime.v1.RuntimeService/Version
	Sep 20 17:01:50 addons-489802 crio[664]: time="2024-09-20 17:01:50.589711989Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=03eabad7-2186-463c-95ad-cd84fe1df7a9 name=/runtime.v1.RuntimeService/Version
	Sep 20 17:01:50 addons-489802 crio[664]: time="2024-09-20 17:01:50.590721366Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e766bae4-344b-418f-9b75-277bdd981eaf name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 17:01:50 addons-489802 crio[664]: time="2024-09-20 17:01:50.591984840Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726851710591952171,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559237,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e766bae4-344b-418f-9b75-277bdd981eaf name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 17:01:50 addons-489802 crio[664]: time="2024-09-20 17:01:50.592575115Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=eded3fff-e4cd-4336-868d-dab4e49fc328 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:01:50 addons-489802 crio[664]: time="2024-09-20 17:01:50.592627862Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=eded3fff-e4cd-4336-868d-dab4e49fc328 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:01:50 addons-489802 crio[664]: time="2024-09-20 17:01:50.592920989Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9e143aa05bf909cb432c7c457640fb3b37fe1ea1e59b7f3bd1b4e187f5d0306b,PodSandboxId:98723fa66f58f86ebafded473a3572b8e63a415dd3812cbd4bab3b75153880fc,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1726851536437119830,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-fcflm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e43058a4-2b7d-46c4-874f-da5ebfcc43a0,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3b98df31c510e2b7d8467304c108770bfabad7ebb2494f12313d8f912b2482c,PodSandboxId:ddccd18e28f19bcd554a80347c0802f4ddf6d7bad08d4b2ac6f27eb3e102b20d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726851396918964948,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4c34572d-1118-4bb3-8265-b67b3104bc59,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c1fd10705c644580fdef2fc3075d7c9349c8e4b44d3899910dd41e40c87e2ce,PodSandboxId:66f4ad3477a6cc6a00655cc193d28db01097870bd3585db50c33f9e7cc96f8cf,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726850860245098258,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-wzvr2,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: 9688d654-b2f1-4e67-b21f-737c57cb6d4f,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0690e87ddb4f9357eefd739e00c9cec1ec022eda1379279b535ba4678c33b26,PodSandboxId:36aedadeb2582fe5df950ad8776d82e03104ab51a608a29ba00e9113b19e678e,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1726850772239877059,Labels:map[string]string{io.kubernetes.container.name: l
ocal-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-rhmqb,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: e5f1d3f8-1767-4ad2-b5b8-eb5bf18bc163,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a0d036505e72ed6b62c09226aa4d219c30e6e162e73ebffc595f568b216931d,PodSandboxId:1ae7bada2f668e2292fc48d3426bfa34e41215d4336864f62a4f90b4ee95709f,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726850737
987301783,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-txlrn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6d2625e-ba6e-44e1-b245-0edc2adaa243,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a981c68e927108571692d174ebc0cf47e600882543d6dd401c23cbcd805d49d,PodSandboxId:11b2a45f795d401fe4c78cf74478d3d702eff22fef4bdd814d8198ee5072d604,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db300
2f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726850713649115674,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e04b7e0-a0fe-4e65-9ba5-63be2690da1d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70c74f4f1e0bde75fc553a034aa664a515c218d2b72725850921f92314b6ec06,PodSandboxId:cfda686abf7f1fef69de6a34f633f44ac3c87637d6ec92d05dc4a45a4d5652b1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f
91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726850710125894103,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nqbzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 734f1782-975a-486b-adf3-32f60c376a9a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c60a90d5ed294c1a5015ea6f6b5c5259e8d437a6b5dd0f9dd758bb62d91c7b7,PodSandboxId:b53a284c395cfb6bdea6622b664327da6733b43d0375a7570cfa3dac443563e5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&Im
ageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726850707153952752,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xr4bt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a20cb9e-3e82-4bda-9529-7e024f9681a4,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44c347dc4cb2326d9ce7eef959abf86dcaee69ecf824e59fbe44600500e8a0f4,PodSandboxId:0ccdde3d3e8e30fec62b1f315de346cf5989b81e93276bfcf9792ae014efb9d5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd
71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726850695786767603,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-489802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8db84d1368c7024c014f2f2f0d973aae,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79fb233450407c6fdf879eb55124a5840bf49aaa572c10f7add06512d38df264,PodSandboxId:b3c515c903cd8c54cc3829530f8702fa82f07287a4bcae50433ffb0e6100c34b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7
719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726850695761844946,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-489802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de814a9694fb61ae23ac46f9b9deb6e7,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ebda0675cfbe9e7b3e6c1ca40351339db78cf3954608a12cc779850ee452a23,PodSandboxId:ce3e5a61bc6e6a8044b701e61a79b033d814fb58851347acc4b4eaab63045047,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca50
48cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726850695741523021,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-489802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 016cfe34770e4cbd59f73407149e44ff,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53631bbb5fc199153283953dffaf83c3d2a2b4cdbda98ab81770b42af5dfe30e,PodSandboxId:c9a4930506bbb11794aa02ab9a68cfe8370b91453dd7ab2cce5eac61a155cacf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Anno
tations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726850695699198150,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-489802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50faea81a2001503e00d2a0be1ceba9e,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=eded3fff-e4cd-4336-868d-dab4e49fc328 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:01:50 addons-489802 crio[664]: time="2024-09-20 17:01:50.633851264Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f1b5296a-0e9b-4f9e-9ca4-da2b410a14a5 name=/runtime.v1.RuntimeService/Version
	Sep 20 17:01:50 addons-489802 crio[664]: time="2024-09-20 17:01:50.633942450Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f1b5296a-0e9b-4f9e-9ca4-da2b410a14a5 name=/runtime.v1.RuntimeService/Version
	Sep 20 17:01:50 addons-489802 crio[664]: time="2024-09-20 17:01:50.635595045Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a8dca61e-3b61-483f-b2dc-f07729155829 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 17:01:50 addons-489802 crio[664]: time="2024-09-20 17:01:50.636801153Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726851710636770032,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559237,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a8dca61e-3b61-483f-b2dc-f07729155829 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 17:01:50 addons-489802 crio[664]: time="2024-09-20 17:01:50.637468476Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=56441158-9d52-4078-a6ef-c63b405c02e9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:01:50 addons-489802 crio[664]: time="2024-09-20 17:01:50.637527434Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=56441158-9d52-4078-a6ef-c63b405c02e9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:01:50 addons-489802 crio[664]: time="2024-09-20 17:01:50.637793170Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9e143aa05bf909cb432c7c457640fb3b37fe1ea1e59b7f3bd1b4e187f5d0306b,PodSandboxId:98723fa66f58f86ebafded473a3572b8e63a415dd3812cbd4bab3b75153880fc,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1726851536437119830,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-fcflm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e43058a4-2b7d-46c4-874f-da5ebfcc43a0,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3b98df31c510e2b7d8467304c108770bfabad7ebb2494f12313d8f912b2482c,PodSandboxId:ddccd18e28f19bcd554a80347c0802f4ddf6d7bad08d4b2ac6f27eb3e102b20d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726851396918964948,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4c34572d-1118-4bb3-8265-b67b3104bc59,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c1fd10705c644580fdef2fc3075d7c9349c8e4b44d3899910dd41e40c87e2ce,PodSandboxId:66f4ad3477a6cc6a00655cc193d28db01097870bd3585db50c33f9e7cc96f8cf,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726850860245098258,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-wzvr2,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: 9688d654-b2f1-4e67-b21f-737c57cb6d4f,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0690e87ddb4f9357eefd739e00c9cec1ec022eda1379279b535ba4678c33b26,PodSandboxId:36aedadeb2582fe5df950ad8776d82e03104ab51a608a29ba00e9113b19e678e,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1726850772239877059,Labels:map[string]string{io.kubernetes.container.name: l
ocal-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-rhmqb,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: e5f1d3f8-1767-4ad2-b5b8-eb5bf18bc163,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a0d036505e72ed6b62c09226aa4d219c30e6e162e73ebffc595f568b216931d,PodSandboxId:1ae7bada2f668e2292fc48d3426bfa34e41215d4336864f62a4f90b4ee95709f,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726850737
987301783,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-txlrn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6d2625e-ba6e-44e1-b245-0edc2adaa243,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a981c68e927108571692d174ebc0cf47e600882543d6dd401c23cbcd805d49d,PodSandboxId:11b2a45f795d401fe4c78cf74478d3d702eff22fef4bdd814d8198ee5072d604,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db300
2f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726850713649115674,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e04b7e0-a0fe-4e65-9ba5-63be2690da1d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70c74f4f1e0bde75fc553a034aa664a515c218d2b72725850921f92314b6ec06,PodSandboxId:cfda686abf7f1fef69de6a34f633f44ac3c87637d6ec92d05dc4a45a4d5652b1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f
91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726850710125894103,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nqbzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 734f1782-975a-486b-adf3-32f60c376a9a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c60a90d5ed294c1a5015ea6f6b5c5259e8d437a6b5dd0f9dd758bb62d91c7b7,PodSandboxId:b53a284c395cfb6bdea6622b664327da6733b43d0375a7570cfa3dac443563e5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&Im
ageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726850707153952752,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xr4bt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a20cb9e-3e82-4bda-9529-7e024f9681a4,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44c347dc4cb2326d9ce7eef959abf86dcaee69ecf824e59fbe44600500e8a0f4,PodSandboxId:0ccdde3d3e8e30fec62b1f315de346cf5989b81e93276bfcf9792ae014efb9d5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd
71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726850695786767603,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-489802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8db84d1368c7024c014f2f2f0d973aae,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79fb233450407c6fdf879eb55124a5840bf49aaa572c10f7add06512d38df264,PodSandboxId:b3c515c903cd8c54cc3829530f8702fa82f07287a4bcae50433ffb0e6100c34b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7
719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726850695761844946,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-489802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de814a9694fb61ae23ac46f9b9deb6e7,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ebda0675cfbe9e7b3e6c1ca40351339db78cf3954608a12cc779850ee452a23,PodSandboxId:ce3e5a61bc6e6a8044b701e61a79b033d814fb58851347acc4b4eaab63045047,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca50
48cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726850695741523021,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-489802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 016cfe34770e4cbd59f73407149e44ff,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53631bbb5fc199153283953dffaf83c3d2a2b4cdbda98ab81770b42af5dfe30e,PodSandboxId:c9a4930506bbb11794aa02ab9a68cfe8370b91453dd7ab2cce5eac61a155cacf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Anno
tations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726850695699198150,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-489802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50faea81a2001503e00d2a0be1ceba9e,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=56441158-9d52-4078-a6ef-c63b405c02e9 name=/runtime.v1.RuntimeService/ListContainers
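The Version, ImageFsInfo, and ListContainers requests repeated above are routine debug-level CRI polling of the CRI-O socket. As a rough sketch (assuming the crio.sock path shown in the node's cri-socket annotation further down), the same three endpoints can be exercised by hand from inside the minikube VM:

    minikube -p addons-489802 ssh
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version      # RuntimeService/Version
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a        # RuntimeService/ListContainers
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock imagefsinfo  # ImageService/ImageFsInfo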
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	9e143aa05bf90       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   2 minutes ago       Running             hello-world-app           0                   98723fa66f58f       hello-world-app-55bf9c44b4-fcflm
	b3b98df31c510       docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56                         5 minutes ago       Running             nginx                     0                   ddccd18e28f19       nginx
	1c1fd10705c64       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b            14 minutes ago      Running             gcp-auth                  0                   66f4ad3477a6c       gcp-auth-89d5ffd79-wzvr2
	b0690e87ddb4f       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef        15 minutes ago      Running             local-path-provisioner    0                   36aedadeb2582       local-path-provisioner-86d989889c-rhmqb
	3a0d036505e72       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a   16 minutes ago      Running             metrics-server            0                   1ae7bada2f668       metrics-server-84c5f94fbc-txlrn
	5a981c68e9271       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        16 minutes ago      Running             storage-provisioner       0                   11b2a45f795d4       storage-provisioner
	70c74f4f1e0bd       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                        16 minutes ago      Running             coredns                   0                   cfda686abf7f1       coredns-7c65d6cfc9-nqbzq
	7c60a90d5ed29       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                        16 minutes ago      Running             kube-proxy                0                   b53a284c395cf       kube-proxy-xr4bt
	44c347dc4cb23       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                        16 minutes ago      Running             kube-controller-manager   0                   0ccdde3d3e8e3       kube-controller-manager-addons-489802
	79fb233450407       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                        16 minutes ago      Running             kube-apiserver            0                   b3c515c903cd8       kube-apiserver-addons-489802
	5ebda0675cfbe       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                        16 minutes ago      Running             etcd                      0                   ce3e5a61bc6e6       etcd-addons-489802
	53631bbb5fc19       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                        16 minutes ago      Running             kube-scheduler            0                   c9a4930506bbb       kube-scheduler-addons-489802
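The table above is the runtime-level view of the same container list; a hedged cross-check from the Kubernetes side, using the addons-489802 context that the rest of this report uses, would be:

    kubectl --context addons-489802 get pods -A -o wide   # pod-level view of the same workloads
    sudo crictl inspect 9e143aa05bf90                      # full CRI record for one container (an unambiguous ID prefix is accepted)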
	
	
	==> coredns [70c74f4f1e0bde75fc553a034aa664a515c218d2b72725850921f92314b6ec06] <==
	[INFO] 127.0.0.1:51784 - 8829 "HINFO IN 5160120906343044549.4812313304468353436. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012102619s
	[INFO] 10.244.0.7:49904 - 44683 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000739291s
	[INFO] 10.244.0.7:49904 - 13446 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000838879s
	[INFO] 10.244.0.7:37182 - 17696 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000137198s
	[INFO] 10.244.0.7:37182 - 29725 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000120771s
	[INFO] 10.244.0.7:40785 - 12767 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00012186s
	[INFO] 10.244.0.7:40785 - 24273 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000223065s
	[INFO] 10.244.0.7:54049 - 5032 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000122634s
	[INFO] 10.244.0.7:54049 - 51625 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000075286s
	[INFO] 10.244.0.7:57416 - 8811 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000080693s
	[INFO] 10.244.0.7:57416 - 56406 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000038363s
	[INFO] 10.244.0.7:59797 - 29819 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000040968s
	[INFO] 10.244.0.7:59797 - 16249 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000038791s
	[INFO] 10.244.0.7:39368 - 3897 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000045812s
	[INFO] 10.244.0.7:39368 - 53818 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000034439s
	[INFO] 10.244.0.7:57499 - 43541 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000049958s
	[INFO] 10.244.0.7:57499 - 15379 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000036533s
	[INFO] 10.244.0.21:51858 - 31367 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000847603s
	[INFO] 10.244.0.21:33579 - 64948 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000139841s
	[INFO] 10.244.0.21:48527 - 40604 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000280976s
	[INFO] 10.244.0.21:52717 - 13930 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000169344s
	[INFO] 10.244.0.21:58755 - 3796 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000147676s
	[INFO] 10.244.0.21:51813 - 12818 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000082135s
	[INFO] 10.244.0.21:51795 - 17985 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.004530788s
	[INFO] 10.244.0.21:47998 - 23926 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002659458s
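The NXDOMAIN entries above are the normal effect of the pod DNS search list: each short name is tried with every search suffix before the fully qualified lookup returns NOERROR. A minimal sketch of the resolver configuration that produces this expansion for a pod in the kube-system namespace (the nameserver IP is the conventional kube-dns ClusterIP and is an assumption here):

    # /etc/resolv.conf as typically written by the kubelet
    nameserver 10.96.0.10
    search kube-system.svc.cluster.local svc.cluster.local cluster.local
    options ndots:5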
	
	
	==> describe nodes <==
	Name:               addons-489802
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-489802
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0626f22cf0d915d75e291a5bce701f94395056e1
	                    minikube.k8s.io/name=addons-489802
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_20T16_45_01_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-489802
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 16:44:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-489802
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 17:01:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 16:59:06 +0000   Fri, 20 Sep 2024 16:44:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 16:59:06 +0000   Fri, 20 Sep 2024 16:44:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 16:59:06 +0000   Fri, 20 Sep 2024 16:44:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 16:59:06 +0000   Fri, 20 Sep 2024 16:45:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.89
	  Hostname:    addons-489802
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 fd813db21ac84502aef251a6893e0027
	  System UUID:                fd813db2-1ac8-4502-aef2-51a6893e0027
	  Boot ID:                    ed0a3698-272d-483a-ba56-acac4def529a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  default                     hello-world-app-55bf9c44b4-fcflm           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m58s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m22s
	  gcp-auth                    gcp-auth-89d5ffd79-wzvr2                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 coredns-7c65d6cfc9-nqbzq                   100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     16m
	  kube-system                 etcd-addons-489802                         100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         16m
	  kube-system                 kube-apiserver-addons-489802               250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-addons-489802      200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-xr4bt                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-addons-489802               100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  local-path-storage          local-path-provisioner-86d989889c-rhmqb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 16m                kube-proxy       
	  Normal  Starting                 16m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  16m (x2 over 16m)  kubelet          Node addons-489802 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m (x2 over 16m)  kubelet          Node addons-489802 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m (x2 over 16m)  kubelet          Node addons-489802 status is now: NodeHasSufficientPID
	  Normal  NodeReady                16m                kubelet          Node addons-489802 status is now: NodeReady
	  Normal  RegisteredNode           16m                node-controller  Node addons-489802 event: Registered Node addons-489802 in Controller
	
	
	==> dmesg <==
	[ +10.203071] kauditd_printk_skb: 70 callbacks suppressed
	[ +17.983286] kauditd_printk_skb: 2 callbacks suppressed
	[Sep20 16:46] kauditd_printk_skb: 4 callbacks suppressed
	[ +13.042505] kauditd_printk_skb: 42 callbacks suppressed
	[  +6.124032] kauditd_printk_skb: 45 callbacks suppressed
	[ +10.494816] kauditd_printk_skb: 64 callbacks suppressed
	[  +5.981422] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.234675] kauditd_printk_skb: 34 callbacks suppressed
	[Sep20 16:47] kauditd_printk_skb: 28 callbacks suppressed
	[  +7.543099] kauditd_printk_skb: 9 callbacks suppressed
	[Sep20 16:48] kauditd_printk_skb: 2 callbacks suppressed
	[Sep20 16:49] kauditd_printk_skb: 28 callbacks suppressed
	[Sep20 16:52] kauditd_printk_skb: 28 callbacks suppressed
	[Sep20 16:55] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.170883] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.280461] kauditd_printk_skb: 17 callbacks suppressed
	[Sep20 16:56] kauditd_printk_skb: 24 callbacks suppressed
	[ +10.067719] kauditd_printk_skb: 13 callbacks suppressed
	[  +5.043461] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.256575] kauditd_printk_skb: 14 callbacks suppressed
	[  +9.179843] kauditd_printk_skb: 27 callbacks suppressed
	[ +15.573697] kauditd_printk_skb: 7 callbacks suppressed
	[Sep20 16:57] kauditd_printk_skb: 61 callbacks suppressed
	[Sep20 16:58] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.225825] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [5ebda0675cfbe9e7b3e6c1ca40351339db78cf3954608a12cc779850ee452a23] <==
	{"level":"info","ts":"2024-09-20T16:46:31.522662Z","caller":"traceutil/trace.go:171","msg":"trace[397127513] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1072; }","duration":"285.60775ms","start":"2024-09-20T16:46:31.237046Z","end":"2024-09-20T16:46:31.522653Z","steps":["trace[397127513] 'agreement among raft nodes before linearized reading'  (duration: 285.506056ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T16:46:31.523097Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"389.094744ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T16:46:31.521069Z","caller":"traceutil/trace.go:171","msg":"trace[1366548052] transaction","detail":"{read_only:false; response_revision:1072; number_of_response:1; }","duration":"451.994343ms","start":"2024-09-20T16:46:31.069059Z","end":"2024-09-20T16:46:31.521053Z","steps":["trace[1366548052] 'process raft request'  (duration: 450.539479ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T16:46:31.523185Z","caller":"traceutil/trace.go:171","msg":"trace[1958014936] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1072; }","duration":"389.189661ms","start":"2024-09-20T16:46:31.133988Z","end":"2024-09-20T16:46:31.523178Z","steps":["trace[1958014936] 'agreement among raft nodes before linearized reading'  (duration: 388.742689ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T16:46:31.523315Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-20T16:46:31.133949Z","time spent":"389.346336ms","remote":"127.0.0.1:44644","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2024-09-20T16:46:31.523518Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-20T16:46:31.069043Z","time spent":"454.199637ms","remote":"127.0.0.1:44626","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1066 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-09-20T16:46:34.697548Z","caller":"traceutil/trace.go:171","msg":"trace[1773063632] transaction","detail":"{read_only:false; response_revision:1087; number_of_response:1; }","duration":"138.671352ms","start":"2024-09-20T16:46:34.558854Z","end":"2024-09-20T16:46:34.697526Z","steps":["trace[1773063632] 'process raft request'  (duration: 138.455302ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T16:47:09.828412Z","caller":"traceutil/trace.go:171","msg":"trace[1350480991] linearizableReadLoop","detail":"{readStateIndex:1234; appliedIndex:1233; }","duration":"107.953401ms","start":"2024-09-20T16:47:09.720376Z","end":"2024-09-20T16:47:09.828329Z","steps":["trace[1350480991] 'read index received'  (duration: 107.782449ms)","trace[1350480991] 'applied index is now lower than readState.Index'  (duration: 170.357µs)"],"step_count":2}
	{"level":"info","ts":"2024-09-20T16:47:09.828591Z","caller":"traceutil/trace.go:171","msg":"trace[1677279500] transaction","detail":"{read_only:false; response_revision:1192; number_of_response:1; }","duration":"108.710691ms","start":"2024-09-20T16:47:09.719867Z","end":"2024-09-20T16:47:09.828578Z","steps":["trace[1677279500] 'process raft request'  (duration: 108.343763ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T16:47:09.828834Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"108.468877ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T16:47:09.828877Z","caller":"traceutil/trace.go:171","msg":"trace[823583891] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1192; }","duration":"108.573167ms","start":"2024-09-20T16:47:09.720295Z","end":"2024-09-20T16:47:09.828868Z","steps":["trace[823583891] 'agreement among raft nodes before linearized reading'  (duration: 108.427543ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T16:54:56.686206Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1494}
	{"level":"info","ts":"2024-09-20T16:54:56.732913Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1494,"took":"45.95642ms","hash":3143060453,"current-db-size-bytes":6316032,"current-db-size":"6.3 MB","current-db-size-in-use-bytes":3231744,"current-db-size-in-use":"3.2 MB"}
	{"level":"info","ts":"2024-09-20T16:54:56.733061Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3143060453,"revision":1494,"compact-revision":-1}
	{"level":"info","ts":"2024-09-20T16:55:52.021318Z","caller":"traceutil/trace.go:171","msg":"trace[2100115174] transaction","detail":"{read_only:false; response_revision:2018; number_of_response:1; }","duration":"379.66185ms","start":"2024-09-20T16:55:51.641590Z","end":"2024-09-20T16:55:52.021252Z","steps":["trace[2100115174] 'process raft request'  (duration: 379.545504ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T16:55:52.021786Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-20T16:55:51.641574Z","time spent":"380.006071ms","remote":"127.0.0.1:44742","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":538,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" mod_revision:1986 > success:<request_put:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" value_size:451 >> failure:<request_range:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" > >"}
	{"level":"info","ts":"2024-09-20T16:55:52.022293Z","caller":"traceutil/trace.go:171","msg":"trace[35214985] linearizableReadLoop","detail":"{readStateIndex:2175; appliedIndex:2174; }","duration":"196.804789ms","start":"2024-09-20T16:55:51.825473Z","end":"2024-09-20T16:55:52.022278Z","steps":["trace[35214985] 'read index received'  (duration: 196.433504ms)","trace[35214985] 'applied index is now lower than readState.Index'  (duration: 370.887µs)"],"step_count":2}
	{"level":"info","ts":"2024-09-20T16:55:52.022475Z","caller":"traceutil/trace.go:171","msg":"trace[1790896376] transaction","detail":"{read_only:false; response_revision:2019; number_of_response:1; }","duration":"211.987025ms","start":"2024-09-20T16:55:51.810476Z","end":"2024-09-20T16:55:52.022463Z","steps":["trace[1790896376] 'process raft request'  (duration: 211.729812ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T16:55:52.022604Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"197.118957ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T16:55:52.022641Z","caller":"traceutil/trace.go:171","msg":"trace[1794876456] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2019; }","duration":"197.165972ms","start":"2024-09-20T16:55:51.825467Z","end":"2024-09-20T16:55:52.022633Z","steps":["trace[1794876456] 'agreement among raft nodes before linearized reading'  (duration: 197.096047ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T16:56:32.273552Z","caller":"traceutil/trace.go:171","msg":"trace[1806753974] transaction","detail":"{read_only:false; response_revision:2278; number_of_response:1; }","duration":"138.283014ms","start":"2024-09-20T16:56:32.135255Z","end":"2024-09-20T16:56:32.273538Z","steps":["trace[1806753974] 'process raft request'  (duration: 137.851209ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T16:56:36.295953Z","caller":"traceutil/trace.go:171","msg":"trace[1488171244] transaction","detail":"{read_only:false; response_revision:2301; number_of_response:1; }","duration":"162.589325ms","start":"2024-09-20T16:56:36.131622Z","end":"2024-09-20T16:56:36.294211Z","steps":["trace[1488171244] 'process raft request'  (duration: 162.248073ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T16:59:56.695576Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1917}
	{"level":"info","ts":"2024-09-20T16:59:56.717998Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1917,"took":"21.268446ms","hash":962342191,"current-db-size-bytes":6316032,"current-db-size":"6.3 MB","current-db-size-in-use-bytes":4808704,"current-db-size-in-use":"4.8 MB"}
	{"level":"info","ts":"2024-09-20T16:59:56.718071Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":962342191,"revision":1917,"compact-revision":1494}
	
	
	==> gcp-auth [1c1fd10705c644580fdef2fc3075d7c9349c8e4b44d3899910dd41e40c87e2ce] <==
	2024/09/20 16:47:43 Ready to write response ...
	2024/09/20 16:47:43 Ready to marshal response ...
	2024/09/20 16:47:43 Ready to write response ...
	2024/09/20 16:55:47 Ready to marshal response ...
	2024/09/20 16:55:47 Ready to write response ...
	2024/09/20 16:55:47 Ready to marshal response ...
	2024/09/20 16:55:47 Ready to write response ...
	2024/09/20 16:55:47 Ready to marshal response ...
	2024/09/20 16:55:47 Ready to write response ...
	2024/09/20 16:55:57 Ready to marshal response ...
	2024/09/20 16:55:57 Ready to write response ...
	2024/09/20 16:56:16 Ready to marshal response ...
	2024/09/20 16:56:16 Ready to write response ...
	2024/09/20 16:56:16 Ready to marshal response ...
	2024/09/20 16:56:16 Ready to write response ...
	2024/09/20 16:56:23 Ready to marshal response ...
	2024/09/20 16:56:23 Ready to write response ...
	2024/09/20 16:56:28 Ready to marshal response ...
	2024/09/20 16:56:28 Ready to write response ...
	2024/09/20 16:56:29 Ready to marshal response ...
	2024/09/20 16:56:29 Ready to write response ...
	2024/09/20 16:56:50 Ready to marshal response ...
	2024/09/20 16:56:50 Ready to write response ...
	2024/09/20 16:58:53 Ready to marshal response ...
	2024/09/20 16:58:53 Ready to write response ...
	
	
	==> kernel <==
	 17:01:51 up 17 min,  0 users,  load average: 0.16, 0.37, 0.38
	Linux addons-489802 5.10.207 #1 SMP Fri Sep 20 03:13:51 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [79fb233450407c6fdf879eb55124a5840bf49aaa572c10f7add06512d38df264] <==
	E0920 16:47:19.399216       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"context canceled\"}: context canceled" logger="UnhandledError"
	E0920 16:47:19.400809       1 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E0920 16:47:19.402902       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E0920 16:47:19.404104       1 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E0920 16:47:19.412494       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="13.458229ms" method="GET" path="/apis/apps/v1/namespaces/yakd-dashboard/replicasets/yakd-dashboard-67d98fc6b" result=null
	I0920 16:55:47.034722       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.110.200.88"}
	I0920 16:56:11.192249       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0920 16:56:12.228711       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0920 16:56:29.568621       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0920 16:56:29.873321       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.104.88.195"}
	I0920 16:56:40.306913       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0920 16:57:06.651926       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 16:57:06.652138       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0920 16:57:06.689330       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 16:57:06.689633       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0920 16:57:06.726410       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 16:57:06.732004       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0920 16:57:06.763090       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 16:57:06.763215       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0920 16:57:06.917264       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 16:57:06.917824       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0920 16:57:07.763897       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0920 16:57:07.917564       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0920 16:57:07.952201       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I0920 16:58:53.706479       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.96.236.222"}
	
	
	==> kube-controller-manager [44c347dc4cb2326d9ce7eef959abf86dcaee69ecf824e59fbe44600500e8a0f4] <==
	W0920 16:59:38.178506       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 16:59:38.178749       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 16:59:42.065287       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 16:59:42.065416       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 16:59:55.861136       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 16:59:55.861265       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 17:00:12.913281       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 17:00:12.913492       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 17:00:12.933915       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 17:00:12.933967       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 17:00:26.047671       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 17:00:26.047806       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 17:00:48.819187       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 17:00:48.819407       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 17:01:00.643082       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 17:01:00.643325       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 17:01:03.452611       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 17:01:03.452736       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 17:01:21.465163       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 17:01:21.465412       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 17:01:22.148312       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 17:01:22.148411       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 17:01:47.428050       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 17:01:47.428172       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0920 17:01:49.558752       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-84c5f94fbc" duration="27.702µs"
	
	
	==> kube-proxy [7c60a90d5ed294c1a5015ea6f6b5c5259e8d437a6b5dd0f9dd758bb62d91c7b7] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0920 16:45:07.927443       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0920 16:45:07.961049       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.89"]
	E0920 16:45:07.961134       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 16:45:08.130722       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0920 16:45:08.130762       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0920 16:45:08.130790       1 server_linux.go:169] "Using iptables Proxier"
	I0920 16:45:08.135726       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 16:45:08.136036       1 server.go:483] "Version info" version="v1.31.1"
	I0920 16:45:08.136059       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 16:45:08.137263       1 config.go:199] "Starting service config controller"
	I0920 16:45:08.137318       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 16:45:08.137400       1 config.go:105] "Starting endpoint slice config controller"
	I0920 16:45:08.137405       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 16:45:08.137933       1 config.go:328] "Starting node config controller"
	I0920 16:45:08.137953       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 16:45:08.237708       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0920 16:45:08.237750       1 shared_informer.go:320] Caches are synced for service config
	I0920 16:45:08.239006       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [53631bbb5fc199153283953dffaf83c3d2a2b4cdbda98ab81770b42af5dfe30e] <==
	W0920 16:44:58.228924       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0920 16:44:58.230396       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 16:44:58.228968       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0920 16:44:58.230429       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 16:44:59.045447       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0920 16:44:59.045496       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 16:44:59.126233       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0920 16:44:59.126435       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 16:44:59.147240       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0920 16:44:59.147292       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 16:44:59.277135       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0920 16:44:59.278460       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 16:44:59.296223       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0920 16:44:59.296273       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0920 16:44:59.348771       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0920 16:44:59.348828       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 16:44:59.368238       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0920 16:44:59.368290       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 16:44:59.411207       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0920 16:44:59.411256       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 16:44:59.475030       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0920 16:44:59.475087       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 16:44:59.605643       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0920 16:44:59.605806       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0920 16:45:02.104787       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 20 17:01:00 addons-489802 kubelet[1210]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 20 17:01:01 addons-489802 kubelet[1210]: E0920 17:01:01.542582    1210 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726851661542022126,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559237,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:01:01 addons-489802 kubelet[1210]: E0920 17:01:01.542661    1210 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726851661542022126,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559237,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:01:03 addons-489802 kubelet[1210]: E0920 17:01:03.883017    1210 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="9a99a392-d151-4f13-b9fa-105113d19455"
	Sep 20 17:01:11 addons-489802 kubelet[1210]: E0920 17:01:11.546444    1210 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726851671545853412,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559237,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:01:11 addons-489802 kubelet[1210]: E0920 17:01:11.546762    1210 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726851671545853412,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559237,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:01:16 addons-489802 kubelet[1210]: E0920 17:01:16.883072    1210 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="9a99a392-d151-4f13-b9fa-105113d19455"
	Sep 20 17:01:21 addons-489802 kubelet[1210]: E0920 17:01:21.553050    1210 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726851681552045315,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559237,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:01:21 addons-489802 kubelet[1210]: E0920 17:01:21.553394    1210 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726851681552045315,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559237,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:01:27 addons-489802 kubelet[1210]: E0920 17:01:27.882681    1210 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="9a99a392-d151-4f13-b9fa-105113d19455"
	Sep 20 17:01:31 addons-489802 kubelet[1210]: E0920 17:01:31.556146    1210 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726851691555679788,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559237,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:01:31 addons-489802 kubelet[1210]: E0920 17:01:31.556226    1210 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726851691555679788,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559237,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:01:41 addons-489802 kubelet[1210]: E0920 17:01:41.560807    1210 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726851701560168456,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559237,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:01:41 addons-489802 kubelet[1210]: E0920 17:01:41.561406    1210 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726851701560168456,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559237,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:01:41 addons-489802 kubelet[1210]: E0920 17:01:41.883411    1210 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="9a99a392-d151-4f13-b9fa-105113d19455"
	Sep 20 17:01:50 addons-489802 kubelet[1210]: I0920 17:01:50.955425    1210 scope.go:117] "RemoveContainer" containerID="3a0d036505e72ed6b62c09226aa4d219c30e6e162e73ebffc595f568b216931d"
	Sep 20 17:01:50 addons-489802 kubelet[1210]: I0920 17:01:50.987375    1210 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/b6d2625e-ba6e-44e1-b245-0edc2adaa243-tmp-dir\") pod \"b6d2625e-ba6e-44e1-b245-0edc2adaa243\" (UID: \"b6d2625e-ba6e-44e1-b245-0edc2adaa243\") "
	Sep 20 17:01:50 addons-489802 kubelet[1210]: I0920 17:01:50.987443    1210 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-znd2k\" (UniqueName: \"kubernetes.io/projected/b6d2625e-ba6e-44e1-b245-0edc2adaa243-kube-api-access-znd2k\") pod \"b6d2625e-ba6e-44e1-b245-0edc2adaa243\" (UID: \"b6d2625e-ba6e-44e1-b245-0edc2adaa243\") "
	Sep 20 17:01:50 addons-489802 kubelet[1210]: I0920 17:01:50.989750    1210 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6d2625e-ba6e-44e1-b245-0edc2adaa243-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "b6d2625e-ba6e-44e1-b245-0edc2adaa243" (UID: "b6d2625e-ba6e-44e1-b245-0edc2adaa243"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Sep 20 17:01:51 addons-489802 kubelet[1210]: I0920 17:01:51.000601    1210 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6d2625e-ba6e-44e1-b245-0edc2adaa243-kube-api-access-znd2k" (OuterVolumeSpecName: "kube-api-access-znd2k") pod "b6d2625e-ba6e-44e1-b245-0edc2adaa243" (UID: "b6d2625e-ba6e-44e1-b245-0edc2adaa243"). InnerVolumeSpecName "kube-api-access-znd2k". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 20 17:01:51 addons-489802 kubelet[1210]: I0920 17:01:51.002322    1210 scope.go:117] "RemoveContainer" containerID="3a0d036505e72ed6b62c09226aa4d219c30e6e162e73ebffc595f568b216931d"
	Sep 20 17:01:51 addons-489802 kubelet[1210]: E0920 17:01:51.004503    1210 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a0d036505e72ed6b62c09226aa4d219c30e6e162e73ebffc595f568b216931d\": container with ID starting with 3a0d036505e72ed6b62c09226aa4d219c30e6e162e73ebffc595f568b216931d not found: ID does not exist" containerID="3a0d036505e72ed6b62c09226aa4d219c30e6e162e73ebffc595f568b216931d"
	Sep 20 17:01:51 addons-489802 kubelet[1210]: I0920 17:01:51.004552    1210 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a0d036505e72ed6b62c09226aa4d219c30e6e162e73ebffc595f568b216931d"} err="failed to get container status \"3a0d036505e72ed6b62c09226aa4d219c30e6e162e73ebffc595f568b216931d\": rpc error: code = NotFound desc = could not find container \"3a0d036505e72ed6b62c09226aa4d219c30e6e162e73ebffc595f568b216931d\": container with ID starting with 3a0d036505e72ed6b62c09226aa4d219c30e6e162e73ebffc595f568b216931d not found: ID does not exist"
	Sep 20 17:01:51 addons-489802 kubelet[1210]: I0920 17:01:51.088510    1210 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-znd2k\" (UniqueName: \"kubernetes.io/projected/b6d2625e-ba6e-44e1-b245-0edc2adaa243-kube-api-access-znd2k\") on node \"addons-489802\" DevicePath \"\""
	Sep 20 17:01:51 addons-489802 kubelet[1210]: I0920 17:01:51.088543    1210 reconciler_common.go:288] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/b6d2625e-ba6e-44e1-b245-0edc2adaa243-tmp-dir\") on node \"addons-489802\" DevicePath \"\""
	
	
	==> storage-provisioner [5a981c68e927108571692d174ebc0cf47e600882543d6dd401c23cbcd805d49d] <==
	I0920 16:45:14.933598       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0920 16:45:15.129203       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0920 16:45:15.129288       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0920 16:45:15.469563       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0920 16:45:15.471781       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-489802_9f119035-fb3e-4caa-852b-e718c04f6499!
	I0920 16:45:15.471465       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"47834956-e67b-4561-9f20-a2c3f45edc3a", APIVersion:"v1", ResourceVersion:"708", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-489802_9f119035-fb3e-4caa-852b-e718c04f6499 became leader
	I0920 16:45:15.594691       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-489802_9f119035-fb3e-4caa-852b-e718c04f6499!
	

                                                
                                                
-- /stdout --
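Note on the DNS entries at the top of this dump: they show the normal Kubernetes resolver search-list expansion, not a fault. Because registry.kube-system.svc.cluster.local has fewer dots than the default ndots:5, the pod's resolver first appends each search suffix (kube-system.svc.cluster.local, svc.cluster.local, cluster.local) and gets NXDOMAIN, then resolves the fully qualified name with NOERROR. A minimal illustration of the pod resolv.conf that produces this pattern for a pod in kube-system (assumed kubelet defaults, not captured from this run; the nameserver is the usual kube-dns ClusterIP and may differ):

	# illustrative only -- typical kubelet-written /etc/resolv.conf, not taken from this report
	search kube-system.svc.cluster.local svc.cluster.local cluster.local
	nameserver 10.96.0.10
	options ndots:5
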
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-489802 -n addons-489802
helpers_test.go:261: (dbg) Run:  kubectl --context addons-489802 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/MetricsServer]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-489802 describe pod busybox
helpers_test.go:282: (dbg) kubectl --context addons-489802 describe pod busybox:

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-489802/192.168.39.89
	Start Time:       Fri, 20 Sep 2024 16:47:43 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.22
	IPs:
	  IP:  10.244.0.22
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lh4vn (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-lh4vn:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  14m                  default-scheduler  Successfully assigned default/busybox to addons-489802
	  Normal   Pulling    12m (x4 over 14m)    kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     12m (x4 over 14m)    kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     12m (x4 over 14m)    kubelet            Error: ErrImagePull
	  Warning  Failed     12m (x6 over 14m)    kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m8s (x41 over 14m)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (359.18s)
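Note on the non-running pod flagged in the post-mortem above: the busybox pod's events show every pull of gcr.io/k8s-minikube/busybox:1.28.4-glibc failing with "unable to retrieve auth token: invalid username/password", so the pod stays in ImagePullBackOff for the whole run. The fake gcp-auth credentials injected into the pod (the this_is_fake environment values and /google-app-creds.json mount above) are the likely reason the registry login is rejected. A hedged way to re-check the same evidence against this profile, using only standard kubectl (nothing specific to the test harness):

	kubectl --context addons-489802 -n default get events \
	  --field-selector involvedObject.name=busybox,reason=Failed
	kubectl --context addons-489802 -n default describe pod busybox
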

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (7.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-945494 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:409: (dbg) Done: out/minikube-linux-amd64 -p functional-945494 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (4.776168328s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-945494 image ls
functional_test.go:451: (dbg) Done: out/minikube-linux-amd64 -p functional-945494 image ls: (2.238858179s)
functional_test.go:446: expected "kicbase/echo-server:functional-945494" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (7.02s)
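For reference, the assertion this test makes can be sketched by hand with the same two commands from the log above; the failure means the image listing taken after the load does not contain the expected tag (the grep is added here purely for readability):

	out/minikube-linux-amd64 -p functional-945494 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
	out/minikube-linux-amd64 -p functional-945494 image ls | grep echo-server
	# expected to print kicbase/echo-server:functional-945494; in this run it prints nothing
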

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (141.46s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-135993 node stop m02 -v=7 --alsologtostderr
E0920 17:12:20.909798   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/functional-945494/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:12:43.197232   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:13:01.871528   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/functional-945494/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:13:10.900804   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-135993 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.471275343s)

                                                
                                                
-- stdout --
	* Stopping node "ha-135993-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 17:12:01.417022   32021 out.go:345] Setting OutFile to fd 1 ...
	I0920 17:12:01.417184   32021 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:12:01.417196   32021 out.go:358] Setting ErrFile to fd 2...
	I0920 17:12:01.417204   32021 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:12:01.417500   32021 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-8777/.minikube/bin
	I0920 17:12:01.417876   32021 mustload.go:65] Loading cluster: ha-135993
	I0920 17:12:01.418310   32021 config.go:182] Loaded profile config "ha-135993": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 17:12:01.418330   32021 stop.go:39] StopHost: ha-135993-m02
	I0920 17:12:01.418740   32021 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:12:01.418785   32021 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:12:01.434970   32021 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38657
	I0920 17:12:01.435472   32021 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:12:01.436086   32021 main.go:141] libmachine: Using API Version  1
	I0920 17:12:01.436106   32021 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:12:01.436490   32021 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:12:01.438852   32021 out.go:177] * Stopping node "ha-135993-m02"  ...
	I0920 17:12:01.440334   32021 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0920 17:12:01.440377   32021 main.go:141] libmachine: (ha-135993-m02) Calling .DriverName
	I0920 17:12:01.440592   32021 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0920 17:12:01.440617   32021 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHHostname
	I0920 17:12:01.443874   32021 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:12:01.444271   32021 main.go:141] libmachine: (ha-135993-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:dc:24", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:08:28 +0000 UTC Type:0 Mac:52:54:00:87:dc:24 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-135993-m02 Clientid:01:52:54:00:87:dc:24}
	I0920 17:12:01.444313   32021 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:12:01.444501   32021 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHPort
	I0920 17:12:01.444667   32021 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHKeyPath
	I0920 17:12:01.444833   32021 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHUsername
	I0920 17:12:01.444963   32021 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993-m02/id_rsa Username:docker}
	I0920 17:12:01.530947   32021 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0920 17:12:01.585669   32021 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0920 17:12:01.641366   32021 main.go:141] libmachine: Stopping "ha-135993-m02"...
	I0920 17:12:01.641396   32021 main.go:141] libmachine: (ha-135993-m02) Calling .GetState
	I0920 17:12:01.643042   32021 main.go:141] libmachine: (ha-135993-m02) Calling .Stop
	I0920 17:12:01.646779   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 0/120
	I0920 17:12:02.648342   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 1/120
	I0920 17:12:03.649709   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 2/120
	I0920 17:12:04.650966   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 3/120
	I0920 17:12:05.652383   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 4/120
	I0920 17:12:06.654153   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 5/120
	I0920 17:12:07.656255   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 6/120
	I0920 17:12:08.657766   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 7/120
	I0920 17:12:09.659157   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 8/120
	I0920 17:12:10.660373   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 9/120
	I0920 17:12:11.661646   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 10/120
	I0920 17:12:12.663448   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 11/120
	I0920 17:12:13.664799   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 12/120
	I0920 17:12:14.666675   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 13/120
	I0920 17:12:15.668242   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 14/120
	I0920 17:12:16.670211   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 15/120
	I0920 17:12:17.672311   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 16/120
	I0920 17:12:18.673859   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 17/120
	I0920 17:12:19.675333   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 18/120
	I0920 17:12:20.677267   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 19/120
	I0920 17:12:21.679281   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 20/120
	I0920 17:12:22.680803   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 21/120
	I0920 17:12:23.682158   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 22/120
	I0920 17:12:24.684510   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 23/120
	I0920 17:12:25.686191   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 24/120
	I0920 17:12:26.688338   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 25/120
	I0920 17:12:27.689757   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 26/120
	I0920 17:12:28.691688   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 27/120
	I0920 17:12:29.693189   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 28/120
	I0920 17:12:30.695121   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 29/120
	I0920 17:12:31.697238   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 30/120
	I0920 17:12:32.698914   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 31/120
	I0920 17:12:33.700311   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 32/120
	I0920 17:12:34.701988   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 33/120
	I0920 17:12:35.703436   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 34/120
	I0920 17:12:36.705380   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 35/120
	I0920 17:12:37.706774   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 36/120
	I0920 17:12:38.708224   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 37/120
	I0920 17:12:39.709565   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 38/120
	I0920 17:12:40.710925   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 39/120
	I0920 17:12:41.713093   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 40/120
	I0920 17:12:42.714401   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 41/120
	I0920 17:12:43.715857   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 42/120
	I0920 17:12:44.717246   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 43/120
	I0920 17:12:45.718643   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 44/120
	I0920 17:12:46.720418   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 45/120
	I0920 17:12:47.721742   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 46/120
	I0920 17:12:48.723172   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 47/120
	I0920 17:12:49.724594   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 48/120
	I0920 17:12:50.726134   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 49/120
	I0920 17:12:51.728418   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 50/120
	I0920 17:12:52.729700   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 51/120
	I0920 17:12:53.731990   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 52/120
	I0920 17:12:54.733695   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 53/120
	I0920 17:12:55.734920   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 54/120
	I0920 17:12:56.736372   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 55/120
	I0920 17:12:57.737627   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 56/120
	I0920 17:12:58.739046   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 57/120
	I0920 17:12:59.740596   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 58/120
	I0920 17:13:00.741944   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 59/120
	I0920 17:13:01.744005   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 60/120
	I0920 17:13:02.745164   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 61/120
	I0920 17:13:03.747513   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 62/120
	I0920 17:13:04.748647   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 63/120
	I0920 17:13:05.750467   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 64/120
	I0920 17:13:06.752315   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 65/120
	I0920 17:13:07.754051   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 66/120
	I0920 17:13:08.756783   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 67/120
	I0920 17:13:09.758257   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 68/120
	I0920 17:13:10.760417   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 69/120
	I0920 17:13:11.762988   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 70/120
	I0920 17:13:12.764446   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 71/120
	I0920 17:13:13.765966   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 72/120
	I0920 17:13:14.768421   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 73/120
	I0920 17:13:15.769905   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 74/120
	I0920 17:13:16.772226   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 75/120
	I0920 17:13:17.774298   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 76/120
	I0920 17:13:18.776313   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 77/120
	I0920 17:13:19.778137   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 78/120
	I0920 17:13:20.780338   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 79/120
	I0920 17:13:21.782286   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 80/120
	I0920 17:13:22.784182   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 81/120
	I0920 17:13:23.785656   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 82/120
	I0920 17:13:24.786890   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 83/120
	I0920 17:13:25.788480   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 84/120
	I0920 17:13:26.790091   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 85/120
	I0920 17:13:27.791549   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 86/120
	I0920 17:13:28.792937   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 87/120
	I0920 17:13:29.794304   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 88/120
	I0920 17:13:30.796333   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 89/120
	I0920 17:13:31.798449   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 90/120
	I0920 17:13:32.799801   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 91/120
	I0920 17:13:33.801292   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 92/120
	I0920 17:13:34.802651   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 93/120
	I0920 17:13:35.804139   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 94/120
	I0920 17:13:36.806338   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 95/120
	I0920 17:13:37.807693   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 96/120
	I0920 17:13:38.809124   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 97/120
	I0920 17:13:39.810936   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 98/120
	I0920 17:13:40.813269   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 99/120
	I0920 17:13:41.815502   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 100/120
	I0920 17:13:42.816908   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 101/120
	I0920 17:13:43.818473   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 102/120
	I0920 17:13:44.820488   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 103/120
	I0920 17:13:45.821871   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 104/120
	I0920 17:13:46.823740   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 105/120
	I0920 17:13:47.825281   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 106/120
	I0920 17:13:48.826750   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 107/120
	I0920 17:13:49.828143   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 108/120
	I0920 17:13:50.829511   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 109/120
	I0920 17:13:51.831704   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 110/120
	I0920 17:13:52.833105   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 111/120
	I0920 17:13:53.834518   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 112/120
	I0920 17:13:54.836331   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 113/120
	I0920 17:13:55.837872   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 114/120
	I0920 17:13:56.839397   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 115/120
	I0920 17:13:57.840863   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 116/120
	I0920 17:13:58.842180   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 117/120
	I0920 17:13:59.843584   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 118/120
	I0920 17:14:00.844916   32021 main.go:141] libmachine: (ha-135993-m02) Waiting for machine to stop 119/120
	I0920 17:14:01.846002   32021 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0920 17:14:01.846130   32021 out.go:270] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-135993 node stop m02 -v=7 --alsologtostderr": exit status 30
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-135993 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Done: out/minikube-linux-amd64 -p ha-135993 status -v=7 --alsologtostderr: (18.695080335s)
ha_test.go:375: status says not all three control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-135993 status -v=7 --alsologtostderr": 
ha_test.go:378: status says not three hosts are running: args "out/minikube-linux-amd64 -p ha-135993 status -v=7 --alsologtostderr": 
ha_test.go:381: status says not three kubelets are running: args "out/minikube-linux-amd64 -p ha-135993 status -v=7 --alsologtostderr": 
ha_test.go:384: status says not two apiservers are running: args "out/minikube-linux-amd64 -p ha-135993 status -v=7 --alsologtostderr": 
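The stderr above shows the pattern behind the 2m0s hang: the driver asks the hypervisor to stop the VM, then polls its state once per second for 120 attempts, and gives up with exit status 30 when the state never leaves "Running". A rough sketch of that wait loop is below; the stop and getState callbacks are placeholders, not the kvm2 driver's real API.

// Sketch of the stop-and-poll loop seen in the log. stop and getState are
// placeholder callbacks standing in for the driver's Stop/GetState calls.
package main

import (
	"errors"
	"fmt"
	"time"
)

func stopWithTimeout(name string, attempts int, stop func() error, getState func() string) error {
	if err := stop(); err != nil {
		return err
	}
	for i := 0; i < attempts; i++ {
		if getState() == "Stopped" {
			return nil // VM shut down within the allowed window
		}
		fmt.Printf("(%s) Waiting for machine to stop %d/%d\n", name, i, attempts)
		time.Sleep(time.Second)
	}
	// Mirrors the log: the state is still "Running" after the final attempt.
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	// Toy run with 3 attempts instead of the driver's 120, and a VM that
	// never reports "Stopped" - it fails the same way ha-135993-m02 did.
	err := stopWithTimeout("ha-135993-m02", 3,
		func() error { return nil },
		func() string { return "Running" })
	fmt.Println("stop err:", err)
}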
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-135993 -n ha-135993
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-135993 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-135993 logs -n 25: (1.438445151s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-135993 cp ha-135993-m03:/home/docker/cp-test.txt                              | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:11 UTC | 20 Sep 24 17:11 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2672215621/001/cp-test_ha-135993-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-135993 ssh -n                                                                 | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:11 UTC | 20 Sep 24 17:11 UTC |
	|         | ha-135993-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-135993 cp ha-135993-m03:/home/docker/cp-test.txt                              | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:11 UTC | 20 Sep 24 17:11 UTC |
	|         | ha-135993:/home/docker/cp-test_ha-135993-m03_ha-135993.txt                       |           |         |         |                     |                     |
	| ssh     | ha-135993 ssh -n                                                                 | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:11 UTC | 20 Sep 24 17:11 UTC |
	|         | ha-135993-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-135993 ssh -n ha-135993 sudo cat                                              | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:11 UTC | 20 Sep 24 17:11 UTC |
	|         | /home/docker/cp-test_ha-135993-m03_ha-135993.txt                                 |           |         |         |                     |                     |
	| cp      | ha-135993 cp ha-135993-m03:/home/docker/cp-test.txt                              | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:11 UTC | 20 Sep 24 17:11 UTC |
	|         | ha-135993-m02:/home/docker/cp-test_ha-135993-m03_ha-135993-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-135993 ssh -n                                                                 | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:11 UTC | 20 Sep 24 17:11 UTC |
	|         | ha-135993-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-135993 ssh -n ha-135993-m02 sudo cat                                          | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:11 UTC | 20 Sep 24 17:11 UTC |
	|         | /home/docker/cp-test_ha-135993-m03_ha-135993-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-135993 cp ha-135993-m03:/home/docker/cp-test.txt                              | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:11 UTC | 20 Sep 24 17:11 UTC |
	|         | ha-135993-m04:/home/docker/cp-test_ha-135993-m03_ha-135993-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-135993 ssh -n                                                                 | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:11 UTC | 20 Sep 24 17:11 UTC |
	|         | ha-135993-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-135993 ssh -n ha-135993-m04 sudo cat                                          | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:11 UTC | 20 Sep 24 17:11 UTC |
	|         | /home/docker/cp-test_ha-135993-m03_ha-135993-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-135993 cp testdata/cp-test.txt                                                | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:11 UTC | 20 Sep 24 17:11 UTC |
	|         | ha-135993-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-135993 ssh -n                                                                 | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:11 UTC | 20 Sep 24 17:11 UTC |
	|         | ha-135993-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-135993 cp ha-135993-m04:/home/docker/cp-test.txt                              | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:11 UTC | 20 Sep 24 17:11 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2672215621/001/cp-test_ha-135993-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-135993 ssh -n                                                                 | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:11 UTC | 20 Sep 24 17:11 UTC |
	|         | ha-135993-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-135993 cp ha-135993-m04:/home/docker/cp-test.txt                              | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:11 UTC | 20 Sep 24 17:11 UTC |
	|         | ha-135993:/home/docker/cp-test_ha-135993-m04_ha-135993.txt                       |           |         |         |                     |                     |
	| ssh     | ha-135993 ssh -n                                                                 | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:11 UTC | 20 Sep 24 17:11 UTC |
	|         | ha-135993-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-135993 ssh -n ha-135993 sudo cat                                              | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:11 UTC | 20 Sep 24 17:11 UTC |
	|         | /home/docker/cp-test_ha-135993-m04_ha-135993.txt                                 |           |         |         |                     |                     |
	| cp      | ha-135993 cp ha-135993-m04:/home/docker/cp-test.txt                              | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:11 UTC | 20 Sep 24 17:12 UTC |
	|         | ha-135993-m02:/home/docker/cp-test_ha-135993-m04_ha-135993-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-135993 ssh -n                                                                 | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:12 UTC | 20 Sep 24 17:12 UTC |
	|         | ha-135993-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-135993 ssh -n ha-135993-m02 sudo cat                                          | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:12 UTC | 20 Sep 24 17:12 UTC |
	|         | /home/docker/cp-test_ha-135993-m04_ha-135993-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-135993 cp ha-135993-m04:/home/docker/cp-test.txt                              | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:12 UTC | 20 Sep 24 17:12 UTC |
	|         | ha-135993-m03:/home/docker/cp-test_ha-135993-m04_ha-135993-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-135993 ssh -n                                                                 | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:12 UTC | 20 Sep 24 17:12 UTC |
	|         | ha-135993-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-135993 ssh -n ha-135993-m03 sudo cat                                          | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:12 UTC | 20 Sep 24 17:12 UTC |
	|         | /home/docker/cp-test_ha-135993-m04_ha-135993-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-135993 node stop m02 -v=7                                                     | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:12 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 17:07:28
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 17:07:28.224109   27962 out.go:345] Setting OutFile to fd 1 ...
	I0920 17:07:28.224206   27962 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:07:28.224213   27962 out.go:358] Setting ErrFile to fd 2...
	I0920 17:07:28.224218   27962 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:07:28.224387   27962 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-8777/.minikube/bin
	I0920 17:07:28.224982   27962 out.go:352] Setting JSON to false
	I0920 17:07:28.225784   27962 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2991,"bootTime":1726849057,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 17:07:28.225901   27962 start.go:139] virtualization: kvm guest
	I0920 17:07:28.228074   27962 out.go:177] * [ha-135993] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 17:07:28.229408   27962 notify.go:220] Checking for updates...
	I0920 17:07:28.229444   27962 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 17:07:28.230821   27962 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 17:07:28.231979   27962 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19672-8777/kubeconfig
	I0920 17:07:28.233045   27962 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-8777/.minikube
	I0920 17:07:28.234136   27962 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 17:07:28.235151   27962 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 17:07:28.236602   27962 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 17:07:28.271877   27962 out.go:177] * Using the kvm2 driver based on user configuration
	I0920 17:07:28.273222   27962 start.go:297] selected driver: kvm2
	I0920 17:07:28.273240   27962 start.go:901] validating driver "kvm2" against <nil>
	I0920 17:07:28.273253   27962 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 17:07:28.274045   27962 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 17:07:28.274154   27962 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19672-8777/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 17:07:28.289424   27962 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 17:07:28.289473   27962 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 17:07:28.289714   27962 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 17:07:28.289743   27962 cni.go:84] Creating CNI manager for ""
	I0920 17:07:28.289789   27962 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0920 17:07:28.289814   27962 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0920 17:07:28.289902   27962 start.go:340] cluster config:
	{Name:ha-135993 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-135993 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 17:07:28.290006   27962 iso.go:125] acquiring lock: {Name:mkba95ef0488e46f622333e9f317f43def93040b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 17:07:28.291840   27962 out.go:177] * Starting "ha-135993" primary control-plane node in "ha-135993" cluster
	I0920 17:07:28.292971   27962 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 17:07:28.293012   27962 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0920 17:07:28.293022   27962 cache.go:56] Caching tarball of preloaded images
	I0920 17:07:28.293121   27962 preload.go:172] Found /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 17:07:28.293135   27962 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0920 17:07:28.293509   27962 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/config.json ...
	I0920 17:07:28.293532   27962 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/config.json: {Name:mk8c38de8f77a94cd04edafc97e1e3e5f16f67aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:07:28.293702   27962 start.go:360] acquireMachinesLock for ha-135993: {Name:mkfeedb385cf08b5d2aa00913e85815d02a180c2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 17:07:28.293739   27962 start.go:364] duration metric: took 21.191µs to acquireMachinesLock for "ha-135993"
	I0920 17:07:28.293762   27962 start.go:93] Provisioning new machine with config: &{Name:ha-135993 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-135993 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 17:07:28.293816   27962 start.go:125] createHost starting for "" (driver="kvm2")
	I0920 17:07:28.295606   27962 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0920 17:07:28.295844   27962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:07:28.295897   27962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:07:28.310515   27962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38263
	I0920 17:07:28.311021   27962 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:07:28.311565   27962 main.go:141] libmachine: Using API Version  1
	I0920 17:07:28.311587   27962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:07:28.311884   27962 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:07:28.312062   27962 main.go:141] libmachine: (ha-135993) Calling .GetMachineName
	I0920 17:07:28.312230   27962 main.go:141] libmachine: (ha-135993) Calling .DriverName
	I0920 17:07:28.312390   27962 start.go:159] libmachine.API.Create for "ha-135993" (driver="kvm2")
	I0920 17:07:28.312423   27962 client.go:168] LocalClient.Create starting
	I0920 17:07:28.312451   27962 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem
	I0920 17:07:28.312493   27962 main.go:141] libmachine: Decoding PEM data...
	I0920 17:07:28.312531   27962 main.go:141] libmachine: Parsing certificate...
	I0920 17:07:28.312583   27962 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem
	I0920 17:07:28.312603   27962 main.go:141] libmachine: Decoding PEM data...
	I0920 17:07:28.312616   27962 main.go:141] libmachine: Parsing certificate...
	I0920 17:07:28.312634   27962 main.go:141] libmachine: Running pre-create checks...
	I0920 17:07:28.312641   27962 main.go:141] libmachine: (ha-135993) Calling .PreCreateCheck
	I0920 17:07:28.313012   27962 main.go:141] libmachine: (ha-135993) Calling .GetConfigRaw
	I0920 17:07:28.313345   27962 main.go:141] libmachine: Creating machine...
	I0920 17:07:28.313358   27962 main.go:141] libmachine: (ha-135993) Calling .Create
	I0920 17:07:28.313496   27962 main.go:141] libmachine: (ha-135993) Creating KVM machine...
	I0920 17:07:28.314784   27962 main.go:141] libmachine: (ha-135993) DBG | found existing default KVM network
	I0920 17:07:28.315382   27962 main.go:141] libmachine: (ha-135993) DBG | I0920 17:07:28.315245   27985 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002211e0}
	I0920 17:07:28.315406   27962 main.go:141] libmachine: (ha-135993) DBG | created network xml: 
	I0920 17:07:28.315419   27962 main.go:141] libmachine: (ha-135993) DBG | <network>
	I0920 17:07:28.315429   27962 main.go:141] libmachine: (ha-135993) DBG |   <name>mk-ha-135993</name>
	I0920 17:07:28.315440   27962 main.go:141] libmachine: (ha-135993) DBG |   <dns enable='no'/>
	I0920 17:07:28.315450   27962 main.go:141] libmachine: (ha-135993) DBG |   
	I0920 17:07:28.315469   27962 main.go:141] libmachine: (ha-135993) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0920 17:07:28.315477   27962 main.go:141] libmachine: (ha-135993) DBG |     <dhcp>
	I0920 17:07:28.315483   27962 main.go:141] libmachine: (ha-135993) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0920 17:07:28.315496   27962 main.go:141] libmachine: (ha-135993) DBG |     </dhcp>
	I0920 17:07:28.315507   27962 main.go:141] libmachine: (ha-135993) DBG |   </ip>
	I0920 17:07:28.315519   27962 main.go:141] libmachine: (ha-135993) DBG |   
	I0920 17:07:28.315530   27962 main.go:141] libmachine: (ha-135993) DBG | </network>
	I0920 17:07:28.315542   27962 main.go:141] libmachine: (ha-135993) DBG | 
	I0920 17:07:28.320907   27962 main.go:141] libmachine: (ha-135993) DBG | trying to create private KVM network mk-ha-135993 192.168.39.0/24...
	I0920 17:07:28.387245   27962 main.go:141] libmachine: (ha-135993) DBG | private KVM network mk-ha-135993 192.168.39.0/24 created
	I0920 17:07:28.387277   27962 main.go:141] libmachine: (ha-135993) DBG | I0920 17:07:28.387214   27985 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19672-8777/.minikube
	I0920 17:07:28.387292   27962 main.go:141] libmachine: (ha-135993) Setting up store path in /home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993 ...
	I0920 17:07:28.387307   27962 main.go:141] libmachine: (ha-135993) Building disk image from file:///home/jenkins/minikube-integration/19672-8777/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso
	I0920 17:07:28.387375   27962 main.go:141] libmachine: (ha-135993) Downloading /home/jenkins/minikube-integration/19672-8777/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19672-8777/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso...
	I0920 17:07:28.647940   27962 main.go:141] libmachine: (ha-135993) DBG | I0920 17:07:28.647805   27985 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993/id_rsa...
	I0920 17:07:28.842374   27962 main.go:141] libmachine: (ha-135993) DBG | I0920 17:07:28.842220   27985 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993/ha-135993.rawdisk...
	I0920 17:07:28.842416   27962 main.go:141] libmachine: (ha-135993) DBG | Writing magic tar header
	I0920 17:07:28.842425   27962 main.go:141] libmachine: (ha-135993) DBG | Writing SSH key tar header
	I0920 17:07:28.842433   27962 main.go:141] libmachine: (ha-135993) DBG | I0920 17:07:28.842377   27985 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993 ...
	I0920 17:07:28.842562   27962 main.go:141] libmachine: (ha-135993) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993
	I0920 17:07:28.842579   27962 main.go:141] libmachine: (ha-135993) Setting executable bit set on /home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993 (perms=drwx------)
	I0920 17:07:28.842585   27962 main.go:141] libmachine: (ha-135993) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-8777/.minikube/machines
	I0920 17:07:28.842594   27962 main.go:141] libmachine: (ha-135993) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-8777/.minikube
	I0920 17:07:28.842600   27962 main.go:141] libmachine: (ha-135993) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-8777
	I0920 17:07:28.842608   27962 main.go:141] libmachine: (ha-135993) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0920 17:07:28.842615   27962 main.go:141] libmachine: (ha-135993) Setting executable bit set on /home/jenkins/minikube-integration/19672-8777/.minikube/machines (perms=drwxr-xr-x)
	I0920 17:07:28.842628   27962 main.go:141] libmachine: (ha-135993) Setting executable bit set on /home/jenkins/minikube-integration/19672-8777/.minikube (perms=drwxr-xr-x)
	I0920 17:07:28.842634   27962 main.go:141] libmachine: (ha-135993) Setting executable bit set on /home/jenkins/minikube-integration/19672-8777 (perms=drwxrwxr-x)
	I0920 17:07:28.842641   27962 main.go:141] libmachine: (ha-135993) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0920 17:07:28.842659   27962 main.go:141] libmachine: (ha-135993) DBG | Checking permissions on dir: /home/jenkins
	I0920 17:07:28.842667   27962 main.go:141] libmachine: (ha-135993) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0920 17:07:28.842678   27962 main.go:141] libmachine: (ha-135993) Creating domain...
	I0920 17:07:28.842684   27962 main.go:141] libmachine: (ha-135993) DBG | Checking permissions on dir: /home
	I0920 17:07:28.842691   27962 main.go:141] libmachine: (ha-135993) DBG | Skipping /home - not owner
	I0920 17:07:28.843894   27962 main.go:141] libmachine: (ha-135993) define libvirt domain using xml: 
	I0920 17:07:28.843929   27962 main.go:141] libmachine: (ha-135993) <domain type='kvm'>
	I0920 17:07:28.843939   27962 main.go:141] libmachine: (ha-135993)   <name>ha-135993</name>
	I0920 17:07:28.843946   27962 main.go:141] libmachine: (ha-135993)   <memory unit='MiB'>2200</memory>
	I0920 17:07:28.843953   27962 main.go:141] libmachine: (ha-135993)   <vcpu>2</vcpu>
	I0920 17:07:28.843960   27962 main.go:141] libmachine: (ha-135993)   <features>
	I0920 17:07:28.843968   27962 main.go:141] libmachine: (ha-135993)     <acpi/>
	I0920 17:07:28.843974   27962 main.go:141] libmachine: (ha-135993)     <apic/>
	I0920 17:07:28.843981   27962 main.go:141] libmachine: (ha-135993)     <pae/>
	I0920 17:07:28.844000   27962 main.go:141] libmachine: (ha-135993)     
	I0920 17:07:28.844009   27962 main.go:141] libmachine: (ha-135993)   </features>
	I0920 17:07:28.844018   27962 main.go:141] libmachine: (ha-135993)   <cpu mode='host-passthrough'>
	I0920 17:07:28.844024   27962 main.go:141] libmachine: (ha-135993)   
	I0920 17:07:28.844044   27962 main.go:141] libmachine: (ha-135993)   </cpu>
	I0920 17:07:28.844054   27962 main.go:141] libmachine: (ha-135993)   <os>
	I0920 17:07:28.844083   27962 main.go:141] libmachine: (ha-135993)     <type>hvm</type>
	I0920 17:07:28.844103   27962 main.go:141] libmachine: (ha-135993)     <boot dev='cdrom'/>
	I0920 17:07:28.844109   27962 main.go:141] libmachine: (ha-135993)     <boot dev='hd'/>
	I0920 17:07:28.844113   27962 main.go:141] libmachine: (ha-135993)     <bootmenu enable='no'/>
	I0920 17:07:28.844118   27962 main.go:141] libmachine: (ha-135993)   </os>
	I0920 17:07:28.844121   27962 main.go:141] libmachine: (ha-135993)   <devices>
	I0920 17:07:28.844128   27962 main.go:141] libmachine: (ha-135993)     <disk type='file' device='cdrom'>
	I0920 17:07:28.844137   27962 main.go:141] libmachine: (ha-135993)       <source file='/home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993/boot2docker.iso'/>
	I0920 17:07:28.844142   27962 main.go:141] libmachine: (ha-135993)       <target dev='hdc' bus='scsi'/>
	I0920 17:07:28.844146   27962 main.go:141] libmachine: (ha-135993)       <readonly/>
	I0920 17:07:28.844151   27962 main.go:141] libmachine: (ha-135993)     </disk>
	I0920 17:07:28.844157   27962 main.go:141] libmachine: (ha-135993)     <disk type='file' device='disk'>
	I0920 17:07:28.844164   27962 main.go:141] libmachine: (ha-135993)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0920 17:07:28.844172   27962 main.go:141] libmachine: (ha-135993)       <source file='/home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993/ha-135993.rawdisk'/>
	I0920 17:07:28.844194   27962 main.go:141] libmachine: (ha-135993)       <target dev='hda' bus='virtio'/>
	I0920 17:07:28.844214   27962 main.go:141] libmachine: (ha-135993)     </disk>
	I0920 17:07:28.844234   27962 main.go:141] libmachine: (ha-135993)     <interface type='network'>
	I0920 17:07:28.844247   27962 main.go:141] libmachine: (ha-135993)       <source network='mk-ha-135993'/>
	I0920 17:07:28.844256   27962 main.go:141] libmachine: (ha-135993)       <model type='virtio'/>
	I0920 17:07:28.844274   27962 main.go:141] libmachine: (ha-135993)     </interface>
	I0920 17:07:28.844298   27962 main.go:141] libmachine: (ha-135993)     <interface type='network'>
	I0920 17:07:28.844316   27962 main.go:141] libmachine: (ha-135993)       <source network='default'/>
	I0920 17:07:28.844331   27962 main.go:141] libmachine: (ha-135993)       <model type='virtio'/>
	I0920 17:07:28.844342   27962 main.go:141] libmachine: (ha-135993)     </interface>
	I0920 17:07:28.844351   27962 main.go:141] libmachine: (ha-135993)     <serial type='pty'>
	I0920 17:07:28.844360   27962 main.go:141] libmachine: (ha-135993)       <target port='0'/>
	I0920 17:07:28.844366   27962 main.go:141] libmachine: (ha-135993)     </serial>
	I0920 17:07:28.844373   27962 main.go:141] libmachine: (ha-135993)     <console type='pty'>
	I0920 17:07:28.844381   27962 main.go:141] libmachine: (ha-135993)       <target type='serial' port='0'/>
	I0920 17:07:28.844400   27962 main.go:141] libmachine: (ha-135993)     </console>
	I0920 17:07:28.844411   27962 main.go:141] libmachine: (ha-135993)     <rng model='virtio'>
	I0920 17:07:28.844423   27962 main.go:141] libmachine: (ha-135993)       <backend model='random'>/dev/random</backend>
	I0920 17:07:28.844437   27962 main.go:141] libmachine: (ha-135993)     </rng>
	I0920 17:07:28.844445   27962 main.go:141] libmachine: (ha-135993)     
	I0920 17:07:28.844456   27962 main.go:141] libmachine: (ha-135993)     
	I0920 17:07:28.844462   27962 main.go:141] libmachine: (ha-135993)   </devices>
	I0920 17:07:28.844471   27962 main.go:141] libmachine: (ha-135993) </domain>
	I0920 17:07:28.844477   27962 main.go:141] libmachine: (ha-135993) 
	I0920 17:07:28.849080   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:80:85:3f in network default
	I0920 17:07:28.849710   27962 main.go:141] libmachine: (ha-135993) Ensuring networks are active...
	I0920 17:07:28.849730   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:28.850712   27962 main.go:141] libmachine: (ha-135993) Ensuring network default is active
	I0920 17:07:28.850972   27962 main.go:141] libmachine: (ha-135993) Ensuring network mk-ha-135993 is active
	I0920 17:07:28.851547   27962 main.go:141] libmachine: (ha-135993) Getting domain xml...
	I0920 17:07:28.852218   27962 main.go:141] libmachine: (ha-135993) Creating domain...
	I0920 17:07:30.058549   27962 main.go:141] libmachine: (ha-135993) Waiting to get IP...
	I0920 17:07:30.059436   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:30.059857   27962 main.go:141] libmachine: (ha-135993) DBG | unable to find current IP address of domain ha-135993 in network mk-ha-135993
	I0920 17:07:30.059875   27962 main.go:141] libmachine: (ha-135993) DBG | I0920 17:07:30.059831   27985 retry.go:31] will retry after 273.871147ms: waiting for machine to come up
	I0920 17:07:30.335232   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:30.335705   27962 main.go:141] libmachine: (ha-135993) DBG | unable to find current IP address of domain ha-135993 in network mk-ha-135993
	I0920 17:07:30.335727   27962 main.go:141] libmachine: (ha-135993) DBG | I0920 17:07:30.335673   27985 retry.go:31] will retry after 312.261403ms: waiting for machine to come up
	I0920 17:07:30.649140   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:30.649587   27962 main.go:141] libmachine: (ha-135993) DBG | unable to find current IP address of domain ha-135993 in network mk-ha-135993
	I0920 17:07:30.649616   27962 main.go:141] libmachine: (ha-135993) DBG | I0920 17:07:30.649539   27985 retry.go:31] will retry after 394.960563ms: waiting for machine to come up
	I0920 17:07:31.046134   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:31.046737   27962 main.go:141] libmachine: (ha-135993) DBG | unable to find current IP address of domain ha-135993 in network mk-ha-135993
	I0920 17:07:31.046803   27962 main.go:141] libmachine: (ha-135993) DBG | I0920 17:07:31.046706   27985 retry.go:31] will retry after 406.180853ms: waiting for machine to come up
	I0920 17:07:31.454086   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:31.454470   27962 main.go:141] libmachine: (ha-135993) DBG | unable to find current IP address of domain ha-135993 in network mk-ha-135993
	I0920 17:07:31.454493   27962 main.go:141] libmachine: (ha-135993) DBG | I0920 17:07:31.454441   27985 retry.go:31] will retry after 507.991566ms: waiting for machine to come up
	I0920 17:07:31.964134   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:31.964550   27962 main.go:141] libmachine: (ha-135993) DBG | unable to find current IP address of domain ha-135993 in network mk-ha-135993
	I0920 17:07:31.964579   27962 main.go:141] libmachine: (ha-135993) DBG | I0920 17:07:31.964520   27985 retry.go:31] will retry after 921.386836ms: waiting for machine to come up
	I0920 17:07:32.887074   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:32.887532   27962 main.go:141] libmachine: (ha-135993) DBG | unable to find current IP address of domain ha-135993 in network mk-ha-135993
	I0920 17:07:32.887576   27962 main.go:141] libmachine: (ha-135993) DBG | I0920 17:07:32.887477   27985 retry.go:31] will retry after 836.533379ms: waiting for machine to come up
	I0920 17:07:33.725040   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:33.725632   27962 main.go:141] libmachine: (ha-135993) DBG | unable to find current IP address of domain ha-135993 in network mk-ha-135993
	I0920 17:07:33.725663   27962 main.go:141] libmachine: (ha-135993) DBG | I0920 17:07:33.725548   27985 retry.go:31] will retry after 1.249731704s: waiting for machine to come up
	I0920 17:07:34.976928   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:34.977332   27962 main.go:141] libmachine: (ha-135993) DBG | unable to find current IP address of domain ha-135993 in network mk-ha-135993
	I0920 17:07:34.977363   27962 main.go:141] libmachine: (ha-135993) DBG | I0920 17:07:34.977281   27985 retry.go:31] will retry after 1.538905112s: waiting for machine to come up
	I0920 17:07:36.517997   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:36.518523   27962 main.go:141] libmachine: (ha-135993) DBG | unable to find current IP address of domain ha-135993 in network mk-ha-135993
	I0920 17:07:36.518558   27962 main.go:141] libmachine: (ha-135993) DBG | I0920 17:07:36.518494   27985 retry.go:31] will retry after 1.90472576s: waiting for machine to come up
	I0920 17:07:38.424570   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:38.424987   27962 main.go:141] libmachine: (ha-135993) DBG | unable to find current IP address of domain ha-135993 in network mk-ha-135993
	I0920 17:07:38.425014   27962 main.go:141] libmachine: (ha-135993) DBG | I0920 17:07:38.424942   27985 retry.go:31] will retry after 2.741058611s: waiting for machine to come up
	I0920 17:07:41.169975   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:41.170341   27962 main.go:141] libmachine: (ha-135993) DBG | unable to find current IP address of domain ha-135993 in network mk-ha-135993
	I0920 17:07:41.170384   27962 main.go:141] libmachine: (ha-135993) DBG | I0920 17:07:41.170291   27985 retry.go:31] will retry after 3.268233116s: waiting for machine to come up
	I0920 17:07:44.440089   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:44.440457   27962 main.go:141] libmachine: (ha-135993) DBG | unable to find current IP address of domain ha-135993 in network mk-ha-135993
	I0920 17:07:44.440479   27962 main.go:141] libmachine: (ha-135993) DBG | I0920 17:07:44.440421   27985 retry.go:31] will retry after 4.54359632s: waiting for machine to come up
	I0920 17:07:48.986065   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:48.986437   27962 main.go:141] libmachine: (ha-135993) Found IP for machine: 192.168.39.60
	I0920 17:07:48.986462   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has current primary IP address 192.168.39.60 and MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:48.986471   27962 main.go:141] libmachine: (ha-135993) Reserving static IP address...
	I0920 17:07:48.986867   27962 main.go:141] libmachine: (ha-135993) DBG | unable to find host DHCP lease matching {name: "ha-135993", mac: "52:54:00:99:26:09", ip: "192.168.39.60"} in network mk-ha-135993
	I0920 17:07:49.060367   27962 main.go:141] libmachine: (ha-135993) DBG | Getting to WaitForSSH function...
	I0920 17:07:49.060399   27962 main.go:141] libmachine: (ha-135993) Reserved static IP address: 192.168.39.60
	I0920 17:07:49.060416   27962 main.go:141] libmachine: (ha-135993) Waiting for SSH to be available...
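	The retry messages above are libmachine polling libvirt until the freshly created domain obtains a DHCP lease, backing off a little more on each attempt. A rough shell equivalent of that wait, assuming virsh is installed and reusing the network name and MAC address from the log (an illustrative sketch, not minikube code):

	  # Hypothetical: poll the libvirt network's leases until the domain's MAC appears.
	  mac=52:54:00:99:26:09
	  net=mk-ha-135993
	  delay=0.3
	  until virsh net-dhcp-leases "$net" | grep -qi "$mac"; do
	    echo "no lease for $mac yet; retrying in ${delay}s"
	    sleep "$delay"
	    delay=$(echo "$delay * 1.5" | bc)   # crude growing backoff, like the retry.go delays above
	  done
	  virsh net-dhcp-leases "$net" | grep -i "$mac"   # prints the assigned IP (192.168.39.60 here)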
	I0920 17:07:49.063301   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:49.063688   27962 main.go:141] libmachine: (ha-135993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:09", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:07:43 +0000 UTC Type:0 Mac:52:54:00:99:26:09 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:minikube Clientid:01:52:54:00:99:26:09}
	I0920 17:07:49.063720   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined IP address 192.168.39.60 and MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:49.063827   27962 main.go:141] libmachine: (ha-135993) DBG | Using SSH client type: external
	I0920 17:07:49.063851   27962 main.go:141] libmachine: (ha-135993) DBG | Using SSH private key: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993/id_rsa (-rw-------)
	I0920 17:07:49.063904   27962 main.go:141] libmachine: (ha-135993) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.60 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 17:07:49.063928   27962 main.go:141] libmachine: (ha-135993) DBG | About to run SSH command:
	I0920 17:07:49.063942   27962 main.go:141] libmachine: (ha-135993) DBG | exit 0
	I0920 17:07:49.193721   27962 main.go:141] libmachine: (ha-135993) DBG | SSH cmd err, output: <nil>: 
	I0920 17:07:49.194050   27962 main.go:141] libmachine: (ha-135993) KVM machine creation complete!
	I0920 17:07:49.194374   27962 main.go:141] libmachine: (ha-135993) Calling .GetConfigRaw
	I0920 17:07:49.195018   27962 main.go:141] libmachine: (ha-135993) Calling .DriverName
	I0920 17:07:49.195196   27962 main.go:141] libmachine: (ha-135993) Calling .DriverName
	I0920 17:07:49.195368   27962 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0920 17:07:49.195383   27962 main.go:141] libmachine: (ha-135993) Calling .GetState
	I0920 17:07:49.196554   27962 main.go:141] libmachine: Detecting operating system of created instance...
	I0920 17:07:49.196568   27962 main.go:141] libmachine: Waiting for SSH to be available...
	I0920 17:07:49.196573   27962 main.go:141] libmachine: Getting to WaitForSSH function...
	I0920 17:07:49.196578   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHHostname
	I0920 17:07:49.199132   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:49.199593   27962 main.go:141] libmachine: (ha-135993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:09", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:07:43 +0000 UTC Type:0 Mac:52:54:00:99:26:09 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-135993 Clientid:01:52:54:00:99:26:09}
	I0920 17:07:49.199612   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined IP address 192.168.39.60 and MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:49.199789   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHPort
	I0920 17:07:49.199931   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHKeyPath
	I0920 17:07:49.200061   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHKeyPath
	I0920 17:07:49.200187   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHUsername
	I0920 17:07:49.200332   27962 main.go:141] libmachine: Using SSH client type: native
	I0920 17:07:49.200544   27962 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0920 17:07:49.200555   27962 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0920 17:07:49.309150   27962 main.go:141] libmachine: SSH cmd err, output: <nil>: 
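	Both SSH probes above (the external ssh binary first, then the native Go client) simply run `exit 0` in a loop until sshd inside the guest answers. A minimal stand-alone equivalent, assuming the key path and address shown in the log:

	  # Hypothetical: retry a no-op SSH command until the guest's sshd is reachable.
	  key=/home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993/id_rsa
	  until ssh -i "$key" -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	            -o ConnectTimeout=10 docker@192.168.39.60 'exit 0' 2>/dev/null; do
	    sleep 2
	  done
	  echo "SSH is available"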
	I0920 17:07:49.309171   27962 main.go:141] libmachine: Detecting the provisioner...
	I0920 17:07:49.309178   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHHostname
	I0920 17:07:49.311937   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:49.312313   27962 main.go:141] libmachine: (ha-135993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:09", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:07:43 +0000 UTC Type:0 Mac:52:54:00:99:26:09 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-135993 Clientid:01:52:54:00:99:26:09}
	I0920 17:07:49.312340   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined IP address 192.168.39.60 and MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:49.312539   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHPort
	I0920 17:07:49.312760   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHKeyPath
	I0920 17:07:49.312905   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHKeyPath
	I0920 17:07:49.313028   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHUsername
	I0920 17:07:49.313214   27962 main.go:141] libmachine: Using SSH client type: native
	I0920 17:07:49.313445   27962 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0920 17:07:49.313459   27962 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0920 17:07:49.422616   27962 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0920 17:07:49.422713   27962 main.go:141] libmachine: found compatible host: buildroot
	I0920 17:07:49.422725   27962 main.go:141] libmachine: Provisioning with buildroot...
	I0920 17:07:49.422735   27962 main.go:141] libmachine: (ha-135993) Calling .GetMachineName
	I0920 17:07:49.422993   27962 buildroot.go:166] provisioning hostname "ha-135993"
	I0920 17:07:49.423024   27962 main.go:141] libmachine: (ha-135993) Calling .GetMachineName
	I0920 17:07:49.423217   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHHostname
	I0920 17:07:49.425983   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:49.426356   27962 main.go:141] libmachine: (ha-135993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:09", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:07:43 +0000 UTC Type:0 Mac:52:54:00:99:26:09 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-135993 Clientid:01:52:54:00:99:26:09}
	I0920 17:07:49.426386   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined IP address 192.168.39.60 and MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:49.426537   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHPort
	I0920 17:07:49.426731   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHKeyPath
	I0920 17:07:49.426884   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHKeyPath
	I0920 17:07:49.427002   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHUsername
	I0920 17:07:49.427182   27962 main.go:141] libmachine: Using SSH client type: native
	I0920 17:07:49.427358   27962 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0920 17:07:49.427369   27962 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-135993 && echo "ha-135993" | sudo tee /etc/hostname
	I0920 17:07:49.546887   27962 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-135993
	
	I0920 17:07:49.546939   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHHostname
	I0920 17:07:49.549688   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:49.550074   27962 main.go:141] libmachine: (ha-135993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:09", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:07:43 +0000 UTC Type:0 Mac:52:54:00:99:26:09 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-135993 Clientid:01:52:54:00:99:26:09}
	I0920 17:07:49.550101   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined IP address 192.168.39.60 and MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:49.550275   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHPort
	I0920 17:07:49.550460   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHKeyPath
	I0920 17:07:49.550617   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHKeyPath
	I0920 17:07:49.550748   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHUsername
	I0920 17:07:49.550889   27962 main.go:141] libmachine: Using SSH client type: native
	I0920 17:07:49.551094   27962 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0920 17:07:49.551110   27962 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-135993' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-135993/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-135993' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 17:07:49.666876   27962 main.go:141] libmachine: SSH cmd err, output: <nil>: 
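	The two SSH commands above set the transient hostname and then pin it to 127.0.1.1 in /etc/hosts. If you wanted to confirm the result on the guest, something like this would do (illustrative only):

	  # Hypothetical spot check of the hostname provisioning shown above.
	  hostname                                  # expect: ha-135993
	  grep -n '127.0.1.1 ha-135993' /etc/hosts  # expect the pinned entry added above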
	I0920 17:07:49.666908   27962 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19672-8777/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-8777/.minikube}
	I0920 17:07:49.666933   27962 buildroot.go:174] setting up certificates
	I0920 17:07:49.666946   27962 provision.go:84] configureAuth start
	I0920 17:07:49.666956   27962 main.go:141] libmachine: (ha-135993) Calling .GetMachineName
	I0920 17:07:49.667278   27962 main.go:141] libmachine: (ha-135993) Calling .GetIP
	I0920 17:07:49.670314   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:49.670647   27962 main.go:141] libmachine: (ha-135993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:09", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:07:43 +0000 UTC Type:0 Mac:52:54:00:99:26:09 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-135993 Clientid:01:52:54:00:99:26:09}
	I0920 17:07:49.670670   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined IP address 192.168.39.60 and MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:49.670822   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHHostname
	I0920 17:07:49.672840   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:49.673146   27962 main.go:141] libmachine: (ha-135993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:09", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:07:43 +0000 UTC Type:0 Mac:52:54:00:99:26:09 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-135993 Clientid:01:52:54:00:99:26:09}
	I0920 17:07:49.673169   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined IP address 192.168.39.60 and MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:49.673340   27962 provision.go:143] copyHostCerts
	I0920 17:07:49.673366   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem
	I0920 17:07:49.673396   27962 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem, removing ...
	I0920 17:07:49.673411   27962 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem
	I0920 17:07:49.673481   27962 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem (1082 bytes)
	I0920 17:07:49.673583   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem
	I0920 17:07:49.673609   27962 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem, removing ...
	I0920 17:07:49.673619   27962 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem
	I0920 17:07:49.673659   27962 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem (1123 bytes)
	I0920 17:07:49.673727   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem
	I0920 17:07:49.673743   27962 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem, removing ...
	I0920 17:07:49.673749   27962 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem
	I0920 17:07:49.673771   27962 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem (1675 bytes)
	I0920 17:07:49.673820   27962 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem org=jenkins.ha-135993 san=[127.0.0.1 192.168.39.60 ha-135993 localhost minikube]
	I0920 17:07:49.869795   27962 provision.go:177] copyRemoteCerts
	I0920 17:07:49.869886   27962 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 17:07:49.869910   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHHostname
	I0920 17:07:49.872957   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:49.873263   27962 main.go:141] libmachine: (ha-135993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:09", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:07:43 +0000 UTC Type:0 Mac:52:54:00:99:26:09 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-135993 Clientid:01:52:54:00:99:26:09}
	I0920 17:07:49.873287   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined IP address 192.168.39.60 and MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:49.873619   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHPort
	I0920 17:07:49.874014   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHKeyPath
	I0920 17:07:49.874211   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHUsername
	I0920 17:07:49.874372   27962 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993/id_rsa Username:docker}
	I0920 17:07:49.959921   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0920 17:07:49.960005   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0920 17:07:49.984738   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0920 17:07:49.984817   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0920 17:07:50.008778   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0920 17:07:50.008846   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0920 17:07:50.031838   27962 provision.go:87] duration metric: took 364.880224ms to configureAuth
	I0920 17:07:50.031867   27962 buildroot.go:189] setting minikube options for container-runtime
	I0920 17:07:50.032039   27962 config.go:182] Loaded profile config "ha-135993": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 17:07:50.032140   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHHostname
	I0920 17:07:50.034890   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:50.035323   27962 main.go:141] libmachine: (ha-135993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:09", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:07:43 +0000 UTC Type:0 Mac:52:54:00:99:26:09 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-135993 Clientid:01:52:54:00:99:26:09}
	I0920 17:07:50.035358   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined IP address 192.168.39.60 and MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:50.035520   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHPort
	I0920 17:07:50.035689   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHKeyPath
	I0920 17:07:50.035831   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHKeyPath
	I0920 17:07:50.035997   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHUsername
	I0920 17:07:50.036173   27962 main.go:141] libmachine: Using SSH client type: native
	I0920 17:07:50.036358   27962 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0920 17:07:50.036378   27962 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 17:07:50.251754   27962 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 17:07:50.251780   27962 main.go:141] libmachine: Checking connection to Docker...
	I0920 17:07:50.251789   27962 main.go:141] libmachine: (ha-135993) Calling .GetURL
	I0920 17:07:50.253114   27962 main.go:141] libmachine: (ha-135993) DBG | Using libvirt version 6000000
	I0920 17:07:50.254998   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:50.255262   27962 main.go:141] libmachine: (ha-135993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:09", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:07:43 +0000 UTC Type:0 Mac:52:54:00:99:26:09 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-135993 Clientid:01:52:54:00:99:26:09}
	I0920 17:07:50.255284   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined IP address 192.168.39.60 and MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:50.255431   27962 main.go:141] libmachine: Docker is up and running!
	I0920 17:07:50.255453   27962 main.go:141] libmachine: Reticulating splines...
	I0920 17:07:50.255462   27962 client.go:171] duration metric: took 21.943029238s to LocalClient.Create
	I0920 17:07:50.255485   27962 start.go:167] duration metric: took 21.94309612s to libmachine.API.Create "ha-135993"
	I0920 17:07:50.255496   27962 start.go:293] postStartSetup for "ha-135993" (driver="kvm2")
	I0920 17:07:50.255512   27962 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 17:07:50.255535   27962 main.go:141] libmachine: (ha-135993) Calling .DriverName
	I0920 17:07:50.255798   27962 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 17:07:50.255830   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHHostname
	I0920 17:07:50.258006   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:50.258354   27962 main.go:141] libmachine: (ha-135993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:09", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:07:43 +0000 UTC Type:0 Mac:52:54:00:99:26:09 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-135993 Clientid:01:52:54:00:99:26:09}
	I0920 17:07:50.258386   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined IP address 192.168.39.60 and MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:50.258536   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHPort
	I0920 17:07:50.258726   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHKeyPath
	I0920 17:07:50.258853   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHUsername
	I0920 17:07:50.259008   27962 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993/id_rsa Username:docker}
	I0920 17:07:50.343779   27962 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 17:07:50.347644   27962 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 17:07:50.347675   27962 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8777/.minikube/addons for local assets ...
	I0920 17:07:50.347738   27962 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8777/.minikube/files for local assets ...
	I0920 17:07:50.347830   27962 filesync.go:149] local asset: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem -> 159732.pem in /etc/ssl/certs
	I0920 17:07:50.347842   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem -> /etc/ssl/certs/159732.pem
	I0920 17:07:50.347940   27962 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 17:07:50.356818   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem --> /etc/ssl/certs/159732.pem (1708 bytes)
	I0920 17:07:50.380005   27962 start.go:296] duration metric: took 124.491428ms for postStartSetup
	I0920 17:07:50.380073   27962 main.go:141] libmachine: (ha-135993) Calling .GetConfigRaw
	I0920 17:07:50.380667   27962 main.go:141] libmachine: (ha-135993) Calling .GetIP
	I0920 17:07:50.383411   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:50.383719   27962 main.go:141] libmachine: (ha-135993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:09", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:07:43 +0000 UTC Type:0 Mac:52:54:00:99:26:09 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-135993 Clientid:01:52:54:00:99:26:09}
	I0920 17:07:50.383749   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined IP address 192.168.39.60 and MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:50.384003   27962 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/config.json ...
	I0920 17:07:50.384196   27962 start.go:128] duration metric: took 22.090370371s to createHost
	I0920 17:07:50.384222   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHHostname
	I0920 17:07:50.386519   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:50.386950   27962 main.go:141] libmachine: (ha-135993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:09", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:07:43 +0000 UTC Type:0 Mac:52:54:00:99:26:09 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-135993 Clientid:01:52:54:00:99:26:09}
	I0920 17:07:50.386966   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined IP address 192.168.39.60 and MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:50.387165   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHPort
	I0920 17:07:50.387336   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHKeyPath
	I0920 17:07:50.387480   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHKeyPath
	I0920 17:07:50.387623   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHUsername
	I0920 17:07:50.387744   27962 main.go:141] libmachine: Using SSH client type: native
	I0920 17:07:50.387905   27962 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0920 17:07:50.387916   27962 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 17:07:50.498520   27962 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726852070.471027061
	
	I0920 17:07:50.498552   27962 fix.go:216] guest clock: 1726852070.471027061
	I0920 17:07:50.498562   27962 fix.go:229] Guest: 2024-09-20 17:07:50.471027061 +0000 UTC Remote: 2024-09-20 17:07:50.384207902 +0000 UTC m=+22.194917586 (delta=86.819159ms)
	I0920 17:07:50.498623   27962 fix.go:200] guest clock delta is within tolerance: 86.819159ms
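	The fix.go lines above read the guest clock with `date +%s.%N` and compare it against the host time recorded when the command was issued; the machine is accepted because the ~87ms delta is within tolerance. A hedged sketch of the same measurement from the host:

	  # Hypothetical: measure host/guest clock skew the way the log does.
	  key=/home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993/id_rsa
	  host_ts=$(date +%s.%N)
	  guest_ts=$(ssh -i "$key" docker@192.168.39.60 'date +%s.%N')
	  echo "delta: $(echo "$guest_ts - $host_ts" | bc)s"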
	I0920 17:07:50.498637   27962 start.go:83] releasing machines lock for "ha-135993", held for 22.204885202s
	I0920 17:07:50.498672   27962 main.go:141] libmachine: (ha-135993) Calling .DriverName
	I0920 17:07:50.498937   27962 main.go:141] libmachine: (ha-135993) Calling .GetIP
	I0920 17:07:50.501692   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:50.502068   27962 main.go:141] libmachine: (ha-135993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:09", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:07:43 +0000 UTC Type:0 Mac:52:54:00:99:26:09 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-135993 Clientid:01:52:54:00:99:26:09}
	I0920 17:07:50.502095   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined IP address 192.168.39.60 and MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:50.502251   27962 main.go:141] libmachine: (ha-135993) Calling .DriverName
	I0920 17:07:50.502720   27962 main.go:141] libmachine: (ha-135993) Calling .DriverName
	I0920 17:07:50.502881   27962 main.go:141] libmachine: (ha-135993) Calling .DriverName
	I0920 17:07:50.502969   27962 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 17:07:50.503024   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHHostname
	I0920 17:07:50.503115   27962 ssh_runner.go:195] Run: cat /version.json
	I0920 17:07:50.503135   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHHostname
	I0920 17:07:50.505769   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:50.506399   27962 main.go:141] libmachine: (ha-135993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:09", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:07:43 +0000 UTC Type:0 Mac:52:54:00:99:26:09 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-135993 Clientid:01:52:54:00:99:26:09}
	I0920 17:07:50.506780   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:50.506810   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined IP address 192.168.39.60 and MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:50.507015   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHPort
	I0920 17:07:50.507188   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHKeyPath
	I0920 17:07:50.507286   27962 main.go:141] libmachine: (ha-135993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:09", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:07:43 +0000 UTC Type:0 Mac:52:54:00:99:26:09 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-135993 Clientid:01:52:54:00:99:26:09}
	I0920 17:07:50.507312   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined IP address 192.168.39.60 and MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:50.507447   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHUsername
	I0920 17:07:50.507463   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHPort
	I0920 17:07:50.507586   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHKeyPath
	I0920 17:07:50.507587   27962 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993/id_rsa Username:docker}
	I0920 17:07:50.507682   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHUsername
	I0920 17:07:50.507776   27962 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993/id_rsa Username:docker}
	I0920 17:07:50.586773   27962 ssh_runner.go:195] Run: systemctl --version
	I0920 17:07:50.621546   27962 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 17:07:50.780598   27962 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 17:07:50.786517   27962 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 17:07:50.786583   27962 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 17:07:50.802071   27962 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 17:07:50.802094   27962 start.go:495] detecting cgroup driver to use...
	I0920 17:07:50.802161   27962 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 17:07:50.818377   27962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 17:07:50.832630   27962 docker.go:217] disabling cri-docker service (if available) ...
	I0920 17:07:50.832707   27962 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 17:07:50.846087   27962 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 17:07:50.860151   27962 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 17:07:50.975426   27962 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 17:07:51.126213   27962 docker.go:233] disabling docker service ...
	I0920 17:07:51.126291   27962 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 17:07:51.140089   27962 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 17:07:51.152679   27962 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 17:07:51.283500   27962 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 17:07:51.390304   27962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 17:07:51.403627   27962 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 17:07:51.421174   27962 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 17:07:51.421242   27962 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:07:51.431235   27962 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 17:07:51.431310   27962 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:07:51.442561   27962 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:07:51.452862   27962 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:07:51.463189   27962 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 17:07:51.473283   27962 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:07:51.483302   27962 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:07:51.500456   27962 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:07:51.510444   27962 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 17:07:51.519365   27962 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 17:07:51.519445   27962 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 17:07:51.532282   27962 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 17:07:51.541316   27962 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 17:07:51.653648   27962 ssh_runner.go:195] Run: sudo systemctl restart crio
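	The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroupfs driver, conmon cgroup, unprivileged port sysctl) before crio is restarted. A quick, assumed way to inspect the resulting drop-in on the guest:

	  # Hypothetical: show the settings the edits above are expected to leave behind.
	  sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged' \
	       /etc/crio/crio.conf.d/02-crio.conf
	  # expected (assumed) lines:
	  #   pause_image = "registry.k8s.io/pause:3.10"
	  #   cgroup_manager = "cgroupfs"
	  #   conmon_cgroup = "pod"
	  #     "net.ipv4.ip_unprivileged_port_start=0",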
	I0920 17:07:51.739658   27962 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 17:07:51.739747   27962 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 17:07:51.744441   27962 start.go:563] Will wait 60s for crictl version
	I0920 17:07:51.744510   27962 ssh_runner.go:195] Run: which crictl
	I0920 17:07:51.747928   27962 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 17:07:51.785033   27962 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 17:07:51.785130   27962 ssh_runner.go:195] Run: crio --version
	I0920 17:07:51.813367   27962 ssh_runner.go:195] Run: crio --version
	I0920 17:07:51.843606   27962 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 17:07:51.844877   27962 main.go:141] libmachine: (ha-135993) Calling .GetIP
	I0920 17:07:51.847711   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:51.848041   27962 main.go:141] libmachine: (ha-135993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:09", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:07:43 +0000 UTC Type:0 Mac:52:54:00:99:26:09 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-135993 Clientid:01:52:54:00:99:26:09}
	I0920 17:07:51.848067   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined IP address 192.168.39.60 and MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:51.848302   27962 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0920 17:07:51.852330   27962 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 17:07:51.865291   27962 kubeadm.go:883] updating cluster {Name:ha-135993 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:ha-135993 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.60 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 17:07:51.865398   27962 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 17:07:51.865449   27962 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 17:07:51.899883   27962 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0920 17:07:51.899943   27962 ssh_runner.go:195] Run: which lz4
	I0920 17:07:51.903807   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0920 17:07:51.903901   27962 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 17:07:51.907726   27962 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 17:07:51.907767   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0920 17:07:53.234059   27962 crio.go:462] duration metric: took 1.330180344s to copy over tarball
	I0920 17:07:53.234125   27962 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0920 17:07:55.407532   27962 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.173354398s)
	I0920 17:07:55.407570   27962 crio.go:469] duration metric: took 2.173487919s to extract the tarball
	I0920 17:07:55.407579   27962 ssh_runner.go:146] rm: /preloaded.tar.lz4
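	Since no preloaded images were found in the fresh VM, the host-side preload tarball is copied over and unpacked into /var, after which crictl reports all images as present. A condensed shell sketch of that sequence, using the paths from the log (the sketch stages the tarball in /tmp rather than /, which is where the runner writes it):

	  # Hypothetical condensation of the preload copy/extract steps shown above.
	  key=/home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993/id_rsa
	  preload=/home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	  scp -i "$key" "$preload" docker@192.168.39.60:/tmp/preloaded.tar.lz4
	  ssh -i "$key" docker@192.168.39.60 \
	    'sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /tmp/preloaded.tar.lz4 && sudo rm -f /tmp/preloaded.tar.lz4'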
	I0920 17:07:55.444916   27962 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 17:07:55.491028   27962 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 17:07:55.491053   27962 cache_images.go:84] Images are preloaded, skipping loading
	I0920 17:07:55.491061   27962 kubeadm.go:934] updating node { 192.168.39.60 8443 v1.31.1 crio true true} ...
	I0920 17:07:55.491157   27962 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-135993 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.60
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-135993 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 17:07:55.491229   27962 ssh_runner.go:195] Run: crio config
	I0920 17:07:55.542472   27962 cni.go:84] Creating CNI manager for ""
	I0920 17:07:55.542496   27962 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0920 17:07:55.542509   27962 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 17:07:55.542534   27962 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.60 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-135993 NodeName:ha-135993 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.60"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.60 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 17:07:55.542711   27962 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.60
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-135993"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.60
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.60"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
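	The kubeadm config printed above is what later gets written to /var/tmp/minikube/kubeadm.yaml.new on the node: an InitConfiguration, a ClusterConfiguration pointing the control plane at control-plane.minikube.internal:8443, plus KubeletConfiguration and KubeProxyConfiguration documents. If you wanted to sanity-check a config like this by hand, a dry run with the bundled kubeadm binary would be one way (illustrative; not part of the test run):

	  # Hypothetical validation of the generated kubeadm config on the guest.
	  sudo /var/lib/minikube/binaries/v1.31.1/kubeadm init \
	    --config /var/tmp/minikube/kubeadm.yaml.new --dry-run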
	I0920 17:07:55.542744   27962 kube-vip.go:115] generating kube-vip config ...
	I0920 17:07:55.542799   27962 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0920 17:07:55.561052   27962 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0920 17:07:55.561147   27962 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
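	This static-pod manifest is later copied to /etc/kubernetes/manifests/kube-vip.yaml; once kubelet starts it, kube-vip claims the control-plane VIP 192.168.39.254 on eth0 and load-balances the API server port 8443 across control-plane nodes. A quick, assumed way to verify the VIP from the host after the node is up:

	  # Hypothetical: check that kube-vip has attached the VIP on the first control plane.
	  ssh -i /home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993/id_rsa \
	      docker@192.168.39.60 'ip addr show eth0 | grep 192.168.39.254'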
	I0920 17:07:55.561195   27962 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 17:07:55.571044   27962 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 17:07:55.571106   27962 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0920 17:07:55.580660   27962 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0920 17:07:55.598713   27962 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 17:07:55.616229   27962 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0920 17:07:55.634067   27962 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0920 17:07:55.651892   27962 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0920 17:07:55.655923   27962 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 17:07:55.667484   27962 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 17:07:55.788088   27962 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 17:07:55.804588   27962 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993 for IP: 192.168.39.60
	I0920 17:07:55.804611   27962 certs.go:194] generating shared ca certs ...
	I0920 17:07:55.804631   27962 certs.go:226] acquiring lock for ca certs: {Name:mkc7ef6c737c6bdc3fdd9dcff8f57029c020d8f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:07:55.804804   27962 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key
	I0920 17:07:55.804860   27962 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key
	I0920 17:07:55.804874   27962 certs.go:256] generating profile certs ...
	I0920 17:07:55.804946   27962 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/client.key
	I0920 17:07:55.804963   27962 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/client.crt with IP's: []
	I0920 17:07:56.041638   27962 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/client.crt ...
	I0920 17:07:56.041670   27962 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/client.crt: {Name:mk77b02a314748d6817683dcddc9e50a9706a3fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:07:56.041866   27962 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/client.key ...
	I0920 17:07:56.041881   27962 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/client.key: {Name:mkce8a68ad81e086e143b0882e17cc856a54adae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:07:56.042064   27962 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.key.be3ca380
	I0920 17:07:56.042085   27962 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.crt.be3ca380 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.60 192.168.39.254]
	I0920 17:07:56.245960   27962 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.crt.be3ca380 ...
	I0920 17:07:56.245992   27962 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.crt.be3ca380: {Name:mka9503983e8ca6a4d05f68e1a88c79ee07a7913 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:07:56.246164   27962 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.key.be3ca380 ...
	I0920 17:07:56.246181   27962 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.key.be3ca380: {Name:mk892756342d52e742959b6836b3a7605e9575d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:07:56.246306   27962 certs.go:381] copying /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.crt.be3ca380 -> /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.crt
	I0920 17:07:56.246416   27962 certs.go:385] copying /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.key.be3ca380 -> /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.key
	I0920 17:07:56.246500   27962 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/proxy-client.key
	I0920 17:07:56.246524   27962 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/proxy-client.crt with IP's: []
	I0920 17:07:56.401234   27962 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/proxy-client.crt ...
	I0920 17:07:56.401270   27962 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/proxy-client.crt: {Name:mk970b226fef3a4347b937972fcb4fd73f00dc38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:07:56.401441   27962 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/proxy-client.key ...
	I0920 17:07:56.401452   27962 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/proxy-client.key: {Name:mke4168ed8a5ff16fb6768d15dd8e4f984e56621 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:07:56.401519   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0920 17:07:56.401536   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0920 17:07:56.401547   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0920 17:07:56.401558   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0920 17:07:56.401568   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0920 17:07:56.401579   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0920 17:07:56.401588   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0920 17:07:56.401600   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0920 17:07:56.401644   27962 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973.pem (1338 bytes)
	W0920 17:07:56.401677   27962 certs.go:480] ignoring /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973_empty.pem, impossibly tiny 0 bytes
	I0920 17:07:56.401684   27962 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 17:07:56.401706   27962 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem (1082 bytes)
	I0920 17:07:56.401730   27962 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem (1123 bytes)
	I0920 17:07:56.401754   27962 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem (1675 bytes)
	I0920 17:07:56.401789   27962 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem (1708 bytes)
	I0920 17:07:56.401817   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973.pem -> /usr/share/ca-certificates/15973.pem
	I0920 17:07:56.401847   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem -> /usr/share/ca-certificates/159732.pem
	I0920 17:07:56.401862   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:07:56.402409   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 17:07:56.427996   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 17:07:56.451855   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 17:07:56.475801   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0920 17:07:56.499662   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0920 17:07:56.522944   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 17:07:56.548908   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 17:07:56.575686   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 17:07:56.604616   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973.pem --> /usr/share/ca-certificates/15973.pem (1338 bytes)
	I0920 17:07:56.627314   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem --> /usr/share/ca-certificates/159732.pem (1708 bytes)
	I0920 17:07:56.649875   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 17:07:56.673591   27962 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
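
The scp sequence above stages every generated certificate and key at the path the control-plane components expect under /var/lib/minikube/certs, plus the rendered kubeconfig. Below is a minimal Go sketch of the same copy pattern, using local file copies in place of minikube's SSH transfer; the source and destination paths come from the log, the helper itself is illustrative and not minikube's code:

package main

import (
	"io"
	"log"
	"os"
	"path/filepath"
)

// copyFile stands in for the scp step: create the target directory, then
// write the file with owner-only permissions suitable for private keys.
func copyFile(src, dst string) error {
	if err := os.MkdirAll(filepath.Dir(dst), 0o755); err != nil {
		return err
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.OpenFile(dst, os.O_CREATE|os.O_TRUNC|os.O_WRONLY, 0o600)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}

func main() {
	// Two of the source/destination pairs from the log; the full set also
	// covers the CA keys, the proxy-client pair and the extra .pem files.
	files := map[string]string{
		"/home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt":                          "/var/lib/minikube/certs/ca.crt",
		"/home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.crt": "/var/lib/minikube/certs/apiserver.crt",
	}
	for src, dst := range files {
		if err := copyFile(src, dst); err != nil {
			log.Fatal(err)
		}
	}
}
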
	I0920 17:07:56.694627   27962 ssh_runner.go:195] Run: openssl version
	I0920 17:07:56.700654   27962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 17:07:56.711864   27962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:07:56.716521   27962 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:07:56.716587   27962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:07:56.722355   27962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 17:07:56.733975   27962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15973.pem && ln -fs /usr/share/ca-certificates/15973.pem /etc/ssl/certs/15973.pem"
	I0920 17:07:56.745449   27962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15973.pem
	I0920 17:07:56.749937   27962 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 17:04 /usr/share/ca-certificates/15973.pem
	I0920 17:07:56.750010   27962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15973.pem
	I0920 17:07:56.755845   27962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15973.pem /etc/ssl/certs/51391683.0"
	I0920 17:07:56.766910   27962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/159732.pem && ln -fs /usr/share/ca-certificates/159732.pem /etc/ssl/certs/159732.pem"
	I0920 17:07:56.777908   27962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/159732.pem
	I0920 17:07:56.782437   27962 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 17:04 /usr/share/ca-certificates/159732.pem
	I0920 17:07:56.782504   27962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/159732.pem
	I0920 17:07:56.788567   27962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/159732.pem /etc/ssl/certs/3ec20f2e.0"
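
The openssl/ln pairs above implement the standard OpenSSL trust-store layout: "openssl x509 -hash -noout" prints the certificate's subject hash (b5213941 for minikubeCA.pem in this run), and that hash names the /etc/ssl/certs/<hash>.0 symlink that TLS clients look up. A minimal sketch of the same step in Go (hypothetical helper, not minikube's code):

package main

import (
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// hashLink reproduces the openssl/ln pair: the X.509 subject hash names
// the symlink the system trust store uses to find the CA certificate.
func hashLink(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // emulate ln -fs: replace any existing link
	return os.Symlink(pem, link)
}

func main() {
	if err := hashLink("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		log.Fatal(err)
	}
}
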
	I0920 17:07:56.800002   27962 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 17:07:56.804473   27962 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0920 17:07:56.804532   27962 kubeadm.go:392] StartCluster: {Name:ha-135993 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-135993 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.60 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 17:07:56.804601   27962 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 17:07:56.804641   27962 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 17:07:56.847709   27962 cri.go:89] found id: ""
	I0920 17:07:56.847785   27962 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 17:07:56.859005   27962 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 17:07:56.869479   27962 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 17:07:56.879263   27962 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 17:07:56.879288   27962 kubeadm.go:157] found existing configuration files:
	
	I0920 17:07:56.879350   27962 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 17:07:56.888673   27962 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 17:07:56.888748   27962 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 17:07:56.898330   27962 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 17:07:56.908293   27962 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 17:07:56.908361   27962 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 17:07:56.918173   27962 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 17:07:56.926869   27962 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 17:07:56.926939   27962 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 17:07:56.935901   27962 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 17:07:56.944708   27962 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 17:07:56.944774   27962 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
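
The config check above is the stale-kubeconfig cleanup: each existing file under /etc/kubernetes is grepped for the expected control-plane endpoint and removed if it does not reference it (here all four are simply missing, since this is a first start). A minimal local sketch of that loop in Go (illustrative, not the minikube implementation):

package main

import (
	"bytes"
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	endpoint := []byte("https://control-plane.minikube.internal:8443")
	for _, name := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
		path := filepath.Join("/etc/kubernetes", name)
		data, err := os.ReadFile(path)
		if err != nil {
			continue // missing file: nothing to clean up (the first-start case in the log)
		}
		if !bytes.Contains(data, endpoint) {
			fmt.Println("removing stale", path)
			_ = os.Remove(path)
		}
	}
}
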
	I0920 17:07:56.954425   27962 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 17:07:57.049417   27962 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0920 17:07:57.049552   27962 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 17:07:57.158652   27962 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 17:07:57.158798   27962 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 17:07:57.158931   27962 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0920 17:07:57.167722   27962 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 17:07:57.313232   27962 out.go:235]   - Generating certificates and keys ...
	I0920 17:07:57.313352   27962 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 17:07:57.313425   27962 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 17:07:57.313486   27962 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0920 17:07:57.601566   27962 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0920 17:07:57.893152   27962 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0920 17:07:58.140227   27962 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0920 17:07:58.556100   27962 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0920 17:07:58.556284   27962 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-135993 localhost] and IPs [192.168.39.60 127.0.0.1 ::1]
	I0920 17:07:58.800301   27962 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0920 17:07:58.800437   27962 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-135993 localhost] and IPs [192.168.39.60 127.0.0.1 ::1]
	I0920 17:07:58.953666   27962 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0920 17:07:59.106407   27962 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0920 17:07:59.233998   27962 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0920 17:07:59.234129   27962 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 17:07:59.525137   27962 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 17:07:59.766968   27962 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0920 17:08:00.120492   27962 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 17:08:00.216832   27962 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 17:08:00.360049   27962 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 17:08:00.360513   27962 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 17:08:00.363304   27962 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 17:08:00.365927   27962 out.go:235]   - Booting up control plane ...
	I0920 17:08:00.366064   27962 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 17:08:00.366181   27962 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 17:08:00.366311   27962 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 17:08:00.379619   27962 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 17:08:00.385661   27962 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 17:08:00.385729   27962 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 17:08:00.519566   27962 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0920 17:08:00.519711   27962 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0920 17:08:01.020357   27962 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.387016ms
	I0920 17:08:01.020471   27962 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0920 17:08:07.015773   27962 kubeadm.go:310] [api-check] The API server is healthy after 5.999233043s
	I0920 17:08:07.031789   27962 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0920 17:08:07.055338   27962 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0920 17:08:07.096965   27962 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0920 17:08:07.097212   27962 kubeadm.go:310] [mark-control-plane] Marking the node ha-135993 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0920 17:08:07.111378   27962 kubeadm.go:310] [bootstrap-token] Using token: xrduw1.53792puohqvk415u
	I0920 17:08:07.112987   27962 out.go:235]   - Configuring RBAC rules ...
	I0920 17:08:07.113105   27962 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0920 17:08:07.126679   27962 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0920 17:08:07.140129   27962 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0920 17:08:07.144364   27962 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0920 17:08:07.148863   27962 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0920 17:08:07.153587   27962 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0920 17:08:07.423299   27962 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0920 17:08:07.856227   27962 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0920 17:08:08.423318   27962 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0920 17:08:08.423341   27962 kubeadm.go:310] 
	I0920 17:08:08.423388   27962 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0920 17:08:08.423393   27962 kubeadm.go:310] 
	I0920 17:08:08.423477   27962 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0920 17:08:08.423485   27962 kubeadm.go:310] 
	I0920 17:08:08.423525   27962 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0920 17:08:08.423586   27962 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0920 17:08:08.423645   27962 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0920 17:08:08.423658   27962 kubeadm.go:310] 
	I0920 17:08:08.423712   27962 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0920 17:08:08.423722   27962 kubeadm.go:310] 
	I0920 17:08:08.423765   27962 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0920 17:08:08.423774   27962 kubeadm.go:310] 
	I0920 17:08:08.423861   27962 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0920 17:08:08.423966   27962 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0920 17:08:08.424052   27962 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0920 17:08:08.424086   27962 kubeadm.go:310] 
	I0920 17:08:08.424207   27962 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0920 17:08:08.424318   27962 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0920 17:08:08.424327   27962 kubeadm.go:310] 
	I0920 17:08:08.424428   27962 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token xrduw1.53792puohqvk415u \
	I0920 17:08:08.424587   27962 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:736f3fb521855813decb4a7214f2b8beff5f81c3971b60decfa20a8807626a2d \
	I0920 17:08:08.424622   27962 kubeadm.go:310] 	--control-plane 
	I0920 17:08:08.424629   27962 kubeadm.go:310] 
	I0920 17:08:08.424753   27962 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0920 17:08:08.424765   27962 kubeadm.go:310] 
	I0920 17:08:08.424873   27962 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token xrduw1.53792puohqvk415u \
	I0920 17:08:08.425013   27962 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:736f3fb521855813decb4a7214f2b8beff5f81c3971b60decfa20a8807626a2d 
	I0920 17:08:08.425950   27962 kubeadm.go:310] W0920 17:07:57.025597     832 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 17:08:08.426273   27962 kubeadm.go:310] W0920 17:07:57.026508     832 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 17:08:08.426428   27962 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
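
The init run above is driven entirely by the rendered /var/tmp/minikube/kubeadm.yaml; the preflight checks named in the command are ignored because minikube has already validated the VM's resources. A minimal sketch of the same invocation via os/exec (binary path and flags taken from the log, the preflight list shortened, error handling simplified):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("/var/lib/minikube/binaries/v1.31.1/kubeadm",
		"init",
		"--config", "/var/tmp/minikube/kubeadm.yaml",
		"--ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,Port-10250,Swap,NumCPU,Mem",
	)
	// Stream kubeadm's phase output (certs, kubeconfig, control-plane, ...)
	// exactly as it appears in the log above.
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "kubeadm init failed:", err)
		os.Exit(1)
	}
}
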
	I0920 17:08:08.426462   27962 cni.go:84] Creating CNI manager for ""
	I0920 17:08:08.426477   27962 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0920 17:08:08.428341   27962 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0920 17:08:08.429841   27962 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0920 17:08:08.435818   27962 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0920 17:08:08.435838   27962 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0920 17:08:08.455244   27962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0920 17:08:08.799287   27962 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 17:08:08.799381   27962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:08:08.799436   27962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-135993 minikube.k8s.io/updated_at=2024_09_20T17_08_08_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=0626f22cf0d915d75e291a5bce701f94395056e1 minikube.k8s.io/name=ha-135993 minikube.k8s.io/primary=true
	I0920 17:08:08.948517   27962 ops.go:34] apiserver oom_adj: -16
	I0920 17:08:08.948664   27962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:08:09.449228   27962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:08:09.949041   27962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:08:10.449579   27962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:08:10.949086   27962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:08:11.449011   27962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:08:11.949120   27962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:08:12.448969   27962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:08:12.581415   27962 kubeadm.go:1113] duration metric: took 3.782097256s to wait for elevateKubeSystemPrivileges
	I0920 17:08:12.581460   27962 kubeadm.go:394] duration metric: took 15.776931504s to StartCluster
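
The repeated "kubectl get sa default" runs above are a readiness poll: minikube retries roughly every 500ms until the default ServiceAccount exists, which is the signal that the cluster-admin RBAC binding can take effect (about 3.78s in this run). A minimal re-creation of that loop (the two-minute deadline is an illustrative value):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.31.1/kubectl"
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// Exit status 0 means the default ServiceAccount has been created.
		err := exec.Command(kubectl, "get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig").Run()
		if err == nil {
			fmt.Println("default ServiceAccount is ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for default ServiceAccount")
}
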
	I0920 17:08:12.581484   27962 settings.go:142] acquiring lock: {Name:mk2a13c58cdc0faf8cddca5d6716175d45db9bfd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:08:12.581582   27962 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19672-8777/kubeconfig
	I0920 17:08:12.582546   27962 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/kubeconfig: {Name:mkf32a4c736808e023459b2f0e40188618a38db1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:08:12.582827   27962 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0920 17:08:12.582838   27962 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.60 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 17:08:12.582868   27962 start.go:241] waiting for startup goroutines ...
	I0920 17:08:12.582877   27962 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0920 17:08:12.582961   27962 addons.go:69] Setting storage-provisioner=true in profile "ha-135993"
	I0920 17:08:12.582983   27962 addons.go:234] Setting addon storage-provisioner=true in "ha-135993"
	I0920 17:08:12.582992   27962 addons.go:69] Setting default-storageclass=true in profile "ha-135993"
	I0920 17:08:12.583015   27962 host.go:66] Checking if "ha-135993" exists ...
	I0920 17:08:12.583021   27962 config.go:182] Loaded profile config "ha-135993": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 17:08:12.583016   27962 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-135993"
	I0920 17:08:12.583508   27962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:08:12.583545   27962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:08:12.583546   27962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:08:12.583578   27962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:08:12.598612   27962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36417
	I0920 17:08:12.598702   27962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37497
	I0920 17:08:12.599159   27962 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:08:12.599205   27962 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:08:12.599708   27962 main.go:141] libmachine: Using API Version  1
	I0920 17:08:12.599711   27962 main.go:141] libmachine: Using API Version  1
	I0920 17:08:12.599730   27962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:08:12.599732   27962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:08:12.600086   27962 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:08:12.600096   27962 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:08:12.600272   27962 main.go:141] libmachine: (ha-135993) Calling .GetState
	I0920 17:08:12.600654   27962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:08:12.600687   27962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:08:12.602399   27962 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19672-8777/kubeconfig
	I0920 17:08:12.602624   27962 kapi.go:59] client config for ha-135993: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/client.crt", KeyFile:"/home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/client.key", CAFile:"/home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f67ea0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0920 17:08:12.603002   27962 cert_rotation.go:140] Starting client certificate rotation controller
	I0920 17:08:12.603197   27962 addons.go:234] Setting addon default-storageclass=true in "ha-135993"
	I0920 17:08:12.603229   27962 host.go:66] Checking if "ha-135993" exists ...
	I0920 17:08:12.603512   27962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:08:12.603547   27962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:08:12.615990   27962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34795
	I0920 17:08:12.616508   27962 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:08:12.617237   27962 main.go:141] libmachine: Using API Version  1
	I0920 17:08:12.617264   27962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:08:12.617610   27962 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:08:12.617796   27962 main.go:141] libmachine: (ha-135993) Calling .GetState
	I0920 17:08:12.619399   27962 main.go:141] libmachine: (ha-135993) Calling .DriverName
	I0920 17:08:12.621713   27962 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 17:08:12.623141   27962 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 17:08:12.623157   27962 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 17:08:12.623178   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHHostname
	I0920 17:08:12.623273   27962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33981
	I0920 17:08:12.623802   27962 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:08:12.624342   27962 main.go:141] libmachine: Using API Version  1
	I0920 17:08:12.624366   27962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:08:12.624828   27962 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:08:12.625480   27962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:08:12.625530   27962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:08:12.626097   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:08:12.626527   27962 main.go:141] libmachine: (ha-135993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:09", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:07:43 +0000 UTC Type:0 Mac:52:54:00:99:26:09 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-135993 Clientid:01:52:54:00:99:26:09}
	I0920 17:08:12.626552   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined IP address 192.168.39.60 and MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:08:12.626807   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHPort
	I0920 17:08:12.626980   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHKeyPath
	I0920 17:08:12.627125   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHUsername
	I0920 17:08:12.627264   27962 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993/id_rsa Username:docker}
	I0920 17:08:12.642774   27962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36005
	I0920 17:08:12.643262   27962 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:08:12.643818   27962 main.go:141] libmachine: Using API Version  1
	I0920 17:08:12.643841   27962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:08:12.644239   27962 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:08:12.644440   27962 main.go:141] libmachine: (ha-135993) Calling .GetState
	I0920 17:08:12.645924   27962 main.go:141] libmachine: (ha-135993) Calling .DriverName
	I0920 17:08:12.646117   27962 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 17:08:12.646130   27962 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 17:08:12.646144   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHHostname
	I0920 17:08:12.649003   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:08:12.649483   27962 main.go:141] libmachine: (ha-135993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:09", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:07:43 +0000 UTC Type:0 Mac:52:54:00:99:26:09 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-135993 Clientid:01:52:54:00:99:26:09}
	I0920 17:08:12.649502   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined IP address 192.168.39.60 and MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:08:12.649607   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHPort
	I0920 17:08:12.649789   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHKeyPath
	I0920 17:08:12.649942   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHUsername
	I0920 17:08:12.650098   27962 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993/id_rsa Username:docker}
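
Each "new ssh client" entry above opens a key-based SSH session to the node (user docker, key machines/ha-135993/id_rsa, 192.168.39.60:22) so the addon manifests can be copied and applied. A minimal sketch of an equivalent connection with golang.org/x/crypto/ssh; the command run at the end is only an example, not taken from the log:

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VM: host key is not pinned
	}
	client, err := ssh.Dial("tcp", "192.168.39.60:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()
	out, err := session.CombinedOutput("ls /etc/kubernetes/addons")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(string(out))
}
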
	I0920 17:08:12.744585   27962 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0920 17:08:12.762429   27962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 17:08:12.828758   27962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 17:08:13.268354   27962 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
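
The sed pipeline above injects a hosts block into the CoreDNS Corefile so that host.minikube.internal resolves to the host-side gateway (192.168.39.1), and adds "log" to the errors section. The injected fragment, reproduced from the sed expression as a Go constant for readability:

package main

import "fmt"

// hostsStanza is the block the log reports as injected into CoreDNS's
// ConfigMap; it is shown here for readability, not read back from the API.
const hostsStanza = `        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }`

func main() { fmt.Println(hostsStanza) }
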
	I0920 17:08:13.434438   27962 main.go:141] libmachine: Making call to close driver server
	I0920 17:08:13.434476   27962 main.go:141] libmachine: (ha-135993) Calling .Close
	I0920 17:08:13.434519   27962 main.go:141] libmachine: Making call to close driver server
	I0920 17:08:13.434543   27962 main.go:141] libmachine: (ha-135993) Calling .Close
	I0920 17:08:13.434773   27962 main.go:141] libmachine: (ha-135993) DBG | Closing plugin on server side
	I0920 17:08:13.434818   27962 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:08:13.434827   27962 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:08:13.434838   27962 main.go:141] libmachine: Making call to close driver server
	I0920 17:08:13.434847   27962 main.go:141] libmachine: (ha-135993) Calling .Close
	I0920 17:08:13.434882   27962 main.go:141] libmachine: (ha-135993) DBG | Closing plugin on server side
	I0920 17:08:13.434897   27962 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:08:13.434914   27962 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:08:13.434931   27962 main.go:141] libmachine: Making call to close driver server
	I0920 17:08:13.434943   27962 main.go:141] libmachine: (ha-135993) Calling .Close
	I0920 17:08:13.435090   27962 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:08:13.435107   27962 main.go:141] libmachine: (ha-135993) DBG | Closing plugin on server side
	I0920 17:08:13.435115   27962 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:08:13.435168   27962 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:08:13.435183   27962 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:08:13.435240   27962 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0920 17:08:13.435265   27962 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0920 17:08:13.435361   27962 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0920 17:08:13.435370   27962 round_trippers.go:469] Request Headers:
	I0920 17:08:13.435380   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:08:13.435388   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:08:13.451251   27962 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0920 17:08:13.451915   27962 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0920 17:08:13.451933   27962 round_trippers.go:469] Request Headers:
	I0920 17:08:13.451945   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:08:13.451951   27962 round_trippers.go:473]     Content-Type: application/json
	I0920 17:08:13.451959   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:08:13.455819   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:08:13.456046   27962 main.go:141] libmachine: Making call to close driver server
	I0920 17:08:13.456063   27962 main.go:141] libmachine: (ha-135993) Calling .Close
	I0920 17:08:13.456328   27962 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:08:13.456345   27962 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:08:13.457999   27962 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0920 17:08:13.459046   27962 addons.go:510] duration metric: took 876.16629ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0920 17:08:13.459075   27962 start.go:246] waiting for cluster config update ...
	I0920 17:08:13.459086   27962 start.go:255] writing updated cluster config ...
	I0920 17:08:13.460310   27962 out.go:201] 
	I0920 17:08:13.461415   27962 config.go:182] Loaded profile config "ha-135993": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 17:08:13.461487   27962 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/config.json ...
	I0920 17:08:13.462998   27962 out.go:177] * Starting "ha-135993-m02" control-plane node in "ha-135993" cluster
	I0920 17:08:13.463913   27962 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 17:08:13.463932   27962 cache.go:56] Caching tarball of preloaded images
	I0920 17:08:13.464013   27962 preload.go:172] Found /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 17:08:13.464026   27962 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0920 17:08:13.464094   27962 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/config.json ...
	I0920 17:08:13.464275   27962 start.go:360] acquireMachinesLock for ha-135993-m02: {Name:mkfeedb385cf08b5d2aa00913e85815d02a180c2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 17:08:13.464329   27962 start.go:364] duration metric: took 31.835µs to acquireMachinesLock for "ha-135993-m02"
	I0920 17:08:13.464351   27962 start.go:93] Provisioning new machine with config: &{Name:ha-135993 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-135993 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.60 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 17:08:13.464449   27962 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0920 17:08:13.466601   27962 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0920 17:08:13.466688   27962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:08:13.466714   27962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:08:13.482616   27962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46331
	I0920 17:08:13.483161   27962 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:08:13.483661   27962 main.go:141] libmachine: Using API Version  1
	I0920 17:08:13.483682   27962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:08:13.484002   27962 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:08:13.484185   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetMachineName
	I0920 17:08:13.484325   27962 main.go:141] libmachine: (ha-135993-m02) Calling .DriverName
	I0920 17:08:13.484522   27962 start.go:159] libmachine.API.Create for "ha-135993" (driver="kvm2")
	I0920 17:08:13.484544   27962 client.go:168] LocalClient.Create starting
	I0920 17:08:13.484569   27962 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem
	I0920 17:08:13.484600   27962 main.go:141] libmachine: Decoding PEM data...
	I0920 17:08:13.484614   27962 main.go:141] libmachine: Parsing certificate...
	I0920 17:08:13.484662   27962 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem
	I0920 17:08:13.484680   27962 main.go:141] libmachine: Decoding PEM data...
	I0920 17:08:13.484691   27962 main.go:141] libmachine: Parsing certificate...
	I0920 17:08:13.484704   27962 main.go:141] libmachine: Running pre-create checks...
	I0920 17:08:13.484711   27962 main.go:141] libmachine: (ha-135993-m02) Calling .PreCreateCheck
	I0920 17:08:13.484853   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetConfigRaw
	I0920 17:08:13.485217   27962 main.go:141] libmachine: Creating machine...
	I0920 17:08:13.485230   27962 main.go:141] libmachine: (ha-135993-m02) Calling .Create
	I0920 17:08:13.485333   27962 main.go:141] libmachine: (ha-135993-m02) Creating KVM machine...
	I0920 17:08:13.486545   27962 main.go:141] libmachine: (ha-135993-m02) DBG | found existing default KVM network
	I0920 17:08:13.486700   27962 main.go:141] libmachine: (ha-135993-m02) DBG | found existing private KVM network mk-ha-135993
	I0920 17:08:13.486822   27962 main.go:141] libmachine: (ha-135993-m02) Setting up store path in /home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993-m02 ...
	I0920 17:08:13.486843   27962 main.go:141] libmachine: (ha-135993-m02) Building disk image from file:///home/jenkins/minikube-integration/19672-8777/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso
	I0920 17:08:13.486907   27962 main.go:141] libmachine: (ha-135993-m02) DBG | I0920 17:08:13.486794   28324 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19672-8777/.minikube
	I0920 17:08:13.486988   27962 main.go:141] libmachine: (ha-135993-m02) Downloading /home/jenkins/minikube-integration/19672-8777/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19672-8777/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso...
	I0920 17:08:13.739935   27962 main.go:141] libmachine: (ha-135993-m02) DBG | I0920 17:08:13.739800   28324 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993-m02/id_rsa...
	I0920 17:08:13.830603   27962 main.go:141] libmachine: (ha-135993-m02) DBG | I0920 17:08:13.830462   28324 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993-m02/ha-135993-m02.rawdisk...
	I0920 17:08:13.830640   27962 main.go:141] libmachine: (ha-135993-m02) DBG | Writing magic tar header
	I0920 17:08:13.830656   27962 main.go:141] libmachine: (ha-135993-m02) DBG | Writing SSH key tar header
	I0920 17:08:13.830668   27962 main.go:141] libmachine: (ha-135993-m02) DBG | I0920 17:08:13.830608   28324 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993-m02 ...
	I0920 17:08:13.830709   27962 main.go:141] libmachine: (ha-135993-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993-m02
	I0920 17:08:13.830748   27962 main.go:141] libmachine: (ha-135993-m02) Setting executable bit set on /home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993-m02 (perms=drwx------)
	I0920 17:08:13.830769   27962 main.go:141] libmachine: (ha-135993-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-8777/.minikube/machines
	I0920 17:08:13.830782   27962 main.go:141] libmachine: (ha-135993-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-8777/.minikube
	I0920 17:08:13.830799   27962 main.go:141] libmachine: (ha-135993-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-8777
	I0920 17:08:13.830811   27962 main.go:141] libmachine: (ha-135993-m02) Setting executable bit set on /home/jenkins/minikube-integration/19672-8777/.minikube/machines (perms=drwxr-xr-x)
	I0920 17:08:13.830822   27962 main.go:141] libmachine: (ha-135993-m02) Setting executable bit set on /home/jenkins/minikube-integration/19672-8777/.minikube (perms=drwxr-xr-x)
	I0920 17:08:13.830830   27962 main.go:141] libmachine: (ha-135993-m02) Setting executable bit set on /home/jenkins/minikube-integration/19672-8777 (perms=drwxrwxr-x)
	I0920 17:08:13.830839   27962 main.go:141] libmachine: (ha-135993-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0920 17:08:13.830852   27962 main.go:141] libmachine: (ha-135993-m02) DBG | Checking permissions on dir: /home/jenkins
	I0920 17:08:13.830862   27962 main.go:141] libmachine: (ha-135993-m02) DBG | Checking permissions on dir: /home
	I0920 17:08:13.830873   27962 main.go:141] libmachine: (ha-135993-m02) DBG | Skipping /home - not owner
	I0920 17:08:13.830885   27962 main.go:141] libmachine: (ha-135993-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0920 17:08:13.830900   27962 main.go:141] libmachine: (ha-135993-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
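
The "Setting executable bit" lines above walk from the new machine directory up toward /, making sure every owned parent directory is traversable (libvirt needs to reach the raw disk image), and stop where the current user is not the owner (/home in this run). A simplified sketch of that walk, with ownership handling reduced to skip-on-error:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir := "/home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993-m02"
	for dir != "/" {
		info, err := os.Stat(dir)
		if err != nil {
			break
		}
		mode := info.Mode().Perm() | 0o100 // add the owner execute (search) bit
		if err := os.Chmod(dir, mode); err != nil {
			fmt.Println("skipping", dir, "-", err) // e.g. not owner, as for /home in the log
		} else {
			fmt.Printf("set %o on %s\n", mode, dir)
		}
		dir = filepath.Dir(dir)
	}
}
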
	I0920 17:08:13.830909   27962 main.go:141] libmachine: (ha-135993-m02) Creating domain...
	I0920 17:08:13.831832   27962 main.go:141] libmachine: (ha-135993-m02) define libvirt domain using xml: 
	I0920 17:08:13.831858   27962 main.go:141] libmachine: (ha-135993-m02) <domain type='kvm'>
	I0920 17:08:13.831868   27962 main.go:141] libmachine: (ha-135993-m02)   <name>ha-135993-m02</name>
	I0920 17:08:13.831879   27962 main.go:141] libmachine: (ha-135993-m02)   <memory unit='MiB'>2200</memory>
	I0920 17:08:13.831891   27962 main.go:141] libmachine: (ha-135993-m02)   <vcpu>2</vcpu>
	I0920 17:08:13.831897   27962 main.go:141] libmachine: (ha-135993-m02)   <features>
	I0920 17:08:13.831904   27962 main.go:141] libmachine: (ha-135993-m02)     <acpi/>
	I0920 17:08:13.831913   27962 main.go:141] libmachine: (ha-135993-m02)     <apic/>
	I0920 17:08:13.831922   27962 main.go:141] libmachine: (ha-135993-m02)     <pae/>
	I0920 17:08:13.831931   27962 main.go:141] libmachine: (ha-135993-m02)     
	I0920 17:08:13.831943   27962 main.go:141] libmachine: (ha-135993-m02)   </features>
	I0920 17:08:13.831953   27962 main.go:141] libmachine: (ha-135993-m02)   <cpu mode='host-passthrough'>
	I0920 17:08:13.831960   27962 main.go:141] libmachine: (ha-135993-m02)   
	I0920 17:08:13.831967   27962 main.go:141] libmachine: (ha-135993-m02)   </cpu>
	I0920 17:08:13.831975   27962 main.go:141] libmachine: (ha-135993-m02)   <os>
	I0920 17:08:13.831983   27962 main.go:141] libmachine: (ha-135993-m02)     <type>hvm</type>
	I0920 17:08:13.831995   27962 main.go:141] libmachine: (ha-135993-m02)     <boot dev='cdrom'/>
	I0920 17:08:13.832003   27962 main.go:141] libmachine: (ha-135993-m02)     <boot dev='hd'/>
	I0920 17:08:13.832013   27962 main.go:141] libmachine: (ha-135993-m02)     <bootmenu enable='no'/>
	I0920 17:08:13.832023   27962 main.go:141] libmachine: (ha-135993-m02)   </os>
	I0920 17:08:13.832038   27962 main.go:141] libmachine: (ha-135993-m02)   <devices>
	I0920 17:08:13.832051   27962 main.go:141] libmachine: (ha-135993-m02)     <disk type='file' device='cdrom'>
	I0920 17:08:13.832071   27962 main.go:141] libmachine: (ha-135993-m02)       <source file='/home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993-m02/boot2docker.iso'/>
	I0920 17:08:13.832084   27962 main.go:141] libmachine: (ha-135993-m02)       <target dev='hdc' bus='scsi'/>
	I0920 17:08:13.832095   27962 main.go:141] libmachine: (ha-135993-m02)       <readonly/>
	I0920 17:08:13.832104   27962 main.go:141] libmachine: (ha-135993-m02)     </disk>
	I0920 17:08:13.832113   27962 main.go:141] libmachine: (ha-135993-m02)     <disk type='file' device='disk'>
	I0920 17:08:13.832122   27962 main.go:141] libmachine: (ha-135993-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0920 17:08:13.832133   27962 main.go:141] libmachine: (ha-135993-m02)       <source file='/home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993-m02/ha-135993-m02.rawdisk'/>
	I0920 17:08:13.832144   27962 main.go:141] libmachine: (ha-135993-m02)       <target dev='hda' bus='virtio'/>
	I0920 17:08:13.832153   27962 main.go:141] libmachine: (ha-135993-m02)     </disk>
	I0920 17:08:13.832164   27962 main.go:141] libmachine: (ha-135993-m02)     <interface type='network'>
	I0920 17:08:13.832173   27962 main.go:141] libmachine: (ha-135993-m02)       <source network='mk-ha-135993'/>
	I0920 17:08:13.832186   27962 main.go:141] libmachine: (ha-135993-m02)       <model type='virtio'/>
	I0920 17:08:13.832197   27962 main.go:141] libmachine: (ha-135993-m02)     </interface>
	I0920 17:08:13.832209   27962 main.go:141] libmachine: (ha-135993-m02)     <interface type='network'>
	I0920 17:08:13.832217   27962 main.go:141] libmachine: (ha-135993-m02)       <source network='default'/>
	I0920 17:08:13.832232   27962 main.go:141] libmachine: (ha-135993-m02)       <model type='virtio'/>
	I0920 17:08:13.832243   27962 main.go:141] libmachine: (ha-135993-m02)     </interface>
	I0920 17:08:13.832253   27962 main.go:141] libmachine: (ha-135993-m02)     <serial type='pty'>
	I0920 17:08:13.832261   27962 main.go:141] libmachine: (ha-135993-m02)       <target port='0'/>
	I0920 17:08:13.832270   27962 main.go:141] libmachine: (ha-135993-m02)     </serial>
	I0920 17:08:13.832278   27962 main.go:141] libmachine: (ha-135993-m02)     <console type='pty'>
	I0920 17:08:13.832288   27962 main.go:141] libmachine: (ha-135993-m02)       <target type='serial' port='0'/>
	I0920 17:08:13.832293   27962 main.go:141] libmachine: (ha-135993-m02)     </console>
	I0920 17:08:13.832301   27962 main.go:141] libmachine: (ha-135993-m02)     <rng model='virtio'>
	I0920 17:08:13.832311   27962 main.go:141] libmachine: (ha-135993-m02)       <backend model='random'>/dev/random</backend>
	I0920 17:08:13.832320   27962 main.go:141] libmachine: (ha-135993-m02)     </rng>
	I0920 17:08:13.832333   27962 main.go:141] libmachine: (ha-135993-m02)     
	I0920 17:08:13.832354   27962 main.go:141] libmachine: (ha-135993-m02)     
	I0920 17:08:13.832409   27962 main.go:141] libmachine: (ha-135993-m02)   </devices>
	I0920 17:08:13.832434   27962 main.go:141] libmachine: (ha-135993-m02) </domain>
	I0920 17:08:13.832443   27962 main.go:141] libmachine: (ha-135993-m02) 
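The XML above is the libvirt domain definition for the second control-plane node: 2200 MiB of RAM, 2 vCPUs, boot from the boot2docker ISO then the raw disk, two virtio NICs (the cluster network mk-ha-135993 and the libvirt default network), a serial console, and a virtio RNG. As a minimal sketch of the "define ... using xml" / "Creating domain..." steps, the same effect can be had with virsh driven from Go's os/exec; virsh, a reachable qemu:///system connection, and the file path here are assumptions, since minikube actually talks to libvirt through its API rather than the CLI.

package main

import (
	"fmt"
	"os/exec"
)

// defineAndStart registers a domain from an XML description and boots it.
func defineAndStart(xmlPath, name string) error {
	// "virsh define" persists the domain from its XML description.
	if out, err := exec.Command("virsh", "-c", "qemu:///system", "define", xmlPath).CombinedOutput(); err != nil {
		return fmt.Errorf("virsh define %s: %v: %s", xmlPath, err, out)
	}
	// "virsh start" boots the newly defined domain.
	if out, err := exec.Command("virsh", "-c", "qemu:///system", "start", name).CombinedOutput(); err != nil {
		return fmt.Errorf("virsh start %s: %v: %s", name, err, out)
	}
	return nil
}

func main() {
	if err := defineAndStart("/tmp/ha-135993-m02.xml", "ha-135993-m02"); err != nil {
		fmt.Println(err)
	}
}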
	I0920 17:08:13.839347   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined MAC address 52:54:00:40:3b:17 in network default
	I0920 17:08:13.839981   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:13.840002   27962 main.go:141] libmachine: (ha-135993-m02) Ensuring networks are active...
	I0920 17:08:13.840774   27962 main.go:141] libmachine: (ha-135993-m02) Ensuring network default is active
	I0920 17:08:13.841013   27962 main.go:141] libmachine: (ha-135993-m02) Ensuring network mk-ha-135993 is active
	I0920 17:08:13.841381   27962 main.go:141] libmachine: (ha-135993-m02) Getting domain xml...
	I0920 17:08:13.842134   27962 main.go:141] libmachine: (ha-135993-m02) Creating domain...
	I0920 17:08:15.062497   27962 main.go:141] libmachine: (ha-135993-m02) Waiting to get IP...
	I0920 17:08:15.063280   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:15.063771   27962 main.go:141] libmachine: (ha-135993-m02) DBG | unable to find current IP address of domain ha-135993-m02 in network mk-ha-135993
	I0920 17:08:15.063837   27962 main.go:141] libmachine: (ha-135993-m02) DBG | I0920 17:08:15.063776   28324 retry.go:31] will retry after 209.317935ms: waiting for machine to come up
	I0920 17:08:15.275351   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:15.275800   27962 main.go:141] libmachine: (ha-135993-m02) DBG | unable to find current IP address of domain ha-135993-m02 in network mk-ha-135993
	I0920 17:08:15.275825   27962 main.go:141] libmachine: (ha-135993-m02) DBG | I0920 17:08:15.275759   28324 retry.go:31] will retry after 321.648558ms: waiting for machine to come up
	I0920 17:08:15.599294   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:15.599955   27962 main.go:141] libmachine: (ha-135993-m02) DBG | unable to find current IP address of domain ha-135993-m02 in network mk-ha-135993
	I0920 17:08:15.599981   27962 main.go:141] libmachine: (ha-135993-m02) DBG | I0920 17:08:15.599902   28324 retry.go:31] will retry after 379.94005ms: waiting for machine to come up
	I0920 17:08:15.981649   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:15.982207   27962 main.go:141] libmachine: (ha-135993-m02) DBG | unable to find current IP address of domain ha-135993-m02 in network mk-ha-135993
	I0920 17:08:15.982258   27962 main.go:141] libmachine: (ha-135993-m02) DBG | I0920 17:08:15.982185   28324 retry.go:31] will retry after 407.2672ms: waiting for machine to come up
	I0920 17:08:16.390723   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:16.391164   27962 main.go:141] libmachine: (ha-135993-m02) DBG | unable to find current IP address of domain ha-135993-m02 in network mk-ha-135993
	I0920 17:08:16.391190   27962 main.go:141] libmachine: (ha-135993-m02) DBG | I0920 17:08:16.391121   28324 retry.go:31] will retry after 540.634265ms: waiting for machine to come up
	I0920 17:08:16.933924   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:16.934354   27962 main.go:141] libmachine: (ha-135993-m02) DBG | unable to find current IP address of domain ha-135993-m02 in network mk-ha-135993
	I0920 17:08:16.934380   27962 main.go:141] libmachine: (ha-135993-m02) DBG | I0920 17:08:16.934280   28324 retry.go:31] will retry after 944.239732ms: waiting for machine to come up
	I0920 17:08:17.880458   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:17.880905   27962 main.go:141] libmachine: (ha-135993-m02) DBG | unable to find current IP address of domain ha-135993-m02 in network mk-ha-135993
	I0920 17:08:17.880937   27962 main.go:141] libmachine: (ha-135993-m02) DBG | I0920 17:08:17.880855   28324 retry.go:31] will retry after 1.092727798s: waiting for machine to come up
	I0920 17:08:18.975422   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:18.975784   27962 main.go:141] libmachine: (ha-135993-m02) DBG | unable to find current IP address of domain ha-135993-m02 in network mk-ha-135993
	I0920 17:08:18.975813   27962 main.go:141] libmachine: (ha-135993-m02) DBG | I0920 17:08:18.975727   28324 retry.go:31] will retry after 1.481134943s: waiting for machine to come up
	I0920 17:08:20.459346   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:20.459802   27962 main.go:141] libmachine: (ha-135993-m02) DBG | unable to find current IP address of domain ha-135993-m02 in network mk-ha-135993
	I0920 17:08:20.459819   27962 main.go:141] libmachine: (ha-135993-m02) DBG | I0920 17:08:20.459747   28324 retry.go:31] will retry after 1.808510088s: waiting for machine to come up
	I0920 17:08:22.270788   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:22.271210   27962 main.go:141] libmachine: (ha-135993-m02) DBG | unable to find current IP address of domain ha-135993-m02 in network mk-ha-135993
	I0920 17:08:22.271239   27962 main.go:141] libmachine: (ha-135993-m02) DBG | I0920 17:08:22.271135   28324 retry.go:31] will retry after 1.59499674s: waiting for machine to come up
	I0920 17:08:23.868039   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:23.868429   27962 main.go:141] libmachine: (ha-135993-m02) DBG | unable to find current IP address of domain ha-135993-m02 in network mk-ha-135993
	I0920 17:08:23.868456   27962 main.go:141] libmachine: (ha-135993-m02) DBG | I0920 17:08:23.868389   28324 retry.go:31] will retry after 2.718058875s: waiting for machine to come up
	I0920 17:08:26.587523   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:26.588013   27962 main.go:141] libmachine: (ha-135993-m02) DBG | unable to find current IP address of domain ha-135993-m02 in network mk-ha-135993
	I0920 17:08:26.588042   27962 main.go:141] libmachine: (ha-135993-m02) DBG | I0920 17:08:26.587966   28324 retry.go:31] will retry after 2.496735484s: waiting for machine to come up
	I0920 17:08:29.085932   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:29.086306   27962 main.go:141] libmachine: (ha-135993-m02) DBG | unable to find current IP address of domain ha-135993-m02 in network mk-ha-135993
	I0920 17:08:29.086335   27962 main.go:141] libmachine: (ha-135993-m02) DBG | I0920 17:08:29.086239   28324 retry.go:31] will retry after 2.750361097s: waiting for machine to come up
	I0920 17:08:31.838828   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:31.839392   27962 main.go:141] libmachine: (ha-135993-m02) DBG | unable to find current IP address of domain ha-135993-m02 in network mk-ha-135993
	I0920 17:08:31.839414   27962 main.go:141] libmachine: (ha-135993-m02) DBG | I0920 17:08:31.839344   28324 retry.go:31] will retry after 4.254809645s: waiting for machine to come up
	I0920 17:08:36.096360   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:36.096729   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has current primary IP address 192.168.39.227 and MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:36.096746   27962 main.go:141] libmachine: (ha-135993-m02) Found IP for machine: 192.168.39.227
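The retry.go loop above polls the DHCP lease table with a growing (jittered) backoff until a lease for MAC 52:54:00:87:dc:24 appears, roughly 21 seconds after the domain was started. A minimal sketch of the same wait-for-IP pattern, using virsh domifaddr to read the lease information, is below; the domain name, connection URI, timeout, and backoff values are assumptions for illustration.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForIP polls libvirt for the guest's DHCP-assigned IPv4 address.
func waitForIP(domain string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	backoff := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		// "virsh domifaddr" lists addresses libvirt has seen for the guest's interfaces.
		out, err := exec.Command("virsh", "-c", "qemu:///system", "domifaddr", domain).Output()
		if err == nil {
			for _, line := range strings.Split(string(out), "\n") {
				if strings.Contains(line, "ipv4") {
					fields := strings.Fields(line)
					addr := fields[len(fields)-1]           // e.g. 192.168.39.227/24
					return strings.Split(addr, "/")[0], nil // strip the prefix length
				}
			}
		}
		time.Sleep(backoff)
		if backoff < 5*time.Second {
			backoff *= 2 // grow the wait between attempts, roughly as retry.go does with jitter
		}
	}
	return "", fmt.Errorf("no IP for %s within %s", domain, timeout)
}

func main() {
	ip, err := waitForIP("ha-135993-m02", 2*time.Minute)
	fmt.Println(ip, err)
}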
	I0920 17:08:36.096755   27962 main.go:141] libmachine: (ha-135993-m02) Reserving static IP address...
	I0920 17:08:36.097098   27962 main.go:141] libmachine: (ha-135993-m02) DBG | unable to find host DHCP lease matching {name: "ha-135993-m02", mac: "52:54:00:87:dc:24", ip: "192.168.39.227"} in network mk-ha-135993
	I0920 17:08:36.167513   27962 main.go:141] libmachine: (ha-135993-m02) DBG | Getting to WaitForSSH function...
	I0920 17:08:36.167545   27962 main.go:141] libmachine: (ha-135993-m02) Reserved static IP address: 192.168.39.227
	I0920 17:08:36.167558   27962 main.go:141] libmachine: (ha-135993-m02) Waiting for SSH to be available...
	I0920 17:08:36.170087   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:36.170491   27962 main.go:141] libmachine: (ha-135993-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:dc:24", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:08:28 +0000 UTC Type:0 Mac:52:54:00:87:dc:24 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:minikube Clientid:01:52:54:00:87:dc:24}
	I0920 17:08:36.170519   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:36.170690   27962 main.go:141] libmachine: (ha-135993-m02) DBG | Using SSH client type: external
	I0920 17:08:36.170712   27962 main.go:141] libmachine: (ha-135993-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993-m02/id_rsa (-rw-------)
	I0920 17:08:36.170731   27962 main.go:141] libmachine: (ha-135993-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.227 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 17:08:36.170745   27962 main.go:141] libmachine: (ha-135993-m02) DBG | About to run SSH command:
	I0920 17:08:36.170753   27962 main.go:141] libmachine: (ha-135993-m02) DBG | exit 0
	I0920 17:08:36.294607   27962 main.go:141] libmachine: (ha-135993-m02) DBG | SSH cmd err, output: <nil>: 
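As the lines above show, SSH readiness is probed by running a bare "exit 0" through the external ssh client with host-key checking disabled and the machine's generated private key; an exit status of 0 means sshd is up and the key is accepted. A minimal sketch of that probe follows; the user, host, key path, retry count, and sleep interval are illustrative assumptions.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForSSH retries "ssh ... exit 0" until it succeeds or attempts run out.
func waitForSSH(user, host, keyPath string, attempts int) error {
	for i := 0; i < attempts; i++ {
		cmd := exec.Command("ssh",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "ConnectTimeout=10",
			"-i", keyPath,
			fmt.Sprintf("%s@%s", user, host),
			"exit 0") // success means the daemon is reachable and the key works
		if err := cmd.Run(); err == nil {
			return nil
		}
		time.Sleep(3 * time.Second)
	}
	return fmt.Errorf("ssh to %s@%s never became available", user, host)
}

func main() {
	fmt.Println(waitForSSH("docker", "192.168.39.227",
		"/home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993-m02/id_rsa", 20))
}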
	I0920 17:08:36.294933   27962 main.go:141] libmachine: (ha-135993-m02) KVM machine creation complete!
	I0920 17:08:36.295321   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetConfigRaw
	I0920 17:08:36.295951   27962 main.go:141] libmachine: (ha-135993-m02) Calling .DriverName
	I0920 17:08:36.296272   27962 main.go:141] libmachine: (ha-135993-m02) Calling .DriverName
	I0920 17:08:36.296483   27962 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0920 17:08:36.296509   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetState
	I0920 17:08:36.298367   27962 main.go:141] libmachine: Detecting operating system of created instance...
	I0920 17:08:36.298385   27962 main.go:141] libmachine: Waiting for SSH to be available...
	I0920 17:08:36.298392   27962 main.go:141] libmachine: Getting to WaitForSSH function...
	I0920 17:08:36.298400   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHHostname
	I0920 17:08:36.301173   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:36.301568   27962 main.go:141] libmachine: (ha-135993-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:dc:24", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:08:28 +0000 UTC Type:0 Mac:52:54:00:87:dc:24 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-135993-m02 Clientid:01:52:54:00:87:dc:24}
	I0920 17:08:36.301596   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:36.301712   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHPort
	I0920 17:08:36.301889   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHKeyPath
	I0920 17:08:36.302037   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHKeyPath
	I0920 17:08:36.302163   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHUsername
	I0920 17:08:36.302363   27962 main.go:141] libmachine: Using SSH client type: native
	I0920 17:08:36.302570   27962 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0920 17:08:36.302587   27962 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0920 17:08:36.409296   27962 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 17:08:36.409321   27962 main.go:141] libmachine: Detecting the provisioner...
	I0920 17:08:36.409329   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHHostname
	I0920 17:08:36.412054   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:36.412453   27962 main.go:141] libmachine: (ha-135993-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:dc:24", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:08:28 +0000 UTC Type:0 Mac:52:54:00:87:dc:24 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-135993-m02 Clientid:01:52:54:00:87:dc:24}
	I0920 17:08:36.412473   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:36.412680   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHPort
	I0920 17:08:36.412859   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHKeyPath
	I0920 17:08:36.413003   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHKeyPath
	I0920 17:08:36.413158   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHUsername
	I0920 17:08:36.413299   27962 main.go:141] libmachine: Using SSH client type: native
	I0920 17:08:36.413464   27962 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0920 17:08:36.413474   27962 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0920 17:08:36.522550   27962 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0920 17:08:36.522639   27962 main.go:141] libmachine: found compatible host: buildroot
	I0920 17:08:36.522653   27962 main.go:141] libmachine: Provisioning with buildroot...
	I0920 17:08:36.522668   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetMachineName
	I0920 17:08:36.522875   27962 buildroot.go:166] provisioning hostname "ha-135993-m02"
	I0920 17:08:36.522896   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetMachineName
	I0920 17:08:36.523039   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHHostname
	I0920 17:08:36.525697   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:36.526081   27962 main.go:141] libmachine: (ha-135993-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:dc:24", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:08:28 +0000 UTC Type:0 Mac:52:54:00:87:dc:24 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-135993-m02 Clientid:01:52:54:00:87:dc:24}
	I0920 17:08:36.526108   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:36.526279   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHPort
	I0920 17:08:36.526447   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHKeyPath
	I0920 17:08:36.526596   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHKeyPath
	I0920 17:08:36.526717   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHUsername
	I0920 17:08:36.526893   27962 main.go:141] libmachine: Using SSH client type: native
	I0920 17:08:36.527091   27962 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0920 17:08:36.527103   27962 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-135993-m02 && echo "ha-135993-m02" | sudo tee /etc/hostname
	I0920 17:08:36.648108   27962 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-135993-m02
	
	I0920 17:08:36.648139   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHHostname
	I0920 17:08:36.651735   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:36.652103   27962 main.go:141] libmachine: (ha-135993-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:dc:24", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:08:28 +0000 UTC Type:0 Mac:52:54:00:87:dc:24 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-135993-m02 Clientid:01:52:54:00:87:dc:24}
	I0920 17:08:36.652141   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:36.652372   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHPort
	I0920 17:08:36.652553   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHKeyPath
	I0920 17:08:36.652726   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHKeyPath
	I0920 17:08:36.652907   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHUsername
	I0920 17:08:36.653066   27962 main.go:141] libmachine: Using SSH client type: native
	I0920 17:08:36.653241   27962 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0920 17:08:36.653262   27962 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-135993-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-135993-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-135993-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 17:08:36.767084   27962 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 17:08:36.767120   27962 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19672-8777/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-8777/.minikube}
	I0920 17:08:36.767142   27962 buildroot.go:174] setting up certificates
	I0920 17:08:36.767150   27962 provision.go:84] configureAuth start
	I0920 17:08:36.767159   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetMachineName
	I0920 17:08:36.767459   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetIP
	I0920 17:08:36.770189   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:36.770520   27962 main.go:141] libmachine: (ha-135993-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:dc:24", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:08:28 +0000 UTC Type:0 Mac:52:54:00:87:dc:24 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-135993-m02 Clientid:01:52:54:00:87:dc:24}
	I0920 17:08:36.770547   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:36.770672   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHHostname
	I0920 17:08:36.772567   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:36.772866   27962 main.go:141] libmachine: (ha-135993-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:dc:24", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:08:28 +0000 UTC Type:0 Mac:52:54:00:87:dc:24 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-135993-m02 Clientid:01:52:54:00:87:dc:24}
	I0920 17:08:36.772893   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:36.773001   27962 provision.go:143] copyHostCerts
	I0920 17:08:36.773032   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem
	I0920 17:08:36.773066   27962 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem, removing ...
	I0920 17:08:36.773075   27962 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem
	I0920 17:08:36.773139   27962 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem (1082 bytes)
	I0920 17:08:36.773212   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem
	I0920 17:08:36.773230   27962 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem, removing ...
	I0920 17:08:36.773237   27962 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem
	I0920 17:08:36.773260   27962 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem (1123 bytes)
	I0920 17:08:36.773312   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem
	I0920 17:08:36.773331   27962 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem, removing ...
	I0920 17:08:36.773337   27962 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem
	I0920 17:08:36.773357   27962 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem (1675 bytes)
	I0920 17:08:36.773424   27962 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem org=jenkins.ha-135993-m02 san=[127.0.0.1 192.168.39.227 ha-135993-m02 localhost minikube]
	I0920 17:08:36.941019   27962 provision.go:177] copyRemoteCerts
	I0920 17:08:36.941075   27962 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 17:08:36.941096   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHHostname
	I0920 17:08:36.943678   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:36.944038   27962 main.go:141] libmachine: (ha-135993-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:dc:24", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:08:28 +0000 UTC Type:0 Mac:52:54:00:87:dc:24 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-135993-m02 Clientid:01:52:54:00:87:dc:24}
	I0920 17:08:36.944072   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:36.944262   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHPort
	I0920 17:08:36.944447   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHKeyPath
	I0920 17:08:36.944600   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHUsername
	I0920 17:08:36.944758   27962 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993-m02/id_rsa Username:docker}
	I0920 17:08:37.028603   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0920 17:08:37.028690   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0920 17:08:37.052665   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0920 17:08:37.052750   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0920 17:08:37.077892   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0920 17:08:37.077976   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0920 17:08:37.100815   27962 provision.go:87] duration metric: took 333.648023ms to configureAuth
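configureAuth above copies the host CA material, issues a server certificate whose SANs cover 127.0.0.1, the node IP 192.168.39.227, and the host names ha-135993-m02/localhost/minikube, and scps server-key.pem, ca.pem, and server.pem into /etc/docker on the guest. The sketch below reproduces just the SAN-bearing certificate step with Go's crypto/x509; the throwaway in-memory CA stands in for minikube's ca.pem/ca-key.pem, and none of this is the actual provision.go code.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

// must keeps the sketch compact; a real provisioner would return the errors.
func must[T any](v T, err error) T {
	if err != nil {
		panic(err)
	}
	return v
}

func main() {
	// Throwaway CA key/cert, standing in for minikube's ca-key.pem / ca.pem.
	caKey := must(rsa.GenerateKey(rand.Reader, 2048))
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caCert := must(x509.ParseCertificate(
		must(x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey))))

	// Server certificate whose SANs mirror the san=[...] list in the log.
	srvKey := must(rsa.GenerateKey(rand.Reader, 2048))
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-135993-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.227")},
		DNSNames:     []string{"ha-135993-m02", "localhost", "minikube"},
	}
	der := must(x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey))
	fmt.Printf("issued server cert, %d DER bytes\n", len(der))
}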
	I0920 17:08:37.100849   27962 buildroot.go:189] setting minikube options for container-runtime
	I0920 17:08:37.101060   27962 config.go:182] Loaded profile config "ha-135993": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 17:08:37.101132   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHHostname
	I0920 17:08:37.103680   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:37.104025   27962 main.go:141] libmachine: (ha-135993-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:dc:24", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:08:28 +0000 UTC Type:0 Mac:52:54:00:87:dc:24 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-135993-m02 Clientid:01:52:54:00:87:dc:24}
	I0920 17:08:37.104065   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:37.104260   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHPort
	I0920 17:08:37.104442   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHKeyPath
	I0920 17:08:37.104572   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHKeyPath
	I0920 17:08:37.104716   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHUsername
	I0920 17:08:37.104930   27962 main.go:141] libmachine: Using SSH client type: native
	I0920 17:08:37.105131   27962 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0920 17:08:37.105151   27962 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 17:08:37.328322   27962 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 17:08:37.328359   27962 main.go:141] libmachine: Checking connection to Docker...
	I0920 17:08:37.328371   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetURL
	I0920 17:08:37.329623   27962 main.go:141] libmachine: (ha-135993-m02) DBG | Using libvirt version 6000000
	I0920 17:08:37.331823   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:37.332143   27962 main.go:141] libmachine: (ha-135993-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:dc:24", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:08:28 +0000 UTC Type:0 Mac:52:54:00:87:dc:24 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-135993-m02 Clientid:01:52:54:00:87:dc:24}
	I0920 17:08:37.332167   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:37.332339   27962 main.go:141] libmachine: Docker is up and running!
	I0920 17:08:37.332353   27962 main.go:141] libmachine: Reticulating splines...
	I0920 17:08:37.332361   27962 client.go:171] duration metric: took 23.847807748s to LocalClient.Create
	I0920 17:08:37.332387   27962 start.go:167] duration metric: took 23.84786362s to libmachine.API.Create "ha-135993"
	I0920 17:08:37.332399   27962 start.go:293] postStartSetup for "ha-135993-m02" (driver="kvm2")
	I0920 17:08:37.332415   27962 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 17:08:37.332439   27962 main.go:141] libmachine: (ha-135993-m02) Calling .DriverName
	I0920 17:08:37.332705   27962 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 17:08:37.332736   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHHostname
	I0920 17:08:37.334802   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:37.335108   27962 main.go:141] libmachine: (ha-135993-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:dc:24", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:08:28 +0000 UTC Type:0 Mac:52:54:00:87:dc:24 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-135993-m02 Clientid:01:52:54:00:87:dc:24}
	I0920 17:08:37.335134   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:37.335218   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHPort
	I0920 17:08:37.335362   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHKeyPath
	I0920 17:08:37.335477   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHUsername
	I0920 17:08:37.335595   27962 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993-m02/id_rsa Username:docker}
	I0920 17:08:37.416843   27962 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 17:08:37.421359   27962 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 17:08:37.421384   27962 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8777/.minikube/addons for local assets ...
	I0920 17:08:37.421448   27962 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8777/.minikube/files for local assets ...
	I0920 17:08:37.421538   27962 filesync.go:149] local asset: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem -> 159732.pem in /etc/ssl/certs
	I0920 17:08:37.421549   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem -> /etc/ssl/certs/159732.pem
	I0920 17:08:37.421657   27962 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 17:08:37.431863   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem --> /etc/ssl/certs/159732.pem (1708 bytes)
	I0920 17:08:37.454586   27962 start.go:296] duration metric: took 122.170431ms for postStartSetup
	I0920 17:08:37.454638   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetConfigRaw
	I0920 17:08:37.455188   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetIP
	I0920 17:08:37.457599   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:37.457923   27962 main.go:141] libmachine: (ha-135993-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:dc:24", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:08:28 +0000 UTC Type:0 Mac:52:54:00:87:dc:24 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-135993-m02 Clientid:01:52:54:00:87:dc:24}
	I0920 17:08:37.457945   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:37.458188   27962 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/config.json ...
	I0920 17:08:37.458382   27962 start.go:128] duration metric: took 23.993921825s to createHost
	I0920 17:08:37.458410   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHHostname
	I0920 17:08:37.460848   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:37.461348   27962 main.go:141] libmachine: (ha-135993-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:dc:24", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:08:28 +0000 UTC Type:0 Mac:52:54:00:87:dc:24 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-135993-m02 Clientid:01:52:54:00:87:dc:24}
	I0920 17:08:37.461378   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:37.461561   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHPort
	I0920 17:08:37.461755   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHKeyPath
	I0920 17:08:37.461935   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHKeyPath
	I0920 17:08:37.462069   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHUsername
	I0920 17:08:37.462223   27962 main.go:141] libmachine: Using SSH client type: native
	I0920 17:08:37.462383   27962 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0920 17:08:37.462392   27962 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 17:08:37.570351   27962 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726852117.546992904
	
	I0920 17:08:37.570372   27962 fix.go:216] guest clock: 1726852117.546992904
	I0920 17:08:37.570379   27962 fix.go:229] Guest: 2024-09-20 17:08:37.546992904 +0000 UTC Remote: 2024-09-20 17:08:37.458395452 +0000 UTC m=+69.269105040 (delta=88.597452ms)
	I0920 17:08:37.570394   27962 fix.go:200] guest clock delta is within tolerance: 88.597452ms
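For reference, the delta reported here is simply guest wall clock minus host wall clock: 1726852117.546992904 s - 1726852117.458395452 s = 0.088597452 s, i.e. the 88.597452ms the log prints, comfortably inside the clock-skew tolerance check in fix.go.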
	I0920 17:08:37.570398   27962 start.go:83] releasing machines lock for "ha-135993-m02", held for 24.10605904s
	I0920 17:08:37.570419   27962 main.go:141] libmachine: (ha-135993-m02) Calling .DriverName
	I0920 17:08:37.570730   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetIP
	I0920 17:08:37.573185   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:37.573501   27962 main.go:141] libmachine: (ha-135993-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:dc:24", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:08:28 +0000 UTC Type:0 Mac:52:54:00:87:dc:24 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-135993-m02 Clientid:01:52:54:00:87:dc:24}
	I0920 17:08:37.573529   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:37.576260   27962 out.go:177] * Found network options:
	I0920 17:08:37.577727   27962 out.go:177]   - NO_PROXY=192.168.39.60
	W0920 17:08:37.578902   27962 proxy.go:119] fail to check proxy env: Error ip not in block
	I0920 17:08:37.578937   27962 main.go:141] libmachine: (ha-135993-m02) Calling .DriverName
	I0920 17:08:37.579631   27962 main.go:141] libmachine: (ha-135993-m02) Calling .DriverName
	I0920 17:08:37.579801   27962 main.go:141] libmachine: (ha-135993-m02) Calling .DriverName
	I0920 17:08:37.579884   27962 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 17:08:37.579926   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHHostname
	W0920 17:08:37.580027   27962 proxy.go:119] fail to check proxy env: Error ip not in block
	I0920 17:08:37.580105   27962 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 17:08:37.580127   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHHostname
	I0920 17:08:37.582896   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:37.583131   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:37.583396   27962 main.go:141] libmachine: (ha-135993-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:dc:24", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:08:28 +0000 UTC Type:0 Mac:52:54:00:87:dc:24 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-135993-m02 Clientid:01:52:54:00:87:dc:24}
	I0920 17:08:37.583422   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:37.583562   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHPort
	I0920 17:08:37.583712   27962 main.go:141] libmachine: (ha-135993-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:dc:24", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:08:28 +0000 UTC Type:0 Mac:52:54:00:87:dc:24 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-135993-m02 Clientid:01:52:54:00:87:dc:24}
	I0920 17:08:37.583731   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:37.583738   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHKeyPath
	I0920 17:08:37.583921   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHUsername
	I0920 17:08:37.583953   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHPort
	I0920 17:08:37.584099   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHKeyPath
	I0920 17:08:37.584097   27962 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993-m02/id_rsa Username:docker}
	I0920 17:08:37.584246   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHUsername
	I0920 17:08:37.584390   27962 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993-m02/id_rsa Username:docker}
	I0920 17:08:37.841918   27962 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 17:08:37.847702   27962 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 17:08:37.847782   27962 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 17:08:37.865314   27962 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 17:08:37.865341   27962 start.go:495] detecting cgroup driver to use...
	I0920 17:08:37.865402   27962 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 17:08:37.882395   27962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 17:08:37.898199   27962 docker.go:217] disabling cri-docker service (if available) ...
	I0920 17:08:37.898256   27962 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 17:08:37.914375   27962 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 17:08:37.929731   27962 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 17:08:38.054897   27962 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 17:08:38.213720   27962 docker.go:233] disabling docker service ...
	I0920 17:08:38.213781   27962 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 17:08:38.228604   27962 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 17:08:38.241927   27962 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 17:08:38.372497   27962 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 17:08:38.492012   27962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 17:08:38.505545   27962 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 17:08:38.522859   27962 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 17:08:38.522917   27962 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:08:38.533670   27962 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 17:08:38.533742   27962 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:08:38.543534   27962 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:08:38.553115   27962 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:08:38.563278   27962 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 17:08:38.573734   27962 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:08:38.585820   27962 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:08:38.602582   27962 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
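The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: they point pause_image at registry.k8s.io/pause:3.10, switch cgroup_manager to cgroupfs, pin conmon_cgroup to "pod", and add a default_sysctls entry opening unprivileged ports from 0. Assuming the stock drop-in already carries pause_image and cgroup_manager lines (the substitution-style edits only change keys that exist), the touched keys end up roughly as follows; this is a reconstruction from the commands, not a capture of the actual file:

pause_image = "registry.k8s.io/pause:3.10"
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]

crio is restarted a few lines further down so these settings take effect.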
	I0920 17:08:38.612986   27962 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 17:08:38.625878   27962 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 17:08:38.625952   27962 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 17:08:38.640746   27962 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 17:08:38.650259   27962 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 17:08:38.774025   27962 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 17:08:38.868968   27962 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 17:08:38.869037   27962 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 17:08:38.873544   27962 start.go:563] Will wait 60s for crictl version
	I0920 17:08:38.873611   27962 ssh_runner.go:195] Run: which crictl
	I0920 17:08:38.877199   27962 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 17:08:38.914545   27962 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 17:08:38.914652   27962 ssh_runner.go:195] Run: crio --version
	I0920 17:08:38.942570   27962 ssh_runner.go:195] Run: crio --version
	I0920 17:08:38.974013   27962 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 17:08:38.975371   27962 out.go:177]   - env NO_PROXY=192.168.39.60
	I0920 17:08:38.976693   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetIP
	I0920 17:08:38.979315   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:38.979662   27962 main.go:141] libmachine: (ha-135993-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:dc:24", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:08:28 +0000 UTC Type:0 Mac:52:54:00:87:dc:24 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-135993-m02 Clientid:01:52:54:00:87:dc:24}
	I0920 17:08:38.979686   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:38.979928   27962 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0920 17:08:38.984450   27962 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 17:08:38.996637   27962 mustload.go:65] Loading cluster: ha-135993
	I0920 17:08:38.996863   27962 config.go:182] Loaded profile config "ha-135993": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 17:08:38.997116   27962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:08:38.997144   27962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:08:39.011615   27962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36791
	I0920 17:08:39.012110   27962 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:08:39.012595   27962 main.go:141] libmachine: Using API Version  1
	I0920 17:08:39.012618   27962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:08:39.012951   27962 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:08:39.013120   27962 main.go:141] libmachine: (ha-135993) Calling .GetState
	I0920 17:08:39.014524   27962 host.go:66] Checking if "ha-135993" exists ...
	I0920 17:08:39.014807   27962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:08:39.014829   27962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:08:39.028965   27962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39851
	I0920 17:08:39.029376   27962 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:08:39.029829   27962 main.go:141] libmachine: Using API Version  1
	I0920 17:08:39.029863   27962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:08:39.030149   27962 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:08:39.030299   27962 main.go:141] libmachine: (ha-135993) Calling .DriverName
	I0920 17:08:39.030433   27962 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993 for IP: 192.168.39.227
	I0920 17:08:39.030445   27962 certs.go:194] generating shared ca certs ...
	I0920 17:08:39.030462   27962 certs.go:226] acquiring lock for ca certs: {Name:mkc7ef6c737c6bdc3fdd9dcff8f57029c020d8f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:08:39.030587   27962 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key
	I0920 17:08:39.030622   27962 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key
	I0920 17:08:39.030631   27962 certs.go:256] generating profile certs ...
	I0920 17:08:39.030698   27962 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/client.key
	I0920 17:08:39.030722   27962 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.key.8c3b1447
	I0920 17:08:39.030736   27962 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.crt.8c3b1447 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.60 192.168.39.227 192.168.39.254]
	I0920 17:08:39.095051   27962 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.crt.8c3b1447 ...
	I0920 17:08:39.095081   27962 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.crt.8c3b1447: {Name:mke080ae3589481bb1ac84166b67a86b0862deca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:08:39.095299   27962 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.key.8c3b1447 ...
	I0920 17:08:39.095313   27962 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.key.8c3b1447: {Name:mk0aaeb424c58a29d9543a386b9ebefcbd99d38f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:08:39.095401   27962 certs.go:381] copying /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.crt.8c3b1447 -> /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.crt
	I0920 17:08:39.095524   27962 certs.go:385] copying /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.key.8c3b1447 -> /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.key
	I0920 17:08:39.095653   27962 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/proxy-client.key
	I0920 17:08:39.095667   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0920 17:08:39.095679   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0920 17:08:39.095689   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0920 17:08:39.095702   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0920 17:08:39.095712   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0920 17:08:39.095724   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0920 17:08:39.095736   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0920 17:08:39.095749   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0920 17:08:39.095802   27962 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973.pem (1338 bytes)
	W0920 17:08:39.095830   27962 certs.go:480] ignoring /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973_empty.pem, impossibly tiny 0 bytes
	I0920 17:08:39.095839   27962 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 17:08:39.095858   27962 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem (1082 bytes)
	I0920 17:08:39.095878   27962 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem (1123 bytes)
	I0920 17:08:39.095901   27962 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem (1675 bytes)
	I0920 17:08:39.095936   27962 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem (1708 bytes)
	I0920 17:08:39.095961   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:08:39.095977   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973.pem -> /usr/share/ca-certificates/15973.pem
	I0920 17:08:39.095989   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem -> /usr/share/ca-certificates/159732.pem
	I0920 17:08:39.096019   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHHostname
	I0920 17:08:39.099130   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:08:39.099635   27962 main.go:141] libmachine: (ha-135993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:09", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:07:43 +0000 UTC Type:0 Mac:52:54:00:99:26:09 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-135993 Clientid:01:52:54:00:99:26:09}
	I0920 17:08:39.099664   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined IP address 192.168.39.60 and MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:08:39.099789   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHPort
	I0920 17:08:39.100010   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHKeyPath
	I0920 17:08:39.100156   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHUsername
	I0920 17:08:39.100302   27962 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993/id_rsa Username:docker}
	I0920 17:08:39.178198   27962 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0920 17:08:39.183212   27962 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0920 17:08:39.194269   27962 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0920 17:08:39.198144   27962 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0920 17:08:39.207842   27962 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0920 17:08:39.212563   27962 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0920 17:08:39.225008   27962 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0920 17:08:39.228957   27962 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0920 17:08:39.240966   27962 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0920 17:08:39.244710   27962 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0920 17:08:39.255704   27962 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0920 17:08:39.261179   27962 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0920 17:08:39.272522   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 17:08:39.298671   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 17:08:39.323122   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 17:08:39.347904   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0920 17:08:39.372895   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0920 17:08:39.396433   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 17:08:39.420958   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 17:08:39.444600   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 17:08:39.468099   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 17:08:39.492182   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973.pem --> /usr/share/ca-certificates/15973.pem (1338 bytes)
	I0920 17:08:39.516275   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem --> /usr/share/ca-certificates/159732.pem (1708 bytes)
	I0920 17:08:39.538881   27962 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0920 17:08:39.554623   27962 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0920 17:08:39.569829   27962 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0920 17:08:39.585133   27962 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0920 17:08:39.601137   27962 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0920 17:08:39.617605   27962 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0920 17:08:39.633667   27962 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0920 17:08:39.650104   27962 ssh_runner.go:195] Run: openssl version
	I0920 17:08:39.656001   27962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 17:08:39.667261   27962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:08:39.671479   27962 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:08:39.671552   27962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:08:39.677168   27962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 17:08:39.687694   27962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15973.pem && ln -fs /usr/share/ca-certificates/15973.pem /etc/ssl/certs/15973.pem"
	I0920 17:08:39.697763   27962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15973.pem
	I0920 17:08:39.702178   27962 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 17:04 /usr/share/ca-certificates/15973.pem
	I0920 17:08:39.702233   27962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15973.pem
	I0920 17:08:39.708012   27962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15973.pem /etc/ssl/certs/51391683.0"
	I0920 17:08:39.718526   27962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/159732.pem && ln -fs /usr/share/ca-certificates/159732.pem /etc/ssl/certs/159732.pem"
	I0920 17:08:39.729775   27962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/159732.pem
	I0920 17:08:39.734571   27962 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 17:04 /usr/share/ca-certificates/159732.pem
	I0920 17:08:39.734627   27962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/159732.pem
	I0920 17:08:39.740342   27962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/159732.pem /etc/ssl/certs/3ec20f2e.0"
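The three Run lines above follow minikube's usual pattern for installing an extra CA certificate on the guest: the PEM is copied into /usr/share/ca-certificates, its OpenSSL subject hash is computed, and a <hash>.0 symlink is created in /etc/ssl/certs so OpenSSL-based clients can resolve it. A minimal stand-alone sketch of that pattern (the file name my-ca.pem is illustrative, not taken from this run):

    # copy the CA into the shared location, then link it under its subject hash
    sudo cp my-ca.pem /usr/share/ca-certificates/my-ca.pem
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/my-ca.pem)
    sudo ln -fs /usr/share/ca-certificates/my-ca.pem "/etc/ssl/certs/${hash}.0"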
	I0920 17:08:39.751136   27962 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 17:08:39.755553   27962 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0920 17:08:39.755646   27962 kubeadm.go:934] updating node {m02 192.168.39.227 8443 v1.31.1 crio true true} ...
	I0920 17:08:39.755760   27962 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-135993-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.227
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-135993 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 17:08:39.755800   27962 kube-vip.go:115] generating kube-vip config ...
	I0920 17:08:39.755854   27962 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0920 17:08:39.773764   27962 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0920 17:08:39.773847   27962 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
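The manifest above is the static Pod that minikube later writes to /etc/kubernetes/manifests/kube-vip.yaml (the 1441-byte scp a few lines below); the kubelet runs any Pod manifest it finds in that directory, so kube-vip comes up to hold the control-plane VIP 192.168.39.254 and load-balance API traffic on port 8443. Purely as an illustration (not a step this test performs), a generated manifest like this can be schema-checked client-side before it is dropped into the manifests directory:

    # client-side validation of the generated static-pod manifest (illustrative)
    kubectl apply --dry-run=client -f kube-vip.yaml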
	I0920 17:08:39.773905   27962 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 17:08:39.783942   27962 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0920 17:08:39.784007   27962 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0920 17:08:39.793636   27962 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0920 17:08:39.793672   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0920 17:08:39.793735   27962 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0920 17:08:39.793780   27962 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19672-8777/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I0920 17:08:39.793842   27962 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19672-8777/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I0920 17:08:39.798080   27962 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0920 17:08:39.798118   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0920 17:08:40.867820   27962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 17:08:40.882080   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0920 17:08:40.882178   27962 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0920 17:08:40.886572   27962 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0920 17:08:40.886607   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I0920 17:08:41.226998   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0920 17:08:41.227076   27962 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0920 17:08:41.238040   27962 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0920 17:08:41.238078   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
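The kubectl, kubelet and kubeadm binaries were not present on the new node, so the entries above download v1.31.1 from dl.k8s.io (checked against the published .sha256 files) and scp them into /var/lib/minikube/binaries/v1.31.1. A hand-run equivalent of that download-and-verify step, shown only for illustration:

    # fetch a binary and its published checksum, then verify before installing (illustrative)
    curl -LO https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet
    curl -LO https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
    echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check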
	I0920 17:08:41.520778   27962 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0920 17:08:41.530138   27962 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0920 17:08:41.546031   27962 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 17:08:41.561648   27962 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0920 17:08:41.577512   27962 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0920 17:08:41.581127   27962 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 17:08:41.593044   27962 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 17:08:41.727078   27962 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 17:08:41.743823   27962 host.go:66] Checking if "ha-135993" exists ...
	I0920 17:08:41.744278   27962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:08:41.744326   27962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:08:41.759319   27962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43037
	I0920 17:08:41.759806   27962 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:08:41.760334   27962 main.go:141] libmachine: Using API Version  1
	I0920 17:08:41.760365   27962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:08:41.760710   27962 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:08:41.760950   27962 main.go:141] libmachine: (ha-135993) Calling .DriverName
	I0920 17:08:41.761092   27962 start.go:317] joinCluster: &{Name:ha-135993 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cluster
Name:ha-135993 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.60 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 17:08:41.761208   27962 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0920 17:08:41.761228   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHHostname
	I0920 17:08:41.764476   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:08:41.765051   27962 main.go:141] libmachine: (ha-135993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:09", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:07:43 +0000 UTC Type:0 Mac:52:54:00:99:26:09 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-135993 Clientid:01:52:54:00:99:26:09}
	I0920 17:08:41.765084   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined IP address 192.168.39.60 and MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:08:41.765229   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHPort
	I0920 17:08:41.765376   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHKeyPath
	I0920 17:08:41.765547   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHUsername
	I0920 17:08:41.765689   27962 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993/id_rsa Username:docker}
	I0920 17:08:41.915104   27962 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 17:08:41.915146   27962 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 76lnz4.6r1ezgurod2l1q25 --discovery-token-ca-cert-hash sha256:736f3fb521855813decb4a7214f2b8beff5f81c3971b60decfa20a8807626a2d --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-135993-m02 --control-plane --apiserver-advertise-address=192.168.39.227 --apiserver-bind-port=8443"
	I0920 17:09:04.881318   27962 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 76lnz4.6r1ezgurod2l1q25 --discovery-token-ca-cert-hash sha256:736f3fb521855813decb4a7214f2b8beff5f81c3971b60decfa20a8807626a2d --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-135993-m02 --control-plane --apiserver-advertise-address=192.168.39.227 --apiserver-bind-port=8443": (22.966149697s)
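The two kubeadm invocations above are the standard flow for adding a second control-plane node: the existing control plane prints a join command with a fresh bootstrap token (kubeadm token create --print-join-command), and minikube runs that command on m02 with the control-plane flags appended. Stripped of the run-specific token and hash, the shape of the join is (placeholders in <>, not values from this run):

    # on the new node: join as an additional control plane (illustrative placeholders)
    sudo kubeadm join <control-plane-endpoint>:8443 \
      --token <token> \
      --discovery-token-ca-cert-hash sha256:<hash> \
      --control-plane --apiserver-advertise-address=<node-ip>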
	I0920 17:09:04.881355   27962 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0920 17:09:05.471754   27962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-135993-m02 minikube.k8s.io/updated_at=2024_09_20T17_09_05_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=0626f22cf0d915d75e291a5bce701f94395056e1 minikube.k8s.io/name=ha-135993 minikube.k8s.io/primary=false
	I0920 17:09:05.593812   27962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-135993-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0920 17:09:05.743557   27962 start.go:319] duration metric: took 23.982457966s to joinCluster
	I0920 17:09:05.743641   27962 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 17:09:05.743939   27962 config.go:182] Loaded profile config "ha-135993": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 17:09:05.745782   27962 out.go:177] * Verifying Kubernetes components...
	I0920 17:09:05.747592   27962 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 17:09:06.068898   27962 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 17:09:06.098222   27962 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19672-8777/kubeconfig
	I0920 17:09:06.098478   27962 kapi.go:59] client config for ha-135993: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/client.crt", KeyFile:"/home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/client.key", CAFile:"/home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f67ea0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0920 17:09:06.098546   27962 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.60:8443
	I0920 17:09:06.098829   27962 node_ready.go:35] waiting up to 6m0s for node "ha-135993-m02" to be "Ready" ...
	I0920 17:09:06.098967   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:06.098980   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:06.098991   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:06.098997   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:06.110154   27962 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0920 17:09:06.599028   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:06.599058   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:06.599068   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:06.599080   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:06.607526   27962 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0920 17:09:07.100044   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:07.100066   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:07.100080   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:07.100088   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:07.104606   27962 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 17:09:07.599532   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:07.599561   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:07.599573   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:07.599592   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:07.603898   27962 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 17:09:08.099892   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:08.099925   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:08.099936   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:08.099939   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:08.104089   27962 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 17:09:08.104669   27962 node_ready.go:53] node "ha-135993-m02" has status "Ready":"False"
	I0920 17:09:08.599188   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:08.599216   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:08.599232   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:08.599237   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:08.602674   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:09.099543   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:09.099573   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:09.099590   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:09.099595   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:09.103157   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:09.599047   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:09.599068   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:09.599079   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:09.599083   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:09.602661   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:10.099869   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:10.099898   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:10.099910   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:10.099917   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:10.104382   27962 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 17:09:10.105025   27962 node_ready.go:53] node "ha-135993-m02" has status "Ready":"False"
	I0920 17:09:10.599990   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:10.600015   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:10.600025   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:10.600040   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:10.604181   27962 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 17:09:11.100016   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:11.100036   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:11.100044   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:11.100048   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:11.104486   27962 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 17:09:11.599135   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:11.599157   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:11.599167   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:11.599172   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:11.603466   27962 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 17:09:12.099094   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:12.099116   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:12.099124   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:12.099128   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:12.102631   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:12.600054   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:12.600077   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:12.600087   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:12.600091   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:12.603960   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:12.604540   27962 node_ready.go:53] node "ha-135993-m02" has status "Ready":"False"
	I0920 17:09:13.099920   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:13.099940   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:13.099947   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:13.099951   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:13.104962   27962 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 17:09:13.599362   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:13.599385   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:13.599392   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:13.599397   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:13.602694   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:14.099536   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:14.099555   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:14.099563   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:14.099566   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:14.110011   27962 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0920 17:09:14.600088   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:14.600116   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:14.600127   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:14.600132   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:14.603733   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:15.099810   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:15.099833   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:15.099842   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:15.099847   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:15.103493   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:15.106748   27962 node_ready.go:53] node "ha-135993-m02" has status "Ready":"False"
	I0920 17:09:15.599114   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:15.599137   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:15.599145   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:15.599149   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:15.602587   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:16.099797   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:16.099819   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:16.099836   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:16.099841   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:16.104385   27962 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 17:09:16.599221   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:16.599261   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:16.599273   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:16.599281   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:16.602198   27962 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 17:09:17.099641   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:17.099665   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:17.099674   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:17.099679   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:17.102538   27962 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 17:09:17.599451   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:17.599479   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:17.599488   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:17.599493   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:17.604108   27962 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 17:09:17.604651   27962 node_ready.go:53] node "ha-135993-m02" has status "Ready":"False"
	I0920 17:09:18.099653   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:18.099682   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:18.099694   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:18.099698   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:18.103414   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:18.599738   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:18.599765   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:18.599774   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:18.599781   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:18.603208   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:19.100125   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:19.100153   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:19.100166   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:19.100175   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:19.184153   27962 round_trippers.go:574] Response Status: 200 OK in 83 milliseconds
	I0920 17:09:19.600050   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:19.600072   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:19.600080   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:19.600085   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:19.603736   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:20.099655   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:20.099677   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:20.099685   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:20.099689   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:20.103774   27962 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 17:09:20.104534   27962 node_ready.go:53] node "ha-135993-m02" has status "Ready":"False"
	I0920 17:09:20.599975   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:20.599999   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:20.600008   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:20.600012   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:20.603324   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:21.099118   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:21.099157   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:21.099168   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:21.099174   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:21.102835   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:21.599923   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:21.599950   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:21.599959   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:21.599963   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:21.604036   27962 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 17:09:22.099740   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:22.099765   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:22.099774   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:22.099779   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:22.103432   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:22.599193   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:22.599216   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:22.599225   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:22.599230   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:22.602523   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:22.603230   27962 node_ready.go:53] node "ha-135993-m02" has status "Ready":"False"
	I0920 17:09:23.099535   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:23.099562   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:23.099571   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:23.099575   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:23.103060   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:23.600005   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:23.600028   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:23.600037   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:23.600042   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:23.602925   27962 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 17:09:24.099721   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:24.099748   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:24.099760   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:24.099768   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:24.103420   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:24.599142   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:24.599163   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:24.599171   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:24.599175   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:24.601879   27962 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 17:09:25.099978   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:25.100008   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:25.100020   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:25.100025   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:25.103311   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:25.104017   27962 node_ready.go:49] node "ha-135993-m02" has status "Ready":"True"
	I0920 17:09:25.104039   27962 node_ready.go:38] duration metric: took 19.005166756s for node "ha-135993-m02" to be "Ready" ...
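The block of GETs above polls /api/v1/nodes/ha-135993-m02 roughly every 500ms until the node's Ready condition turns True, which took about 19s here. The same condition can be read directly with kubectl; this is only an illustration of what the poll inspects, and it assumes the kubeconfig context carries the profile name ha-135993:

    # read the Ready condition that the poll loop is waiting on (illustrative)
    kubectl --context ha-135993 get node ha-135993-m02 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'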
	I0920 17:09:25.104051   27962 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 17:09:25.104149   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods
	I0920 17:09:25.104165   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:25.104177   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:25.104185   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:25.108765   27962 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 17:09:25.115719   27962 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-gcvg4" in "kube-system" namespace to be "Ready" ...
	I0920 17:09:25.115809   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-gcvg4
	I0920 17:09:25.115817   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:25.115832   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:25.115839   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:25.118912   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:25.119515   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993
	I0920 17:09:25.119530   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:25.119545   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:25.119553   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:25.122165   27962 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 17:09:25.123205   27962 pod_ready.go:93] pod "coredns-7c65d6cfc9-gcvg4" in "kube-system" namespace has status "Ready":"True"
	I0920 17:09:25.123229   27962 pod_ready.go:82] duration metric: took 7.483763ms for pod "coredns-7c65d6cfc9-gcvg4" in "kube-system" namespace to be "Ready" ...
	I0920 17:09:25.123245   27962 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-kpbhk" in "kube-system" namespace to be "Ready" ...
	I0920 17:09:25.123328   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-kpbhk
	I0920 17:09:25.123336   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:25.123346   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:25.123362   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:25.127621   27962 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 17:09:25.128286   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993
	I0920 17:09:25.128301   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:25.128309   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:25.128312   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:25.130781   27962 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 17:09:25.131328   27962 pod_ready.go:93] pod "coredns-7c65d6cfc9-kpbhk" in "kube-system" namespace has status "Ready":"True"
	I0920 17:09:25.131344   27962 pod_ready.go:82] duration metric: took 8.091385ms for pod "coredns-7c65d6cfc9-kpbhk" in "kube-system" namespace to be "Ready" ...
	I0920 17:09:25.131353   27962 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-135993" in "kube-system" namespace to be "Ready" ...
	I0920 17:09:25.131430   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/etcd-ha-135993
	I0920 17:09:25.131441   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:25.131447   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:25.131452   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:25.133900   27962 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 17:09:25.134469   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993
	I0920 17:09:25.134482   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:25.134489   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:25.134491   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:25.136541   27962 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 17:09:25.137016   27962 pod_ready.go:93] pod "etcd-ha-135993" in "kube-system" namespace has status "Ready":"True"
	I0920 17:09:25.137035   27962 pod_ready.go:82] duration metric: took 5.675303ms for pod "etcd-ha-135993" in "kube-system" namespace to be "Ready" ...
	I0920 17:09:25.137046   27962 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-135993-m02" in "kube-system" namespace to be "Ready" ...
	I0920 17:09:25.137099   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/etcd-ha-135993-m02
	I0920 17:09:25.137110   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:25.137120   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:25.137129   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:25.139596   27962 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 17:09:25.140245   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:25.140261   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:25.140268   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:25.140275   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:25.143653   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:25.144087   27962 pod_ready.go:93] pod "etcd-ha-135993-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 17:09:25.144104   27962 pod_ready.go:82] duration metric: took 7.049824ms for pod "etcd-ha-135993-m02" in "kube-system" namespace to be "Ready" ...
	I0920 17:09:25.144123   27962 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-135993" in "kube-system" namespace to be "Ready" ...
	I0920 17:09:25.300530   27962 request.go:632] Waited for 156.341043ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-135993
	I0920 17:09:25.300600   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-135993
	I0920 17:09:25.300608   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:25.300615   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:25.300619   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:25.303926   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:25.500905   27962 request.go:632] Waited for 196.365656ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes/ha-135993
	I0920 17:09:25.500972   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993
	I0920 17:09:25.500979   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:25.500991   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:25.501002   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:25.504242   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:25.504741   27962 pod_ready.go:93] pod "kube-apiserver-ha-135993" in "kube-system" namespace has status "Ready":"True"
	I0920 17:09:25.504761   27962 pod_ready.go:82] duration metric: took 360.627268ms for pod "kube-apiserver-ha-135993" in "kube-system" namespace to be "Ready" ...
	I0920 17:09:25.504775   27962 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-135993-m02" in "kube-system" namespace to be "Ready" ...
	I0920 17:09:25.700017   27962 request.go:632] Waited for 195.167851ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-135993-m02
	I0920 17:09:25.700099   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-135993-m02
	I0920 17:09:25.700105   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:25.700111   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:25.700116   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:25.703342   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:25.900444   27962 request.go:632] Waited for 196.370493ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:25.900528   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:25.900536   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:25.900546   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:25.900556   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:25.904185   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:25.904729   27962 pod_ready.go:93] pod "kube-apiserver-ha-135993-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 17:09:25.904749   27962 pod_ready.go:82] duration metric: took 399.965762ms for pod "kube-apiserver-ha-135993-m02" in "kube-system" namespace to be "Ready" ...
	I0920 17:09:25.904762   27962 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-135993" in "kube-system" namespace to be "Ready" ...
	I0920 17:09:26.100837   27962 request.go:632] Waited for 195.996544ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-135993
	I0920 17:09:26.100911   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-135993
	I0920 17:09:26.100922   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:26.100930   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:26.100934   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:26.104514   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:26.300664   27962 request.go:632] Waited for 195.385658ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes/ha-135993
	I0920 17:09:26.300743   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993
	I0920 17:09:26.300751   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:26.300761   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:26.300767   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:26.304576   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:26.305216   27962 pod_ready.go:93] pod "kube-controller-manager-ha-135993" in "kube-system" namespace has status "Ready":"True"
	I0920 17:09:26.305236   27962 pod_ready.go:82] duration metric: took 400.465668ms for pod "kube-controller-manager-ha-135993" in "kube-system" namespace to be "Ready" ...
	I0920 17:09:26.305250   27962 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-135993-m02" in "kube-system" namespace to be "Ready" ...
	I0920 17:09:26.500476   27962 request.go:632] Waited for 195.132114ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-135993-m02
	I0920 17:09:26.500563   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-135993-m02
	I0920 17:09:26.500573   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:26.500585   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:26.500595   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:26.503974   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:26.700109   27962 request.go:632] Waited for 195.31021ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:26.700178   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:26.700184   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:26.700192   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:26.700197   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:26.703786   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:26.704325   27962 pod_ready.go:93] pod "kube-controller-manager-ha-135993-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 17:09:26.704346   27962 pod_ready.go:82] duration metric: took 399.089711ms for pod "kube-controller-manager-ha-135993-m02" in "kube-system" namespace to be "Ready" ...
	I0920 17:09:26.704359   27962 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-52r49" in "kube-system" namespace to be "Ready" ...
	I0920 17:09:26.900914   27962 request.go:632] Waited for 196.454204ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-proxy-52r49
	I0920 17:09:26.900979   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-proxy-52r49
	I0920 17:09:26.900988   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:26.900999   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:26.901008   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:26.904465   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:27.100636   27962 request.go:632] Waited for 195.370556ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes/ha-135993
	I0920 17:09:27.100694   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993
	I0920 17:09:27.100700   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:27.100707   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:27.100713   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:27.104136   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:27.104731   27962 pod_ready.go:93] pod "kube-proxy-52r49" in "kube-system" namespace has status "Ready":"True"
	I0920 17:09:27.104752   27962 pod_ready.go:82] duration metric: took 400.38236ms for pod "kube-proxy-52r49" in "kube-system" namespace to be "Ready" ...
	I0920 17:09:27.104762   27962 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-z6xqt" in "kube-system" namespace to be "Ready" ...
	I0920 17:09:27.300919   27962 request.go:632] Waited for 196.074087ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-proxy-z6xqt
	I0920 17:09:27.300987   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-proxy-z6xqt
	I0920 17:09:27.300993   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:27.301002   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:27.301038   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:27.304315   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:27.500226   27962 request.go:632] Waited for 195.315282ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:27.500323   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:27.500337   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:27.500347   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:27.500353   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:27.503809   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:27.504585   27962 pod_ready.go:93] pod "kube-proxy-z6xqt" in "kube-system" namespace has status "Ready":"True"
	I0920 17:09:27.504607   27962 pod_ready.go:82] duration metric: took 399.833703ms for pod "kube-proxy-z6xqt" in "kube-system" namespace to be "Ready" ...
	I0920 17:09:27.504623   27962 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-135993" in "kube-system" namespace to be "Ready" ...
	I0920 17:09:27.700599   27962 request.go:632] Waited for 195.904246ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-135993
	I0920 17:09:27.700671   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-135993
	I0920 17:09:27.700677   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:27.700684   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:27.700691   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:27.704470   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:27.900633   27962 request.go:632] Waited for 195.387225ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes/ha-135993
	I0920 17:09:27.900688   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993
	I0920 17:09:27.900695   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:27.900708   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:27.900716   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:27.903956   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:27.904541   27962 pod_ready.go:93] pod "kube-scheduler-ha-135993" in "kube-system" namespace has status "Ready":"True"
	I0920 17:09:27.904563   27962 pod_ready.go:82] duration metric: took 399.932453ms for pod "kube-scheduler-ha-135993" in "kube-system" namespace to be "Ready" ...
	I0920 17:09:27.904573   27962 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-135993-m02" in "kube-system" namespace to be "Ready" ...
	I0920 17:09:28.100547   27962 request.go:632] Waited for 195.899157ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-135993-m02
	I0920 17:09:28.100623   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-135993-m02
	I0920 17:09:28.100628   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:28.100637   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:28.100642   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:28.104043   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:28.299961   27962 request.go:632] Waited for 195.327445ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:28.300029   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:28.300037   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:28.300046   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:28.300054   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:28.303288   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:28.303968   27962 pod_ready.go:93] pod "kube-scheduler-ha-135993-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 17:09:28.303986   27962 pod_ready.go:82] duration metric: took 399.402915ms for pod "kube-scheduler-ha-135993-m02" in "kube-system" namespace to be "Ready" ...
	I0920 17:09:28.304000   27962 pod_ready.go:39] duration metric: took 3.199931535s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
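
	The pod_ready loop above repeatedly GETs each system pod and its node until the pod reports Ready; the "Waited ... due to client-side throttling" lines come from client-go's default request rate limiter. A minimal sketch of that readiness-polling pattern with client-go (the kubeconfig path, pod name, and QPS/Burst values are assumptions for the example, not minikube's actual settings):

	    // Illustrative sketch only (not minikube's code): poll a pod's Ready condition.
	    package main

	    import (
	        "context"
	        "fmt"
	        "time"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	        deadline := time.Now().Add(timeout)
	        for time.Now().Before(deadline) {
	            pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
	            if err == nil {
	                for _, c := range pod.Status.Conditions {
	                    if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
	                        return nil // pod is Ready
	                    }
	                }
	            }
	            time.Sleep(2 * time.Second) // re-poll until Ready or timeout
	        }
	        return fmt.Errorf("pod %s/%s not Ready within %v", ns, name, timeout)
	    }

	    func main() {
	        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // assumed path
	        if err != nil {
	            panic(err)
	        }
	        // The "client-side throttling" waits in the log come from client-go's
	        // default rate limiter; raising QPS/Burst shortens those pauses.
	        cfg.QPS = 50
	        cfg.Burst = 100
	        cs, err := kubernetes.NewForConfig(cfg)
	        if err != nil {
	            panic(err)
	        }
	        fmt.Println(waitPodReady(cs, "kube-system", "etcd-ha-135993", 6*time.Minute))
	    }
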
	I0920 17:09:28.304019   27962 api_server.go:52] waiting for apiserver process to appear ...
	I0920 17:09:28.304077   27962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 17:09:28.320006   27962 api_server.go:72] duration metric: took 22.576329593s to wait for apiserver process to appear ...
	I0920 17:09:28.320037   27962 api_server.go:88] waiting for apiserver healthz status ...
	I0920 17:09:28.320064   27962 api_server.go:253] Checking apiserver healthz at https://192.168.39.60:8443/healthz ...
	I0920 17:09:28.324668   27962 api_server.go:279] https://192.168.39.60:8443/healthz returned 200:
	ok
	I0920 17:09:28.324734   27962 round_trippers.go:463] GET https://192.168.39.60:8443/version
	I0920 17:09:28.324739   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:28.324747   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:28.324752   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:28.325606   27962 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0920 17:09:28.325696   27962 api_server.go:141] control plane version: v1.31.1
	I0920 17:09:28.325719   27962 api_server.go:131] duration metric: took 5.673918ms to wait for apiserver health ...
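
	The healthz step above is a plain HTTPS GET that expects a 200 response with body "ok". A minimal sketch of the same probe using the Go standard library (InsecureSkipVerify is only to keep the example short; a real client would present the cluster CA and client certificates):

	    // Illustrative sketch only: probe the apiserver /healthz endpoint.
	    package main

	    import (
	        "crypto/tls"
	        "fmt"
	        "io"
	        "net/http"
	        "time"
	    )

	    func apiserverHealthy(url string) error {
	        client := &http.Client{
	            Timeout:   5 * time.Second,
	            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	        }
	        resp, err := client.Get(url)
	        if err != nil {
	            return err
	        }
	        defer resp.Body.Close()
	        body, _ := io.ReadAll(resp.Body)
	        // A healthy apiserver answers 200 with the literal body "ok".
	        if resp.StatusCode != http.StatusOK || string(body) != "ok" {
	            return fmt.Errorf("unhealthy: %d %q", resp.StatusCode, string(body))
	        }
	        return nil
	    }

	    func main() {
	        fmt.Println(apiserverHealthy("https://192.168.39.60:8443/healthz"))
	    }
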
	I0920 17:09:28.325728   27962 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 17:09:28.500898   27962 request.go:632] Waited for 175.10825ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods
	I0920 17:09:28.500971   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods
	I0920 17:09:28.500978   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:28.500986   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:28.500995   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:28.506063   27962 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0920 17:09:28.510476   27962 system_pods.go:59] 17 kube-system pods found
	I0920 17:09:28.510506   27962 system_pods.go:61] "coredns-7c65d6cfc9-gcvg4" [899b2b8c-9009-46c0-816b-781e85eb8b19] Running
	I0920 17:09:28.510512   27962 system_pods.go:61] "coredns-7c65d6cfc9-kpbhk" [0dfd9f1a-148c-4dba-884a-8618b74f82d0] Running
	I0920 17:09:28.510516   27962 system_pods.go:61] "etcd-ha-135993" [d5e44451-ab41-4bdf-8f34-c8202bc33cda] Running
	I0920 17:09:28.510520   27962 system_pods.go:61] "etcd-ha-135993-m02" [415d8f47-3bc4-42fc-ae75-b8217f5a731d] Running
	I0920 17:09:28.510524   27962 system_pods.go:61] "kindnet-5m4r8" [c799ac5a-69f7-45d5-a291-61edfd753404] Running
	I0920 17:09:28.510528   27962 system_pods.go:61] "kindnet-6clt2" [d73a0817-d84f-4269-9de0-1532287a07db] Running
	I0920 17:09:28.510532   27962 system_pods.go:61] "kube-apiserver-ha-135993" [0c2adef2-0b98-4752-ba6c-0719b723c93c] Running
	I0920 17:09:28.510536   27962 system_pods.go:61] "kube-apiserver-ha-135993-m02" [e74438ac-347a-4f38-ba96-c7687763c669] Running
	I0920 17:09:28.510539   27962 system_pods.go:61] "kube-controller-manager-ha-135993" [ba82afa0-d0a1-4515-bd0f-574f03c2a5a5] Running
	I0920 17:09:28.510543   27962 system_pods.go:61] "kube-controller-manager-ha-135993-m02" [ba4a59ae-5348-43b7-b80d-96d5d63afea2] Running
	I0920 17:09:28.510548   27962 system_pods.go:61] "kube-proxy-52r49" [8d1124bd-e7cb-4239-a29d-c1d5b8870aff] Running
	I0920 17:09:28.510551   27962 system_pods.go:61] "kube-proxy-z6xqt" [70b8c8ab-77cc-4681-a9b1-3c28fc0f2674] Running
	I0920 17:09:28.510555   27962 system_pods.go:61] "kube-scheduler-ha-135993" [25eaf632-035a-44d9-8ce0-e6c3c0e4e0c4] Running
	I0920 17:09:28.510558   27962 system_pods.go:61] "kube-scheduler-ha-135993-m02" [6daee23a-d627-41e8-81b1-ceb4fa70ec3e] Running
	I0920 17:09:28.510563   27962 system_pods.go:61] "kube-vip-ha-135993" [6aa396e1-76b2-4911-bc93-660c51cef03d] Running
	I0920 17:09:28.510566   27962 system_pods.go:61] "kube-vip-ha-135993-m02" [2e9d1f8a-1ce9-46f9-b58b-f13302832062] Running
	I0920 17:09:28.510571   27962 system_pods.go:61] "storage-provisioner" [57137bee-9a7b-4659-a855-0da82d137cb0] Running
	I0920 17:09:28.510576   27962 system_pods.go:74] duration metric: took 184.843309ms to wait for pod list to return data ...
	I0920 17:09:28.510583   27962 default_sa.go:34] waiting for default service account to be created ...
	I0920 17:09:28.701010   27962 request.go:632] Waited for 190.33295ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/default/serviceaccounts
	I0920 17:09:28.701070   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/default/serviceaccounts
	I0920 17:09:28.701075   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:28.701082   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:28.701086   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:28.704833   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:28.705046   27962 default_sa.go:45] found service account: "default"
	I0920 17:09:28.705060   27962 default_sa.go:55] duration metric: took 194.471281ms for default service account to be created ...
	I0920 17:09:28.705068   27962 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 17:09:28.900520   27962 request.go:632] Waited for 195.386336ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods
	I0920 17:09:28.900601   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods
	I0920 17:09:28.900607   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:28.900614   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:28.900622   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:28.905157   27962 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 17:09:28.910152   27962 system_pods.go:86] 17 kube-system pods found
	I0920 17:09:28.910177   27962 system_pods.go:89] "coredns-7c65d6cfc9-gcvg4" [899b2b8c-9009-46c0-816b-781e85eb8b19] Running
	I0920 17:09:28.910183   27962 system_pods.go:89] "coredns-7c65d6cfc9-kpbhk" [0dfd9f1a-148c-4dba-884a-8618b74f82d0] Running
	I0920 17:09:28.910188   27962 system_pods.go:89] "etcd-ha-135993" [d5e44451-ab41-4bdf-8f34-c8202bc33cda] Running
	I0920 17:09:28.910193   27962 system_pods.go:89] "etcd-ha-135993-m02" [415d8f47-3bc4-42fc-ae75-b8217f5a731d] Running
	I0920 17:09:28.910197   27962 system_pods.go:89] "kindnet-5m4r8" [c799ac5a-69f7-45d5-a291-61edfd753404] Running
	I0920 17:09:28.910200   27962 system_pods.go:89] "kindnet-6clt2" [d73a0817-d84f-4269-9de0-1532287a07db] Running
	I0920 17:09:28.910204   27962 system_pods.go:89] "kube-apiserver-ha-135993" [0c2adef2-0b98-4752-ba6c-0719b723c93c] Running
	I0920 17:09:28.910210   27962 system_pods.go:89] "kube-apiserver-ha-135993-m02" [e74438ac-347a-4f38-ba96-c7687763c669] Running
	I0920 17:09:28.910216   27962 system_pods.go:89] "kube-controller-manager-ha-135993" [ba82afa0-d0a1-4515-bd0f-574f03c2a5a5] Running
	I0920 17:09:28.910221   27962 system_pods.go:89] "kube-controller-manager-ha-135993-m02" [ba4a59ae-5348-43b7-b80d-96d5d63afea2] Running
	I0920 17:09:28.910224   27962 system_pods.go:89] "kube-proxy-52r49" [8d1124bd-e7cb-4239-a29d-c1d5b8870aff] Running
	I0920 17:09:28.910232   27962 system_pods.go:89] "kube-proxy-z6xqt" [70b8c8ab-77cc-4681-a9b1-3c28fc0f2674] Running
	I0920 17:09:28.910236   27962 system_pods.go:89] "kube-scheduler-ha-135993" [25eaf632-035a-44d9-8ce0-e6c3c0e4e0c4] Running
	I0920 17:09:28.910240   27962 system_pods.go:89] "kube-scheduler-ha-135993-m02" [6daee23a-d627-41e8-81b1-ceb4fa70ec3e] Running
	I0920 17:09:28.910243   27962 system_pods.go:89] "kube-vip-ha-135993" [6aa396e1-76b2-4911-bc93-660c51cef03d] Running
	I0920 17:09:28.910246   27962 system_pods.go:89] "kube-vip-ha-135993-m02" [2e9d1f8a-1ce9-46f9-b58b-f13302832062] Running
	I0920 17:09:28.910249   27962 system_pods.go:89] "storage-provisioner" [57137bee-9a7b-4659-a855-0da82d137cb0] Running
	I0920 17:09:28.910257   27962 system_pods.go:126] duration metric: took 205.181263ms to wait for k8s-apps to be running ...
	I0920 17:09:28.910266   27962 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 17:09:28.910308   27962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 17:09:28.926895   27962 system_svc.go:56] duration metric: took 16.618557ms WaitForService to wait for kubelet
	I0920 17:09:28.926931   27962 kubeadm.go:582] duration metric: took 23.18325481s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 17:09:28.926955   27962 node_conditions.go:102] verifying NodePressure condition ...
	I0920 17:09:29.100293   27962 request.go:632] Waited for 173.230558ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes
	I0920 17:09:29.100347   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes
	I0920 17:09:29.100351   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:29.100362   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:29.100368   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:29.104004   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:29.104756   27962 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 17:09:29.104780   27962 node_conditions.go:123] node cpu capacity is 2
	I0920 17:09:29.104790   27962 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 17:09:29.104794   27962 node_conditions.go:123] node cpu capacity is 2
	I0920 17:09:29.104798   27962 node_conditions.go:105] duration metric: took 177.838136ms to run NodePressure ...
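
	The NodePressure step summarizes each node's ephemeral-storage and CPU capacity. A minimal sketch of reading those values with client-go (the kubeconfig path is an assumption for the example):

	    // Illustrative sketch only: list nodes and print capacity values.
	    package main

	    import (
	        "context"
	        "fmt"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    func main() {
	        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // assumed path
	        if err != nil {
	            panic(err)
	        }
	        cs, err := kubernetes.NewForConfig(cfg)
	        if err != nil {
	            panic(err)
	        }
	        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	        if err != nil {
	            panic(err)
	        }
	        for _, n := range nodes.Items {
	            cpu := n.Status.Capacity[corev1.ResourceCPU]
	            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
	            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
	        }
	    }
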
	I0920 17:09:29.104811   27962 start.go:241] waiting for startup goroutines ...
	I0920 17:09:29.104835   27962 start.go:255] writing updated cluster config ...
	I0920 17:09:29.107129   27962 out.go:201] 
	I0920 17:09:29.108641   27962 config.go:182] Loaded profile config "ha-135993": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 17:09:29.108741   27962 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/config.json ...
	I0920 17:09:29.110401   27962 out.go:177] * Starting "ha-135993-m03" control-plane node in "ha-135993" cluster
	I0920 17:09:29.111695   27962 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 17:09:29.111718   27962 cache.go:56] Caching tarball of preloaded images
	I0920 17:09:29.111819   27962 preload.go:172] Found /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 17:09:29.111832   27962 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0920 17:09:29.111919   27962 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/config.json ...
	I0920 17:09:29.112087   27962 start.go:360] acquireMachinesLock for ha-135993-m03: {Name:mkfeedb385cf08b5d2aa00913e85815d02a180c2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 17:09:29.112125   27962 start.go:364] duration metric: took 21.568µs to acquireMachinesLock for "ha-135993-m03"
	I0920 17:09:29.112142   27962 start.go:93] Provisioning new machine with config: &{Name:ha-135993 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-135993 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.60 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 17:09:29.112230   27962 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0920 17:09:29.114039   27962 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0920 17:09:29.114124   27962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:09:29.114159   27962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:09:29.130067   27962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37683
	I0920 17:09:29.130534   27962 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:09:29.131025   27962 main.go:141] libmachine: Using API Version  1
	I0920 17:09:29.131052   27962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:09:29.131373   27962 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:09:29.131541   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetMachineName
	I0920 17:09:29.131727   27962 main.go:141] libmachine: (ha-135993-m03) Calling .DriverName
	I0920 17:09:29.131887   27962 start.go:159] libmachine.API.Create for "ha-135993" (driver="kvm2")
	I0920 17:09:29.131918   27962 client.go:168] LocalClient.Create starting
	I0920 17:09:29.131956   27962 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem
	I0920 17:09:29.131998   27962 main.go:141] libmachine: Decoding PEM data...
	I0920 17:09:29.132021   27962 main.go:141] libmachine: Parsing certificate...
	I0920 17:09:29.132086   27962 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem
	I0920 17:09:29.132115   27962 main.go:141] libmachine: Decoding PEM data...
	I0920 17:09:29.132130   27962 main.go:141] libmachine: Parsing certificate...
	I0920 17:09:29.132158   27962 main.go:141] libmachine: Running pre-create checks...
	I0920 17:09:29.132169   27962 main.go:141] libmachine: (ha-135993-m03) Calling .PreCreateCheck
	I0920 17:09:29.132361   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetConfigRaw
	I0920 17:09:29.132775   27962 main.go:141] libmachine: Creating machine...
	I0920 17:09:29.132791   27962 main.go:141] libmachine: (ha-135993-m03) Calling .Create
	I0920 17:09:29.132937   27962 main.go:141] libmachine: (ha-135993-m03) Creating KVM machine...
	I0920 17:09:29.134340   27962 main.go:141] libmachine: (ha-135993-m03) DBG | found existing default KVM network
	I0920 17:09:29.134482   27962 main.go:141] libmachine: (ha-135993-m03) DBG | found existing private KVM network mk-ha-135993
	I0920 17:09:29.134586   27962 main.go:141] libmachine: (ha-135993-m03) Setting up store path in /home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993-m03 ...
	I0920 17:09:29.134610   27962 main.go:141] libmachine: (ha-135993-m03) Building disk image from file:///home/jenkins/minikube-integration/19672-8777/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso
	I0920 17:09:29.134709   27962 main.go:141] libmachine: (ha-135993-m03) DBG | I0920 17:09:29.134570   28745 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19672-8777/.minikube
	I0920 17:09:29.134788   27962 main.go:141] libmachine: (ha-135993-m03) Downloading /home/jenkins/minikube-integration/19672-8777/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19672-8777/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso...
	I0920 17:09:29.623687   27962 main.go:141] libmachine: (ha-135993-m03) DBG | I0920 17:09:29.623559   28745 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993-m03/id_rsa...
	I0920 17:09:29.849339   27962 main.go:141] libmachine: (ha-135993-m03) DBG | I0920 17:09:29.849213   28745 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993-m03/ha-135993-m03.rawdisk...
	I0920 17:09:29.849379   27962 main.go:141] libmachine: (ha-135993-m03) DBG | Writing magic tar header
	I0920 17:09:29.849390   27962 main.go:141] libmachine: (ha-135993-m03) DBG | Writing SSH key tar header
	I0920 17:09:29.849398   27962 main.go:141] libmachine: (ha-135993-m03) DBG | I0920 17:09:29.849332   28745 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993-m03 ...
	I0920 17:09:29.849416   27962 main.go:141] libmachine: (ha-135993-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993-m03
	I0920 17:09:29.849450   27962 main.go:141] libmachine: (ha-135993-m03) Setting executable bit set on /home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993-m03 (perms=drwx------)
	I0920 17:09:29.849472   27962 main.go:141] libmachine: (ha-135993-m03) Setting executable bit set on /home/jenkins/minikube-integration/19672-8777/.minikube/machines (perms=drwxr-xr-x)
	I0920 17:09:29.849487   27962 main.go:141] libmachine: (ha-135993-m03) Setting executable bit set on /home/jenkins/minikube-integration/19672-8777/.minikube (perms=drwxr-xr-x)
	I0920 17:09:29.849501   27962 main.go:141] libmachine: (ha-135993-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-8777/.minikube/machines
	I0920 17:09:29.849511   27962 main.go:141] libmachine: (ha-135993-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-8777/.minikube
	I0920 17:09:29.849524   27962 main.go:141] libmachine: (ha-135993-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-8777
	I0920 17:09:29.849537   27962 main.go:141] libmachine: (ha-135993-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0920 17:09:29.849559   27962 main.go:141] libmachine: (ha-135993-m03) Setting executable bit set on /home/jenkins/minikube-integration/19672-8777 (perms=drwxrwxr-x)
	I0920 17:09:29.849572   27962 main.go:141] libmachine: (ha-135993-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0920 17:09:29.849581   27962 main.go:141] libmachine: (ha-135993-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0920 17:09:29.849589   27962 main.go:141] libmachine: (ha-135993-m03) DBG | Checking permissions on dir: /home/jenkins
	I0920 17:09:29.849596   27962 main.go:141] libmachine: (ha-135993-m03) DBG | Checking permissions on dir: /home
	I0920 17:09:29.849612   27962 main.go:141] libmachine: (ha-135993-m03) DBG | Skipping /home - not owner
	I0920 17:09:29.849623   27962 main.go:141] libmachine: (ha-135993-m03) Creating domain...
	I0920 17:09:29.850674   27962 main.go:141] libmachine: (ha-135993-m03) define libvirt domain using xml: 
	I0920 17:09:29.850697   27962 main.go:141] libmachine: (ha-135993-m03) <domain type='kvm'>
	I0920 17:09:29.850706   27962 main.go:141] libmachine: (ha-135993-m03)   <name>ha-135993-m03</name>
	I0920 17:09:29.850718   27962 main.go:141] libmachine: (ha-135993-m03)   <memory unit='MiB'>2200</memory>
	I0920 17:09:29.850725   27962 main.go:141] libmachine: (ha-135993-m03)   <vcpu>2</vcpu>
	I0920 17:09:29.850730   27962 main.go:141] libmachine: (ha-135993-m03)   <features>
	I0920 17:09:29.850737   27962 main.go:141] libmachine: (ha-135993-m03)     <acpi/>
	I0920 17:09:29.850744   27962 main.go:141] libmachine: (ha-135993-m03)     <apic/>
	I0920 17:09:29.850757   27962 main.go:141] libmachine: (ha-135993-m03)     <pae/>
	I0920 17:09:29.850769   27962 main.go:141] libmachine: (ha-135993-m03)     
	I0920 17:09:29.850776   27962 main.go:141] libmachine: (ha-135993-m03)   </features>
	I0920 17:09:29.850783   27962 main.go:141] libmachine: (ha-135993-m03)   <cpu mode='host-passthrough'>
	I0920 17:09:29.850803   27962 main.go:141] libmachine: (ha-135993-m03)   
	I0920 17:09:29.850826   27962 main.go:141] libmachine: (ha-135993-m03)   </cpu>
	I0920 17:09:29.850834   27962 main.go:141] libmachine: (ha-135993-m03)   <os>
	I0920 17:09:29.850839   27962 main.go:141] libmachine: (ha-135993-m03)     <type>hvm</type>
	I0920 17:09:29.850844   27962 main.go:141] libmachine: (ha-135993-m03)     <boot dev='cdrom'/>
	I0920 17:09:29.850850   27962 main.go:141] libmachine: (ha-135993-m03)     <boot dev='hd'/>
	I0920 17:09:29.850855   27962 main.go:141] libmachine: (ha-135993-m03)     <bootmenu enable='no'/>
	I0920 17:09:29.850866   27962 main.go:141] libmachine: (ha-135993-m03)   </os>
	I0920 17:09:29.850873   27962 main.go:141] libmachine: (ha-135993-m03)   <devices>
	I0920 17:09:29.850878   27962 main.go:141] libmachine: (ha-135993-m03)     <disk type='file' device='cdrom'>
	I0920 17:09:29.850887   27962 main.go:141] libmachine: (ha-135993-m03)       <source file='/home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993-m03/boot2docker.iso'/>
	I0920 17:09:29.850894   27962 main.go:141] libmachine: (ha-135993-m03)       <target dev='hdc' bus='scsi'/>
	I0920 17:09:29.850925   27962 main.go:141] libmachine: (ha-135993-m03)       <readonly/>
	I0920 17:09:29.850951   27962 main.go:141] libmachine: (ha-135993-m03)     </disk>
	I0920 17:09:29.850962   27962 main.go:141] libmachine: (ha-135993-m03)     <disk type='file' device='disk'>
	I0920 17:09:29.850974   27962 main.go:141] libmachine: (ha-135993-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0920 17:09:29.850990   27962 main.go:141] libmachine: (ha-135993-m03)       <source file='/home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993-m03/ha-135993-m03.rawdisk'/>
	I0920 17:09:29.851010   27962 main.go:141] libmachine: (ha-135993-m03)       <target dev='hda' bus='virtio'/>
	I0920 17:09:29.851030   27962 main.go:141] libmachine: (ha-135993-m03)     </disk>
	I0920 17:09:29.851045   27962 main.go:141] libmachine: (ha-135993-m03)     <interface type='network'>
	I0920 17:09:29.851055   27962 main.go:141] libmachine: (ha-135993-m03)       <source network='mk-ha-135993'/>
	I0920 17:09:29.851062   27962 main.go:141] libmachine: (ha-135993-m03)       <model type='virtio'/>
	I0920 17:09:29.851069   27962 main.go:141] libmachine: (ha-135993-m03)     </interface>
	I0920 17:09:29.851077   27962 main.go:141] libmachine: (ha-135993-m03)     <interface type='network'>
	I0920 17:09:29.851085   27962 main.go:141] libmachine: (ha-135993-m03)       <source network='default'/>
	I0920 17:09:29.851090   27962 main.go:141] libmachine: (ha-135993-m03)       <model type='virtio'/>
	I0920 17:09:29.851095   27962 main.go:141] libmachine: (ha-135993-m03)     </interface>
	I0920 17:09:29.851101   27962 main.go:141] libmachine: (ha-135993-m03)     <serial type='pty'>
	I0920 17:09:29.851109   27962 main.go:141] libmachine: (ha-135993-m03)       <target port='0'/>
	I0920 17:09:29.851115   27962 main.go:141] libmachine: (ha-135993-m03)     </serial>
	I0920 17:09:29.851133   27962 main.go:141] libmachine: (ha-135993-m03)     <console type='pty'>
	I0920 17:09:29.851153   27962 main.go:141] libmachine: (ha-135993-m03)       <target type='serial' port='0'/>
	I0920 17:09:29.851165   27962 main.go:141] libmachine: (ha-135993-m03)     </console>
	I0920 17:09:29.851172   27962 main.go:141] libmachine: (ha-135993-m03)     <rng model='virtio'>
	I0920 17:09:29.851184   27962 main.go:141] libmachine: (ha-135993-m03)       <backend model='random'>/dev/random</backend>
	I0920 17:09:29.851194   27962 main.go:141] libmachine: (ha-135993-m03)     </rng>
	I0920 17:09:29.851201   27962 main.go:141] libmachine: (ha-135993-m03)     
	I0920 17:09:29.851209   27962 main.go:141] libmachine: (ha-135993-m03)     
	I0920 17:09:29.851215   27962 main.go:141] libmachine: (ha-135993-m03)   </devices>
	I0920 17:09:29.851224   27962 main.go:141] libmachine: (ha-135993-m03) </domain>
	I0920 17:09:29.851251   27962 main.go:141] libmachine: (ha-135993-m03) 
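
	The XML printed above is the libvirt domain definition for the new ha-135993-m03 VM; minikube's kvm2 driver submits it through the libvirt API. As a rough illustration only, an equivalent definition saved to a file could be loaded by hand with virsh; a small Go sketch shelling out to virsh (the XML file name is an assumption for the example):

	    // Illustrative sketch only: define and start a libvirt domain from XML via virsh.
	    package main

	    import (
	        "fmt"
	        "os/exec"
	    )

	    func main() {
	        for _, args := range [][]string{
	            {"virsh", "--connect", "qemu:///system", "define", "ha-135993-m03.xml"}, // assumed file
	            {"virsh", "--connect", "qemu:///system", "start", "ha-135993-m03"},
	        } {
	            out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
	            fmt.Printf("%s\n", out)
	            if err != nil {
	                panic(err)
	            }
	        }
	    }
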
	I0920 17:09:29.858905   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined MAC address 52:54:00:e3:0b:70 in network default
	I0920 17:09:29.859443   27962 main.go:141] libmachine: (ha-135993-m03) Ensuring networks are active...
	I0920 17:09:29.859461   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:29.860217   27962 main.go:141] libmachine: (ha-135993-m03) Ensuring network default is active
	I0920 17:09:29.860531   27962 main.go:141] libmachine: (ha-135993-m03) Ensuring network mk-ha-135993 is active
	I0920 17:09:29.860904   27962 main.go:141] libmachine: (ha-135993-m03) Getting domain xml...
	I0920 17:09:29.861590   27962 main.go:141] libmachine: (ha-135993-m03) Creating domain...
	I0920 17:09:31.187018   27962 main.go:141] libmachine: (ha-135993-m03) Waiting to get IP...
	I0920 17:09:31.187715   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:31.188084   27962 main.go:141] libmachine: (ha-135993-m03) DBG | unable to find current IP address of domain ha-135993-m03 in network mk-ha-135993
	I0920 17:09:31.188106   27962 main.go:141] libmachine: (ha-135993-m03) DBG | I0920 17:09:31.188068   28745 retry.go:31] will retry after 213.512063ms: waiting for machine to come up
	I0920 17:09:31.403627   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:31.404039   27962 main.go:141] libmachine: (ha-135993-m03) DBG | unable to find current IP address of domain ha-135993-m03 in network mk-ha-135993
	I0920 17:09:31.404070   27962 main.go:141] libmachine: (ha-135993-m03) DBG | I0920 17:09:31.403991   28745 retry.go:31] will retry after 361.212458ms: waiting for machine to come up
	I0920 17:09:31.766642   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:31.767089   27962 main.go:141] libmachine: (ha-135993-m03) DBG | unable to find current IP address of domain ha-135993-m03 in network mk-ha-135993
	I0920 17:09:31.767116   27962 main.go:141] libmachine: (ha-135993-m03) DBG | I0920 17:09:31.767037   28745 retry.go:31] will retry after 376.833715ms: waiting for machine to come up
	I0920 17:09:32.145427   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:32.145898   27962 main.go:141] libmachine: (ha-135993-m03) DBG | unable to find current IP address of domain ha-135993-m03 in network mk-ha-135993
	I0920 17:09:32.145947   27962 main.go:141] libmachine: (ha-135993-m03) DBG | I0920 17:09:32.145871   28745 retry.go:31] will retry after 557.65015ms: waiting for machine to come up
	I0920 17:09:32.705540   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:32.705975   27962 main.go:141] libmachine: (ha-135993-m03) DBG | unable to find current IP address of domain ha-135993-m03 in network mk-ha-135993
	I0920 17:09:32.706023   27962 main.go:141] libmachine: (ha-135993-m03) DBG | I0920 17:09:32.705956   28745 retry.go:31] will retry after 695.507494ms: waiting for machine to come up
	I0920 17:09:33.402909   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:33.403356   27962 main.go:141] libmachine: (ha-135993-m03) DBG | unable to find current IP address of domain ha-135993-m03 in network mk-ha-135993
	I0920 17:09:33.403389   27962 main.go:141] libmachine: (ha-135993-m03) DBG | I0920 17:09:33.403304   28745 retry.go:31] will retry after 645.712565ms: waiting for machine to come up
	I0920 17:09:34.051477   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:34.052378   27962 main.go:141] libmachine: (ha-135993-m03) DBG | unable to find current IP address of domain ha-135993-m03 in network mk-ha-135993
	I0920 17:09:34.052405   27962 main.go:141] libmachine: (ha-135993-m03) DBG | I0920 17:09:34.052280   28745 retry.go:31] will retry after 770.593421ms: waiting for machine to come up
	I0920 17:09:34.824986   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:34.825490   27962 main.go:141] libmachine: (ha-135993-m03) DBG | unable to find current IP address of domain ha-135993-m03 in network mk-ha-135993
	I0920 17:09:34.825514   27962 main.go:141] libmachine: (ha-135993-m03) DBG | I0920 17:09:34.825451   28745 retry.go:31] will retry after 1.327368797s: waiting for machine to come up
	I0920 17:09:36.154205   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:36.154624   27962 main.go:141] libmachine: (ha-135993-m03) DBG | unable to find current IP address of domain ha-135993-m03 in network mk-ha-135993
	I0920 17:09:36.154646   27962 main.go:141] libmachine: (ha-135993-m03) DBG | I0920 17:09:36.154579   28745 retry.go:31] will retry after 1.581269715s: waiting for machine to come up
	I0920 17:09:37.738322   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:37.738736   27962 main.go:141] libmachine: (ha-135993-m03) DBG | unable to find current IP address of domain ha-135993-m03 in network mk-ha-135993
	I0920 17:09:37.738762   27962 main.go:141] libmachine: (ha-135993-m03) DBG | I0920 17:09:37.738689   28745 retry.go:31] will retry after 1.459267896s: waiting for machine to come up
	I0920 17:09:39.199274   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:39.199678   27962 main.go:141] libmachine: (ha-135993-m03) DBG | unable to find current IP address of domain ha-135993-m03 in network mk-ha-135993
	I0920 17:09:39.199706   27962 main.go:141] libmachine: (ha-135993-m03) DBG | I0920 17:09:39.199627   28745 retry.go:31] will retry after 2.386585249s: waiting for machine to come up
	I0920 17:09:41.588281   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:41.588804   27962 main.go:141] libmachine: (ha-135993-m03) DBG | unable to find current IP address of domain ha-135993-m03 in network mk-ha-135993
	I0920 17:09:41.588834   27962 main.go:141] libmachine: (ha-135993-m03) DBG | I0920 17:09:41.588752   28745 retry.go:31] will retry after 2.639705596s: waiting for machine to come up
	I0920 17:09:44.229971   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:44.230371   27962 main.go:141] libmachine: (ha-135993-m03) DBG | unable to find current IP address of domain ha-135993-m03 in network mk-ha-135993
	I0920 17:09:44.230422   27962 main.go:141] libmachine: (ha-135993-m03) DBG | I0920 17:09:44.230347   28745 retry.go:31] will retry after 3.819742823s: waiting for machine to come up
	I0920 17:09:48.054340   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:48.054705   27962 main.go:141] libmachine: (ha-135993-m03) DBG | unable to find current IP address of domain ha-135993-m03 in network mk-ha-135993
	I0920 17:09:48.054731   27962 main.go:141] libmachine: (ha-135993-m03) DBG | I0920 17:09:48.054671   28745 retry.go:31] will retry after 4.961691445s: waiting for machine to come up
	I0920 17:09:53.018825   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:53.019259   27962 main.go:141] libmachine: (ha-135993-m03) Found IP for machine: 192.168.39.133
	I0920 17:09:53.019281   27962 main.go:141] libmachine: (ha-135993-m03) Reserving static IP address...
	I0920 17:09:53.019295   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has current primary IP address 192.168.39.133 and MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:53.019682   27962 main.go:141] libmachine: (ha-135993-m03) DBG | unable to find host DHCP lease matching {name: "ha-135993-m03", mac: "52:54:00:4a:49:98", ip: "192.168.39.133"} in network mk-ha-135993
	I0920 17:09:53.093855   27962 main.go:141] libmachine: (ha-135993-m03) DBG | Getting to WaitForSSH function...
	I0920 17:09:53.093888   27962 main.go:141] libmachine: (ha-135993-m03) Reserved static IP address: 192.168.39.133
	I0920 17:09:53.093913   27962 main.go:141] libmachine: (ha-135993-m03) Waiting for SSH to be available...
	I0920 17:09:53.096549   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:53.096917   27962 main.go:141] libmachine: (ha-135993-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:49:98", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:09:45 +0000 UTC Type:0 Mac:52:54:00:4a:49:98 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:minikube Clientid:01:52:54:00:4a:49:98}
	I0920 17:09:53.096942   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:53.097072   27962 main.go:141] libmachine: (ha-135993-m03) DBG | Using SSH client type: external
	I0920 17:09:53.097099   27962 main.go:141] libmachine: (ha-135993-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993-m03/id_rsa (-rw-------)
	I0920 17:09:53.097137   27962 main.go:141] libmachine: (ha-135993-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.133 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 17:09:53.097159   27962 main.go:141] libmachine: (ha-135993-m03) DBG | About to run SSH command:
	I0920 17:09:53.097174   27962 main.go:141] libmachine: (ha-135993-m03) DBG | exit 0
	I0920 17:09:53.225462   27962 main.go:141] libmachine: (ha-135993-m03) DBG | SSH cmd err, output: <nil>: 
	I0920 17:09:53.225738   27962 main.go:141] libmachine: (ha-135993-m03) KVM machine creation complete!
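
	Machine creation above is a retry loop with a growing delay: look for the VM's IP in the DHCP leases, sleep, retry, then probe SSH with "exit 0" once an address appears. A minimal standard-library sketch of that backoff pattern (lookupIP is a hypothetical stand-in for the lease lookup, not the kvm2 driver's code):

	    // Illustrative sketch only: grow-the-delay retry while waiting for a VM IP.
	    package main

	    import (
	        "errors"
	        "fmt"
	        "time"
	    )

	    var errNoIP = errors.New("no IP yet")

	    // lookupIP is a placeholder; a real implementation would query libvirt's DHCP leases.
	    func lookupIP() (string, error) {
	        return "", errNoIP
	    }

	    func waitForIP(timeout time.Duration) (string, error) {
	        delay := 200 * time.Millisecond
	        deadline := time.Now().Add(timeout)
	        for time.Now().Before(deadline) {
	            if ip, err := lookupIP(); err == nil {
	                return ip, nil
	            }
	            fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
	            time.Sleep(delay)
	            if delay < 5*time.Second {
	                delay *= 2 // grow the delay between attempts, roughly as in the log above
	            }
	        }
	        return "", fmt.Errorf("machine did not get an IP within %v", timeout)
	    }

	    func main() {
	        fmt.Println(waitForIP(3 * time.Second))
	    }
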
	I0920 17:09:53.226079   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetConfigRaw
	I0920 17:09:53.226700   27962 main.go:141] libmachine: (ha-135993-m03) Calling .DriverName
	I0920 17:09:53.226858   27962 main.go:141] libmachine: (ha-135993-m03) Calling .DriverName
	I0920 17:09:53.226985   27962 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0920 17:09:53.226999   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetState
	I0920 17:09:53.228014   27962 main.go:141] libmachine: Detecting operating system of created instance...
	I0920 17:09:53.228026   27962 main.go:141] libmachine: Waiting for SSH to be available...
	I0920 17:09:53.228031   27962 main.go:141] libmachine: Getting to WaitForSSH function...
	I0920 17:09:53.228038   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHHostname
	I0920 17:09:53.230141   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:53.230494   27962 main.go:141] libmachine: (ha-135993-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:49:98", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:09:45 +0000 UTC Type:0 Mac:52:54:00:4a:49:98 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-135993-m03 Clientid:01:52:54:00:4a:49:98}
	I0920 17:09:53.230517   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:53.230669   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHPort
	I0920 17:09:53.230844   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHKeyPath
	I0920 17:09:53.230948   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHKeyPath
	I0920 17:09:53.231082   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHUsername
	I0920 17:09:53.231200   27962 main.go:141] libmachine: Using SSH client type: native
	I0920 17:09:53.231420   27962 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.133 22 <nil> <nil>}
	I0920 17:09:53.231435   27962 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0920 17:09:53.341375   27962 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 17:09:53.341396   27962 main.go:141] libmachine: Detecting the provisioner...
	I0920 17:09:53.341403   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHHostname
	I0920 17:09:53.344112   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:53.344480   27962 main.go:141] libmachine: (ha-135993-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:49:98", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:09:45 +0000 UTC Type:0 Mac:52:54:00:4a:49:98 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-135993-m03 Clientid:01:52:54:00:4a:49:98}
	I0920 17:09:53.344511   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:53.344666   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHPort
	I0920 17:09:53.344839   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHKeyPath
	I0920 17:09:53.344987   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHKeyPath
	I0920 17:09:53.345174   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHUsername
	I0920 17:09:53.345354   27962 main.go:141] libmachine: Using SSH client type: native
	I0920 17:09:53.345510   27962 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.133 22 <nil> <nil>}
	I0920 17:09:53.345521   27962 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0920 17:09:53.458337   27962 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0920 17:09:53.458388   27962 main.go:141] libmachine: found compatible host: buildroot
	I0920 17:09:53.458394   27962 main.go:141] libmachine: Provisioning with buildroot...
	I0920 17:09:53.458407   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetMachineName
	I0920 17:09:53.458649   27962 buildroot.go:166] provisioning hostname "ha-135993-m03"
	I0920 17:09:53.458675   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetMachineName
	I0920 17:09:53.458849   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHHostname
	I0920 17:09:53.461596   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:53.461987   27962 main.go:141] libmachine: (ha-135993-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:49:98", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:09:45 +0000 UTC Type:0 Mac:52:54:00:4a:49:98 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-135993-m03 Clientid:01:52:54:00:4a:49:98}
	I0920 17:09:53.462013   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:53.462204   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHPort
	I0920 17:09:53.462360   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHKeyPath
	I0920 17:09:53.462538   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHKeyPath
	I0920 17:09:53.462693   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHUsername
	I0920 17:09:53.462836   27962 main.go:141] libmachine: Using SSH client type: native
	I0920 17:09:53.463061   27962 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.133 22 <nil> <nil>}
	I0920 17:09:53.463079   27962 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-135993-m03 && echo "ha-135993-m03" | sudo tee /etc/hostname
	I0920 17:09:53.590131   27962 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-135993-m03
	
	I0920 17:09:53.590160   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHHostname
	I0920 17:09:53.592877   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:53.593210   27962 main.go:141] libmachine: (ha-135993-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:49:98", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:09:45 +0000 UTC Type:0 Mac:52:54:00:4a:49:98 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-135993-m03 Clientid:01:52:54:00:4a:49:98}
	I0920 17:09:53.593257   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:53.593412   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHPort
	I0920 17:09:53.593615   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHKeyPath
	I0920 17:09:53.593758   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHKeyPath
	I0920 17:09:53.593944   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHUsername
	I0920 17:09:53.594124   27962 main.go:141] libmachine: Using SSH client type: native
	I0920 17:09:53.594335   27962 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.133 22 <nil> <nil>}
	I0920 17:09:53.594356   27962 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-135993-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-135993-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-135993-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 17:09:53.715013   27962 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 17:09:53.715044   27962 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19672-8777/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-8777/.minikube}
	I0920 17:09:53.715074   27962 buildroot.go:174] setting up certificates
	I0920 17:09:53.715086   27962 provision.go:84] configureAuth start
	I0920 17:09:53.715098   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetMachineName
	I0920 17:09:53.715402   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetIP
	I0920 17:09:53.718102   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:53.718382   27962 main.go:141] libmachine: (ha-135993-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:49:98", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:09:45 +0000 UTC Type:0 Mac:52:54:00:4a:49:98 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-135993-m03 Clientid:01:52:54:00:4a:49:98}
	I0920 17:09:53.718400   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:53.718579   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHHostname
	I0920 17:09:53.720967   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:53.721315   27962 main.go:141] libmachine: (ha-135993-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:49:98", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:09:45 +0000 UTC Type:0 Mac:52:54:00:4a:49:98 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-135993-m03 Clientid:01:52:54:00:4a:49:98}
	I0920 17:09:53.721341   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:53.721476   27962 provision.go:143] copyHostCerts
	I0920 17:09:53.721506   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem
	I0920 17:09:53.721536   27962 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem, removing ...
	I0920 17:09:53.721544   27962 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem
	I0920 17:09:53.721632   27962 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem (1082 bytes)
	I0920 17:09:53.721706   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem
	I0920 17:09:53.721728   27962 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem, removing ...
	I0920 17:09:53.721734   27962 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem
	I0920 17:09:53.721757   27962 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem (1123 bytes)
	I0920 17:09:53.721801   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem
	I0920 17:09:53.721822   27962 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem, removing ...
	I0920 17:09:53.721828   27962 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem
	I0920 17:09:53.721880   27962 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem (1675 bytes)
	I0920 17:09:53.721951   27962 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem org=jenkins.ha-135993-m03 san=[127.0.0.1 192.168.39.133 ha-135993-m03 localhost minikube]
	I0920 17:09:53.848713   27962 provision.go:177] copyRemoteCerts
	I0920 17:09:53.848773   27962 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 17:09:53.848800   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHHostname
	I0920 17:09:53.851795   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:53.852202   27962 main.go:141] libmachine: (ha-135993-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:49:98", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:09:45 +0000 UTC Type:0 Mac:52:54:00:4a:49:98 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-135993-m03 Clientid:01:52:54:00:4a:49:98}
	I0920 17:09:53.852234   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:53.852521   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHPort
	I0920 17:09:53.852708   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHKeyPath
	I0920 17:09:53.852882   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHUsername
	I0920 17:09:53.853058   27962 sshutil.go:53] new ssh client: &{IP:192.168.39.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993-m03/id_rsa Username:docker}
	I0920 17:09:53.939365   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0920 17:09:53.939433   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0920 17:09:53.962495   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0920 17:09:53.962567   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0920 17:09:53.985499   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0920 17:09:53.985574   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0920 17:09:54.008320   27962 provision.go:87] duration metric: took 293.220585ms to configureAuth
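
configureAuth above signs a fresh server certificate with the minikube CA, using exactly the SANs listed in the san=[...] line, before the scp calls push server.pem, server-key.pem and ca.pem to /etc/docker on the new node. Below is a minimal, self-contained Go sketch of that general technique using only the standard library; the in-process CA, RSA key size and three-year lifetime are illustrative assumptions, not minikube's actual implementation.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Hypothetical in-process CA; minikube instead loads ca.pem/ca-key.pem
	// from ~/.minikube/certs. Error handling elided for brevity.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{Organization: []string{"example-ca"}},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(3, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert with the SANs from the "generating server cert" log line.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-135993-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.133")},
		DNSNames:     []string{"ha-135993-m03", "localhost", "minikube"},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
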
	I0920 17:09:54.008349   27962 buildroot.go:189] setting minikube options for container-runtime
	I0920 17:09:54.008604   27962 config.go:182] Loaded profile config "ha-135993": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 17:09:54.008700   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHHostname
	I0920 17:09:54.011605   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:54.011968   27962 main.go:141] libmachine: (ha-135993-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:49:98", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:09:45 +0000 UTC Type:0 Mac:52:54:00:4a:49:98 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-135993-m03 Clientid:01:52:54:00:4a:49:98}
	I0920 17:09:54.012001   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:54.012140   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHPort
	I0920 17:09:54.012318   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHKeyPath
	I0920 17:09:54.012493   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHKeyPath
	I0920 17:09:54.012609   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHUsername
	I0920 17:09:54.012754   27962 main.go:141] libmachine: Using SSH client type: native
	I0920 17:09:54.012956   27962 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.133 22 <nil> <nil>}
	I0920 17:09:54.012972   27962 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 17:09:54.245416   27962 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 17:09:54.245443   27962 main.go:141] libmachine: Checking connection to Docker...
	I0920 17:09:54.245453   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetURL
	I0920 17:09:54.246780   27962 main.go:141] libmachine: (ha-135993-m03) DBG | Using libvirt version 6000000
	I0920 17:09:54.249527   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:54.249947   27962 main.go:141] libmachine: (ha-135993-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:49:98", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:09:45 +0000 UTC Type:0 Mac:52:54:00:4a:49:98 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-135993-m03 Clientid:01:52:54:00:4a:49:98}
	I0920 17:09:54.249971   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:54.250158   27962 main.go:141] libmachine: Docker is up and running!
	I0920 17:09:54.250187   27962 main.go:141] libmachine: Reticulating splines...
	I0920 17:09:54.250195   27962 client.go:171] duration metric: took 25.118268806s to LocalClient.Create
	I0920 17:09:54.250222   27962 start.go:167] duration metric: took 25.118338101s to libmachine.API.Create "ha-135993"
	I0920 17:09:54.250241   27962 start.go:293] postStartSetup for "ha-135993-m03" (driver="kvm2")
	I0920 17:09:54.250252   27962 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 17:09:54.250268   27962 main.go:141] libmachine: (ha-135993-m03) Calling .DriverName
	I0920 17:09:54.250588   27962 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 17:09:54.250617   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHHostname
	I0920 17:09:54.252892   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:54.253325   27962 main.go:141] libmachine: (ha-135993-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:49:98", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:09:45 +0000 UTC Type:0 Mac:52:54:00:4a:49:98 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-135993-m03 Clientid:01:52:54:00:4a:49:98}
	I0920 17:09:54.253360   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:54.253498   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHPort
	I0920 17:09:54.253673   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHKeyPath
	I0920 17:09:54.253825   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHUsername
	I0920 17:09:54.253986   27962 sshutil.go:53] new ssh client: &{IP:192.168.39.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993-m03/id_rsa Username:docker}
	I0920 17:09:54.339595   27962 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 17:09:54.343490   27962 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 17:09:54.343513   27962 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8777/.minikube/addons for local assets ...
	I0920 17:09:54.343594   27962 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8777/.minikube/files for local assets ...
	I0920 17:09:54.343690   27962 filesync.go:149] local asset: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem -> 159732.pem in /etc/ssl/certs
	I0920 17:09:54.343700   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem -> /etc/ssl/certs/159732.pem
	I0920 17:09:54.343811   27962 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 17:09:54.352574   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem --> /etc/ssl/certs/159732.pem (1708 bytes)
	I0920 17:09:54.376021   27962 start.go:296] duration metric: took 125.763298ms for postStartSetup
	I0920 17:09:54.376085   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetConfigRaw
	I0920 17:09:54.376726   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetIP
	I0920 17:09:54.379455   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:54.379860   27962 main.go:141] libmachine: (ha-135993-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:49:98", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:09:45 +0000 UTC Type:0 Mac:52:54:00:4a:49:98 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-135993-m03 Clientid:01:52:54:00:4a:49:98}
	I0920 17:09:54.379889   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:54.380133   27962 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/config.json ...
	I0920 17:09:54.380334   27962 start.go:128] duration metric: took 25.268094288s to createHost
	I0920 17:09:54.380356   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHHostname
	I0920 17:09:54.382551   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:54.382926   27962 main.go:141] libmachine: (ha-135993-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:49:98", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:09:45 +0000 UTC Type:0 Mac:52:54:00:4a:49:98 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-135993-m03 Clientid:01:52:54:00:4a:49:98}
	I0920 17:09:54.382948   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:54.383118   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHPort
	I0920 17:09:54.383308   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHKeyPath
	I0920 17:09:54.383448   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHKeyPath
	I0920 17:09:54.383614   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHUsername
	I0920 17:09:54.383768   27962 main.go:141] libmachine: Using SSH client type: native
	I0920 17:09:54.383925   27962 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.133 22 <nil> <nil>}
	I0920 17:09:54.383934   27962 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 17:09:54.498180   27962 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726852194.467876031
	
	I0920 17:09:54.498204   27962 fix.go:216] guest clock: 1726852194.467876031
	I0920 17:09:54.498211   27962 fix.go:229] Guest: 2024-09-20 17:09:54.467876031 +0000 UTC Remote: 2024-09-20 17:09:54.38034625 +0000 UTC m=+146.191055828 (delta=87.529781ms)
	I0920 17:09:54.498227   27962 fix.go:200] guest clock delta is within tolerance: 87.529781ms
	I0920 17:09:54.498231   27962 start.go:83] releasing machines lock for "ha-135993-m03", held for 25.386097949s
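
The guest-clock check above runs `date +%s.%N` over SSH and compares the result with the host clock; the ~87ms delta is accepted as within tolerance. A minimal Go sketch of that comparison follows (the helper name and the nine-digit fractional-part assumption are illustrative, not minikube's exact code):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// guestClockDelta parses `date +%s.%N` output (assuming a 9-digit fractional
// part) and returns guest time minus host time.
func guestClockDelta(guestOut string, host time.Time) (time.Duration, error) {
	parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return 0, err
		}
	}
	return time.Unix(sec, nsec).Sub(host), nil
}

func main() {
	// Values taken from the log lines above.
	host := time.Unix(1726852194, 380346250)
	delta, _ := guestClockDelta("1726852194.467876031", host)
	fmt.Println("guest clock delta:", delta) // prints ~87.529781ms
}
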
	I0920 17:09:54.498253   27962 main.go:141] libmachine: (ha-135993-m03) Calling .DriverName
	I0920 17:09:54.498534   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetIP
	I0920 17:09:54.501028   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:54.501386   27962 main.go:141] libmachine: (ha-135993-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:49:98", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:09:45 +0000 UTC Type:0 Mac:52:54:00:4a:49:98 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-135993-m03 Clientid:01:52:54:00:4a:49:98}
	I0920 17:09:54.501414   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:54.503574   27962 out.go:177] * Found network options:
	I0920 17:09:54.504800   27962 out.go:177]   - NO_PROXY=192.168.39.60,192.168.39.227
	W0920 17:09:54.505950   27962 proxy.go:119] fail to check proxy env: Error ip not in block
	W0920 17:09:54.505970   27962 proxy.go:119] fail to check proxy env: Error ip not in block
	I0920 17:09:54.505986   27962 main.go:141] libmachine: (ha-135993-m03) Calling .DriverName
	I0920 17:09:54.506533   27962 main.go:141] libmachine: (ha-135993-m03) Calling .DriverName
	I0920 17:09:54.506677   27962 main.go:141] libmachine: (ha-135993-m03) Calling .DriverName
	I0920 17:09:54.506748   27962 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 17:09:54.506777   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHHostname
	W0920 17:09:54.506811   27962 proxy.go:119] fail to check proxy env: Error ip not in block
	W0920 17:09:54.506837   27962 proxy.go:119] fail to check proxy env: Error ip not in block
	I0920 17:09:54.506918   27962 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 17:09:54.506942   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHHostname
	I0920 17:09:54.510430   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:54.510572   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:54.510840   27962 main.go:141] libmachine: (ha-135993-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:49:98", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:09:45 +0000 UTC Type:0 Mac:52:54:00:4a:49:98 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-135993-m03 Clientid:01:52:54:00:4a:49:98}
	I0920 17:09:54.510857   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:54.511009   27962 main.go:141] libmachine: (ha-135993-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:49:98", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:09:45 +0000 UTC Type:0 Mac:52:54:00:4a:49:98 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-135993-m03 Clientid:01:52:54:00:4a:49:98}
	I0920 17:09:54.511022   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHPort
	I0920 17:09:54.511025   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:54.511158   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHPort
	I0920 17:09:54.511238   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHKeyPath
	I0920 17:09:54.511306   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHKeyPath
	I0920 17:09:54.511366   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHUsername
	I0920 17:09:54.511419   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHUsername
	I0920 17:09:54.511477   27962 sshutil.go:53] new ssh client: &{IP:192.168.39.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993-m03/id_rsa Username:docker}
	I0920 17:09:54.511516   27962 sshutil.go:53] new ssh client: &{IP:192.168.39.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993-m03/id_rsa Username:docker}
	I0920 17:09:54.752778   27962 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 17:09:54.758470   27962 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 17:09:54.758545   27962 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 17:09:54.777293   27962 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 17:09:54.777314   27962 start.go:495] detecting cgroup driver to use...
	I0920 17:09:54.777373   27962 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 17:09:54.794867   27962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 17:09:54.812379   27962 docker.go:217] disabling cri-docker service (if available) ...
	I0920 17:09:54.812435   27962 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 17:09:54.829513   27962 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 17:09:54.844058   27962 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 17:09:54.965032   27962 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 17:09:55.105410   27962 docker.go:233] disabling docker service ...
	I0920 17:09:55.105473   27962 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 17:09:55.119024   27962 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 17:09:55.131474   27962 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 17:09:55.280550   27962 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 17:09:55.424589   27962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 17:09:55.438591   27962 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 17:09:55.457023   27962 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 17:09:55.457079   27962 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:09:55.469113   27962 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 17:09:55.469204   27962 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:09:55.480768   27962 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:09:55.491997   27962 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:09:55.503252   27962 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 17:09:55.515007   27962 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:09:55.527072   27962 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:09:55.544868   27962 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:09:55.556070   27962 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 17:09:55.566274   27962 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 17:09:55.566347   27962 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 17:09:55.579815   27962 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
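
The three commands above form a check-then-fallback: the bridge-nf sysctl probe fails because br_netfilter is not loaded yet, so the module is loaded and IPv4 forwarding is enabled. A minimal Go sketch of the same pattern, with an assumed helper name rather than minikube's actual code:

package main

import (
	"fmt"
	"os/exec"
)

// ensureBridgeNetfilter mirrors the check-then-modprobe sequence in the log.
func ensureBridgeNetfilter() error {
	// If the sysctl is readable, br_netfilter is already loaded.
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err == nil {
		return nil
	}
	// Otherwise load the module, which creates /proc/sys/net/bridge/*.
	if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
		return fmt.Errorf("modprobe br_netfilter: %w", err)
	}
	// Kubernetes networking also needs IPv4 forwarding enabled.
	return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Println("netfilter setup failed:", err)
	}
}
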
	I0920 17:09:55.591271   27962 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 17:09:55.721172   27962 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 17:09:55.816671   27962 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 17:09:55.816750   27962 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 17:09:55.821593   27962 start.go:563] Will wait 60s for crictl version
	I0920 17:09:55.821670   27962 ssh_runner.go:195] Run: which crictl
	I0920 17:09:55.825326   27962 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 17:09:55.861139   27962 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 17:09:55.861214   27962 ssh_runner.go:195] Run: crio --version
	I0920 17:09:55.889848   27962 ssh_runner.go:195] Run: crio --version
	I0920 17:09:55.919422   27962 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 17:09:55.920775   27962 out.go:177]   - env NO_PROXY=192.168.39.60
	I0920 17:09:55.922083   27962 out.go:177]   - env NO_PROXY=192.168.39.60,192.168.39.227
	I0920 17:09:55.923747   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetIP
	I0920 17:09:55.926252   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:55.926556   27962 main.go:141] libmachine: (ha-135993-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:49:98", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:09:45 +0000 UTC Type:0 Mac:52:54:00:4a:49:98 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-135993-m03 Clientid:01:52:54:00:4a:49:98}
	I0920 17:09:55.926586   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:55.926743   27962 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0920 17:09:55.930814   27962 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 17:09:55.943504   27962 mustload.go:65] Loading cluster: ha-135993
	I0920 17:09:55.943748   27962 config.go:182] Loaded profile config "ha-135993": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 17:09:55.944067   27962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:09:55.944109   27962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:09:55.959177   27962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34285
	I0920 17:09:55.959707   27962 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:09:55.960208   27962 main.go:141] libmachine: Using API Version  1
	I0920 17:09:55.960231   27962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:09:55.960549   27962 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:09:55.960794   27962 main.go:141] libmachine: (ha-135993) Calling .GetState
	I0920 17:09:55.962489   27962 host.go:66] Checking if "ha-135993" exists ...
	I0920 17:09:55.962798   27962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:09:55.962843   27962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:09:55.977302   27962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45181
	I0920 17:09:55.977710   27962 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:09:55.978227   27962 main.go:141] libmachine: Using API Version  1
	I0920 17:09:55.978253   27962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:09:55.978558   27962 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:09:55.978742   27962 main.go:141] libmachine: (ha-135993) Calling .DriverName
	I0920 17:09:55.978879   27962 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993 for IP: 192.168.39.133
	I0920 17:09:55.978893   27962 certs.go:194] generating shared ca certs ...
	I0920 17:09:55.978913   27962 certs.go:226] acquiring lock for ca certs: {Name:mkc7ef6c737c6bdc3fdd9dcff8f57029c020d8f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:09:55.979064   27962 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key
	I0920 17:09:55.979123   27962 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key
	I0920 17:09:55.979137   27962 certs.go:256] generating profile certs ...
	I0920 17:09:55.979252   27962 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/client.key
	I0920 17:09:55.979287   27962 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.key.9a1e7345
	I0920 17:09:55.979305   27962 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.crt.9a1e7345 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.60 192.168.39.227 192.168.39.133 192.168.39.254]
	I0920 17:09:56.205622   27962 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.crt.9a1e7345 ...
	I0920 17:09:56.205652   27962 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.crt.9a1e7345: {Name:mk741001df891368c2b48ce6ca33636b00474c7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:09:56.205862   27962 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.key.9a1e7345 ...
	I0920 17:09:56.205885   27962 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.key.9a1e7345: {Name:mka8bfccee8c9e3909ae2b3c3cb9e59688362565 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:09:56.206039   27962 certs.go:381] copying /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.crt.9a1e7345 -> /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.crt
	I0920 17:09:56.206211   27962 certs.go:385] copying /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.key.9a1e7345 -> /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.key
	I0920 17:09:56.206388   27962 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/proxy-client.key
	I0920 17:09:56.206407   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0920 17:09:56.206426   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0920 17:09:56.206446   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0920 17:09:56.206464   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0920 17:09:56.206480   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0920 17:09:56.206494   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0920 17:09:56.206511   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0920 17:09:56.225918   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0920 17:09:56.225997   27962 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973.pem (1338 bytes)
	W0920 17:09:56.226041   27962 certs.go:480] ignoring /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973_empty.pem, impossibly tiny 0 bytes
	I0920 17:09:56.226052   27962 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 17:09:56.226073   27962 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem (1082 bytes)
	I0920 17:09:56.226113   27962 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem (1123 bytes)
	I0920 17:09:56.226142   27962 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem (1675 bytes)
	I0920 17:09:56.226194   27962 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem (1708 bytes)
	I0920 17:09:56.226220   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem -> /usr/share/ca-certificates/159732.pem
	I0920 17:09:56.226236   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:09:56.226256   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973.pem -> /usr/share/ca-certificates/15973.pem
	I0920 17:09:56.226300   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHHostname
	I0920 17:09:56.229337   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:09:56.229721   27962 main.go:141] libmachine: (ha-135993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:09", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:07:43 +0000 UTC Type:0 Mac:52:54:00:99:26:09 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-135993 Clientid:01:52:54:00:99:26:09}
	I0920 17:09:56.229749   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined IP address 192.168.39.60 and MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:09:56.229930   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHPort
	I0920 17:09:56.230128   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHKeyPath
	I0920 17:09:56.230302   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHUsername
	I0920 17:09:56.230392   27962 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993/id_rsa Username:docker}
	I0920 17:09:56.306176   27962 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0920 17:09:56.311850   27962 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0920 17:09:56.324295   27962 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0920 17:09:56.330346   27962 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0920 17:09:56.342029   27962 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0920 17:09:56.345907   27962 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0920 17:09:56.356185   27962 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0920 17:09:56.360478   27962 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0920 17:09:56.372648   27962 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0920 17:09:56.377310   27962 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0920 17:09:56.392310   27962 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0920 17:09:56.398873   27962 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0920 17:09:56.416705   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 17:09:56.442036   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 17:09:56.465893   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 17:09:56.491259   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0920 17:09:56.515541   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0920 17:09:56.538762   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0920 17:09:56.561229   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 17:09:56.583847   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 17:09:56.607936   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem --> /usr/share/ca-certificates/159732.pem (1708 bytes)
	I0920 17:09:56.634323   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 17:09:56.662363   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973.pem --> /usr/share/ca-certificates/15973.pem (1338 bytes)
	I0920 17:09:56.687040   27962 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0920 17:09:56.702914   27962 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0920 17:09:56.719096   27962 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0920 17:09:56.735043   27962 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0920 17:09:56.751375   27962 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0920 17:09:56.767907   27962 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0920 17:09:56.785247   27962 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0920 17:09:56.800819   27962 ssh_runner.go:195] Run: openssl version
	I0920 17:09:56.807059   27962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15973.pem && ln -fs /usr/share/ca-certificates/15973.pem /etc/ssl/certs/15973.pem"
	I0920 17:09:56.819325   27962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15973.pem
	I0920 17:09:56.823881   27962 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 17:04 /usr/share/ca-certificates/15973.pem
	I0920 17:09:56.823942   27962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15973.pem
	I0920 17:09:56.829735   27962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15973.pem /etc/ssl/certs/51391683.0"
	I0920 17:09:56.840229   27962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/159732.pem && ln -fs /usr/share/ca-certificates/159732.pem /etc/ssl/certs/159732.pem"
	I0920 17:09:56.850295   27962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/159732.pem
	I0920 17:09:56.854454   27962 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 17:04 /usr/share/ca-certificates/159732.pem
	I0920 17:09:56.854516   27962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/159732.pem
	I0920 17:09:56.859987   27962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/159732.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 17:09:56.870869   27962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 17:09:56.881683   27962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:09:56.886087   27962 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:09:56.886162   27962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:09:56.891826   27962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 17:09:56.902542   27962 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 17:09:56.906493   27962 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0920 17:09:56.906563   27962 kubeadm.go:934] updating node {m03 192.168.39.133 8443 v1.31.1 crio true true} ...
	I0920 17:09:56.906662   27962 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-135993-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.133
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-135993 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 17:09:56.906694   27962 kube-vip.go:115] generating kube-vip config ...
	I0920 17:09:56.906737   27962 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0920 17:09:56.924849   27962 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0920 17:09:56.924928   27962 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
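
The kube-vip static-pod manifest above is rendered from a template with the HA virtual IP (192.168.39.254) and API port (8443) filled in. A minimal Go sketch of that kind of templating, using a deliberately trimmed manifest rather than minikube's full template:

package main

import (
	"os"
	"text/template"
)

// Trimmed example manifest; the real config above also sets ARP,
// leader-election and load-balancing environment variables.
const manifest = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: ghcr.io/kube-vip/kube-vip:v0.8.0
    args: ["manager"]
    env:
    - name: address
      value: "{{ .VIP }}"
    - name: port
      value: "{{ .Port }}"
  hostNetwork: true
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(manifest))
	// VIP and port taken from the generated config above.
	t.Execute(os.Stdout, map[string]string{"VIP": "192.168.39.254", "Port": "8443"})
}
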
	I0920 17:09:56.924987   27962 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 17:09:56.935083   27962 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0920 17:09:56.935139   27962 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0920 17:09:56.944640   27962 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I0920 17:09:56.944640   27962 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I0920 17:09:56.944675   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0920 17:09:56.944710   27962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 17:09:56.944648   27962 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0920 17:09:56.944785   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0920 17:09:56.944830   27962 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0920 17:09:56.944765   27962 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0920 17:09:56.962033   27962 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0920 17:09:56.962071   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0920 17:09:56.962074   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0920 17:09:56.962167   27962 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0920 17:09:56.962114   27962 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0920 17:09:56.962188   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0920 17:09:56.995038   27962 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0920 17:09:56.995085   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
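
The "Not caching binary" lines above reference dl.k8s.io URLs with a checksum=file:...sha256 query, meaning each binary is verified against its published SHA-256 digest before being placed in /var/lib/minikube/binaries. A minimal Go sketch of such a checksum-verified download, with an assumed helper rather than minikube's downloader:

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"strings"
)

// fetchVerified downloads url and the matching url+".sha256" digest file,
// then compares SHA-256 sums before returning the payload.
func fetchVerified(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return nil, err
	}

	sumResp, err := http.Get(url + ".sha256")
	if err != nil {
		return nil, err
	}
	defer sumResp.Body.Close()
	want, err := io.ReadAll(sumResp.Body)
	if err != nil {
		return nil, err
	}

	got := sha256.Sum256(body)
	if hex.EncodeToString(got[:]) != strings.TrimSpace(string(want)) {
		return nil, fmt.Errorf("checksum mismatch for %s", url)
	}
	return body, nil
}

func main() {
	bin, err := fetchVerified("https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm")
	if err != nil {
		fmt.Println("download failed:", err)
		return
	}
	fmt.Printf("downloaded and verified %d bytes\n", len(bin))
}
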
	I0920 17:09:57.877062   27962 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0920 17:09:57.886499   27962 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0920 17:09:57.902951   27962 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 17:09:57.919648   27962 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0920 17:09:57.936776   27962 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0920 17:09:57.940394   27962 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 17:09:57.952344   27962 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 17:09:58.086995   27962 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 17:09:58.104838   27962 host.go:66] Checking if "ha-135993" exists ...
	I0920 17:09:58.105202   27962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:09:58.105252   27962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:09:58.121702   27962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37739
	I0920 17:09:58.122199   27962 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:09:58.122665   27962 main.go:141] libmachine: Using API Version  1
	I0920 17:09:58.122690   27962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:09:58.123042   27962 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:09:58.123222   27962 main.go:141] libmachine: (ha-135993) Calling .DriverName
	I0920 17:09:58.123436   27962 start.go:317] joinCluster: &{Name:ha-135993 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-135993 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.60 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.133 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 17:09:58.123567   27962 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0920 17:09:58.123585   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHHostname
	I0920 17:09:58.126769   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:09:58.127177   27962 main.go:141] libmachine: (ha-135993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:09", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:07:43 +0000 UTC Type:0 Mac:52:54:00:99:26:09 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-135993 Clientid:01:52:54:00:99:26:09}
	I0920 17:09:58.127198   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined IP address 192.168.39.60 and MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:09:58.127380   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHPort
	I0920 17:09:58.127561   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHKeyPath
	I0920 17:09:58.127676   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHUsername
	I0920 17:09:58.127807   27962 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993/id_rsa Username:docker}
	I0920 17:09:58.304684   27962 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.133 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 17:09:58.304742   27962 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token xem78t.sac8uh54rhaf5wj8 --discovery-token-ca-cert-hash sha256:736f3fb521855813decb4a7214f2b8beff5f81c3971b60decfa20a8807626a2d --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-135993-m03 --control-plane --apiserver-advertise-address=192.168.39.133 --apiserver-bind-port=8443"
	I0920 17:10:20.782828   27962 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token xem78t.sac8uh54rhaf5wj8 --discovery-token-ca-cert-hash sha256:736f3fb521855813decb4a7214f2b8beff5f81c3971b60decfa20a8807626a2d --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-135993-m03 --control-plane --apiserver-advertise-address=192.168.39.133 --apiserver-bind-port=8443": (22.478064097s)
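	(The join sequence above reduces to two commands: print a join command on an existing control-plane node, then run it on the new node with control-plane flags. A minimal sketch follows; <token> and <hash> are placeholders for the values kubeadm prints, and the addresses and flags mirror the logged command.)

	    # on an existing control-plane node: print a non-expiring join command
	    sudo kubeadm token create --print-join-command --ttl=0

	    # on the joining node (ha-135993-m03 here), using the token and CA hash printed above
	    sudo kubeadm join control-plane.minikube.internal:8443 \
	      --token <token> \
	      --discovery-token-ca-cert-hash sha256:<hash> \
	      --control-plane \
	      --apiserver-advertise-address=192.168.39.133 \
	      --apiserver-bind-port=8443 \
	      --cri-socket unix:///var/run/crio/crio.sock \
	      --node-name=ha-135993-m03 \
	      --ignore-preflight-errors=all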
	I0920 17:10:20.782862   27962 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0920 17:10:21.369579   27962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-135993-m03 minikube.k8s.io/updated_at=2024_09_20T17_10_21_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=0626f22cf0d915d75e291a5bce701f94395056e1 minikube.k8s.io/name=ha-135993 minikube.k8s.io/primary=false
	I0920 17:10:21.545661   27962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-135993-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0920 17:10:21.676455   27962 start.go:319] duration metric: took 23.553017419s to joinCluster
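	(After the label and taint-removal commands above, the node's state can be spot-checked with plain kubectl; an illustrative sketch, not part of the logged run:)

	    kubectl get node ha-135993-m03 --show-labels
	    kubectl describe node ha-135993-m03 | grep -i taints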
	I0920 17:10:21.676541   27962 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.133 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 17:10:21.676981   27962 config.go:182] Loaded profile config "ha-135993": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 17:10:21.678497   27962 out.go:177] * Verifying Kubernetes components...
	I0920 17:10:21.679903   27962 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 17:10:21.961073   27962 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 17:10:21.996476   27962 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19672-8777/kubeconfig
	I0920 17:10:21.996707   27962 kapi.go:59] client config for ha-135993: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/client.crt", KeyFile:"/home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/client.key", CAFile:"/home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f67ea0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0920 17:10:21.996765   27962 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.60:8443
	I0920 17:10:21.996997   27962 node_ready.go:35] waiting up to 6m0s for node "ha-135993-m03" to be "Ready" ...
	I0920 17:10:21.997072   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:21.997080   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:21.997090   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:21.997095   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:22.001181   27962 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 17:10:22.497463   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:22.497485   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:22.497495   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:22.497507   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:22.502449   27962 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 17:10:22.997389   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:22.997418   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:22.997429   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:22.997438   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:23.001501   27962 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 17:10:23.497533   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:23.497557   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:23.497566   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:23.497570   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:23.500839   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:23.997331   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:23.997361   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:23.997370   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:23.997375   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:24.001172   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:24.001662   27962 node_ready.go:53] node "ha-135993-m03" has status "Ready":"False"
	I0920 17:10:24.497248   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:24.497270   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:24.497279   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:24.497284   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:24.501584   27962 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 17:10:24.997441   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:24.997461   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:24.997474   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:24.997479   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:25.001314   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:25.497255   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:25.497284   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:25.497297   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:25.497302   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:25.500828   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:25.997812   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:25.997877   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:25.997892   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:25.997897   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:26.001955   27962 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 17:10:26.002456   27962 node_ready.go:53] node "ha-135993-m03" has status "Ready":"False"
	I0920 17:10:26.497957   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:26.497985   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:26.498009   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:26.498014   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:26.505329   27962 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0920 17:10:26.997635   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:26.997665   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:26.997677   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:26.997681   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:27.001531   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:27.497548   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:27.497572   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:27.497582   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:27.497587   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:27.501038   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:27.998155   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:27.998184   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:27.998196   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:27.998201   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:28.002255   27962 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 17:10:28.002946   27962 node_ready.go:53] node "ha-135993-m03" has status "Ready":"False"
	I0920 17:10:28.497717   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:28.497741   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:28.497752   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:28.497759   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:28.501375   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:28.997522   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:28.997548   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:28.997556   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:28.997562   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:29.002576   27962 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 17:10:29.498184   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:29.498217   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:29.498230   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:29.498237   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:29.502043   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:29.998000   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:29.998032   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:29.998044   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:29.998050   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:30.001668   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:30.497469   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:30.497508   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:30.497521   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:30.497530   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:30.500913   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:30.501381   27962 node_ready.go:53] node "ha-135993-m03" has status "Ready":"False"
	I0920 17:10:30.997662   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:30.997683   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:30.997692   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:30.997696   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:31.001443   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:31.497374   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:31.497396   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:31.497406   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:31.497411   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:31.500970   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:31.998212   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:31.998237   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:31.998245   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:31.998250   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:32.005715   27962 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0920 17:10:32.497621   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:32.497644   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:32.497652   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:32.497656   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:32.501947   27962 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 17:10:32.502498   27962 node_ready.go:53] node "ha-135993-m03" has status "Ready":"False"
	I0920 17:10:32.998138   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:32.998162   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:32.998170   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:32.998174   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:33.002736   27962 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 17:10:33.497634   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:33.497655   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:33.497663   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:33.497669   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:33.501049   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:33.997307   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:33.997332   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:33.997340   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:33.997343   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:34.001271   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:34.497449   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:34.497471   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:34.497479   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:34.497483   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:34.501394   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:34.997478   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:34.997503   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:34.997512   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:34.997518   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:35.001257   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:35.001994   27962 node_ready.go:53] node "ha-135993-m03" has status "Ready":"False"
	I0920 17:10:35.497192   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:35.497221   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:35.497238   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:35.497244   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:35.501544   27962 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 17:10:35.997358   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:35.997383   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:35.997390   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:35.997394   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:36.000988   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:36.498031   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:36.498054   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:36.498064   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:36.498069   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:36.501887   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:36.997545   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:36.997568   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:36.997576   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:36.997579   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:37.001444   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:37.002042   27962 node_ready.go:53] node "ha-135993-m03" has status "Ready":"False"
	I0920 17:10:37.497312   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:37.497339   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:37.497347   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:37.497352   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:37.500690   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:37.997364   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:37.997392   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:37.997402   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:37.997406   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:38.000903   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:38.498015   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:38.498036   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:38.498046   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:38.498053   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:38.501382   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:38.997276   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:38.997298   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:38.997307   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:38.997311   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:39.000962   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:39.497287   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:39.497313   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:39.497323   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:39.497329   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:39.501180   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:39.501915   27962 node_ready.go:53] node "ha-135993-m03" has status "Ready":"False"
	I0920 17:10:39.997251   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:39.997274   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:39.997285   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:39.997291   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:40.000356   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:40.000916   27962 node_ready.go:49] node "ha-135993-m03" has status "Ready":"True"
	I0920 17:10:40.000937   27962 node_ready.go:38] duration metric: took 18.003923058s for node "ha-135993-m03" to be "Ready" ...
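	(The ~500ms polling loop above, which repeatedly GETs the node object until its Ready condition is True, is equivalent for manual use to a single kubectl wait; a sketch, not what minikube itself executes:)

	    kubectl wait --for=condition=Ready node/ha-135993-m03 --timeout=6m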
	I0920 17:10:40.000949   27962 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 17:10:40.001029   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods
	I0920 17:10:40.001041   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:40.001051   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:40.001059   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:40.007086   27962 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0920 17:10:40.013456   27962 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-gcvg4" in "kube-system" namespace to be "Ready" ...
	I0920 17:10:40.013531   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-gcvg4
	I0920 17:10:40.013539   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:40.013547   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:40.013551   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:40.016217   27962 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 17:10:40.016928   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993
	I0920 17:10:40.016944   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:40.016951   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:40.016954   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:40.019552   27962 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 17:10:40.020302   27962 pod_ready.go:93] pod "coredns-7c65d6cfc9-gcvg4" in "kube-system" namespace has status "Ready":"True"
	I0920 17:10:40.020321   27962 pod_ready.go:82] duration metric: took 6.8416ms for pod "coredns-7c65d6cfc9-gcvg4" in "kube-system" namespace to be "Ready" ...
	I0920 17:10:40.020329   27962 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-kpbhk" in "kube-system" namespace to be "Ready" ...
	I0920 17:10:40.020387   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-kpbhk
	I0920 17:10:40.020395   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:40.020402   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:40.020405   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:40.022739   27962 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 17:10:40.023876   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993
	I0920 17:10:40.023897   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:40.023907   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:40.023914   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:40.026180   27962 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 17:10:40.026617   27962 pod_ready.go:93] pod "coredns-7c65d6cfc9-kpbhk" in "kube-system" namespace has status "Ready":"True"
	I0920 17:10:40.026633   27962 pod_ready.go:82] duration metric: took 6.291183ms for pod "coredns-7c65d6cfc9-kpbhk" in "kube-system" namespace to be "Ready" ...
	I0920 17:10:40.026644   27962 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-135993" in "kube-system" namespace to be "Ready" ...
	I0920 17:10:40.026708   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/etcd-ha-135993
	I0920 17:10:40.026721   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:40.026729   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:40.026733   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:40.029955   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:40.030688   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993
	I0920 17:10:40.030707   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:40.030717   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:40.030724   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:40.033291   27962 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 17:10:40.033722   27962 pod_ready.go:93] pod "etcd-ha-135993" in "kube-system" namespace has status "Ready":"True"
	I0920 17:10:40.033740   27962 pod_ready.go:82] duration metric: took 7.086877ms for pod "etcd-ha-135993" in "kube-system" namespace to be "Ready" ...
	I0920 17:10:40.033752   27962 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-135993-m02" in "kube-system" namespace to be "Ready" ...
	I0920 17:10:40.033808   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/etcd-ha-135993-m02
	I0920 17:10:40.033816   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:40.033823   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:40.033827   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:40.036180   27962 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 17:10:40.036735   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:10:40.036750   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:40.036757   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:40.036761   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:40.039148   27962 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 17:10:40.039672   27962 pod_ready.go:93] pod "etcd-ha-135993-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 17:10:40.039690   27962 pod_ready.go:82] duration metric: took 5.930508ms for pod "etcd-ha-135993-m02" in "kube-system" namespace to be "Ready" ...
	I0920 17:10:40.039699   27962 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-135993-m03" in "kube-system" namespace to be "Ready" ...
	I0920 17:10:40.198080   27962 request.go:632] Waited for 158.310883ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/etcd-ha-135993-m03
	I0920 17:10:40.198142   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/etcd-ha-135993-m03
	I0920 17:10:40.198147   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:40.198156   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:40.198165   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:40.201559   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:40.397955   27962 request.go:632] Waited for 195.344828ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:40.398036   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:40.398047   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:40.398057   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:40.398064   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:40.401572   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:40.402144   27962 pod_ready.go:93] pod "etcd-ha-135993-m03" in "kube-system" namespace has status "Ready":"True"
	I0920 17:10:40.402168   27962 pod_ready.go:82] duration metric: took 362.461912ms for pod "etcd-ha-135993-m03" in "kube-system" namespace to be "Ready" ...
	I0920 17:10:40.402191   27962 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-135993" in "kube-system" namespace to be "Ready" ...
	I0920 17:10:40.598190   27962 request.go:632] Waited for 195.924651ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-135993
	I0920 17:10:40.598265   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-135993
	I0920 17:10:40.598274   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:40.598282   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:40.598292   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:40.601449   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:40.797361   27962 request.go:632] Waited for 195.295556ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes/ha-135993
	I0920 17:10:40.797452   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993
	I0920 17:10:40.797463   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:40.797474   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:40.797479   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:40.800725   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:40.801428   27962 pod_ready.go:93] pod "kube-apiserver-ha-135993" in "kube-system" namespace has status "Ready":"True"
	I0920 17:10:40.801448   27962 pod_ready.go:82] duration metric: took 399.249989ms for pod "kube-apiserver-ha-135993" in "kube-system" namespace to be "Ready" ...
	I0920 17:10:40.801457   27962 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-135993-m02" in "kube-system" namespace to be "Ready" ...
	I0920 17:10:40.997409   27962 request.go:632] Waited for 195.878449ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-135993-m02
	I0920 17:10:40.997467   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-135993-m02
	I0920 17:10:40.997472   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:40.997479   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:40.997488   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:41.001457   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:41.197787   27962 request.go:632] Waited for 195.349078ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:10:41.197860   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:10:41.197871   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:41.197879   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:41.197882   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:41.201485   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:41.202105   27962 pod_ready.go:93] pod "kube-apiserver-ha-135993-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 17:10:41.202124   27962 pod_ready.go:82] duration metric: took 400.661085ms for pod "kube-apiserver-ha-135993-m02" in "kube-system" namespace to be "Ready" ...
	I0920 17:10:41.202133   27962 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-135993-m03" in "kube-system" namespace to be "Ready" ...
	I0920 17:10:41.398233   27962 request.go:632] Waited for 195.997178ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-135993-m03
	I0920 17:10:41.398303   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-135993-m03
	I0920 17:10:41.398378   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:41.398394   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:41.398400   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:41.402317   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:41.597319   27962 request.go:632] Waited for 194.299169ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:41.597378   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:41.597383   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:41.597411   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:41.597417   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:41.600918   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:41.601672   27962 pod_ready.go:93] pod "kube-apiserver-ha-135993-m03" in "kube-system" namespace has status "Ready":"True"
	I0920 17:10:41.601692   27962 pod_ready.go:82] duration metric: took 399.551518ms for pod "kube-apiserver-ha-135993-m03" in "kube-system" namespace to be "Ready" ...
	I0920 17:10:41.601704   27962 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-135993" in "kube-system" namespace to be "Ready" ...
	I0920 17:10:41.797255   27962 request.go:632] Waited for 195.471307ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-135993
	I0920 17:10:41.797312   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-135993
	I0920 17:10:41.797318   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:41.797325   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:41.797329   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:41.801261   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:41.997269   27962 request.go:632] Waited for 195.294616ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes/ha-135993
	I0920 17:10:41.997363   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993
	I0920 17:10:41.997371   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:41.997382   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:41.997392   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:42.001363   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:42.002111   27962 pod_ready.go:93] pod "kube-controller-manager-ha-135993" in "kube-system" namespace has status "Ready":"True"
	I0920 17:10:42.002135   27962 pod_ready.go:82] duration metric: took 400.422144ms for pod "kube-controller-manager-ha-135993" in "kube-system" namespace to be "Ready" ...
	I0920 17:10:42.002152   27962 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-135993-m02" in "kube-system" namespace to be "Ready" ...
	I0920 17:10:42.198137   27962 request.go:632] Waited for 195.883622ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-135993-m02
	I0920 17:10:42.198204   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-135993-m02
	I0920 17:10:42.198211   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:42.198224   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:42.198233   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:42.201776   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:42.397933   27962 request.go:632] Waited for 195.390844ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:10:42.397990   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:10:42.397996   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:42.398003   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:42.398008   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:42.401639   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:42.402402   27962 pod_ready.go:93] pod "kube-controller-manager-ha-135993-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 17:10:42.402423   27962 pod_ready.go:82] duration metric: took 400.260074ms for pod "kube-controller-manager-ha-135993-m02" in "kube-system" namespace to be "Ready" ...
	I0920 17:10:42.402438   27962 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-135993-m03" in "kube-system" namespace to be "Ready" ...
	I0920 17:10:42.597289   27962 request.go:632] Waited for 194.763978ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-135993-m03
	I0920 17:10:42.597371   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-135993-m03
	I0920 17:10:42.597384   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:42.597393   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:42.597400   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:42.601014   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:42.797863   27962 request.go:632] Waited for 195.944092ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:42.797944   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:42.797955   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:42.797965   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:42.797974   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:42.801609   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:42.802166   27962 pod_ready.go:93] pod "kube-controller-manager-ha-135993-m03" in "kube-system" namespace has status "Ready":"True"
	I0920 17:10:42.802184   27962 pod_ready.go:82] duration metric: took 399.739056ms for pod "kube-controller-manager-ha-135993-m03" in "kube-system" namespace to be "Ready" ...
	I0920 17:10:42.802194   27962 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-45c9m" in "kube-system" namespace to be "Ready" ...
	I0920 17:10:42.997304   27962 request.go:632] Waited for 195.040269ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-proxy-45c9m
	I0920 17:10:42.997408   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-proxy-45c9m
	I0920 17:10:42.997421   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:42.997432   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:42.997437   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:43.001257   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:43.198020   27962 request.go:632] Waited for 196.102413ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:43.198085   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:43.198092   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:43.198100   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:43.198106   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:43.201658   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:43.202252   27962 pod_ready.go:93] pod "kube-proxy-45c9m" in "kube-system" namespace has status "Ready":"True"
	I0920 17:10:43.202273   27962 pod_ready.go:82] duration metric: took 400.072197ms for pod "kube-proxy-45c9m" in "kube-system" namespace to be "Ready" ...
	I0920 17:10:43.202287   27962 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-52r49" in "kube-system" namespace to be "Ready" ...
	I0920 17:10:43.397914   27962 request.go:632] Waited for 195.445037ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-proxy-52r49
	I0920 17:10:43.397992   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-proxy-52r49
	I0920 17:10:43.397998   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:43.398005   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:43.398011   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:43.401788   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:43.597874   27962 request.go:632] Waited for 195.37712ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes/ha-135993
	I0920 17:10:43.597952   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993
	I0920 17:10:43.597964   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:43.597978   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:43.597989   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:43.600840   27962 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 17:10:43.601662   27962 pod_ready.go:93] pod "kube-proxy-52r49" in "kube-system" namespace has status "Ready":"True"
	I0920 17:10:43.601684   27962 pod_ready.go:82] duration metric: took 399.386758ms for pod "kube-proxy-52r49" in "kube-system" namespace to be "Ready" ...
	I0920 17:10:43.601693   27962 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-z6xqt" in "kube-system" namespace to be "Ready" ...
	I0920 17:10:43.797664   27962 request.go:632] Waited for 195.909482ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-proxy-z6xqt
	I0920 17:10:43.797730   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-proxy-z6xqt
	I0920 17:10:43.797738   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:43.797745   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:43.797750   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:43.801166   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:43.998193   27962 request.go:632] Waited for 196.396377ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:10:43.998304   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:10:43.998313   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:43.998325   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:43.998334   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:44.001971   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:44.002756   27962 pod_ready.go:93] pod "kube-proxy-z6xqt" in "kube-system" namespace has status "Ready":"True"
	I0920 17:10:44.002782   27962 pod_ready.go:82] duration metric: took 401.080699ms for pod "kube-proxy-z6xqt" in "kube-system" namespace to be "Ready" ...
	I0920 17:10:44.002795   27962 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-135993" in "kube-system" namespace to be "Ready" ...
	I0920 17:10:44.198129   27962 request.go:632] Waited for 195.259225ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-135993
	I0920 17:10:44.198208   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-135993
	I0920 17:10:44.198217   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:44.198225   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:44.198229   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:44.202058   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:44.398232   27962 request.go:632] Waited for 195.373668ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes/ha-135993
	I0920 17:10:44.398304   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993
	I0920 17:10:44.398313   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:44.398322   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:44.398336   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:44.402177   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:44.402890   27962 pod_ready.go:93] pod "kube-scheduler-ha-135993" in "kube-system" namespace has status "Ready":"True"
	I0920 17:10:44.402910   27962 pod_ready.go:82] duration metric: took 400.107134ms for pod "kube-scheduler-ha-135993" in "kube-system" namespace to be "Ready" ...
	I0920 17:10:44.402920   27962 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-135993-m02" in "kube-system" namespace to be "Ready" ...
	I0920 17:10:44.598018   27962 request.go:632] Waited for 195.007589ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-135993-m02
	I0920 17:10:44.598096   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-135993-m02
	I0920 17:10:44.598103   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:44.598114   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:44.598131   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:44.601458   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:44.797367   27962 request.go:632] Waited for 195.276041ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:10:44.797421   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:10:44.797426   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:44.797434   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:44.797437   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:44.800953   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:44.801547   27962 pod_ready.go:93] pod "kube-scheduler-ha-135993-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 17:10:44.801566   27962 pod_ready.go:82] duration metric: took 398.637509ms for pod "kube-scheduler-ha-135993-m02" in "kube-system" namespace to be "Ready" ...
	I0920 17:10:44.801580   27962 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-135993-m03" in "kube-system" namespace to be "Ready" ...
	I0920 17:10:44.997661   27962 request.go:632] Waited for 195.986647ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-135993-m03
	I0920 17:10:44.997741   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-135993-m03
	I0920 17:10:44.997749   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:44.997760   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:44.997769   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:45.001737   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:45.197777   27962 request.go:632] Waited for 195.358869ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:45.197842   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:45.197848   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:45.197858   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:45.197867   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:45.201296   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:45.201689   27962 pod_ready.go:93] pod "kube-scheduler-ha-135993-m03" in "kube-system" namespace has status "Ready":"True"
	I0920 17:10:45.201707   27962 pod_ready.go:82] duration metric: took 400.119509ms for pod "kube-scheduler-ha-135993-m03" in "kube-system" namespace to be "Ready" ...
	I0920 17:10:45.201719   27962 pod_ready.go:39] duration metric: took 5.200758265s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 17:10:45.201733   27962 api_server.go:52] waiting for apiserver process to appear ...
	I0920 17:10:45.201783   27962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 17:10:45.218374   27962 api_server.go:72] duration metric: took 23.541794087s to wait for apiserver process to appear ...
	I0920 17:10:45.218402   27962 api_server.go:88] waiting for apiserver healthz status ...
	I0920 17:10:45.218421   27962 api_server.go:253] Checking apiserver healthz at https://192.168.39.60:8443/healthz ...
	I0920 17:10:45.222904   27962 api_server.go:279] https://192.168.39.60:8443/healthz returned 200:
	ok
	I0920 17:10:45.222982   27962 round_trippers.go:463] GET https://192.168.39.60:8443/version
	I0920 17:10:45.222994   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:45.223006   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:45.223010   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:45.224049   27962 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0920 17:10:45.224222   27962 api_server.go:141] control plane version: v1.31.1
	I0920 17:10:45.224245   27962 api_server.go:131] duration metric: took 5.83633ms to wait for apiserver health ...
	I0920 17:10:45.224256   27962 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 17:10:45.397714   27962 request.go:632] Waited for 173.358789ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods
	I0920 17:10:45.397793   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods
	I0920 17:10:45.397805   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:45.397818   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:45.397824   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:45.404937   27962 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0920 17:10:45.411424   27962 system_pods.go:59] 24 kube-system pods found
	I0920 17:10:45.411457   27962 system_pods.go:61] "coredns-7c65d6cfc9-gcvg4" [899b2b8c-9009-46c0-816b-781e85eb8b19] Running
	I0920 17:10:45.411462   27962 system_pods.go:61] "coredns-7c65d6cfc9-kpbhk" [0dfd9f1a-148c-4dba-884a-8618b74f82d0] Running
	I0920 17:10:45.411466   27962 system_pods.go:61] "etcd-ha-135993" [d5e44451-ab41-4bdf-8f34-c8202bc33cda] Running
	I0920 17:10:45.411470   27962 system_pods.go:61] "etcd-ha-135993-m02" [415d8f47-3bc4-42fc-ae75-b8217f5a731d] Running
	I0920 17:10:45.411473   27962 system_pods.go:61] "etcd-ha-135993-m03" [e5b78345-b51c-4e26-871e-76e190b209b1] Running
	I0920 17:10:45.411476   27962 system_pods.go:61] "kindnet-5m4r8" [c799ac5a-69f7-45d5-a291-61edfd753404] Running
	I0920 17:10:45.411479   27962 system_pods.go:61] "kindnet-6clt2" [d73a0817-d84f-4269-9de0-1532287a07db] Running
	I0920 17:10:45.411483   27962 system_pods.go:61] "kindnet-hcqf8" [727437d8-e050-4785-8bb7-90f8a496a2cb] Running
	I0920 17:10:45.411485   27962 system_pods.go:61] "kube-apiserver-ha-135993" [0c2adef2-0b98-4752-ba6c-0719b723c93c] Running
	I0920 17:10:45.411489   27962 system_pods.go:61] "kube-apiserver-ha-135993-m02" [e74438ac-347a-4f38-ba96-c7687763c669] Running
	I0920 17:10:45.411492   27962 system_pods.go:61] "kube-apiserver-ha-135993-m03" [9b8aa964-0aee-4301-94be-5c3f4e6f67bb] Running
	I0920 17:10:45.411495   27962 system_pods.go:61] "kube-controller-manager-ha-135993" [ba82afa0-d0a1-4515-bd0f-574f03c2a5a5] Running
	I0920 17:10:45.411498   27962 system_pods.go:61] "kube-controller-manager-ha-135993-m02" [ba4a59ae-5348-43b7-b80d-96d5d63afea2] Running
	I0920 17:10:45.411501   27962 system_pods.go:61] "kube-controller-manager-ha-135993-m03" [d8ae5058-1dd2-4012-90ce-75dc09f57a92] Running
	I0920 17:10:45.411504   27962 system_pods.go:61] "kube-proxy-45c9m" [c04f91a8-5bae-4e79-bfca-66b941217821] Running
	I0920 17:10:45.411507   27962 system_pods.go:61] "kube-proxy-52r49" [8d1124bd-e7cb-4239-a29d-c1d5b8870aff] Running
	I0920 17:10:45.411510   27962 system_pods.go:61] "kube-proxy-z6xqt" [70b8c8ab-77cc-4681-a9b1-3c28fc0f2674] Running
	I0920 17:10:45.411514   27962 system_pods.go:61] "kube-scheduler-ha-135993" [25eaf632-035a-44d9-8ce0-e6c3c0e4e0c4] Running
	I0920 17:10:45.411520   27962 system_pods.go:61] "kube-scheduler-ha-135993-m02" [6daee23a-d627-41e8-81b1-ceb4fa70ec3e] Running
	I0920 17:10:45.411522   27962 system_pods.go:61] "kube-scheduler-ha-135993-m03" [3ae2b99d-898a-448c-8e87-2b1e4b5fbae9] Running
	I0920 17:10:45.411525   27962 system_pods.go:61] "kube-vip-ha-135993" [6aa396e1-76b2-4911-bc93-660c51cef03d] Running
	I0920 17:10:45.411528   27962 system_pods.go:61] "kube-vip-ha-135993-m02" [2e9d1f8a-1ce9-46f9-b58b-f13302832062] Running
	I0920 17:10:45.411531   27962 system_pods.go:61] "kube-vip-ha-135993-m03" [ccf084e2-8b4f-4dde-a290-e644d7a0dde3] Running
	I0920 17:10:45.411536   27962 system_pods.go:61] "storage-provisioner" [57137bee-9a7b-4659-a855-0da82d137cb0] Running
	I0920 17:10:45.411542   27962 system_pods.go:74] duration metric: took 187.277251ms to wait for pod list to return data ...
	I0920 17:10:45.411551   27962 default_sa.go:34] waiting for default service account to be created ...
	I0920 17:10:45.597901   27962 request.go:632] Waited for 186.270484ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/default/serviceaccounts
	I0920 17:10:45.597955   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/default/serviceaccounts
	I0920 17:10:45.597961   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:45.597969   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:45.597974   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:45.601352   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:45.601480   27962 default_sa.go:45] found service account: "default"
	I0920 17:10:45.601500   27962 default_sa.go:55] duration metric: took 189.941966ms for default service account to be created ...
	I0920 17:10:45.601512   27962 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 17:10:45.797900   27962 request.go:632] Waited for 196.315857ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods
	I0920 17:10:45.797971   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods
	I0920 17:10:45.797976   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:45.797983   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:45.797988   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:45.805414   27962 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0920 17:10:45.812236   27962 system_pods.go:86] 24 kube-system pods found
	I0920 17:10:45.812269   27962 system_pods.go:89] "coredns-7c65d6cfc9-gcvg4" [899b2b8c-9009-46c0-816b-781e85eb8b19] Running
	I0920 17:10:45.812275   27962 system_pods.go:89] "coredns-7c65d6cfc9-kpbhk" [0dfd9f1a-148c-4dba-884a-8618b74f82d0] Running
	I0920 17:10:45.812279   27962 system_pods.go:89] "etcd-ha-135993" [d5e44451-ab41-4bdf-8f34-c8202bc33cda] Running
	I0920 17:10:45.812282   27962 system_pods.go:89] "etcd-ha-135993-m02" [415d8f47-3bc4-42fc-ae75-b8217f5a731d] Running
	I0920 17:10:45.812287   27962 system_pods.go:89] "etcd-ha-135993-m03" [e5b78345-b51c-4e26-871e-76e190b209b1] Running
	I0920 17:10:45.812290   27962 system_pods.go:89] "kindnet-5m4r8" [c799ac5a-69f7-45d5-a291-61edfd753404] Running
	I0920 17:10:45.812294   27962 system_pods.go:89] "kindnet-6clt2" [d73a0817-d84f-4269-9de0-1532287a07db] Running
	I0920 17:10:45.812297   27962 system_pods.go:89] "kindnet-hcqf8" [727437d8-e050-4785-8bb7-90f8a496a2cb] Running
	I0920 17:10:45.812301   27962 system_pods.go:89] "kube-apiserver-ha-135993" [0c2adef2-0b98-4752-ba6c-0719b723c93c] Running
	I0920 17:10:45.812304   27962 system_pods.go:89] "kube-apiserver-ha-135993-m02" [e74438ac-347a-4f38-ba96-c7687763c669] Running
	I0920 17:10:45.812308   27962 system_pods.go:89] "kube-apiserver-ha-135993-m03" [9b8aa964-0aee-4301-94be-5c3f4e6f67bb] Running
	I0920 17:10:45.812311   27962 system_pods.go:89] "kube-controller-manager-ha-135993" [ba82afa0-d0a1-4515-bd0f-574f03c2a5a5] Running
	I0920 17:10:45.812314   27962 system_pods.go:89] "kube-controller-manager-ha-135993-m02" [ba4a59ae-5348-43b7-b80d-96d5d63afea2] Running
	I0920 17:10:45.812319   27962 system_pods.go:89] "kube-controller-manager-ha-135993-m03" [d8ae5058-1dd2-4012-90ce-75dc09f57a92] Running
	I0920 17:10:45.812324   27962 system_pods.go:89] "kube-proxy-45c9m" [c04f91a8-5bae-4e79-bfca-66b941217821] Running
	I0920 17:10:45.812328   27962 system_pods.go:89] "kube-proxy-52r49" [8d1124bd-e7cb-4239-a29d-c1d5b8870aff] Running
	I0920 17:10:45.812333   27962 system_pods.go:89] "kube-proxy-z6xqt" [70b8c8ab-77cc-4681-a9b1-3c28fc0f2674] Running
	I0920 17:10:45.812336   27962 system_pods.go:89] "kube-scheduler-ha-135993" [25eaf632-035a-44d9-8ce0-e6c3c0e4e0c4] Running
	I0920 17:10:45.812340   27962 system_pods.go:89] "kube-scheduler-ha-135993-m02" [6daee23a-d627-41e8-81b1-ceb4fa70ec3e] Running
	I0920 17:10:45.812344   27962 system_pods.go:89] "kube-scheduler-ha-135993-m03" [3ae2b99d-898a-448c-8e87-2b1e4b5fbae9] Running
	I0920 17:10:45.812348   27962 system_pods.go:89] "kube-vip-ha-135993" [6aa396e1-76b2-4911-bc93-660c51cef03d] Running
	I0920 17:10:45.812351   27962 system_pods.go:89] "kube-vip-ha-135993-m02" [2e9d1f8a-1ce9-46f9-b58b-f13302832062] Running
	I0920 17:10:45.812354   27962 system_pods.go:89] "kube-vip-ha-135993-m03" [ccf084e2-8b4f-4dde-a290-e644d7a0dde3] Running
	I0920 17:10:45.812360   27962 system_pods.go:89] "storage-provisioner" [57137bee-9a7b-4659-a855-0da82d137cb0] Running
	I0920 17:10:45.812366   27962 system_pods.go:126] duration metric: took 210.848794ms to wait for k8s-apps to be running ...
	I0920 17:10:45.812375   27962 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 17:10:45.812419   27962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 17:10:45.827985   27962 system_svc.go:56] duration metric: took 15.600828ms WaitForService to wait for kubelet
	I0920 17:10:45.828023   27962 kubeadm.go:582] duration metric: took 24.151442817s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 17:10:45.828047   27962 node_conditions.go:102] verifying NodePressure condition ...
	I0920 17:10:45.998195   27962 request.go:632] Waited for 170.064742ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes
	I0920 17:10:45.998254   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes
	I0920 17:10:45.998260   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:45.998267   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:45.998275   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:46.002746   27962 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 17:10:46.003936   27962 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 17:10:46.003959   27962 node_conditions.go:123] node cpu capacity is 2
	I0920 17:10:46.003973   27962 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 17:10:46.003983   27962 node_conditions.go:123] node cpu capacity is 2
	I0920 17:10:46.003987   27962 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 17:10:46.003992   27962 node_conditions.go:123] node cpu capacity is 2
	I0920 17:10:46.004000   27962 node_conditions.go:105] duration metric: took 175.947788ms to run NodePressure ...
	I0920 17:10:46.004016   27962 start.go:241] waiting for startup goroutines ...
	I0920 17:10:46.004041   27962 start.go:255] writing updated cluster config ...
	I0920 17:10:46.004403   27962 ssh_runner.go:195] Run: rm -f paused
	I0920 17:10:46.058462   27962 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 17:10:46.060232   27962 out.go:177] * Done! kubectl is now configured to use "ha-135993" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 20 17:14:21 ha-135993 crio[661]: time="2024-09-20 17:14:21.322960019Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726852461322933384,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b061dec6-c15e-4154-87b9-5e0ffc797848 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 17:14:21 ha-135993 crio[661]: time="2024-09-20 17:14:21.323594677Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=04ab45a2-df0f-4be6-a155-66a4a28b8454 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:14:21 ha-135993 crio[661]: time="2024-09-20 17:14:21.323658413Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=04ab45a2-df0f-4be6-a155-66a4a28b8454 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:14:21 ha-135993 crio[661]: time="2024-09-20 17:14:21.323878842Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d2a30264a8299336cfb5a4338bab42f8ef663f285ffa7fcb290de88635658a97,PodSandboxId:afa282bba6347297904c2a6423e39984da1a95a2caf8b23dd6857518cc0bb05d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726852250916004479,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-df429,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ef8ed1eb-a7c6-4c49-9e46-924bad4d9577,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36f3e8a4356ff70d6ac1d50a79bc078c32683201ca9c0787097b28f3417fdc90,PodSandboxId:c1cd70ce60a8324767003f8d8ab2c4e9bbafba1edc0b62b8ad54d09201e8b82c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726852104314815295,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57137bee-9a7b-4659-a855-0da82d137cb0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c668f6376655011c61ac0bac2456f3e751f265e08a925a427f7ecfca3d54d97,PodSandboxId:6e8ccc1edc7282309966f52b57c94684aa08f468f2023c8e5a61044327ad0cdb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726852104355492838,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kpbhk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dfd9f1a-148c-4dba-884a-8618b74f82d0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5054778f39bbb435a191b7cbdabfca614d8b6d64b7bce122294d50415d601787,PodSandboxId:6fda3c09e12fee3d9bfbdf163c989e0242d8464e853282bca12154b468cf1a1d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726852104287014426,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gcvg4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 899b2b8c-90
09-46c0-816b-781e85eb8b19,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8792a3b1249ffb1a157682b07dec9a0edf11b81c44e53ea068e0fc6f4548be22,PodSandboxId:ed014d23a111faa2c65032e35b2ef554b33cd4cb8eddbc89e3b77efabb53a5ce,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17268520
92541411891,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6clt2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d73a0817-d84f-4269-9de0-1532287a07db,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4b462c3efaa10b1790ebacd5fe8845d6eb91643cb6db9b2830f33afe1744d57,PodSandboxId:1971096e9fdaaef4dfb856b23a044da810f3ea4c5d05511c7fc53105be9d664b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726852092275090369,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-52r49,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d1124bd-e7cb-4239-a29d-c1d5b8870aff,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a56cd54bb369a6e7651ec4ac3a33220fcfa19226c4ca75ab535aeddbf2811a9,PodSandboxId:bd7dad5ca0acddfe976ec11fd8f4a2cd641ca2b663a812a1209683ee830b901e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726852084813198134,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a641242a9f304152af48553fabd7a110,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b48cf1f03207bb258f12f5506b5e60fd4bb742d7b2ced222664cc2b996ff15d,PodSandboxId:f3f5771528b9cb4aa1d625ba84f09a7820fe3964ceb281b04f73fe5e13f6f894,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726852081654759731,Labels:map[string]string{io.kubernetes.container.
name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fa5fbf3465d3154576a11474b7b8548,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f5eb92cf36b033e765f0681f8a6634251dfd6dc0db3410efd3ef6e8580a8b2f,PodSandboxId:b0a0c7068266ae4cdca9368f9606704a9cf02c9b4e905e2d6c179562e88cf8ba,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726852081628521436,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bce625d942177d9274bb431f7a7012b2,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db80f5e2505940b81bc20c39e98d873b35eff2e71aac748d8b406e21f5435fca,PodSandboxId:77a9434f5f03e0fce6c021d0cb97ce2bbe8dbb697371c561907dc8656adca49c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726852081504080347,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.
kubernetes.pod.name: kube-scheduler-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 086260df87dea88f53b3b3ca08d61864,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e70d74afe0f7f7c1e0f084aba99e9c488a1f999e94866b12072fdcbc24b0dad7,PodSandboxId:74a0a0888b0f65cfdbc3321666aaf8f377feaae2a4e3c132c5ed40ccd4468689,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726852081538334631,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-135993,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f0e3932935569a5c49edd0da5a87eba,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=04ab45a2-df0f-4be6-a155-66a4a28b8454 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:14:21 ha-135993 crio[661]: time="2024-09-20 17:14:21.362596403Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b402a682-a786-4c1f-a90c-03b1562502a9 name=/runtime.v1.RuntimeService/Version
	Sep 20 17:14:21 ha-135993 crio[661]: time="2024-09-20 17:14:21.362671903Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b402a682-a786-4c1f-a90c-03b1562502a9 name=/runtime.v1.RuntimeService/Version
	Sep 20 17:14:21 ha-135993 crio[661]: time="2024-09-20 17:14:21.363696329Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=45cb8a9a-8f2a-43e9-9ce7-4f21572efb7a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 17:14:21 ha-135993 crio[661]: time="2024-09-20 17:14:21.364155960Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726852461364131351,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=45cb8a9a-8f2a-43e9-9ce7-4f21572efb7a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 17:14:21 ha-135993 crio[661]: time="2024-09-20 17:14:21.364859193Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=42fa1b8b-d13b-44e4-bf34-35aada814a8a name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:14:21 ha-135993 crio[661]: time="2024-09-20 17:14:21.364928589Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=42fa1b8b-d13b-44e4-bf34-35aada814a8a name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:14:21 ha-135993 crio[661]: time="2024-09-20 17:14:21.365145614Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d2a30264a8299336cfb5a4338bab42f8ef663f285ffa7fcb290de88635658a97,PodSandboxId:afa282bba6347297904c2a6423e39984da1a95a2caf8b23dd6857518cc0bb05d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726852250916004479,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-df429,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ef8ed1eb-a7c6-4c49-9e46-924bad4d9577,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36f3e8a4356ff70d6ac1d50a79bc078c32683201ca9c0787097b28f3417fdc90,PodSandboxId:c1cd70ce60a8324767003f8d8ab2c4e9bbafba1edc0b62b8ad54d09201e8b82c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726852104314815295,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57137bee-9a7b-4659-a855-0da82d137cb0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c668f6376655011c61ac0bac2456f3e751f265e08a925a427f7ecfca3d54d97,PodSandboxId:6e8ccc1edc7282309966f52b57c94684aa08f468f2023c8e5a61044327ad0cdb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726852104355492838,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kpbhk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dfd9f1a-148c-4dba-884a-8618b74f82d0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5054778f39bbb435a191b7cbdabfca614d8b6d64b7bce122294d50415d601787,PodSandboxId:6fda3c09e12fee3d9bfbdf163c989e0242d8464e853282bca12154b468cf1a1d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726852104287014426,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gcvg4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 899b2b8c-90
09-46c0-816b-781e85eb8b19,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8792a3b1249ffb1a157682b07dec9a0edf11b81c44e53ea068e0fc6f4548be22,PodSandboxId:ed014d23a111faa2c65032e35b2ef554b33cd4cb8eddbc89e3b77efabb53a5ce,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17268520
92541411891,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6clt2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d73a0817-d84f-4269-9de0-1532287a07db,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4b462c3efaa10b1790ebacd5fe8845d6eb91643cb6db9b2830f33afe1744d57,PodSandboxId:1971096e9fdaaef4dfb856b23a044da810f3ea4c5d05511c7fc53105be9d664b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726852092275090369,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-52r49,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d1124bd-e7cb-4239-a29d-c1d5b8870aff,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a56cd54bb369a6e7651ec4ac3a33220fcfa19226c4ca75ab535aeddbf2811a9,PodSandboxId:bd7dad5ca0acddfe976ec11fd8f4a2cd641ca2b663a812a1209683ee830b901e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726852084813198134,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a641242a9f304152af48553fabd7a110,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b48cf1f03207bb258f12f5506b5e60fd4bb742d7b2ced222664cc2b996ff15d,PodSandboxId:f3f5771528b9cb4aa1d625ba84f09a7820fe3964ceb281b04f73fe5e13f6f894,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726852081654759731,Labels:map[string]string{io.kubernetes.container.
name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fa5fbf3465d3154576a11474b7b8548,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f5eb92cf36b033e765f0681f8a6634251dfd6dc0db3410efd3ef6e8580a8b2f,PodSandboxId:b0a0c7068266ae4cdca9368f9606704a9cf02c9b4e905e2d6c179562e88cf8ba,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726852081628521436,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bce625d942177d9274bb431f7a7012b2,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db80f5e2505940b81bc20c39e98d873b35eff2e71aac748d8b406e21f5435fca,PodSandboxId:77a9434f5f03e0fce6c021d0cb97ce2bbe8dbb697371c561907dc8656adca49c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726852081504080347,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.
kubernetes.pod.name: kube-scheduler-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 086260df87dea88f53b3b3ca08d61864,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e70d74afe0f7f7c1e0f084aba99e9c488a1f999e94866b12072fdcbc24b0dad7,PodSandboxId:74a0a0888b0f65cfdbc3321666aaf8f377feaae2a4e3c132c5ed40ccd4468689,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726852081538334631,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-135993,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f0e3932935569a5c49edd0da5a87eba,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=42fa1b8b-d13b-44e4-bf34-35aada814a8a name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:14:21 ha-135993 crio[661]: time="2024-09-20 17:14:21.403524235Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d076a90b-f8c8-4d98-baeb-c1bd1a126230 name=/runtime.v1.RuntimeService/Version
	Sep 20 17:14:21 ha-135993 crio[661]: time="2024-09-20 17:14:21.403600993Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d076a90b-f8c8-4d98-baeb-c1bd1a126230 name=/runtime.v1.RuntimeService/Version
	Sep 20 17:14:21 ha-135993 crio[661]: time="2024-09-20 17:14:21.407452495Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=23533a23-89cb-435c-ad91-914a53e0f075 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 17:14:21 ha-135993 crio[661]: time="2024-09-20 17:14:21.407890173Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726852461407865674,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=23533a23-89cb-435c-ad91-914a53e0f075 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 17:14:21 ha-135993 crio[661]: time="2024-09-20 17:14:21.408596241Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=27a6c5ec-d3cd-44c5-b7ef-11443ba43b40 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:14:21 ha-135993 crio[661]: time="2024-09-20 17:14:21.408675930Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=27a6c5ec-d3cd-44c5-b7ef-11443ba43b40 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:14:21 ha-135993 crio[661]: time="2024-09-20 17:14:21.408908326Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d2a30264a8299336cfb5a4338bab42f8ef663f285ffa7fcb290de88635658a97,PodSandboxId:afa282bba6347297904c2a6423e39984da1a95a2caf8b23dd6857518cc0bb05d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726852250916004479,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-df429,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ef8ed1eb-a7c6-4c49-9e46-924bad4d9577,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36f3e8a4356ff70d6ac1d50a79bc078c32683201ca9c0787097b28f3417fdc90,PodSandboxId:c1cd70ce60a8324767003f8d8ab2c4e9bbafba1edc0b62b8ad54d09201e8b82c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726852104314815295,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57137bee-9a7b-4659-a855-0da82d137cb0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c668f6376655011c61ac0bac2456f3e751f265e08a925a427f7ecfca3d54d97,PodSandboxId:6e8ccc1edc7282309966f52b57c94684aa08f468f2023c8e5a61044327ad0cdb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726852104355492838,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kpbhk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dfd9f1a-148c-4dba-884a-8618b74f82d0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5054778f39bbb435a191b7cbdabfca614d8b6d64b7bce122294d50415d601787,PodSandboxId:6fda3c09e12fee3d9bfbdf163c989e0242d8464e853282bca12154b468cf1a1d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726852104287014426,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gcvg4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 899b2b8c-90
09-46c0-816b-781e85eb8b19,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8792a3b1249ffb1a157682b07dec9a0edf11b81c44e53ea068e0fc6f4548be22,PodSandboxId:ed014d23a111faa2c65032e35b2ef554b33cd4cb8eddbc89e3b77efabb53a5ce,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17268520
92541411891,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6clt2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d73a0817-d84f-4269-9de0-1532287a07db,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4b462c3efaa10b1790ebacd5fe8845d6eb91643cb6db9b2830f33afe1744d57,PodSandboxId:1971096e9fdaaef4dfb856b23a044da810f3ea4c5d05511c7fc53105be9d664b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726852092275090369,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-52r49,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d1124bd-e7cb-4239-a29d-c1d5b8870aff,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a56cd54bb369a6e7651ec4ac3a33220fcfa19226c4ca75ab535aeddbf2811a9,PodSandboxId:bd7dad5ca0acddfe976ec11fd8f4a2cd641ca2b663a812a1209683ee830b901e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726852084813198134,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a641242a9f304152af48553fabd7a110,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b48cf1f03207bb258f12f5506b5e60fd4bb742d7b2ced222664cc2b996ff15d,PodSandboxId:f3f5771528b9cb4aa1d625ba84f09a7820fe3964ceb281b04f73fe5e13f6f894,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726852081654759731,Labels:map[string]string{io.kubernetes.container.
name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fa5fbf3465d3154576a11474b7b8548,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f5eb92cf36b033e765f0681f8a6634251dfd6dc0db3410efd3ef6e8580a8b2f,PodSandboxId:b0a0c7068266ae4cdca9368f9606704a9cf02c9b4e905e2d6c179562e88cf8ba,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726852081628521436,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bce625d942177d9274bb431f7a7012b2,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db80f5e2505940b81bc20c39e98d873b35eff2e71aac748d8b406e21f5435fca,PodSandboxId:77a9434f5f03e0fce6c021d0cb97ce2bbe8dbb697371c561907dc8656adca49c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726852081504080347,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.
kubernetes.pod.name: kube-scheduler-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 086260df87dea88f53b3b3ca08d61864,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e70d74afe0f7f7c1e0f084aba99e9c488a1f999e94866b12072fdcbc24b0dad7,PodSandboxId:74a0a0888b0f65cfdbc3321666aaf8f377feaae2a4e3c132c5ed40ccd4468689,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726852081538334631,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-135993,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f0e3932935569a5c49edd0da5a87eba,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=27a6c5ec-d3cd-44c5-b7ef-11443ba43b40 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:14:21 ha-135993 crio[661]: time="2024-09-20 17:14:21.448403962Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e894a184-5a0d-4e92-b2ae-4fd5f1463170 name=/runtime.v1.RuntimeService/Version
	Sep 20 17:14:21 ha-135993 crio[661]: time="2024-09-20 17:14:21.448508794Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e894a184-5a0d-4e92-b2ae-4fd5f1463170 name=/runtime.v1.RuntimeService/Version
	Sep 20 17:14:21 ha-135993 crio[661]: time="2024-09-20 17:14:21.450456282Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=38d24715-344e-4286-b061-45e0ff0a76b7 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 17:14:21 ha-135993 crio[661]: time="2024-09-20 17:14:21.450913901Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726852461450890696,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=38d24715-344e-4286-b061-45e0ff0a76b7 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 17:14:21 ha-135993 crio[661]: time="2024-09-20 17:14:21.451670487Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=719726ba-361f-4aad-bb50-79cdae8316cb name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:14:21 ha-135993 crio[661]: time="2024-09-20 17:14:21.451728147Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=719726ba-361f-4aad-bb50-79cdae8316cb name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:14:21 ha-135993 crio[661]: time="2024-09-20 17:14:21.451961649Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d2a30264a8299336cfb5a4338bab42f8ef663f285ffa7fcb290de88635658a97,PodSandboxId:afa282bba6347297904c2a6423e39984da1a95a2caf8b23dd6857518cc0bb05d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726852250916004479,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-df429,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ef8ed1eb-a7c6-4c49-9e46-924bad4d9577,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36f3e8a4356ff70d6ac1d50a79bc078c32683201ca9c0787097b28f3417fdc90,PodSandboxId:c1cd70ce60a8324767003f8d8ab2c4e9bbafba1edc0b62b8ad54d09201e8b82c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726852104314815295,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57137bee-9a7b-4659-a855-0da82d137cb0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c668f6376655011c61ac0bac2456f3e751f265e08a925a427f7ecfca3d54d97,PodSandboxId:6e8ccc1edc7282309966f52b57c94684aa08f468f2023c8e5a61044327ad0cdb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726852104355492838,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kpbhk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dfd9f1a-148c-4dba-884a-8618b74f82d0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5054778f39bbb435a191b7cbdabfca614d8b6d64b7bce122294d50415d601787,PodSandboxId:6fda3c09e12fee3d9bfbdf163c989e0242d8464e853282bca12154b468cf1a1d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726852104287014426,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gcvg4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 899b2b8c-90
09-46c0-816b-781e85eb8b19,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8792a3b1249ffb1a157682b07dec9a0edf11b81c44e53ea068e0fc6f4548be22,PodSandboxId:ed014d23a111faa2c65032e35b2ef554b33cd4cb8eddbc89e3b77efabb53a5ce,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17268520
92541411891,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6clt2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d73a0817-d84f-4269-9de0-1532287a07db,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4b462c3efaa10b1790ebacd5fe8845d6eb91643cb6db9b2830f33afe1744d57,PodSandboxId:1971096e9fdaaef4dfb856b23a044da810f3ea4c5d05511c7fc53105be9d664b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726852092275090369,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-52r49,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d1124bd-e7cb-4239-a29d-c1d5b8870aff,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a56cd54bb369a6e7651ec4ac3a33220fcfa19226c4ca75ab535aeddbf2811a9,PodSandboxId:bd7dad5ca0acddfe976ec11fd8f4a2cd641ca2b663a812a1209683ee830b901e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726852084813198134,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a641242a9f304152af48553fabd7a110,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b48cf1f03207bb258f12f5506b5e60fd4bb742d7b2ced222664cc2b996ff15d,PodSandboxId:f3f5771528b9cb4aa1d625ba84f09a7820fe3964ceb281b04f73fe5e13f6f894,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726852081654759731,Labels:map[string]string{io.kubernetes.container.
name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fa5fbf3465d3154576a11474b7b8548,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f5eb92cf36b033e765f0681f8a6634251dfd6dc0db3410efd3ef6e8580a8b2f,PodSandboxId:b0a0c7068266ae4cdca9368f9606704a9cf02c9b4e905e2d6c179562e88cf8ba,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726852081628521436,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bce625d942177d9274bb431f7a7012b2,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db80f5e2505940b81bc20c39e98d873b35eff2e71aac748d8b406e21f5435fca,PodSandboxId:77a9434f5f03e0fce6c021d0cb97ce2bbe8dbb697371c561907dc8656adca49c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726852081504080347,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.
kubernetes.pod.name: kube-scheduler-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 086260df87dea88f53b3b3ca08d61864,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e70d74afe0f7f7c1e0f084aba99e9c488a1f999e94866b12072fdcbc24b0dad7,PodSandboxId:74a0a0888b0f65cfdbc3321666aaf8f377feaae2a4e3c132c5ed40ccd4468689,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726852081538334631,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-135993,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f0e3932935569a5c49edd0da5a87eba,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=719726ba-361f-4aad-bb50-79cdae8316cb name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d2a30264a8299       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   afa282bba6347       busybox-7dff88458-df429
	7c668f6376655       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   6e8ccc1edc728       coredns-7c65d6cfc9-kpbhk
	36f3e8a4356ff       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       0                   c1cd70ce60a83       storage-provisioner
	5054778f39bbb       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   6fda3c09e12fe       coredns-7c65d6cfc9-gcvg4
	8792a3b1249ff       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      6 minutes ago       Running             kindnet-cni               0                   ed014d23a111f       kindnet-6clt2
	e4b462c3efaa1       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      6 minutes ago       Running             kube-proxy                0                   1971096e9fdaa       kube-proxy-52r49
	1a56cd54bb369       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago       Running             kube-vip                  0                   bd7dad5ca0acd       kube-vip-ha-135993
	2b48cf1f03207       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      6 minutes ago       Running             kube-controller-manager   0                   f3f5771528b9c       kube-controller-manager-ha-135993
	1f5eb92cf36b0       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      6 minutes ago       Running             kube-apiserver            0                   b0a0c7068266a       kube-apiserver-ha-135993
	e70d74afe0f7f       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   74a0a0888b0f6       etcd-ha-135993
	db80f5e250594       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      6 minutes ago       Running             kube-scheduler            0                   77a9434f5f03e       kube-scheduler-ha-135993
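The table above is CRI-O's view of the containers on the primary node. A roughly equivalent listing can be pulled from the node directly with crictl (a sketch; the socket path is taken from the cri-socket annotation shown further below, and the profile name from the node name):

    # List all containers CRI-O knows about on the ha-135993 node
    minikube ssh -p ha-135993 -- sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a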
	
	
	==> coredns [5054778f39bbb435a191b7cbdabfca614d8b6d64b7bce122294d50415d601787] <==
	[INFO] 10.244.0.4:37855 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001838356s
	[INFO] 10.244.0.4:49834 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000062858s
	[INFO] 10.244.0.4:37202 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001240214s
	[INFO] 10.244.0.4:56343 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000095387s
	[INFO] 10.244.0.4:41974 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000080526s
	[INFO] 10.244.2.2:50089 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000170402s
	[INFO] 10.244.2.2:41205 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001201877s
	[INFO] 10.244.2.2:49094 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000154615s
	[INFO] 10.244.2.2:54226 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000116561s
	[INFO] 10.244.2.2:56885 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000137064s
	[INFO] 10.244.1.2:43199 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000133082s
	[INFO] 10.244.1.2:54300 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000122573s
	[INFO] 10.244.1.2:57535 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000095892s
	[INFO] 10.244.1.2:45845 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000088385s
	[INFO] 10.244.0.4:53452 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000193594s
	[INFO] 10.244.0.4:46571 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000075164s
	[INFO] 10.244.2.2:44125 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000166147s
	[INFO] 10.244.2.2:59364 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000113432s
	[INFO] 10.244.2.2:54562 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000112311s
	[INFO] 10.244.1.2:60066 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132637s
	[INFO] 10.244.1.2:43717 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00017413s
	[INFO] 10.244.1.2:51684 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000156522s
	[INFO] 10.244.0.4:56213 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000141144s
	[INFO] 10.244.2.2:56175 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000117658s
	[INFO] 10.244.2.2:59810 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000111868s
	
	
	==> coredns [7c668f6376655011c61ac0bac2456f3e751f265e08a925a427f7ecfca3d54d97] <==
	[INFO] 10.244.0.4:48619 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00021775s
	[INFO] 10.244.0.4:46660 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000082726s
	[INFO] 10.244.2.2:38551 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.001366629s
	[INFO] 10.244.2.2:52956 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001396555s
	[INFO] 10.244.1.2:37231 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000279388s
	[INFO] 10.244.1.2:48508 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000280908s
	[INFO] 10.244.1.2:47714 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004766702s
	[INFO] 10.244.1.2:42041 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000169898s
	[INFO] 10.244.1.2:35115 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000212804s
	[INFO] 10.244.1.2:39956 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000247275s
	[INFO] 10.244.0.4:46191 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134745s
	[INFO] 10.244.0.4:49235 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000135262s
	[INFO] 10.244.0.4:33483 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000051965s
	[INFO] 10.244.2.2:40337 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000151683s
	[INFO] 10.244.2.2:54318 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001827239s
	[INFO] 10.244.2.2:58127 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000121998s
	[INFO] 10.244.0.4:54582 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000104228s
	[INFO] 10.244.0.4:57447 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000174115s
	[INFO] 10.244.2.2:39583 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000117382s
	[INFO] 10.244.1.2:55713 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00021321s
	[INFO] 10.244.0.4:57049 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000099997s
	[INFO] 10.244.0.4:39453 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000227319s
	[INFO] 10.244.0.4:46666 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000102501s
	[INFO] 10.244.2.2:49743 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000159057s
	[INFO] 10.244.2.2:55499 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000197724s
	
	
	==> describe nodes <==
	Name:               ha-135993
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-135993
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0626f22cf0d915d75e291a5bce701f94395056e1
	                    minikube.k8s.io/name=ha-135993
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_20T17_08_08_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 17:08:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-135993
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 17:14:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 17:11:12 +0000   Fri, 20 Sep 2024 17:08:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 17:11:12 +0000   Fri, 20 Sep 2024 17:08:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 17:11:12 +0000   Fri, 20 Sep 2024 17:08:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 17:11:12 +0000   Fri, 20 Sep 2024 17:08:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.60
	  Hostname:    ha-135993
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6e83ceee6b834466a3a10733ff3c06b4
	  System UUID:                6e83ceee-6b83-4466-a3a1-0733ff3c06b4
	  Boot ID:                    ddcdaa90-2381-4c26-932e-b18d04f91d88
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-df429              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m35s
	  kube-system                 coredns-7c65d6cfc9-gcvg4             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m9s
	  kube-system                 coredns-7c65d6cfc9-kpbhk             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m9s
	  kube-system                 etcd-ha-135993                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m14s
	  kube-system                 kindnet-6clt2                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m10s
	  kube-system                 kube-apiserver-ha-135993             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m14s
	  kube-system                 kube-controller-manager-ha-135993    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m14s
	  kube-system                 kube-proxy-52r49                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m10s
	  kube-system                 kube-scheduler-ha-135993             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m14s
	  kube-system                 kube-vip-ha-135993                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m16s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m8s   kube-proxy       
	  Normal  Starting                 6m14s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m14s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m14s  kubelet          Node ha-135993 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m14s  kubelet          Node ha-135993 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m14s  kubelet          Node ha-135993 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m10s  node-controller  Node ha-135993 event: Registered Node ha-135993 in Controller
	  Normal  NodeReady                5m58s  kubelet          Node ha-135993 status is now: NodeReady
	  Normal  RegisteredNode           5m10s  node-controller  Node ha-135993 event: Registered Node ha-135993 in Controller
	  Normal  RegisteredNode           3m55s  node-controller  Node ha-135993 event: Registered Node ha-135993 in Controller
	
	
	Name:               ha-135993-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-135993-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0626f22cf0d915d75e291a5bce701f94395056e1
	                    minikube.k8s.io/name=ha-135993
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_20T17_09_05_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 17:09:03 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-135993-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 17:11:56 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 20 Sep 2024 17:11:05 +0000   Fri, 20 Sep 2024 17:12:36 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 20 Sep 2024 17:11:05 +0000   Fri, 20 Sep 2024 17:12:36 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 20 Sep 2024 17:11:05 +0000   Fri, 20 Sep 2024 17:12:36 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 20 Sep 2024 17:11:05 +0000   Fri, 20 Sep 2024 17:12:36 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.227
	  Hostname:    ha-135993-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 50c529298e8f4fbb9207cda8fc4b8abe
	  System UUID:                50c52929-8e8f-4fbb-9207-cda8fc4b8abe
	  Boot ID:                    7739b1d1-ac71-4753-b570-c987dc1deaff
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-cw8r4                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m35s
	  kube-system                 etcd-ha-135993-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m16s
	  kube-system                 kindnet-5m4r8                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m18s
	  kube-system                 kube-apiserver-ha-135993-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m16s
	  kube-system                 kube-controller-manager-ha-135993-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m11s
	  kube-system                 kube-proxy-z6xqt                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m18s
	  kube-system                 kube-scheduler-ha-135993-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m16s
	  kube-system                 kube-vip-ha-135993-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m13s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m14s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  5m19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  CIDRAssignmentFailed     5m18s                  cidrAllocator    Node ha-135993-m02 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientMemory  5m18s (x8 over 5m19s)  kubelet          Node ha-135993-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m18s (x8 over 5m19s)  kubelet          Node ha-135993-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m18s (x7 over 5m19s)  kubelet          Node ha-135993-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m15s                  node-controller  Node ha-135993-m02 event: Registered Node ha-135993-m02 in Controller
	  Normal  RegisteredNode           5m10s                  node-controller  Node ha-135993-m02 event: Registered Node ha-135993-m02 in Controller
	  Normal  RegisteredNode           3m55s                  node-controller  Node ha-135993-m02 event: Registered Node ha-135993-m02 in Controller
	  Normal  NodeNotReady             105s                   node-controller  Node ha-135993-m02 status is now: NodeNotReady
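The m02 entry above carries node.kubernetes.io/unreachable NoSchedule/NoExecute taints, all of its conditions are Unknown ("Kubelet stopped posting node status"), and its last event is NodeNotReady, so the second control-plane node was down when this snapshot was taken. The same view can be reproduced against this cluster with kubectl (assuming the kubeconfig context minikube creates is named after the profile, ha-135993):

    # Readiness and taints across all four nodes
    kubectl --context ha-135993 get nodes -o wide
    kubectl --context ha-135993 describe node ha-135993-m02 | grep -A 2 Taints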
	
	
	Name:               ha-135993-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-135993-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0626f22cf0d915d75e291a5bce701f94395056e1
	                    minikube.k8s.io/name=ha-135993
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_20T17_10_21_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 17:10:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-135993-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 17:14:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 17:11:19 +0000   Fri, 20 Sep 2024 17:10:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 17:11:19 +0000   Fri, 20 Sep 2024 17:10:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 17:11:19 +0000   Fri, 20 Sep 2024 17:10:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 17:11:19 +0000   Fri, 20 Sep 2024 17:10:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.133
	  Hostname:    ha-135993-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a16666848f8545f6bbb9419c97d0a0cd
	  System UUID:                a1666684-8f85-45f6-bbb9-419c97d0a0cd
	  Boot ID:                    fe050582-04ee-4cce-a278-cfc26db3e639
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-ksx56                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m35s
	  kube-system                 etcd-ha-135993-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m1s
	  kube-system                 kindnet-hcqf8                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m3s
	  kube-system                 kube-apiserver-ha-135993-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 kube-controller-manager-ha-135993-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 kube-proxy-45c9m                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 kube-scheduler-ha-135993-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m1s
	  kube-system                 kube-vip-ha-135993-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m59s                kube-proxy       
	  Normal  CIDRAssignmentFailed     4m3s                 cidrAllocator    Node ha-135993-m03 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientMemory  4m3s (x8 over 4m3s)  kubelet          Node ha-135993-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m3s (x8 over 4m3s)  kubelet          Node ha-135993-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m3s (x7 over 4m3s)  kubelet          Node ha-135993-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m3s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m                   node-controller  Node ha-135993-m03 event: Registered Node ha-135993-m03 in Controller
	  Normal  RegisteredNode           4m                   node-controller  Node ha-135993-m03 event: Registered Node ha-135993-m03 in Controller
	  Normal  RegisteredNode           3m55s                node-controller  Node ha-135993-m03 event: Registered Node ha-135993-m03 in Controller
	
	
	Name:               ha-135993-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-135993-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0626f22cf0d915d75e291a5bce701f94395056e1
	                    minikube.k8s.io/name=ha-135993
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_20T17_11_21_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 17:11:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-135993-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 17:14:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 17:11:51 +0000   Fri, 20 Sep 2024 17:11:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 17:11:51 +0000   Fri, 20 Sep 2024 17:11:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 17:11:51 +0000   Fri, 20 Sep 2024 17:11:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 17:11:51 +0000   Fri, 20 Sep 2024 17:11:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.101
	  Hostname:    ha-135993-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 16a282b7a18241dba73a5c13e70f4f98
	  System UUID:                16a282b7-a182-41db-a73a-5c13e70f4f98
	  Boot ID:                    57ea2493-1758-4be8-813f-bc554e901359
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.4.0/24
	PodCIDRs:                     10.244.4.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-88sbs       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m
	  kube-system                 kube-proxy-2q8mx    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 2m56s              kube-proxy       
	  Normal  RegisteredNode           3m                 node-controller  Node ha-135993-m04 event: Registered Node ha-135993-m04 in Controller
	  Normal  RegisteredNode           3m                 node-controller  Node ha-135993-m04 event: Registered Node ha-135993-m04 in Controller
	  Normal  CIDRAssignmentFailed     3m                 cidrAllocator    Node ha-135993-m04 status is now: CIDRAssignmentFailed
	  Normal  CIDRAssignmentFailed     3m                 cidrAllocator    Node ha-135993-m04 status is now: CIDRAssignmentFailed
	  Normal  RegisteredNode           3m                 node-controller  Node ha-135993-m04 event: Registered Node ha-135993-m04 in Controller
	  Normal  NodeHasSufficientMemory  3m (x2 over 3m1s)  kubelet          Node ha-135993-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m (x2 over 3m1s)  kubelet          Node ha-135993-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m (x2 over 3m1s)  kubelet          Node ha-135993-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m41s              kubelet          Node ha-135993-m04 status is now: NodeReady
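Note that m02, m03 and m04 each log a CIDRAssignmentFailed event, and m04 ends up with PodCIDR 10.244.4.0/24 rather than the next sequential 10.244.3.0/24, which is consistent with those failed first allocation attempts. The assignments can be listed in one shot (same kubeconfig-context assumption as above):

    # Print each node's name and its assigned PodCIDR
    kubectl --context ha-135993 get nodes \
      -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'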
	
	
	==> dmesg <==
	[Sep20 17:07] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051754] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036812] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.151587] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.924820] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.564513] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.722394] systemd-fstab-generator[585]: Ignoring "noauto" option for root device
	[  +0.057997] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064240] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.169257] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.120861] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.270464] systemd-fstab-generator[652]: Ignoring "noauto" option for root device
	[  +4.125709] systemd-fstab-generator[747]: Ignoring "noauto" option for root device
	[Sep20 17:08] systemd-fstab-generator[882]: Ignoring "noauto" option for root device
	[  +0.057676] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.984086] systemd-fstab-generator[1298]: Ignoring "noauto" option for root device
	[  +0.083524] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.134244] kauditd_printk_skb: 33 callbacks suppressed
	[ +11.488548] kauditd_printk_skb: 26 callbacks suppressed
	[Sep20 17:09] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [e70d74afe0f7f7c1e0f084aba99e9c488a1f999e94866b12072fdcbc24b0dad7] <==
	{"level":"warn","ts":"2024-09-20T17:14:21.654959Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"a8909dfe06e7dd47","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T17:14:21.720945Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"a8909dfe06e7dd47","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T17:14:21.729651Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"a8909dfe06e7dd47","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T17:14:21.733809Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"a8909dfe06e7dd47","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T17:14:21.738478Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"a8909dfe06e7dd47","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T17:14:21.745885Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"a8909dfe06e7dd47","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T17:14:21.755088Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"a8909dfe06e7dd47","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T17:14:21.756574Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"a8909dfe06e7dd47","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T17:14:21.766764Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"a8909dfe06e7dd47","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T17:14:21.775831Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"a8909dfe06e7dd47","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T17:14:21.779057Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"a8909dfe06e7dd47","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T17:14:21.785036Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"a8909dfe06e7dd47","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T17:14:21.793305Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"a8909dfe06e7dd47","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T17:14:21.800536Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"a8909dfe06e7dd47","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T17:14:21.803966Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"a8909dfe06e7dd47","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T17:14:21.806947Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"a8909dfe06e7dd47","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T17:14:21.812954Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"a8909dfe06e7dd47","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T17:14:21.819565Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"a8909dfe06e7dd47","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T17:14:21.829168Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"a8909dfe06e7dd47","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T17:14:21.832742Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"a8909dfe06e7dd47","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T17:14:21.836523Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"a8909dfe06e7dd47","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T17:14:21.840846Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"a8909dfe06e7dd47","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T17:14:21.847830Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"a8909dfe06e7dd47","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T17:14:21.855320Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"a8909dfe06e7dd47","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T17:14:21.855889Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"a8909dfe06e7dd47","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 17:14:21 up 6 min,  0 users,  load average: 0.13, 0.27, 0.16
	Linux ha-135993 5.10.207 #1 SMP Fri Sep 20 03:13:51 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [8792a3b1249ffb1a157682b07dec9a0edf11b81c44e53ea068e0fc6f4548be22] <==
	I0920 17:13:43.583416       1 main.go:322] Node ha-135993-m02 has CIDR [10.244.1.0/24] 
	I0920 17:13:53.582427       1 main.go:295] Handling node with IPs: map[192.168.39.60:{}]
	I0920 17:13:53.582562       1 main.go:299] handling current node
	I0920 17:13:53.582596       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0920 17:13:53.582614       1 main.go:322] Node ha-135993-m02 has CIDR [10.244.1.0/24] 
	I0920 17:13:53.582782       1 main.go:295] Handling node with IPs: map[192.168.39.133:{}]
	I0920 17:13:53.582845       1 main.go:322] Node ha-135993-m03 has CIDR [10.244.2.0/24] 
	I0920 17:13:53.582951       1 main.go:295] Handling node with IPs: map[192.168.39.101:{}]
	I0920 17:13:53.582999       1 main.go:322] Node ha-135993-m04 has CIDR [10.244.4.0/24] 
	I0920 17:14:03.590376       1 main.go:295] Handling node with IPs: map[192.168.39.60:{}]
	I0920 17:14:03.590443       1 main.go:299] handling current node
	I0920 17:14:03.590471       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0920 17:14:03.590480       1 main.go:322] Node ha-135993-m02 has CIDR [10.244.1.0/24] 
	I0920 17:14:03.590676       1 main.go:295] Handling node with IPs: map[192.168.39.133:{}]
	I0920 17:14:03.590706       1 main.go:322] Node ha-135993-m03 has CIDR [10.244.2.0/24] 
	I0920 17:14:03.590816       1 main.go:295] Handling node with IPs: map[192.168.39.101:{}]
	I0920 17:14:03.590843       1 main.go:322] Node ha-135993-m04 has CIDR [10.244.4.0/24] 
	I0920 17:14:13.583195       1 main.go:295] Handling node with IPs: map[192.168.39.60:{}]
	I0920 17:14:13.583335       1 main.go:299] handling current node
	I0920 17:14:13.583420       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0920 17:14:13.583466       1 main.go:322] Node ha-135993-m02 has CIDR [10.244.1.0/24] 
	I0920 17:14:13.583620       1 main.go:295] Handling node with IPs: map[192.168.39.133:{}]
	I0920 17:14:13.583644       1 main.go:322] Node ha-135993-m03 has CIDR [10.244.2.0/24] 
	I0920 17:14:13.583702       1 main.go:295] Handling node with IPs: map[192.168.39.101:{}]
	I0920 17:14:13.583720       1 main.go:322] Node ha-135993-m04 has CIDR [10.244.4.0/24] 
	
	
	==> kube-apiserver [1f5eb92cf36b033e765f0681f8a6634251dfd6dc0db3410efd3ef6e8580a8b2f] <==
	I0920 17:08:07.820550       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0920 17:08:07.842885       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0920 17:08:07.862886       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0920 17:08:11.804724       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0920 17:08:12.220544       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0920 17:09:03.875074       1 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E0920 17:09:03.875307       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 9.525µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E0920 17:09:03.876629       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E0920 17:09:03.877931       1 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E0920 17:09:03.879420       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="4.477542ms" method="POST" path="/api/v1/namespaces/kube-system/events" result=null
	E0920 17:10:52.052815       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53414: use of closed network connection
	E0920 17:10:52.239817       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53432: use of closed network connection
	E0920 17:10:52.430950       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53454: use of closed network connection
	E0920 17:10:52.630448       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53478: use of closed network connection
	E0920 17:10:52.817389       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53506: use of closed network connection
	E0920 17:10:52.989544       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53526: use of closed network connection
	E0920 17:10:53.190104       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53554: use of closed network connection
	E0920 17:10:53.362503       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53570: use of closed network connection
	E0920 17:10:53.531925       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53576: use of closed network connection
	E0920 17:10:53.828718       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53614: use of closed network connection
	E0920 17:10:53.999814       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53638: use of closed network connection
	E0920 17:10:54.192818       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53650: use of closed network connection
	E0920 17:10:54.370009       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53670: use of closed network connection
	E0920 17:10:54.550881       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53696: use of closed network connection
	E0920 17:10:54.730661       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53720: use of closed network connection
	
	
	==> kube-controller-manager [2b48cf1f03207bb258f12f5506b5e60fd4bb742d7b2ced222664cc2b996ff15d] <==
	E0920 17:11:20.808313       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-vs5cl failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-vs5cl\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E0920 17:11:20.822359       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-vs5cl failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-vs5cl\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0920 17:11:21.218667       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-135993-m04\" does not exist"
	I0920 17:11:21.266531       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-135993-m04" podCIDRs=["10.244.4.0/24"]
	I0920 17:11:21.268323       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-135993-m04"
	I0920 17:11:21.270125       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-135993-m04"
	I0920 17:11:21.352675       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-135993-m04"
	I0920 17:11:21.402439       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-135993-m04"
	I0920 17:11:21.449183       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-135993-m04"
	I0920 17:11:21.529576       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-135993-m04"
	I0920 17:11:21.640943       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-135993-m04"
	I0920 17:11:21.919088       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-135993-m04"
	I0920 17:11:31.476194       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-135993-m04"
	I0920 17:11:40.764702       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-135993-m04"
	I0920 17:11:40.765063       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-135993-m04"
	I0920 17:11:40.780191       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-135993-m04"
	I0920 17:11:41.285623       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-135993-m04"
	I0920 17:11:51.690173       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-135993-m04"
	I0920 17:12:36.378745       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-135993-m04"
	I0920 17:12:36.380639       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-135993-m02"
	I0920 17:12:36.411090       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-135993-m02"
	I0920 17:12:36.576962       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="32.12946ms"
	I0920 17:12:36.577066       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="57.179µs"
	I0920 17:12:36.637966       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-135993-m02"
	I0920 17:12:41.581669       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-135993-m02"
	
	
	==> kube-proxy [e4b462c3efaa10b1790ebacd5fe8845d6eb91643cb6db9b2830f33afe1744d57] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0920 17:08:12.692616       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0920 17:08:12.737645       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.60"]
	E0920 17:08:12.737744       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 17:08:12.838388       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0920 17:08:12.838464       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0920 17:08:12.838491       1 server_linux.go:169] "Using iptables Proxier"
	I0920 17:08:12.844425       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 17:08:12.846303       1 server.go:483] "Version info" version="v1.31.1"
	I0920 17:08:12.846331       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 17:08:12.851490       1 config.go:199] "Starting service config controller"
	I0920 17:08:12.851939       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 17:08:12.853474       1 config.go:105] "Starting endpoint slice config controller"
	I0920 17:08:12.855057       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 17:08:12.854368       1 config.go:328] "Starting node config controller"
	I0920 17:08:12.883844       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 17:08:12.954338       1 shared_informer.go:320] Caches are synced for service config
	I0920 17:08:12.955452       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0920 17:08:12.985151       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [db80f5e2505940b81bc20c39e98d873b35eff2e71aac748d8b406e21f5435fca] <==
	W0920 17:08:05.980455       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0920 17:08:05.980516       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0920 17:08:05.980456       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0920 17:08:07.504058       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0920 17:10:18.405414       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-hcqf8\": pod kindnet-hcqf8 is already assigned to node \"ha-135993-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-hcqf8" node="ha-135993-m03"
	E0920 17:10:18.405548       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-45c9m\": pod kube-proxy-45c9m is already assigned to node \"ha-135993-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-45c9m" node="ha-135993-m03"
	E0920 17:10:18.409425       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-45c9m\": pod kube-proxy-45c9m is already assigned to node \"ha-135993-m03\"" pod="kube-system/kube-proxy-45c9m"
	E0920 17:10:18.411700       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-hcqf8\": pod kindnet-hcqf8 is already assigned to node \"ha-135993-m03\"" pod="kube-system/kindnet-hcqf8"
	I0920 17:10:18.416087       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-hcqf8" node="ha-135993-m03"
	E0920 17:10:46.972562       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-ksx56\": pod busybox-7dff88458-ksx56 is already assigned to node \"ha-135993-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-ksx56" node="ha-135993-m03"
	E0920 17:10:46.972640       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod f499b34f-4e98-4ebc-90b5-90b1b13d26c7(default/busybox-7dff88458-ksx56) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-ksx56"
	E0920 17:10:46.972665       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-ksx56\": pod busybox-7dff88458-ksx56 is already assigned to node \"ha-135993-m03\"" pod="default/busybox-7dff88458-ksx56"
	I0920 17:10:46.972689       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-ksx56" node="ha-135993-m03"
	E0920 17:11:21.276134       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-w6gf8\": pod kube-proxy-w6gf8 is already assigned to node \"ha-135993-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-w6gf8" node="ha-135993-m04"
	E0920 17:11:21.276387       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 344e8822-62e5-4678-9654-381b97c31527(kube-system/kube-proxy-w6gf8) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-w6gf8"
	E0920 17:11:21.277109       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-w6gf8\": pod kube-proxy-w6gf8 is already assigned to node \"ha-135993-m04\"" pod="kube-system/kube-proxy-w6gf8"
	I0920 17:11:21.277247       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-w6gf8" node="ha-135993-m04"
	E0920 17:11:21.344572       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-n6xl6\": pod kindnet-n6xl6 is already assigned to node \"ha-135993-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-n6xl6" node="ha-135993-m04"
	E0920 17:11:21.344755       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-n6xl6\": pod kindnet-n6xl6 is already assigned to node \"ha-135993-m04\"" pod="kube-system/kindnet-n6xl6"
	E0920 17:11:21.388481       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-jfsxq\": pod kube-proxy-jfsxq is already assigned to node \"ha-135993-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-jfsxq" node="ha-135993-m04"
	E0920 17:11:21.388679       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-jfsxq\": pod kube-proxy-jfsxq is already assigned to node \"ha-135993-m04\"" pod="kube-system/kube-proxy-jfsxq"
	E0920 17:11:21.399720       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-svxp4\": pod kindnet-svxp4 is already assigned to node \"ha-135993-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-svxp4" node="ha-135993-m04"
	E0920 17:11:21.401135       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod a758ff76-3e8c-40c1-9742-2fbcddd4aa87(kube-system/kindnet-svxp4) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-svxp4"
	E0920 17:11:21.401322       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-svxp4\": pod kindnet-svxp4 is already assigned to node \"ha-135993-m04\"" pod="kube-system/kindnet-svxp4"
	I0920 17:11:21.401439       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-svxp4" node="ha-135993-m04"
	
	
	==> kubelet <==
	Sep 20 17:13:07 ha-135993 kubelet[1305]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 20 17:13:07 ha-135993 kubelet[1305]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 20 17:13:07 ha-135993 kubelet[1305]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 20 17:13:07 ha-135993 kubelet[1305]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 20 17:13:07 ha-135993 kubelet[1305]: E0920 17:13:07.854081    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726852387853510446,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:13:07 ha-135993 kubelet[1305]: E0920 17:13:07.854113    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726852387853510446,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:13:17 ha-135993 kubelet[1305]: E0920 17:13:17.855865    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726852397855363761,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:13:17 ha-135993 kubelet[1305]: E0920 17:13:17.856405    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726852397855363761,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:13:27 ha-135993 kubelet[1305]: E0920 17:13:27.859417    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726852407858772404,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:13:27 ha-135993 kubelet[1305]: E0920 17:13:27.859469    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726852407858772404,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:13:37 ha-135993 kubelet[1305]: E0920 17:13:37.861128    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726852417860446509,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:13:37 ha-135993 kubelet[1305]: E0920 17:13:37.861168    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726852417860446509,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:13:47 ha-135993 kubelet[1305]: E0920 17:13:47.864331    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726852427863997805,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:13:47 ha-135993 kubelet[1305]: E0920 17:13:47.864372    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726852427863997805,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:13:57 ha-135993 kubelet[1305]: E0920 17:13:57.866952    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726852437866556596,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:13:57 ha-135993 kubelet[1305]: E0920 17:13:57.866977    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726852437866556596,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:14:07 ha-135993 kubelet[1305]: E0920 17:14:07.772947    1305 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 20 17:14:07 ha-135993 kubelet[1305]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 20 17:14:07 ha-135993 kubelet[1305]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 20 17:14:07 ha-135993 kubelet[1305]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 20 17:14:07 ha-135993 kubelet[1305]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 20 17:14:07 ha-135993 kubelet[1305]: E0920 17:14:07.869325    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726852447868022083,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:14:07 ha-135993 kubelet[1305]: E0920 17:14:07.869353    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726852447868022083,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:14:17 ha-135993 kubelet[1305]: E0920 17:14:17.871289    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726852457870862251,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:14:17 ha-135993 kubelet[1305]: E0920 17:14:17.871679    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726852457870862251,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-135993 -n ha-135993
helpers_test.go:261: (dbg) Run:  kubectl --context ha-135993 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.46s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (5.6s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E0920 17:14:23.793395   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/functional-945494/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.408421976s)
ha_test.go:413: expected profile "ha-135993" in json of 'profile list' to have "Degraded" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-135993\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-135993\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":
1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-135993\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.60\",\"Port\":8443,\"Kuber
netesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.227\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.39.133\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.39.101\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\
"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\
":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-135993 -n ha-135993
helpers_test.go:244: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-135993 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-135993 logs -n 25: (1.32316316s)
helpers_test.go:252: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-135993 cp ha-135993-m03:/home/docker/cp-test.txt                              | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:11 UTC | 20 Sep 24 17:11 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2672215621/001/cp-test_ha-135993-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-135993 ssh -n                                                                 | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:11 UTC | 20 Sep 24 17:11 UTC |
	|         | ha-135993-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-135993 cp ha-135993-m03:/home/docker/cp-test.txt                              | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:11 UTC | 20 Sep 24 17:11 UTC |
	|         | ha-135993:/home/docker/cp-test_ha-135993-m03_ha-135993.txt                       |           |         |         |                     |                     |
	| ssh     | ha-135993 ssh -n                                                                 | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:11 UTC | 20 Sep 24 17:11 UTC |
	|         | ha-135993-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-135993 ssh -n ha-135993 sudo cat                                              | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:11 UTC | 20 Sep 24 17:11 UTC |
	|         | /home/docker/cp-test_ha-135993-m03_ha-135993.txt                                 |           |         |         |                     |                     |
	| cp      | ha-135993 cp ha-135993-m03:/home/docker/cp-test.txt                              | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:11 UTC | 20 Sep 24 17:11 UTC |
	|         | ha-135993-m02:/home/docker/cp-test_ha-135993-m03_ha-135993-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-135993 ssh -n                                                                 | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:11 UTC | 20 Sep 24 17:11 UTC |
	|         | ha-135993-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-135993 ssh -n ha-135993-m02 sudo cat                                          | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:11 UTC | 20 Sep 24 17:11 UTC |
	|         | /home/docker/cp-test_ha-135993-m03_ha-135993-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-135993 cp ha-135993-m03:/home/docker/cp-test.txt                              | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:11 UTC | 20 Sep 24 17:11 UTC |
	|         | ha-135993-m04:/home/docker/cp-test_ha-135993-m03_ha-135993-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-135993 ssh -n                                                                 | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:11 UTC | 20 Sep 24 17:11 UTC |
	|         | ha-135993-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-135993 ssh -n ha-135993-m04 sudo cat                                          | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:11 UTC | 20 Sep 24 17:11 UTC |
	|         | /home/docker/cp-test_ha-135993-m03_ha-135993-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-135993 cp testdata/cp-test.txt                                                | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:11 UTC | 20 Sep 24 17:11 UTC |
	|         | ha-135993-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-135993 ssh -n                                                                 | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:11 UTC | 20 Sep 24 17:11 UTC |
	|         | ha-135993-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-135993 cp ha-135993-m04:/home/docker/cp-test.txt                              | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:11 UTC | 20 Sep 24 17:11 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2672215621/001/cp-test_ha-135993-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-135993 ssh -n                                                                 | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:11 UTC | 20 Sep 24 17:11 UTC |
	|         | ha-135993-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-135993 cp ha-135993-m04:/home/docker/cp-test.txt                              | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:11 UTC | 20 Sep 24 17:11 UTC |
	|         | ha-135993:/home/docker/cp-test_ha-135993-m04_ha-135993.txt                       |           |         |         |                     |                     |
	| ssh     | ha-135993 ssh -n                                                                 | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:11 UTC | 20 Sep 24 17:11 UTC |
	|         | ha-135993-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-135993 ssh -n ha-135993 sudo cat                                              | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:11 UTC | 20 Sep 24 17:11 UTC |
	|         | /home/docker/cp-test_ha-135993-m04_ha-135993.txt                                 |           |         |         |                     |                     |
	| cp      | ha-135993 cp ha-135993-m04:/home/docker/cp-test.txt                              | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:11 UTC | 20 Sep 24 17:12 UTC |
	|         | ha-135993-m02:/home/docker/cp-test_ha-135993-m04_ha-135993-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-135993 ssh -n                                                                 | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:12 UTC | 20 Sep 24 17:12 UTC |
	|         | ha-135993-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-135993 ssh -n ha-135993-m02 sudo cat                                          | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:12 UTC | 20 Sep 24 17:12 UTC |
	|         | /home/docker/cp-test_ha-135993-m04_ha-135993-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-135993 cp ha-135993-m04:/home/docker/cp-test.txt                              | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:12 UTC | 20 Sep 24 17:12 UTC |
	|         | ha-135993-m03:/home/docker/cp-test_ha-135993-m04_ha-135993-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-135993 ssh -n                                                                 | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:12 UTC | 20 Sep 24 17:12 UTC |
	|         | ha-135993-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-135993 ssh -n ha-135993-m03 sudo cat                                          | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:12 UTC | 20 Sep 24 17:12 UTC |
	|         | /home/docker/cp-test_ha-135993-m04_ha-135993-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-135993 node stop m02 -v=7                                                     | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:12 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 17:07:28
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 17:07:28.224109   27962 out.go:345] Setting OutFile to fd 1 ...
	I0920 17:07:28.224206   27962 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:07:28.224213   27962 out.go:358] Setting ErrFile to fd 2...
	I0920 17:07:28.224218   27962 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:07:28.224387   27962 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-8777/.minikube/bin
	I0920 17:07:28.224982   27962 out.go:352] Setting JSON to false
	I0920 17:07:28.225784   27962 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2991,"bootTime":1726849057,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 17:07:28.225901   27962 start.go:139] virtualization: kvm guest
	I0920 17:07:28.228074   27962 out.go:177] * [ha-135993] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 17:07:28.229408   27962 notify.go:220] Checking for updates...
	I0920 17:07:28.229444   27962 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 17:07:28.230821   27962 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 17:07:28.231979   27962 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19672-8777/kubeconfig
	I0920 17:07:28.233045   27962 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-8777/.minikube
	I0920 17:07:28.234136   27962 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 17:07:28.235151   27962 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 17:07:28.236602   27962 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 17:07:28.271877   27962 out.go:177] * Using the kvm2 driver based on user configuration
	I0920 17:07:28.273222   27962 start.go:297] selected driver: kvm2
	I0920 17:07:28.273240   27962 start.go:901] validating driver "kvm2" against <nil>
	I0920 17:07:28.273253   27962 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 17:07:28.274045   27962 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 17:07:28.274154   27962 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19672-8777/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 17:07:28.289424   27962 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 17:07:28.289473   27962 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 17:07:28.289714   27962 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 17:07:28.289743   27962 cni.go:84] Creating CNI manager for ""
	I0920 17:07:28.289789   27962 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0920 17:07:28.289814   27962 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0920 17:07:28.289902   27962 start.go:340] cluster config:
	{Name:ha-135993 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-135993 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
GPUs: AutoPauseInterval:1m0s}
	I0920 17:07:28.290006   27962 iso.go:125] acquiring lock: {Name:mkba95ef0488e46f622333e9f317f43def93040b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 17:07:28.291840   27962 out.go:177] * Starting "ha-135993" primary control-plane node in "ha-135993" cluster
	I0920 17:07:28.292971   27962 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 17:07:28.293012   27962 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0920 17:07:28.293022   27962 cache.go:56] Caching tarball of preloaded images
	I0920 17:07:28.293121   27962 preload.go:172] Found /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 17:07:28.293135   27962 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0920 17:07:28.293509   27962 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/config.json ...
	I0920 17:07:28.293532   27962 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/config.json: {Name:mk8c38de8f77a94cd04edafc97e1e3e5f16f67aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:07:28.293702   27962 start.go:360] acquireMachinesLock for ha-135993: {Name:mkfeedb385cf08b5d2aa00913e85815d02a180c2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 17:07:28.293739   27962 start.go:364] duration metric: took 21.191µs to acquireMachinesLock for "ha-135993"
	I0920 17:07:28.293762   27962 start.go:93] Provisioning new machine with config: &{Name:ha-135993 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.1 ClusterName:ha-135993 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 17:07:28.293816   27962 start.go:125] createHost starting for "" (driver="kvm2")
	I0920 17:07:28.295606   27962 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0920 17:07:28.295844   27962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:07:28.295897   27962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:07:28.310515   27962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38263
	I0920 17:07:28.311021   27962 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:07:28.311565   27962 main.go:141] libmachine: Using API Version  1
	I0920 17:07:28.311587   27962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:07:28.311884   27962 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:07:28.312062   27962 main.go:141] libmachine: (ha-135993) Calling .GetMachineName
	I0920 17:07:28.312230   27962 main.go:141] libmachine: (ha-135993) Calling .DriverName
	I0920 17:07:28.312390   27962 start.go:159] libmachine.API.Create for "ha-135993" (driver="kvm2")
	I0920 17:07:28.312423   27962 client.go:168] LocalClient.Create starting
	I0920 17:07:28.312451   27962 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem
	I0920 17:07:28.312493   27962 main.go:141] libmachine: Decoding PEM data...
	I0920 17:07:28.312531   27962 main.go:141] libmachine: Parsing certificate...
	I0920 17:07:28.312583   27962 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem
	I0920 17:07:28.312603   27962 main.go:141] libmachine: Decoding PEM data...
	I0920 17:07:28.312616   27962 main.go:141] libmachine: Parsing certificate...
	I0920 17:07:28.312634   27962 main.go:141] libmachine: Running pre-create checks...
	I0920 17:07:28.312641   27962 main.go:141] libmachine: (ha-135993) Calling .PreCreateCheck
	I0920 17:07:28.313012   27962 main.go:141] libmachine: (ha-135993) Calling .GetConfigRaw
	I0920 17:07:28.313345   27962 main.go:141] libmachine: Creating machine...
	I0920 17:07:28.313358   27962 main.go:141] libmachine: (ha-135993) Calling .Create
	I0920 17:07:28.313496   27962 main.go:141] libmachine: (ha-135993) Creating KVM machine...
	I0920 17:07:28.314784   27962 main.go:141] libmachine: (ha-135993) DBG | found existing default KVM network
	I0920 17:07:28.315382   27962 main.go:141] libmachine: (ha-135993) DBG | I0920 17:07:28.315245   27985 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002211e0}
	I0920 17:07:28.315406   27962 main.go:141] libmachine: (ha-135993) DBG | created network xml: 
	I0920 17:07:28.315419   27962 main.go:141] libmachine: (ha-135993) DBG | <network>
	I0920 17:07:28.315429   27962 main.go:141] libmachine: (ha-135993) DBG |   <name>mk-ha-135993</name>
	I0920 17:07:28.315440   27962 main.go:141] libmachine: (ha-135993) DBG |   <dns enable='no'/>
	I0920 17:07:28.315450   27962 main.go:141] libmachine: (ha-135993) DBG |   
	I0920 17:07:28.315469   27962 main.go:141] libmachine: (ha-135993) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0920 17:07:28.315477   27962 main.go:141] libmachine: (ha-135993) DBG |     <dhcp>
	I0920 17:07:28.315483   27962 main.go:141] libmachine: (ha-135993) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0920 17:07:28.315496   27962 main.go:141] libmachine: (ha-135993) DBG |     </dhcp>
	I0920 17:07:28.315507   27962 main.go:141] libmachine: (ha-135993) DBG |   </ip>
	I0920 17:07:28.315519   27962 main.go:141] libmachine: (ha-135993) DBG |   
	I0920 17:07:28.315530   27962 main.go:141] libmachine: (ha-135993) DBG | </network>
	I0920 17:07:28.315542   27962 main.go:141] libmachine: (ha-135993) DBG | 
	I0920 17:07:28.320907   27962 main.go:141] libmachine: (ha-135993) DBG | trying to create private KVM network mk-ha-135993 192.168.39.0/24...
	I0920 17:07:28.387245   27962 main.go:141] libmachine: (ha-135993) DBG | private KVM network mk-ha-135993 192.168.39.0/24 created
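
A quick way to inspect the private network that was just created, sketched here assuming virsh is available on the libvirt host (the network name comes from the log above):

    # Confirm the private minikube network exists and is active
    virsh net-list --all | grep mk-ha-135993
    # Dump the network XML libvirt stored (bridge, gateway, DHCP range)
    virsh net-dumpxml mk-ha-135993
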
	I0920 17:07:28.387277   27962 main.go:141] libmachine: (ha-135993) DBG | I0920 17:07:28.387214   27985 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19672-8777/.minikube
	I0920 17:07:28.387292   27962 main.go:141] libmachine: (ha-135993) Setting up store path in /home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993 ...
	I0920 17:07:28.387307   27962 main.go:141] libmachine: (ha-135993) Building disk image from file:///home/jenkins/minikube-integration/19672-8777/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso
	I0920 17:07:28.387375   27962 main.go:141] libmachine: (ha-135993) Downloading /home/jenkins/minikube-integration/19672-8777/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19672-8777/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso...
	I0920 17:07:28.647940   27962 main.go:141] libmachine: (ha-135993) DBG | I0920 17:07:28.647805   27985 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993/id_rsa...
	I0920 17:07:28.842374   27962 main.go:141] libmachine: (ha-135993) DBG | I0920 17:07:28.842220   27985 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993/ha-135993.rawdisk...
	I0920 17:07:28.842416   27962 main.go:141] libmachine: (ha-135993) DBG | Writing magic tar header
	I0920 17:07:28.842425   27962 main.go:141] libmachine: (ha-135993) DBG | Writing SSH key tar header
	I0920 17:07:28.842433   27962 main.go:141] libmachine: (ha-135993) DBG | I0920 17:07:28.842377   27985 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993 ...
	I0920 17:07:28.842562   27962 main.go:141] libmachine: (ha-135993) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993
	I0920 17:07:28.842579   27962 main.go:141] libmachine: (ha-135993) Setting executable bit set on /home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993 (perms=drwx------)
	I0920 17:07:28.842585   27962 main.go:141] libmachine: (ha-135993) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-8777/.minikube/machines
	I0920 17:07:28.842594   27962 main.go:141] libmachine: (ha-135993) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-8777/.minikube
	I0920 17:07:28.842600   27962 main.go:141] libmachine: (ha-135993) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-8777
	I0920 17:07:28.842608   27962 main.go:141] libmachine: (ha-135993) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0920 17:07:28.842615   27962 main.go:141] libmachine: (ha-135993) Setting executable bit set on /home/jenkins/minikube-integration/19672-8777/.minikube/machines (perms=drwxr-xr-x)
	I0920 17:07:28.842628   27962 main.go:141] libmachine: (ha-135993) Setting executable bit set on /home/jenkins/minikube-integration/19672-8777/.minikube (perms=drwxr-xr-x)
	I0920 17:07:28.842634   27962 main.go:141] libmachine: (ha-135993) Setting executable bit set on /home/jenkins/minikube-integration/19672-8777 (perms=drwxrwxr-x)
	I0920 17:07:28.842641   27962 main.go:141] libmachine: (ha-135993) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0920 17:07:28.842659   27962 main.go:141] libmachine: (ha-135993) DBG | Checking permissions on dir: /home/jenkins
	I0920 17:07:28.842667   27962 main.go:141] libmachine: (ha-135993) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0920 17:07:28.842678   27962 main.go:141] libmachine: (ha-135993) Creating domain...
	I0920 17:07:28.842684   27962 main.go:141] libmachine: (ha-135993) DBG | Checking permissions on dir: /home
	I0920 17:07:28.842691   27962 main.go:141] libmachine: (ha-135993) DBG | Skipping /home - not owner
	I0920 17:07:28.843894   27962 main.go:141] libmachine: (ha-135993) define libvirt domain using xml: 
	I0920 17:07:28.843929   27962 main.go:141] libmachine: (ha-135993) <domain type='kvm'>
	I0920 17:07:28.843939   27962 main.go:141] libmachine: (ha-135993)   <name>ha-135993</name>
	I0920 17:07:28.843946   27962 main.go:141] libmachine: (ha-135993)   <memory unit='MiB'>2200</memory>
	I0920 17:07:28.843953   27962 main.go:141] libmachine: (ha-135993)   <vcpu>2</vcpu>
	I0920 17:07:28.843960   27962 main.go:141] libmachine: (ha-135993)   <features>
	I0920 17:07:28.843968   27962 main.go:141] libmachine: (ha-135993)     <acpi/>
	I0920 17:07:28.843974   27962 main.go:141] libmachine: (ha-135993)     <apic/>
	I0920 17:07:28.843981   27962 main.go:141] libmachine: (ha-135993)     <pae/>
	I0920 17:07:28.844000   27962 main.go:141] libmachine: (ha-135993)     
	I0920 17:07:28.844009   27962 main.go:141] libmachine: (ha-135993)   </features>
	I0920 17:07:28.844018   27962 main.go:141] libmachine: (ha-135993)   <cpu mode='host-passthrough'>
	I0920 17:07:28.844024   27962 main.go:141] libmachine: (ha-135993)   
	I0920 17:07:28.844044   27962 main.go:141] libmachine: (ha-135993)   </cpu>
	I0920 17:07:28.844054   27962 main.go:141] libmachine: (ha-135993)   <os>
	I0920 17:07:28.844083   27962 main.go:141] libmachine: (ha-135993)     <type>hvm</type>
	I0920 17:07:28.844103   27962 main.go:141] libmachine: (ha-135993)     <boot dev='cdrom'/>
	I0920 17:07:28.844109   27962 main.go:141] libmachine: (ha-135993)     <boot dev='hd'/>
	I0920 17:07:28.844113   27962 main.go:141] libmachine: (ha-135993)     <bootmenu enable='no'/>
	I0920 17:07:28.844118   27962 main.go:141] libmachine: (ha-135993)   </os>
	I0920 17:07:28.844121   27962 main.go:141] libmachine: (ha-135993)   <devices>
	I0920 17:07:28.844128   27962 main.go:141] libmachine: (ha-135993)     <disk type='file' device='cdrom'>
	I0920 17:07:28.844137   27962 main.go:141] libmachine: (ha-135993)       <source file='/home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993/boot2docker.iso'/>
	I0920 17:07:28.844142   27962 main.go:141] libmachine: (ha-135993)       <target dev='hdc' bus='scsi'/>
	I0920 17:07:28.844146   27962 main.go:141] libmachine: (ha-135993)       <readonly/>
	I0920 17:07:28.844151   27962 main.go:141] libmachine: (ha-135993)     </disk>
	I0920 17:07:28.844157   27962 main.go:141] libmachine: (ha-135993)     <disk type='file' device='disk'>
	I0920 17:07:28.844164   27962 main.go:141] libmachine: (ha-135993)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0920 17:07:28.844172   27962 main.go:141] libmachine: (ha-135993)       <source file='/home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993/ha-135993.rawdisk'/>
	I0920 17:07:28.844194   27962 main.go:141] libmachine: (ha-135993)       <target dev='hda' bus='virtio'/>
	I0920 17:07:28.844214   27962 main.go:141] libmachine: (ha-135993)     </disk>
	I0920 17:07:28.844234   27962 main.go:141] libmachine: (ha-135993)     <interface type='network'>
	I0920 17:07:28.844247   27962 main.go:141] libmachine: (ha-135993)       <source network='mk-ha-135993'/>
	I0920 17:07:28.844256   27962 main.go:141] libmachine: (ha-135993)       <model type='virtio'/>
	I0920 17:07:28.844274   27962 main.go:141] libmachine: (ha-135993)     </interface>
	I0920 17:07:28.844298   27962 main.go:141] libmachine: (ha-135993)     <interface type='network'>
	I0920 17:07:28.844316   27962 main.go:141] libmachine: (ha-135993)       <source network='default'/>
	I0920 17:07:28.844331   27962 main.go:141] libmachine: (ha-135993)       <model type='virtio'/>
	I0920 17:07:28.844342   27962 main.go:141] libmachine: (ha-135993)     </interface>
	I0920 17:07:28.844351   27962 main.go:141] libmachine: (ha-135993)     <serial type='pty'>
	I0920 17:07:28.844360   27962 main.go:141] libmachine: (ha-135993)       <target port='0'/>
	I0920 17:07:28.844366   27962 main.go:141] libmachine: (ha-135993)     </serial>
	I0920 17:07:28.844373   27962 main.go:141] libmachine: (ha-135993)     <console type='pty'>
	I0920 17:07:28.844381   27962 main.go:141] libmachine: (ha-135993)       <target type='serial' port='0'/>
	I0920 17:07:28.844400   27962 main.go:141] libmachine: (ha-135993)     </console>
	I0920 17:07:28.844411   27962 main.go:141] libmachine: (ha-135993)     <rng model='virtio'>
	I0920 17:07:28.844423   27962 main.go:141] libmachine: (ha-135993)       <backend model='random'>/dev/random</backend>
	I0920 17:07:28.844437   27962 main.go:141] libmachine: (ha-135993)     </rng>
	I0920 17:07:28.844445   27962 main.go:141] libmachine: (ha-135993)     
	I0920 17:07:28.844456   27962 main.go:141] libmachine: (ha-135993)     
	I0920 17:07:28.844462   27962 main.go:141] libmachine: (ha-135993)   </devices>
	I0920 17:07:28.844471   27962 main.go:141] libmachine: (ha-135993) </domain>
	I0920 17:07:28.844477   27962 main.go:141] libmachine: (ha-135993) 
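
Once the domain has been defined from the XML above, its stored definition and state can be checked by hand; a minimal sketch, assuming virsh on the same host:

    # The domain is defined but not yet running at this point
    virsh list --all | grep ha-135993
    # Print the XML libvirt actually stored (MAC addresses are filled in by libvirt)
    virsh dumpxml ha-135993 | head -n 40
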
	I0920 17:07:28.849080   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:80:85:3f in network default
	I0920 17:07:28.849710   27962 main.go:141] libmachine: (ha-135993) Ensuring networks are active...
	I0920 17:07:28.849730   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:28.850712   27962 main.go:141] libmachine: (ha-135993) Ensuring network default is active
	I0920 17:07:28.850972   27962 main.go:141] libmachine: (ha-135993) Ensuring network mk-ha-135993 is active
	I0920 17:07:28.851547   27962 main.go:141] libmachine: (ha-135993) Getting domain xml...
	I0920 17:07:28.852218   27962 main.go:141] libmachine: (ha-135993) Creating domain...
	I0920 17:07:30.058549   27962 main.go:141] libmachine: (ha-135993) Waiting to get IP...
	I0920 17:07:30.059436   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:30.059857   27962 main.go:141] libmachine: (ha-135993) DBG | unable to find current IP address of domain ha-135993 in network mk-ha-135993
	I0920 17:07:30.059875   27962 main.go:141] libmachine: (ha-135993) DBG | I0920 17:07:30.059831   27985 retry.go:31] will retry after 273.871147ms: waiting for machine to come up
	I0920 17:07:30.335232   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:30.335705   27962 main.go:141] libmachine: (ha-135993) DBG | unable to find current IP address of domain ha-135993 in network mk-ha-135993
	I0920 17:07:30.335727   27962 main.go:141] libmachine: (ha-135993) DBG | I0920 17:07:30.335673   27985 retry.go:31] will retry after 312.261403ms: waiting for machine to come up
	I0920 17:07:30.649140   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:30.649587   27962 main.go:141] libmachine: (ha-135993) DBG | unable to find current IP address of domain ha-135993 in network mk-ha-135993
	I0920 17:07:30.649616   27962 main.go:141] libmachine: (ha-135993) DBG | I0920 17:07:30.649539   27985 retry.go:31] will retry after 394.960563ms: waiting for machine to come up
	I0920 17:07:31.046134   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:31.046737   27962 main.go:141] libmachine: (ha-135993) DBG | unable to find current IP address of domain ha-135993 in network mk-ha-135993
	I0920 17:07:31.046803   27962 main.go:141] libmachine: (ha-135993) DBG | I0920 17:07:31.046706   27985 retry.go:31] will retry after 406.180853ms: waiting for machine to come up
	I0920 17:07:31.454086   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:31.454470   27962 main.go:141] libmachine: (ha-135993) DBG | unable to find current IP address of domain ha-135993 in network mk-ha-135993
	I0920 17:07:31.454493   27962 main.go:141] libmachine: (ha-135993) DBG | I0920 17:07:31.454441   27985 retry.go:31] will retry after 507.991566ms: waiting for machine to come up
	I0920 17:07:31.964134   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:31.964550   27962 main.go:141] libmachine: (ha-135993) DBG | unable to find current IP address of domain ha-135993 in network mk-ha-135993
	I0920 17:07:31.964579   27962 main.go:141] libmachine: (ha-135993) DBG | I0920 17:07:31.964520   27985 retry.go:31] will retry after 921.386836ms: waiting for machine to come up
	I0920 17:07:32.887074   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:32.887532   27962 main.go:141] libmachine: (ha-135993) DBG | unable to find current IP address of domain ha-135993 in network mk-ha-135993
	I0920 17:07:32.887576   27962 main.go:141] libmachine: (ha-135993) DBG | I0920 17:07:32.887477   27985 retry.go:31] will retry after 836.533379ms: waiting for machine to come up
	I0920 17:07:33.725040   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:33.725632   27962 main.go:141] libmachine: (ha-135993) DBG | unable to find current IP address of domain ha-135993 in network mk-ha-135993
	I0920 17:07:33.725663   27962 main.go:141] libmachine: (ha-135993) DBG | I0920 17:07:33.725548   27985 retry.go:31] will retry after 1.249731704s: waiting for machine to come up
	I0920 17:07:34.976928   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:34.977332   27962 main.go:141] libmachine: (ha-135993) DBG | unable to find current IP address of domain ha-135993 in network mk-ha-135993
	I0920 17:07:34.977363   27962 main.go:141] libmachine: (ha-135993) DBG | I0920 17:07:34.977281   27985 retry.go:31] will retry after 1.538905112s: waiting for machine to come up
	I0920 17:07:36.517997   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:36.518523   27962 main.go:141] libmachine: (ha-135993) DBG | unable to find current IP address of domain ha-135993 in network mk-ha-135993
	I0920 17:07:36.518558   27962 main.go:141] libmachine: (ha-135993) DBG | I0920 17:07:36.518494   27985 retry.go:31] will retry after 1.90472576s: waiting for machine to come up
	I0920 17:07:38.424570   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:38.424987   27962 main.go:141] libmachine: (ha-135993) DBG | unable to find current IP address of domain ha-135993 in network mk-ha-135993
	I0920 17:07:38.425014   27962 main.go:141] libmachine: (ha-135993) DBG | I0920 17:07:38.424942   27985 retry.go:31] will retry after 2.741058611s: waiting for machine to come up
	I0920 17:07:41.169975   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:41.170341   27962 main.go:141] libmachine: (ha-135993) DBG | unable to find current IP address of domain ha-135993 in network mk-ha-135993
	I0920 17:07:41.170384   27962 main.go:141] libmachine: (ha-135993) DBG | I0920 17:07:41.170291   27985 retry.go:31] will retry after 3.268233116s: waiting for machine to come up
	I0920 17:07:44.440089   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:44.440457   27962 main.go:141] libmachine: (ha-135993) DBG | unable to find current IP address of domain ha-135993 in network mk-ha-135993
	I0920 17:07:44.440479   27962 main.go:141] libmachine: (ha-135993) DBG | I0920 17:07:44.440421   27985 retry.go:31] will retry after 4.54359632s: waiting for machine to come up
	I0920 17:07:48.986065   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:48.986437   27962 main.go:141] libmachine: (ha-135993) Found IP for machine: 192.168.39.60
	I0920 17:07:48.986462   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has current primary IP address 192.168.39.60 and MAC address 52:54:00:99:26:09 in network mk-ha-135993
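
The retry loop above is simply polling libvirt's DHCP lease table until the new domain obtains an address; the same check can be made by hand, assuming virsh access on the host and using the names from the log:

    # Leases handed out on the private network (the MAC from the log should appear)
    virsh net-dhcp-leases mk-ha-135993
    # Or ask for the domain's addresses straight from the lease database
    virsh domifaddr ha-135993 --source lease
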
	I0920 17:07:48.986471   27962 main.go:141] libmachine: (ha-135993) Reserving static IP address...
	I0920 17:07:48.986867   27962 main.go:141] libmachine: (ha-135993) DBG | unable to find host DHCP lease matching {name: "ha-135993", mac: "52:54:00:99:26:09", ip: "192.168.39.60"} in network mk-ha-135993
	I0920 17:07:49.060367   27962 main.go:141] libmachine: (ha-135993) DBG | Getting to WaitForSSH function...
	I0920 17:07:49.060399   27962 main.go:141] libmachine: (ha-135993) Reserved static IP address: 192.168.39.60
	I0920 17:07:49.060416   27962 main.go:141] libmachine: (ha-135993) Waiting for SSH to be available...
	I0920 17:07:49.063301   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:49.063688   27962 main.go:141] libmachine: (ha-135993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:09", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:07:43 +0000 UTC Type:0 Mac:52:54:00:99:26:09 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:minikube Clientid:01:52:54:00:99:26:09}
	I0920 17:07:49.063720   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined IP address 192.168.39.60 and MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:49.063827   27962 main.go:141] libmachine: (ha-135993) DBG | Using SSH client type: external
	I0920 17:07:49.063851   27962 main.go:141] libmachine: (ha-135993) DBG | Using SSH private key: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993/id_rsa (-rw-------)
	I0920 17:07:49.063904   27962 main.go:141] libmachine: (ha-135993) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.60 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 17:07:49.063928   27962 main.go:141] libmachine: (ha-135993) DBG | About to run SSH command:
	I0920 17:07:49.063942   27962 main.go:141] libmachine: (ha-135993) DBG | exit 0
	I0920 17:07:49.193721   27962 main.go:141] libmachine: (ha-135993) DBG | SSH cmd err, output: <nil>: 
	I0920 17:07:49.194050   27962 main.go:141] libmachine: (ha-135993) KVM machine creation complete!
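
The WaitForSSH step above boils down to running "exit 0" over SSH with the generated key; a condensed manual reproduction, useful when this step times out, using the key path and address from the log:

    ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
        -i /home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993/id_rsa \
        docker@192.168.39.60 'exit 0' && echo "SSH reachable"
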
	I0920 17:07:49.194374   27962 main.go:141] libmachine: (ha-135993) Calling .GetConfigRaw
	I0920 17:07:49.195018   27962 main.go:141] libmachine: (ha-135993) Calling .DriverName
	I0920 17:07:49.195196   27962 main.go:141] libmachine: (ha-135993) Calling .DriverName
	I0920 17:07:49.195368   27962 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0920 17:07:49.195383   27962 main.go:141] libmachine: (ha-135993) Calling .GetState
	I0920 17:07:49.196554   27962 main.go:141] libmachine: Detecting operating system of created instance...
	I0920 17:07:49.196568   27962 main.go:141] libmachine: Waiting for SSH to be available...
	I0920 17:07:49.196573   27962 main.go:141] libmachine: Getting to WaitForSSH function...
	I0920 17:07:49.196578   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHHostname
	I0920 17:07:49.199132   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:49.199593   27962 main.go:141] libmachine: (ha-135993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:09", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:07:43 +0000 UTC Type:0 Mac:52:54:00:99:26:09 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-135993 Clientid:01:52:54:00:99:26:09}
	I0920 17:07:49.199612   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined IP address 192.168.39.60 and MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:49.199789   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHPort
	I0920 17:07:49.199931   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHKeyPath
	I0920 17:07:49.200061   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHKeyPath
	I0920 17:07:49.200187   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHUsername
	I0920 17:07:49.200332   27962 main.go:141] libmachine: Using SSH client type: native
	I0920 17:07:49.200544   27962 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0920 17:07:49.200555   27962 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0920 17:07:49.309150   27962 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 17:07:49.309171   27962 main.go:141] libmachine: Detecting the provisioner...
	I0920 17:07:49.309178   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHHostname
	I0920 17:07:49.311937   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:49.312313   27962 main.go:141] libmachine: (ha-135993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:09", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:07:43 +0000 UTC Type:0 Mac:52:54:00:99:26:09 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-135993 Clientid:01:52:54:00:99:26:09}
	I0920 17:07:49.312340   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined IP address 192.168.39.60 and MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:49.312539   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHPort
	I0920 17:07:49.312760   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHKeyPath
	I0920 17:07:49.312905   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHKeyPath
	I0920 17:07:49.313028   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHUsername
	I0920 17:07:49.313214   27962 main.go:141] libmachine: Using SSH client type: native
	I0920 17:07:49.313445   27962 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0920 17:07:49.313459   27962 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0920 17:07:49.422616   27962 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0920 17:07:49.422713   27962 main.go:141] libmachine: found compatible host: buildroot
	I0920 17:07:49.422725   27962 main.go:141] libmachine: Provisioning with buildroot...
	I0920 17:07:49.422735   27962 main.go:141] libmachine: (ha-135993) Calling .GetMachineName
	I0920 17:07:49.422993   27962 buildroot.go:166] provisioning hostname "ha-135993"
	I0920 17:07:49.423024   27962 main.go:141] libmachine: (ha-135993) Calling .GetMachineName
	I0920 17:07:49.423217   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHHostname
	I0920 17:07:49.425983   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:49.426356   27962 main.go:141] libmachine: (ha-135993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:09", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:07:43 +0000 UTC Type:0 Mac:52:54:00:99:26:09 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-135993 Clientid:01:52:54:00:99:26:09}
	I0920 17:07:49.426386   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined IP address 192.168.39.60 and MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:49.426537   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHPort
	I0920 17:07:49.426731   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHKeyPath
	I0920 17:07:49.426884   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHKeyPath
	I0920 17:07:49.427002   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHUsername
	I0920 17:07:49.427182   27962 main.go:141] libmachine: Using SSH client type: native
	I0920 17:07:49.427358   27962 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0920 17:07:49.427369   27962 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-135993 && echo "ha-135993" | sudo tee /etc/hostname
	I0920 17:07:49.546887   27962 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-135993
	
	I0920 17:07:49.546939   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHHostname
	I0920 17:07:49.549688   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:49.550074   27962 main.go:141] libmachine: (ha-135993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:09", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:07:43 +0000 UTC Type:0 Mac:52:54:00:99:26:09 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-135993 Clientid:01:52:54:00:99:26:09}
	I0920 17:07:49.550101   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined IP address 192.168.39.60 and MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:49.550275   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHPort
	I0920 17:07:49.550460   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHKeyPath
	I0920 17:07:49.550617   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHKeyPath
	I0920 17:07:49.550748   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHUsername
	I0920 17:07:49.550889   27962 main.go:141] libmachine: Using SSH client type: native
	I0920 17:07:49.551094   27962 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0920 17:07:49.551110   27962 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-135993' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-135993/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-135993' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 17:07:49.666876   27962 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 17:07:49.666908   27962 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19672-8777/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-8777/.minikube}
	I0920 17:07:49.666933   27962 buildroot.go:174] setting up certificates
	I0920 17:07:49.666946   27962 provision.go:84] configureAuth start
	I0920 17:07:49.666956   27962 main.go:141] libmachine: (ha-135993) Calling .GetMachineName
	I0920 17:07:49.667278   27962 main.go:141] libmachine: (ha-135993) Calling .GetIP
	I0920 17:07:49.670314   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:49.670647   27962 main.go:141] libmachine: (ha-135993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:09", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:07:43 +0000 UTC Type:0 Mac:52:54:00:99:26:09 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-135993 Clientid:01:52:54:00:99:26:09}
	I0920 17:07:49.670670   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined IP address 192.168.39.60 and MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:49.670822   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHHostname
	I0920 17:07:49.672840   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:49.673146   27962 main.go:141] libmachine: (ha-135993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:09", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:07:43 +0000 UTC Type:0 Mac:52:54:00:99:26:09 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-135993 Clientid:01:52:54:00:99:26:09}
	I0920 17:07:49.673169   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined IP address 192.168.39.60 and MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:49.673340   27962 provision.go:143] copyHostCerts
	I0920 17:07:49.673366   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem
	I0920 17:07:49.673396   27962 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem, removing ...
	I0920 17:07:49.673411   27962 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem
	I0920 17:07:49.673481   27962 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem (1082 bytes)
	I0920 17:07:49.673583   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem
	I0920 17:07:49.673609   27962 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem, removing ...
	I0920 17:07:49.673619   27962 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem
	I0920 17:07:49.673659   27962 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem (1123 bytes)
	I0920 17:07:49.673727   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem
	I0920 17:07:49.673743   27962 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem, removing ...
	I0920 17:07:49.673749   27962 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem
	I0920 17:07:49.673771   27962 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem (1675 bytes)
	I0920 17:07:49.673820   27962 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem org=jenkins.ha-135993 san=[127.0.0.1 192.168.39.60 ha-135993 localhost minikube]
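
The server certificate generated here can be checked for the expected SANs (127.0.0.1, 192.168.39.60, ha-135993, localhost, minikube); a small sketch, assuming openssl is installed and using the path from the log:

    openssl x509 -in /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem \
        -noout -text | grep -A1 'Subject Alternative Name'
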
	I0920 17:07:49.869795   27962 provision.go:177] copyRemoteCerts
	I0920 17:07:49.869886   27962 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 17:07:49.869910   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHHostname
	I0920 17:07:49.872957   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:49.873263   27962 main.go:141] libmachine: (ha-135993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:09", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:07:43 +0000 UTC Type:0 Mac:52:54:00:99:26:09 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-135993 Clientid:01:52:54:00:99:26:09}
	I0920 17:07:49.873287   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined IP address 192.168.39.60 and MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:49.873619   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHPort
	I0920 17:07:49.874014   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHKeyPath
	I0920 17:07:49.874211   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHUsername
	I0920 17:07:49.874372   27962 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993/id_rsa Username:docker}
	I0920 17:07:49.959921   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0920 17:07:49.960005   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0920 17:07:49.984738   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0920 17:07:49.984817   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0920 17:07:50.008778   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0920 17:07:50.008846   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0920 17:07:50.031838   27962 provision.go:87] duration metric: took 364.880224ms to configureAuth
	I0920 17:07:50.031867   27962 buildroot.go:189] setting minikube options for container-runtime
	I0920 17:07:50.032039   27962 config.go:182] Loaded profile config "ha-135993": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 17:07:50.032140   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHHostname
	I0920 17:07:50.034890   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:50.035323   27962 main.go:141] libmachine: (ha-135993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:09", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:07:43 +0000 UTC Type:0 Mac:52:54:00:99:26:09 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-135993 Clientid:01:52:54:00:99:26:09}
	I0920 17:07:50.035358   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined IP address 192.168.39.60 and MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:50.035520   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHPort
	I0920 17:07:50.035689   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHKeyPath
	I0920 17:07:50.035831   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHKeyPath
	I0920 17:07:50.035997   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHUsername
	I0920 17:07:50.036173   27962 main.go:141] libmachine: Using SSH client type: native
	I0920 17:07:50.036358   27962 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0920 17:07:50.036378   27962 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 17:07:50.251754   27962 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 17:07:50.251780   27962 main.go:141] libmachine: Checking connection to Docker...
	I0920 17:07:50.251789   27962 main.go:141] libmachine: (ha-135993) Calling .GetURL
	I0920 17:07:50.253114   27962 main.go:141] libmachine: (ha-135993) DBG | Using libvirt version 6000000
	I0920 17:07:50.254998   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:50.255262   27962 main.go:141] libmachine: (ha-135993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:09", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:07:43 +0000 UTC Type:0 Mac:52:54:00:99:26:09 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-135993 Clientid:01:52:54:00:99:26:09}
	I0920 17:07:50.255284   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined IP address 192.168.39.60 and MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:50.255431   27962 main.go:141] libmachine: Docker is up and running!
	I0920 17:07:50.255453   27962 main.go:141] libmachine: Reticulating splines...
	I0920 17:07:50.255462   27962 client.go:171] duration metric: took 21.943029238s to LocalClient.Create
	I0920 17:07:50.255485   27962 start.go:167] duration metric: took 21.94309612s to libmachine.API.Create "ha-135993"
	I0920 17:07:50.255496   27962 start.go:293] postStartSetup for "ha-135993" (driver="kvm2")
	I0920 17:07:50.255512   27962 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 17:07:50.255535   27962 main.go:141] libmachine: (ha-135993) Calling .DriverName
	I0920 17:07:50.255798   27962 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 17:07:50.255830   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHHostname
	I0920 17:07:50.258006   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:50.258354   27962 main.go:141] libmachine: (ha-135993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:09", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:07:43 +0000 UTC Type:0 Mac:52:54:00:99:26:09 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-135993 Clientid:01:52:54:00:99:26:09}
	I0920 17:07:50.258386   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined IP address 192.168.39.60 and MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:50.258536   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHPort
	I0920 17:07:50.258726   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHKeyPath
	I0920 17:07:50.258853   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHUsername
	I0920 17:07:50.259008   27962 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993/id_rsa Username:docker}
	I0920 17:07:50.343779   27962 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 17:07:50.347644   27962 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 17:07:50.347675   27962 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8777/.minikube/addons for local assets ...
	I0920 17:07:50.347738   27962 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8777/.minikube/files for local assets ...
	I0920 17:07:50.347830   27962 filesync.go:149] local asset: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem -> 159732.pem in /etc/ssl/certs
	I0920 17:07:50.347842   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem -> /etc/ssl/certs/159732.pem
	I0920 17:07:50.347940   27962 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 17:07:50.356818   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem --> /etc/ssl/certs/159732.pem (1708 bytes)
	I0920 17:07:50.380005   27962 start.go:296] duration metric: took 124.491428ms for postStartSetup
	I0920 17:07:50.380073   27962 main.go:141] libmachine: (ha-135993) Calling .GetConfigRaw
	I0920 17:07:50.380667   27962 main.go:141] libmachine: (ha-135993) Calling .GetIP
	I0920 17:07:50.383411   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:50.383719   27962 main.go:141] libmachine: (ha-135993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:09", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:07:43 +0000 UTC Type:0 Mac:52:54:00:99:26:09 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-135993 Clientid:01:52:54:00:99:26:09}
	I0920 17:07:50.383749   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined IP address 192.168.39.60 and MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:50.384003   27962 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/config.json ...
	I0920 17:07:50.384196   27962 start.go:128] duration metric: took 22.090370371s to createHost
	I0920 17:07:50.384222   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHHostname
	I0920 17:07:50.386519   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:50.386950   27962 main.go:141] libmachine: (ha-135993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:09", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:07:43 +0000 UTC Type:0 Mac:52:54:00:99:26:09 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-135993 Clientid:01:52:54:00:99:26:09}
	I0920 17:07:50.386966   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined IP address 192.168.39.60 and MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:50.387165   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHPort
	I0920 17:07:50.387336   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHKeyPath
	I0920 17:07:50.387480   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHKeyPath
	I0920 17:07:50.387623   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHUsername
	I0920 17:07:50.387744   27962 main.go:141] libmachine: Using SSH client type: native
	I0920 17:07:50.387905   27962 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0920 17:07:50.387916   27962 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 17:07:50.498520   27962 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726852070.471027061
	
	I0920 17:07:50.498552   27962 fix.go:216] guest clock: 1726852070.471027061
	I0920 17:07:50.498562   27962 fix.go:229] Guest: 2024-09-20 17:07:50.471027061 +0000 UTC Remote: 2024-09-20 17:07:50.384207902 +0000 UTC m=+22.194917586 (delta=86.819159ms)
	I0920 17:07:50.498623   27962 fix.go:200] guest clock delta is within tolerance: 86.819159ms
	I0920 17:07:50.498637   27962 start.go:83] releasing machines lock for "ha-135993", held for 22.204885202s
	I0920 17:07:50.498672   27962 main.go:141] libmachine: (ha-135993) Calling .DriverName
	I0920 17:07:50.498937   27962 main.go:141] libmachine: (ha-135993) Calling .GetIP
	I0920 17:07:50.501692   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:50.502068   27962 main.go:141] libmachine: (ha-135993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:09", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:07:43 +0000 UTC Type:0 Mac:52:54:00:99:26:09 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-135993 Clientid:01:52:54:00:99:26:09}
	I0920 17:07:50.502095   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined IP address 192.168.39.60 and MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:50.502251   27962 main.go:141] libmachine: (ha-135993) Calling .DriverName
	I0920 17:07:50.502720   27962 main.go:141] libmachine: (ha-135993) Calling .DriverName
	I0920 17:07:50.502881   27962 main.go:141] libmachine: (ha-135993) Calling .DriverName
	I0920 17:07:50.502969   27962 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 17:07:50.503024   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHHostname
	I0920 17:07:50.503115   27962 ssh_runner.go:195] Run: cat /version.json
	I0920 17:07:50.503135   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHHostname
	I0920 17:07:50.505769   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:50.506399   27962 main.go:141] libmachine: (ha-135993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:09", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:07:43 +0000 UTC Type:0 Mac:52:54:00:99:26:09 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-135993 Clientid:01:52:54:00:99:26:09}
	I0920 17:07:50.506780   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:50.506810   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined IP address 192.168.39.60 and MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:50.507015   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHPort
	I0920 17:07:50.507188   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHKeyPath
	I0920 17:07:50.507286   27962 main.go:141] libmachine: (ha-135993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:09", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:07:43 +0000 UTC Type:0 Mac:52:54:00:99:26:09 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-135993 Clientid:01:52:54:00:99:26:09}
	I0920 17:07:50.507312   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined IP address 192.168.39.60 and MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:50.507447   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHUsername
	I0920 17:07:50.507463   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHPort
	I0920 17:07:50.507586   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHKeyPath
	I0920 17:07:50.507587   27962 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993/id_rsa Username:docker}
	I0920 17:07:50.507682   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHUsername
	I0920 17:07:50.507776   27962 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993/id_rsa Username:docker}
	I0920 17:07:50.586773   27962 ssh_runner.go:195] Run: systemctl --version
	I0920 17:07:50.621546   27962 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 17:07:50.780598   27962 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 17:07:50.786517   27962 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 17:07:50.786583   27962 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 17:07:50.802071   27962 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 17:07:50.802094   27962 start.go:495] detecting cgroup driver to use...
	I0920 17:07:50.802161   27962 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 17:07:50.818377   27962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 17:07:50.832630   27962 docker.go:217] disabling cri-docker service (if available) ...
	I0920 17:07:50.832707   27962 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 17:07:50.846087   27962 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 17:07:50.860151   27962 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 17:07:50.975426   27962 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 17:07:51.126213   27962 docker.go:233] disabling docker service ...
	I0920 17:07:51.126291   27962 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 17:07:51.140089   27962 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 17:07:51.152679   27962 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 17:07:51.283500   27962 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 17:07:51.390304   27962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 17:07:51.403627   27962 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 17:07:51.421174   27962 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 17:07:51.421242   27962 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:07:51.431235   27962 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 17:07:51.431310   27962 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:07:51.442561   27962 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:07:51.452862   27962 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:07:51.463189   27962 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 17:07:51.473283   27962 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:07:51.483302   27962 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:07:51.500456   27962 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:07:51.510444   27962 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 17:07:51.519365   27962 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 17:07:51.519445   27962 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 17:07:51.532282   27962 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
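
The sysctl probe above fails only because br_netfilter is not loaded yet; after the modprobe and the ip_forward write, the same checks should pass. A minimal verification sketch over the same SSH session:

    lsmod | grep br_netfilter
    sudo sysctl net.bridge.bridge-nf-call-iptables   # typically 1 once the module is loaded
    cat /proc/sys/net/ipv4/ip_forward                # expected: 1
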
	I0920 17:07:51.541316   27962 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 17:07:51.653648   27962 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 17:07:51.739658   27962 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 17:07:51.739747   27962 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 17:07:51.744441   27962 start.go:563] Will wait 60s for crictl version
	I0920 17:07:51.744510   27962 ssh_runner.go:195] Run: which crictl
	I0920 17:07:51.747928   27962 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 17:07:51.785033   27962 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
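
After the sed edits to /etc/crio/crio.conf.d/02-crio.conf and the restart above, the effective settings can be double-checked; a short sketch using the same drop-in path and tools already present on the guest:

    # The pause image and cgroup settings written by the sed commands above
    grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
    # The restarted service should be active and answering crictl
    systemctl is-active crio && sudo crictl version
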
	I0920 17:07:51.785130   27962 ssh_runner.go:195] Run: crio --version
	I0920 17:07:51.813367   27962 ssh_runner.go:195] Run: crio --version
	I0920 17:07:51.843606   27962 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 17:07:51.844877   27962 main.go:141] libmachine: (ha-135993) Calling .GetIP
	I0920 17:07:51.847711   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:51.848041   27962 main.go:141] libmachine: (ha-135993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:09", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:07:43 +0000 UTC Type:0 Mac:52:54:00:99:26:09 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-135993 Clientid:01:52:54:00:99:26:09}
	I0920 17:07:51.848067   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined IP address 192.168.39.60 and MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:51.848302   27962 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0920 17:07:51.852330   27962 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 17:07:51.865291   27962 kubeadm.go:883] updating cluster {Name:ha-135993 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:ha-135993 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.60 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 17:07:51.865398   27962 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 17:07:51.865449   27962 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 17:07:51.899883   27962 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0920 17:07:51.899943   27962 ssh_runner.go:195] Run: which lz4
	I0920 17:07:51.903807   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0920 17:07:51.903901   27962 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 17:07:51.907726   27962 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 17:07:51.907767   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0920 17:07:53.234059   27962 crio.go:462] duration metric: took 1.330180344s to copy over tarball
	I0920 17:07:53.234125   27962 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0920 17:07:55.407532   27962 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.173354398s)
	I0920 17:07:55.407570   27962 crio.go:469] duration metric: took 2.173487919s to extract the tarball
	I0920 17:07:55.407579   27962 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0920 17:07:55.444916   27962 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 17:07:55.491028   27962 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 17:07:55.491053   27962 cache_images.go:84] Images are preloaded, skipping loading
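The two "crictl images --output json" calls bracket the preload: the first (before extraction) could not find registry.k8s.io/kube-apiserver:v1.31.1, the second confirms all images are present. A quick manual equivalent, illustrative only:

    sudo crictl images --output json | grep -q 'registry.k8s.io/kube-apiserver:v1.31.1' \
      && echo "preload present" || echo "preload missing"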
	I0920 17:07:55.491061   27962 kubeadm.go:934] updating node { 192.168.39.60 8443 v1.31.1 crio true true} ...
	I0920 17:07:55.491157   27962 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-135993 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.60
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-135993 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
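This [Unit]/[Service] override is the kubelet drop-in that is later copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 308-byte scp further down). Once in place, the merged unit can be inspected with, for example:

    systemctl cat kubelet    # prints kubelet.service together with the 10-kubeadm.conf drop-in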
	I0920 17:07:55.491229   27962 ssh_runner.go:195] Run: crio config
	I0920 17:07:55.542472   27962 cni.go:84] Creating CNI manager for ""
	I0920 17:07:55.542496   27962 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0920 17:07:55.542509   27962 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 17:07:55.542534   27962 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.60 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-135993 NodeName:ha-135993 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.60"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.60 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 17:07:55.542711   27962 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.60
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-135993"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.60
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.60"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
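The manifest above is written to /var/tmp/minikube/kubeadm.yaml and still uses the deprecated kubeadm.k8s.io/v1beta3 API, which kubeadm warns about during init (see the preflight warnings below). As an illustrative aside, such a config can be exercised without bootstrapping anything by using kubeadm's standard dry-run mode:

    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run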
	
	I0920 17:07:55.542744   27962 kube-vip.go:115] generating kube-vip config ...
	I0920 17:07:55.542799   27962 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0920 17:07:55.561052   27962 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0920 17:07:55.561147   27962 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
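kube-vip runs as a static pod on each control-plane node; with vip_arp, cp_enable and lb_enable set it announces the virtual IP 192.168.39.254 via ARP and load-balances API-server traffic on port 8443. Once the control plane is up, the VIP can be probed directly; an illustrative check that usually returns "ok", since anonymous access to /healthz is allowed by default:

    curl -k https://192.168.39.254:8443/healthz    # -k: the API serves a cluster-internal CA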
	I0920 17:07:55.561195   27962 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 17:07:55.571044   27962 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 17:07:55.571106   27962 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0920 17:07:55.580660   27962 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0920 17:07:55.598713   27962 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 17:07:55.616229   27962 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0920 17:07:55.634067   27962 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0920 17:07:55.651892   27962 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0920 17:07:55.655923   27962 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 17:07:55.667484   27962 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 17:07:55.788088   27962 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 17:07:55.804588   27962 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993 for IP: 192.168.39.60
	I0920 17:07:55.804611   27962 certs.go:194] generating shared ca certs ...
	I0920 17:07:55.804631   27962 certs.go:226] acquiring lock for ca certs: {Name:mkc7ef6c737c6bdc3fdd9dcff8f57029c020d8f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:07:55.804804   27962 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key
	I0920 17:07:55.804860   27962 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key
	I0920 17:07:55.804874   27962 certs.go:256] generating profile certs ...
	I0920 17:07:55.804946   27962 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/client.key
	I0920 17:07:55.804963   27962 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/client.crt with IP's: []
	I0920 17:07:56.041638   27962 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/client.crt ...
	I0920 17:07:56.041670   27962 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/client.crt: {Name:mk77b02a314748d6817683dcddc9e50a9706a3fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:07:56.041866   27962 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/client.key ...
	I0920 17:07:56.041881   27962 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/client.key: {Name:mkce8a68ad81e086e143b0882e17cc856a54adae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:07:56.042064   27962 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.key.be3ca380
	I0920 17:07:56.042085   27962 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.crt.be3ca380 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.60 192.168.39.254]
	I0920 17:07:56.245960   27962 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.crt.be3ca380 ...
	I0920 17:07:56.245992   27962 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.crt.be3ca380: {Name:mka9503983e8ca6a4d05f68e1a88c79ee07a7913 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:07:56.246164   27962 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.key.be3ca380 ...
	I0920 17:07:56.246181   27962 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.key.be3ca380: {Name:mk892756342d52e742959b6836b3a7605e9575d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:07:56.246306   27962 certs.go:381] copying /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.crt.be3ca380 -> /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.crt
	I0920 17:07:56.246416   27962 certs.go:385] copying /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.key.be3ca380 -> /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.key
	I0920 17:07:56.246500   27962 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/proxy-client.key
	I0920 17:07:56.246524   27962 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/proxy-client.crt with IP's: []
	I0920 17:07:56.401234   27962 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/proxy-client.crt ...
	I0920 17:07:56.401270   27962 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/proxy-client.crt: {Name:mk970b226fef3a4347b937972fcb4fd73f00dc38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:07:56.401441   27962 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/proxy-client.key ...
	I0920 17:07:56.401452   27962 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/proxy-client.key: {Name:mke4168ed8a5ff16fb6768d15dd8e4f984e56621 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:07:56.401519   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0920 17:07:56.401536   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0920 17:07:56.401547   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0920 17:07:56.401558   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0920 17:07:56.401568   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0920 17:07:56.401579   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0920 17:07:56.401588   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0920 17:07:56.401600   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0920 17:07:56.401644   27962 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973.pem (1338 bytes)
	W0920 17:07:56.401677   27962 certs.go:480] ignoring /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973_empty.pem, impossibly tiny 0 bytes
	I0920 17:07:56.401684   27962 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 17:07:56.401706   27962 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem (1082 bytes)
	I0920 17:07:56.401730   27962 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem (1123 bytes)
	I0920 17:07:56.401754   27962 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem (1675 bytes)
	I0920 17:07:56.401789   27962 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem (1708 bytes)
	I0920 17:07:56.401817   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973.pem -> /usr/share/ca-certificates/15973.pem
	I0920 17:07:56.401847   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem -> /usr/share/ca-certificates/159732.pem
	I0920 17:07:56.401862   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:07:56.402409   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 17:07:56.427996   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 17:07:56.451855   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 17:07:56.475801   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0920 17:07:56.499662   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0920 17:07:56.522944   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 17:07:56.548908   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 17:07:56.575686   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 17:07:56.604616   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973.pem --> /usr/share/ca-certificates/15973.pem (1338 bytes)
	I0920 17:07:56.627314   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem --> /usr/share/ca-certificates/159732.pem (1708 bytes)
	I0920 17:07:56.649875   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 17:07:56.673591   27962 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 17:07:56.694627   27962 ssh_runner.go:195] Run: openssl version
	I0920 17:07:56.700654   27962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 17:07:56.711864   27962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:07:56.716521   27962 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:07:56.716587   27962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:07:56.722355   27962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 17:07:56.733975   27962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15973.pem && ln -fs /usr/share/ca-certificates/15973.pem /etc/ssl/certs/15973.pem"
	I0920 17:07:56.745449   27962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15973.pem
	I0920 17:07:56.749937   27962 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 17:04 /usr/share/ca-certificates/15973.pem
	I0920 17:07:56.750010   27962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15973.pem
	I0920 17:07:56.755845   27962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15973.pem /etc/ssl/certs/51391683.0"
	I0920 17:07:56.766910   27962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/159732.pem && ln -fs /usr/share/ca-certificates/159732.pem /etc/ssl/certs/159732.pem"
	I0920 17:07:56.777908   27962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/159732.pem
	I0920 17:07:56.782437   27962 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 17:04 /usr/share/ca-certificates/159732.pem
	I0920 17:07:56.782504   27962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/159732.pem
	I0920 17:07:56.788567   27962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/159732.pem /etc/ssl/certs/3ec20f2e.0"
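The ls/openssl/ln sequence above implements the standard OpenSSL CA-directory layout: every certificate in /etc/ssl/certs is reachable through a symlink named after its subject hash, which is exactly what "openssl x509 -hash -noout" computes. For the minikube CA, with the hash value taken from the log above:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem    # prints b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0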
	I0920 17:07:56.800002   27962 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 17:07:56.804473   27962 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0920 17:07:56.804532   27962 kubeadm.go:392] StartCluster: {Name:ha-135993 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-135993 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.60 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mount
Type:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 17:07:56.804601   27962 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 17:07:56.804641   27962 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 17:07:56.847709   27962 cri.go:89] found id: ""
	I0920 17:07:56.847785   27962 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 17:07:56.859005   27962 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 17:07:56.869479   27962 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 17:07:56.879263   27962 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 17:07:56.879288   27962 kubeadm.go:157] found existing configuration files:
	
	I0920 17:07:56.879350   27962 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 17:07:56.888673   27962 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 17:07:56.888748   27962 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 17:07:56.898330   27962 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 17:07:56.908293   27962 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 17:07:56.908361   27962 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 17:07:56.918173   27962 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 17:07:56.926869   27962 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 17:07:56.926939   27962 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 17:07:56.935901   27962 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 17:07:56.944708   27962 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 17:07:56.944774   27962 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 17:07:56.954425   27962 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 17:07:57.049417   27962 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0920 17:07:57.049552   27962 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 17:07:57.158652   27962 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 17:07:57.158798   27962 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 17:07:57.158931   27962 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0920 17:07:57.167722   27962 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 17:07:57.313232   27962 out.go:235]   - Generating certificates and keys ...
	I0920 17:07:57.313352   27962 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 17:07:57.313425   27962 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 17:07:57.313486   27962 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0920 17:07:57.601566   27962 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0920 17:07:57.893152   27962 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0920 17:07:58.140227   27962 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0920 17:07:58.556100   27962 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0920 17:07:58.556284   27962 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-135993 localhost] and IPs [192.168.39.60 127.0.0.1 ::1]
	I0920 17:07:58.800301   27962 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0920 17:07:58.800437   27962 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-135993 localhost] and IPs [192.168.39.60 127.0.0.1 ::1]
	I0920 17:07:58.953666   27962 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0920 17:07:59.106407   27962 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0920 17:07:59.233998   27962 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0920 17:07:59.234129   27962 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 17:07:59.525137   27962 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 17:07:59.766968   27962 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0920 17:08:00.120492   27962 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 17:08:00.216832   27962 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 17:08:00.360049   27962 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 17:08:00.360513   27962 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 17:08:00.363304   27962 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 17:08:00.365927   27962 out.go:235]   - Booting up control plane ...
	I0920 17:08:00.366064   27962 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 17:08:00.366181   27962 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 17:08:00.366311   27962 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 17:08:00.379619   27962 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 17:08:00.385661   27962 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 17:08:00.385729   27962 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 17:08:00.519566   27962 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0920 17:08:00.519711   27962 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0920 17:08:01.020357   27962 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.387016ms
	I0920 17:08:01.020471   27962 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0920 17:08:07.015773   27962 kubeadm.go:310] [api-check] The API server is healthy after 5.999233043s
	I0920 17:08:07.031789   27962 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0920 17:08:07.055338   27962 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0920 17:08:07.096965   27962 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0920 17:08:07.097212   27962 kubeadm.go:310] [mark-control-plane] Marking the node ha-135993 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0920 17:08:07.111378   27962 kubeadm.go:310] [bootstrap-token] Using token: xrduw1.53792puohqvk415u
	I0920 17:08:07.112987   27962 out.go:235]   - Configuring RBAC rules ...
	I0920 17:08:07.113105   27962 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0920 17:08:07.126679   27962 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0920 17:08:07.140129   27962 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0920 17:08:07.144364   27962 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0920 17:08:07.148863   27962 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0920 17:08:07.153587   27962 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0920 17:08:07.423299   27962 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0920 17:08:07.856227   27962 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0920 17:08:08.423318   27962 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0920 17:08:08.423341   27962 kubeadm.go:310] 
	I0920 17:08:08.423388   27962 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0920 17:08:08.423393   27962 kubeadm.go:310] 
	I0920 17:08:08.423477   27962 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0920 17:08:08.423485   27962 kubeadm.go:310] 
	I0920 17:08:08.423525   27962 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0920 17:08:08.423586   27962 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0920 17:08:08.423645   27962 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0920 17:08:08.423658   27962 kubeadm.go:310] 
	I0920 17:08:08.423712   27962 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0920 17:08:08.423722   27962 kubeadm.go:310] 
	I0920 17:08:08.423765   27962 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0920 17:08:08.423774   27962 kubeadm.go:310] 
	I0920 17:08:08.423861   27962 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0920 17:08:08.423966   27962 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0920 17:08:08.424052   27962 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0920 17:08:08.424086   27962 kubeadm.go:310] 
	I0920 17:08:08.424207   27962 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0920 17:08:08.424318   27962 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0920 17:08:08.424327   27962 kubeadm.go:310] 
	I0920 17:08:08.424428   27962 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token xrduw1.53792puohqvk415u \
	I0920 17:08:08.424587   27962 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:736f3fb521855813decb4a7214f2b8beff5f81c3971b60decfa20a8807626a2d \
	I0920 17:08:08.424622   27962 kubeadm.go:310] 	--control-plane 
	I0920 17:08:08.424629   27962 kubeadm.go:310] 
	I0920 17:08:08.424753   27962 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0920 17:08:08.424765   27962 kubeadm.go:310] 
	I0920 17:08:08.424873   27962 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token xrduw1.53792puohqvk415u \
	I0920 17:08:08.425013   27962 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:736f3fb521855813decb4a7214f2b8beff5f81c3971b60decfa20a8807626a2d 
	I0920 17:08:08.425950   27962 kubeadm.go:310] W0920 17:07:57.025597     832 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 17:08:08.426273   27962 kubeadm.go:310] W0920 17:07:57.026508     832 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 17:08:08.426428   27962 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
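Both deprecation warnings refer to the v1beta3 manifest generated earlier; as the warning text itself suggests, it can be migrated ahead of time (command quoted from the warning, the file names are placeholders):

    kubeadm config migrate --old-config old.yaml --new-config new.yaml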
	I0920 17:08:08.426462   27962 cni.go:84] Creating CNI manager for ""
	I0920 17:08:08.426477   27962 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0920 17:08:08.428341   27962 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0920 17:08:08.429841   27962 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0920 17:08:08.435818   27962 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0920 17:08:08.435838   27962 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0920 17:08:08.455244   27962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0920 17:08:08.799287   27962 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 17:08:08.799381   27962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:08:08.799436   27962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-135993 minikube.k8s.io/updated_at=2024_09_20T17_08_08_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=0626f22cf0d915d75e291a5bce701f94395056e1 minikube.k8s.io/name=ha-135993 minikube.k8s.io/primary=true
	I0920 17:08:08.948517   27962 ops.go:34] apiserver oom_adj: -16
	I0920 17:08:08.948664   27962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:08:09.449228   27962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:08:09.949041   27962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:08:10.449579   27962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:08:10.949086   27962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:08:11.449011   27962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:08:11.949120   27962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:08:12.448969   27962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:08:12.581415   27962 kubeadm.go:1113] duration metric: took 3.782097256s to wait for elevateKubeSystemPrivileges
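The burst of identical "kubectl get sa default" calls above is a poll loop: minikube retries about twice a second until the default ServiceAccount exists, i.e. until the ServiceAccount controller has caught up after the control plane came online. A hand-rolled equivalent, illustrative only:

    until sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done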
	I0920 17:08:12.581460   27962 kubeadm.go:394] duration metric: took 15.776931504s to StartCluster
	I0920 17:08:12.581484   27962 settings.go:142] acquiring lock: {Name:mk2a13c58cdc0faf8cddca5d6716175d45db9bfd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:08:12.581582   27962 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19672-8777/kubeconfig
	I0920 17:08:12.582546   27962 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/kubeconfig: {Name:mkf32a4c736808e023459b2f0e40188618a38db1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:08:12.582827   27962 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0920 17:08:12.582838   27962 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.60 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 17:08:12.582868   27962 start.go:241] waiting for startup goroutines ...
	I0920 17:08:12.582877   27962 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0920 17:08:12.582961   27962 addons.go:69] Setting storage-provisioner=true in profile "ha-135993"
	I0920 17:08:12.582983   27962 addons.go:234] Setting addon storage-provisioner=true in "ha-135993"
	I0920 17:08:12.582992   27962 addons.go:69] Setting default-storageclass=true in profile "ha-135993"
	I0920 17:08:12.583015   27962 host.go:66] Checking if "ha-135993" exists ...
	I0920 17:08:12.583021   27962 config.go:182] Loaded profile config "ha-135993": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 17:08:12.583016   27962 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-135993"
	I0920 17:08:12.583508   27962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:08:12.583545   27962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:08:12.583546   27962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:08:12.583578   27962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:08:12.598612   27962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36417
	I0920 17:08:12.598702   27962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37497
	I0920 17:08:12.599159   27962 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:08:12.599205   27962 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:08:12.599708   27962 main.go:141] libmachine: Using API Version  1
	I0920 17:08:12.599711   27962 main.go:141] libmachine: Using API Version  1
	I0920 17:08:12.599730   27962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:08:12.599732   27962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:08:12.600086   27962 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:08:12.600096   27962 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:08:12.600272   27962 main.go:141] libmachine: (ha-135993) Calling .GetState
	I0920 17:08:12.600654   27962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:08:12.600687   27962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:08:12.602399   27962 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19672-8777/kubeconfig
	I0920 17:08:12.602624   27962 kapi.go:59] client config for ha-135993: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/client.crt", KeyFile:"/home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/client.key", CAFile:"/home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f67ea0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0920 17:08:12.603002   27962 cert_rotation.go:140] Starting client certificate rotation controller
	I0920 17:08:12.603197   27962 addons.go:234] Setting addon default-storageclass=true in "ha-135993"
	I0920 17:08:12.603229   27962 host.go:66] Checking if "ha-135993" exists ...
	I0920 17:08:12.603512   27962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:08:12.603547   27962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:08:12.615990   27962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34795
	I0920 17:08:12.616508   27962 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:08:12.617237   27962 main.go:141] libmachine: Using API Version  1
	I0920 17:08:12.617264   27962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:08:12.617610   27962 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:08:12.617796   27962 main.go:141] libmachine: (ha-135993) Calling .GetState
	I0920 17:08:12.619399   27962 main.go:141] libmachine: (ha-135993) Calling .DriverName
	I0920 17:08:12.621713   27962 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 17:08:12.623141   27962 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 17:08:12.623157   27962 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 17:08:12.623178   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHHostname
	I0920 17:08:12.623273   27962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33981
	I0920 17:08:12.623802   27962 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:08:12.624342   27962 main.go:141] libmachine: Using API Version  1
	I0920 17:08:12.624366   27962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:08:12.624828   27962 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:08:12.625480   27962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:08:12.625530   27962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:08:12.626097   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:08:12.626527   27962 main.go:141] libmachine: (ha-135993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:09", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:07:43 +0000 UTC Type:0 Mac:52:54:00:99:26:09 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-135993 Clientid:01:52:54:00:99:26:09}
	I0920 17:08:12.626552   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined IP address 192.168.39.60 and MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:08:12.626807   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHPort
	I0920 17:08:12.626980   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHKeyPath
	I0920 17:08:12.627125   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHUsername
	I0920 17:08:12.627264   27962 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993/id_rsa Username:docker}
	I0920 17:08:12.642774   27962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36005
	I0920 17:08:12.643262   27962 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:08:12.643818   27962 main.go:141] libmachine: Using API Version  1
	I0920 17:08:12.643841   27962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:08:12.644239   27962 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:08:12.644440   27962 main.go:141] libmachine: (ha-135993) Calling .GetState
	I0920 17:08:12.645924   27962 main.go:141] libmachine: (ha-135993) Calling .DriverName
	I0920 17:08:12.646117   27962 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 17:08:12.646130   27962 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 17:08:12.646144   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHHostname
	I0920 17:08:12.649003   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:08:12.649483   27962 main.go:141] libmachine: (ha-135993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:09", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:07:43 +0000 UTC Type:0 Mac:52:54:00:99:26:09 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-135993 Clientid:01:52:54:00:99:26:09}
	I0920 17:08:12.649502   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined IP address 192.168.39.60 and MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:08:12.649607   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHPort
	I0920 17:08:12.649789   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHKeyPath
	I0920 17:08:12.649942   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHUsername
	I0920 17:08:12.650098   27962 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993/id_rsa Username:docker}
	I0920 17:08:12.744585   27962 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0920 17:08:12.762429   27962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 17:08:12.828758   27962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 17:08:13.268354   27962 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
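The long pipeline a few lines above edits the coredns ConfigMap in place: it pulls the Corefile, inserts a hosts { ... } stanza that maps host.minikube.internal to 192.168.39.1 ahead of the forward plugin (and adds a log directive), and feeds the result back through kubectl replace, which is what the "host record injected" line confirms. Conceptually, in simplified illustrative form:

    kubectl -n kube-system get configmap coredns -o yaml \
      | sed -e '/forward . \/etc\/resolv.conf/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' \
      | kubectl replace -f -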
	I0920 17:08:13.434438   27962 main.go:141] libmachine: Making call to close driver server
	I0920 17:08:13.434476   27962 main.go:141] libmachine: (ha-135993) Calling .Close
	I0920 17:08:13.434519   27962 main.go:141] libmachine: Making call to close driver server
	I0920 17:08:13.434543   27962 main.go:141] libmachine: (ha-135993) Calling .Close
	I0920 17:08:13.434773   27962 main.go:141] libmachine: (ha-135993) DBG | Closing plugin on server side
	I0920 17:08:13.434818   27962 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:08:13.434827   27962 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:08:13.434838   27962 main.go:141] libmachine: Making call to close driver server
	I0920 17:08:13.434847   27962 main.go:141] libmachine: (ha-135993) Calling .Close
	I0920 17:08:13.434882   27962 main.go:141] libmachine: (ha-135993) DBG | Closing plugin on server side
	I0920 17:08:13.434897   27962 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:08:13.434914   27962 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:08:13.434931   27962 main.go:141] libmachine: Making call to close driver server
	I0920 17:08:13.434943   27962 main.go:141] libmachine: (ha-135993) Calling .Close
	I0920 17:08:13.435090   27962 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:08:13.435107   27962 main.go:141] libmachine: (ha-135993) DBG | Closing plugin on server side
	I0920 17:08:13.435115   27962 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:08:13.435168   27962 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:08:13.435183   27962 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:08:13.435240   27962 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0920 17:08:13.435265   27962 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0920 17:08:13.435361   27962 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0920 17:08:13.435370   27962 round_trippers.go:469] Request Headers:
	I0920 17:08:13.435380   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:08:13.435388   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:08:13.451251   27962 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0920 17:08:13.451915   27962 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0920 17:08:13.451933   27962 round_trippers.go:469] Request Headers:
	I0920 17:08:13.451945   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:08:13.451951   27962 round_trippers.go:473]     Content-Type: application/json
	I0920 17:08:13.451959   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:08:13.455819   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:08:13.456046   27962 main.go:141] libmachine: Making call to close driver server
	I0920 17:08:13.456063   27962 main.go:141] libmachine: (ha-135993) Calling .Close
	I0920 17:08:13.456328   27962 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:08:13.456345   27962 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:08:13.457999   27962 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0920 17:08:13.459046   27962 addons.go:510] duration metric: took 876.16629ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0920 17:08:13.459075   27962 start.go:246] waiting for cluster config update ...
	I0920 17:08:13.459086   27962 start.go:255] writing updated cluster config ...
	I0920 17:08:13.460310   27962 out.go:201] 
	I0920 17:08:13.461415   27962 config.go:182] Loaded profile config "ha-135993": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 17:08:13.461487   27962 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/config.json ...
	I0920 17:08:13.462998   27962 out.go:177] * Starting "ha-135993-m02" control-plane node in "ha-135993" cluster
	I0920 17:08:13.463913   27962 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 17:08:13.463932   27962 cache.go:56] Caching tarball of preloaded images
	I0920 17:08:13.464013   27962 preload.go:172] Found /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 17:08:13.464026   27962 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0920 17:08:13.464094   27962 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/config.json ...
	I0920 17:08:13.464275   27962 start.go:360] acquireMachinesLock for ha-135993-m02: {Name:mkfeedb385cf08b5d2aa00913e85815d02a180c2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 17:08:13.464329   27962 start.go:364] duration metric: took 31.835µs to acquireMachinesLock for "ha-135993-m02"
	I0920 17:08:13.464351   27962 start.go:93] Provisioning new machine with config: &{Name:ha-135993 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-135993 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.60 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 17:08:13.464449   27962 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0920 17:08:13.466601   27962 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0920 17:08:13.466688   27962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:08:13.466714   27962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:08:13.482616   27962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46331
	I0920 17:08:13.483161   27962 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:08:13.483661   27962 main.go:141] libmachine: Using API Version  1
	I0920 17:08:13.483682   27962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:08:13.484002   27962 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:08:13.484185   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetMachineName
	I0920 17:08:13.484325   27962 main.go:141] libmachine: (ha-135993-m02) Calling .DriverName
	I0920 17:08:13.484522   27962 start.go:159] libmachine.API.Create for "ha-135993" (driver="kvm2")
	I0920 17:08:13.484544   27962 client.go:168] LocalClient.Create starting
	I0920 17:08:13.484569   27962 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem
	I0920 17:08:13.484600   27962 main.go:141] libmachine: Decoding PEM data...
	I0920 17:08:13.484614   27962 main.go:141] libmachine: Parsing certificate...
	I0920 17:08:13.484662   27962 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem
	I0920 17:08:13.484680   27962 main.go:141] libmachine: Decoding PEM data...
	I0920 17:08:13.484691   27962 main.go:141] libmachine: Parsing certificate...
	I0920 17:08:13.484704   27962 main.go:141] libmachine: Running pre-create checks...
	I0920 17:08:13.484711   27962 main.go:141] libmachine: (ha-135993-m02) Calling .PreCreateCheck
	I0920 17:08:13.484853   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetConfigRaw
	I0920 17:08:13.485217   27962 main.go:141] libmachine: Creating machine...
	I0920 17:08:13.485230   27962 main.go:141] libmachine: (ha-135993-m02) Calling .Create
	I0920 17:08:13.485333   27962 main.go:141] libmachine: (ha-135993-m02) Creating KVM machine...
	I0920 17:08:13.486545   27962 main.go:141] libmachine: (ha-135993-m02) DBG | found existing default KVM network
	I0920 17:08:13.486700   27962 main.go:141] libmachine: (ha-135993-m02) DBG | found existing private KVM network mk-ha-135993
	I0920 17:08:13.486822   27962 main.go:141] libmachine: (ha-135993-m02) Setting up store path in /home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993-m02 ...
	I0920 17:08:13.486843   27962 main.go:141] libmachine: (ha-135993-m02) Building disk image from file:///home/jenkins/minikube-integration/19672-8777/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso
	I0920 17:08:13.486907   27962 main.go:141] libmachine: (ha-135993-m02) DBG | I0920 17:08:13.486794   28324 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19672-8777/.minikube
	I0920 17:08:13.486988   27962 main.go:141] libmachine: (ha-135993-m02) Downloading /home/jenkins/minikube-integration/19672-8777/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19672-8777/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso...
	I0920 17:08:13.739935   27962 main.go:141] libmachine: (ha-135993-m02) DBG | I0920 17:08:13.739800   28324 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993-m02/id_rsa...
	I0920 17:08:13.830603   27962 main.go:141] libmachine: (ha-135993-m02) DBG | I0920 17:08:13.830462   28324 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993-m02/ha-135993-m02.rawdisk...
	I0920 17:08:13.830640   27962 main.go:141] libmachine: (ha-135993-m02) DBG | Writing magic tar header
	I0920 17:08:13.830656   27962 main.go:141] libmachine: (ha-135993-m02) DBG | Writing SSH key tar header
	I0920 17:08:13.830668   27962 main.go:141] libmachine: (ha-135993-m02) DBG | I0920 17:08:13.830608   28324 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993-m02 ...
	I0920 17:08:13.830709   27962 main.go:141] libmachine: (ha-135993-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993-m02
	I0920 17:08:13.830748   27962 main.go:141] libmachine: (ha-135993-m02) Setting executable bit set on /home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993-m02 (perms=drwx------)
	I0920 17:08:13.830769   27962 main.go:141] libmachine: (ha-135993-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-8777/.minikube/machines
	I0920 17:08:13.830782   27962 main.go:141] libmachine: (ha-135993-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-8777/.minikube
	I0920 17:08:13.830799   27962 main.go:141] libmachine: (ha-135993-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-8777
	I0920 17:08:13.830811   27962 main.go:141] libmachine: (ha-135993-m02) Setting executable bit set on /home/jenkins/minikube-integration/19672-8777/.minikube/machines (perms=drwxr-xr-x)
	I0920 17:08:13.830822   27962 main.go:141] libmachine: (ha-135993-m02) Setting executable bit set on /home/jenkins/minikube-integration/19672-8777/.minikube (perms=drwxr-xr-x)
	I0920 17:08:13.830830   27962 main.go:141] libmachine: (ha-135993-m02) Setting executable bit set on /home/jenkins/minikube-integration/19672-8777 (perms=drwxrwxr-x)
	I0920 17:08:13.830839   27962 main.go:141] libmachine: (ha-135993-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0920 17:08:13.830852   27962 main.go:141] libmachine: (ha-135993-m02) DBG | Checking permissions on dir: /home/jenkins
	I0920 17:08:13.830862   27962 main.go:141] libmachine: (ha-135993-m02) DBG | Checking permissions on dir: /home
	I0920 17:08:13.830873   27962 main.go:141] libmachine: (ha-135993-m02) DBG | Skipping /home - not owner
	I0920 17:08:13.830885   27962 main.go:141] libmachine: (ha-135993-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0920 17:08:13.830900   27962 main.go:141] libmachine: (ha-135993-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0920 17:08:13.830909   27962 main.go:141] libmachine: (ha-135993-m02) Creating domain...
	I0920 17:08:13.831832   27962 main.go:141] libmachine: (ha-135993-m02) define libvirt domain using xml: 
	I0920 17:08:13.831858   27962 main.go:141] libmachine: (ha-135993-m02) <domain type='kvm'>
	I0920 17:08:13.831868   27962 main.go:141] libmachine: (ha-135993-m02)   <name>ha-135993-m02</name>
	I0920 17:08:13.831879   27962 main.go:141] libmachine: (ha-135993-m02)   <memory unit='MiB'>2200</memory>
	I0920 17:08:13.831891   27962 main.go:141] libmachine: (ha-135993-m02)   <vcpu>2</vcpu>
	I0920 17:08:13.831897   27962 main.go:141] libmachine: (ha-135993-m02)   <features>
	I0920 17:08:13.831904   27962 main.go:141] libmachine: (ha-135993-m02)     <acpi/>
	I0920 17:08:13.831913   27962 main.go:141] libmachine: (ha-135993-m02)     <apic/>
	I0920 17:08:13.831922   27962 main.go:141] libmachine: (ha-135993-m02)     <pae/>
	I0920 17:08:13.831931   27962 main.go:141] libmachine: (ha-135993-m02)     
	I0920 17:08:13.831943   27962 main.go:141] libmachine: (ha-135993-m02)   </features>
	I0920 17:08:13.831953   27962 main.go:141] libmachine: (ha-135993-m02)   <cpu mode='host-passthrough'>
	I0920 17:08:13.831960   27962 main.go:141] libmachine: (ha-135993-m02)   
	I0920 17:08:13.831967   27962 main.go:141] libmachine: (ha-135993-m02)   </cpu>
	I0920 17:08:13.831975   27962 main.go:141] libmachine: (ha-135993-m02)   <os>
	I0920 17:08:13.831983   27962 main.go:141] libmachine: (ha-135993-m02)     <type>hvm</type>
	I0920 17:08:13.831995   27962 main.go:141] libmachine: (ha-135993-m02)     <boot dev='cdrom'/>
	I0920 17:08:13.832003   27962 main.go:141] libmachine: (ha-135993-m02)     <boot dev='hd'/>
	I0920 17:08:13.832013   27962 main.go:141] libmachine: (ha-135993-m02)     <bootmenu enable='no'/>
	I0920 17:08:13.832023   27962 main.go:141] libmachine: (ha-135993-m02)   </os>
	I0920 17:08:13.832038   27962 main.go:141] libmachine: (ha-135993-m02)   <devices>
	I0920 17:08:13.832051   27962 main.go:141] libmachine: (ha-135993-m02)     <disk type='file' device='cdrom'>
	I0920 17:08:13.832071   27962 main.go:141] libmachine: (ha-135993-m02)       <source file='/home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993-m02/boot2docker.iso'/>
	I0920 17:08:13.832084   27962 main.go:141] libmachine: (ha-135993-m02)       <target dev='hdc' bus='scsi'/>
	I0920 17:08:13.832095   27962 main.go:141] libmachine: (ha-135993-m02)       <readonly/>
	I0920 17:08:13.832104   27962 main.go:141] libmachine: (ha-135993-m02)     </disk>
	I0920 17:08:13.832113   27962 main.go:141] libmachine: (ha-135993-m02)     <disk type='file' device='disk'>
	I0920 17:08:13.832122   27962 main.go:141] libmachine: (ha-135993-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0920 17:08:13.832133   27962 main.go:141] libmachine: (ha-135993-m02)       <source file='/home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993-m02/ha-135993-m02.rawdisk'/>
	I0920 17:08:13.832144   27962 main.go:141] libmachine: (ha-135993-m02)       <target dev='hda' bus='virtio'/>
	I0920 17:08:13.832153   27962 main.go:141] libmachine: (ha-135993-m02)     </disk>
	I0920 17:08:13.832164   27962 main.go:141] libmachine: (ha-135993-m02)     <interface type='network'>
	I0920 17:08:13.832173   27962 main.go:141] libmachine: (ha-135993-m02)       <source network='mk-ha-135993'/>
	I0920 17:08:13.832186   27962 main.go:141] libmachine: (ha-135993-m02)       <model type='virtio'/>
	I0920 17:08:13.832197   27962 main.go:141] libmachine: (ha-135993-m02)     </interface>
	I0920 17:08:13.832209   27962 main.go:141] libmachine: (ha-135993-m02)     <interface type='network'>
	I0920 17:08:13.832217   27962 main.go:141] libmachine: (ha-135993-m02)       <source network='default'/>
	I0920 17:08:13.832232   27962 main.go:141] libmachine: (ha-135993-m02)       <model type='virtio'/>
	I0920 17:08:13.832243   27962 main.go:141] libmachine: (ha-135993-m02)     </interface>
	I0920 17:08:13.832253   27962 main.go:141] libmachine: (ha-135993-m02)     <serial type='pty'>
	I0920 17:08:13.832261   27962 main.go:141] libmachine: (ha-135993-m02)       <target port='0'/>
	I0920 17:08:13.832270   27962 main.go:141] libmachine: (ha-135993-m02)     </serial>
	I0920 17:08:13.832278   27962 main.go:141] libmachine: (ha-135993-m02)     <console type='pty'>
	I0920 17:08:13.832288   27962 main.go:141] libmachine: (ha-135993-m02)       <target type='serial' port='0'/>
	I0920 17:08:13.832293   27962 main.go:141] libmachine: (ha-135993-m02)     </console>
	I0920 17:08:13.832301   27962 main.go:141] libmachine: (ha-135993-m02)     <rng model='virtio'>
	I0920 17:08:13.832311   27962 main.go:141] libmachine: (ha-135993-m02)       <backend model='random'>/dev/random</backend>
	I0920 17:08:13.832320   27962 main.go:141] libmachine: (ha-135993-m02)     </rng>
	I0920 17:08:13.832333   27962 main.go:141] libmachine: (ha-135993-m02)     
	I0920 17:08:13.832354   27962 main.go:141] libmachine: (ha-135993-m02)     
	I0920 17:08:13.832409   27962 main.go:141] libmachine: (ha-135993-m02)   </devices>
	I0920 17:08:13.832434   27962 main.go:141] libmachine: (ha-135993-m02) </domain>
	I0920 17:08:13.832443   27962 main.go:141] libmachine: (ha-135993-m02) 
	I0920 17:08:13.839347   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined MAC address 52:54:00:40:3b:17 in network default
	I0920 17:08:13.839981   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:13.840002   27962 main.go:141] libmachine: (ha-135993-m02) Ensuring networks are active...
	I0920 17:08:13.840774   27962 main.go:141] libmachine: (ha-135993-m02) Ensuring network default is active
	I0920 17:08:13.841013   27962 main.go:141] libmachine: (ha-135993-m02) Ensuring network mk-ha-135993 is active
	I0920 17:08:13.841381   27962 main.go:141] libmachine: (ha-135993-m02) Getting domain xml...
	I0920 17:08:13.842134   27962 main.go:141] libmachine: (ha-135993-m02) Creating domain...
	I0920 17:08:15.062497   27962 main.go:141] libmachine: (ha-135993-m02) Waiting to get IP...
	I0920 17:08:15.063280   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:15.063771   27962 main.go:141] libmachine: (ha-135993-m02) DBG | unable to find current IP address of domain ha-135993-m02 in network mk-ha-135993
	I0920 17:08:15.063837   27962 main.go:141] libmachine: (ha-135993-m02) DBG | I0920 17:08:15.063776   28324 retry.go:31] will retry after 209.317935ms: waiting for machine to come up
	I0920 17:08:15.275351   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:15.275800   27962 main.go:141] libmachine: (ha-135993-m02) DBG | unable to find current IP address of domain ha-135993-m02 in network mk-ha-135993
	I0920 17:08:15.275825   27962 main.go:141] libmachine: (ha-135993-m02) DBG | I0920 17:08:15.275759   28324 retry.go:31] will retry after 321.648558ms: waiting for machine to come up
	I0920 17:08:15.599294   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:15.599955   27962 main.go:141] libmachine: (ha-135993-m02) DBG | unable to find current IP address of domain ha-135993-m02 in network mk-ha-135993
	I0920 17:08:15.599981   27962 main.go:141] libmachine: (ha-135993-m02) DBG | I0920 17:08:15.599902   28324 retry.go:31] will retry after 379.94005ms: waiting for machine to come up
	I0920 17:08:15.981649   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:15.982207   27962 main.go:141] libmachine: (ha-135993-m02) DBG | unable to find current IP address of domain ha-135993-m02 in network mk-ha-135993
	I0920 17:08:15.982258   27962 main.go:141] libmachine: (ha-135993-m02) DBG | I0920 17:08:15.982185   28324 retry.go:31] will retry after 407.2672ms: waiting for machine to come up
	I0920 17:08:16.390723   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:16.391164   27962 main.go:141] libmachine: (ha-135993-m02) DBG | unable to find current IP address of domain ha-135993-m02 in network mk-ha-135993
	I0920 17:08:16.391190   27962 main.go:141] libmachine: (ha-135993-m02) DBG | I0920 17:08:16.391121   28324 retry.go:31] will retry after 540.634265ms: waiting for machine to come up
	I0920 17:08:16.933924   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:16.934354   27962 main.go:141] libmachine: (ha-135993-m02) DBG | unable to find current IP address of domain ha-135993-m02 in network mk-ha-135993
	I0920 17:08:16.934380   27962 main.go:141] libmachine: (ha-135993-m02) DBG | I0920 17:08:16.934280   28324 retry.go:31] will retry after 944.239732ms: waiting for machine to come up
	I0920 17:08:17.880458   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:17.880905   27962 main.go:141] libmachine: (ha-135993-m02) DBG | unable to find current IP address of domain ha-135993-m02 in network mk-ha-135993
	I0920 17:08:17.880937   27962 main.go:141] libmachine: (ha-135993-m02) DBG | I0920 17:08:17.880855   28324 retry.go:31] will retry after 1.092727798s: waiting for machine to come up
	I0920 17:08:18.975422   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:18.975784   27962 main.go:141] libmachine: (ha-135993-m02) DBG | unable to find current IP address of domain ha-135993-m02 in network mk-ha-135993
	I0920 17:08:18.975813   27962 main.go:141] libmachine: (ha-135993-m02) DBG | I0920 17:08:18.975727   28324 retry.go:31] will retry after 1.481134943s: waiting for machine to come up
	I0920 17:08:20.459346   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:20.459802   27962 main.go:141] libmachine: (ha-135993-m02) DBG | unable to find current IP address of domain ha-135993-m02 in network mk-ha-135993
	I0920 17:08:20.459819   27962 main.go:141] libmachine: (ha-135993-m02) DBG | I0920 17:08:20.459747   28324 retry.go:31] will retry after 1.808510088s: waiting for machine to come up
	I0920 17:08:22.270788   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:22.271210   27962 main.go:141] libmachine: (ha-135993-m02) DBG | unable to find current IP address of domain ha-135993-m02 in network mk-ha-135993
	I0920 17:08:22.271239   27962 main.go:141] libmachine: (ha-135993-m02) DBG | I0920 17:08:22.271135   28324 retry.go:31] will retry after 1.59499674s: waiting for machine to come up
	I0920 17:08:23.868039   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:23.868429   27962 main.go:141] libmachine: (ha-135993-m02) DBG | unable to find current IP address of domain ha-135993-m02 in network mk-ha-135993
	I0920 17:08:23.868456   27962 main.go:141] libmachine: (ha-135993-m02) DBG | I0920 17:08:23.868389   28324 retry.go:31] will retry after 2.718058875s: waiting for machine to come up
	I0920 17:08:26.587523   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:26.588013   27962 main.go:141] libmachine: (ha-135993-m02) DBG | unable to find current IP address of domain ha-135993-m02 in network mk-ha-135993
	I0920 17:08:26.588042   27962 main.go:141] libmachine: (ha-135993-m02) DBG | I0920 17:08:26.587966   28324 retry.go:31] will retry after 2.496735484s: waiting for machine to come up
	I0920 17:08:29.085932   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:29.086306   27962 main.go:141] libmachine: (ha-135993-m02) DBG | unable to find current IP address of domain ha-135993-m02 in network mk-ha-135993
	I0920 17:08:29.086335   27962 main.go:141] libmachine: (ha-135993-m02) DBG | I0920 17:08:29.086239   28324 retry.go:31] will retry after 2.750361097s: waiting for machine to come up
	I0920 17:08:31.838828   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:31.839392   27962 main.go:141] libmachine: (ha-135993-m02) DBG | unable to find current IP address of domain ha-135993-m02 in network mk-ha-135993
	I0920 17:08:31.839414   27962 main.go:141] libmachine: (ha-135993-m02) DBG | I0920 17:08:31.839344   28324 retry.go:31] will retry after 4.254809645s: waiting for machine to come up
	I0920 17:08:36.096360   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:36.096729   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has current primary IP address 192.168.39.227 and MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:36.096746   27962 main.go:141] libmachine: (ha-135993-m02) Found IP for machine: 192.168.39.227
	I0920 17:08:36.096755   27962 main.go:141] libmachine: (ha-135993-m02) Reserving static IP address...
	I0920 17:08:36.097098   27962 main.go:141] libmachine: (ha-135993-m02) DBG | unable to find host DHCP lease matching {name: "ha-135993-m02", mac: "52:54:00:87:dc:24", ip: "192.168.39.227"} in network mk-ha-135993
	I0920 17:08:36.167513   27962 main.go:141] libmachine: (ha-135993-m02) DBG | Getting to WaitForSSH function...
	I0920 17:08:36.167545   27962 main.go:141] libmachine: (ha-135993-m02) Reserved static IP address: 192.168.39.227
	I0920 17:08:36.167558   27962 main.go:141] libmachine: (ha-135993-m02) Waiting for SSH to be available...
	I0920 17:08:36.170087   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:36.170491   27962 main.go:141] libmachine: (ha-135993-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:dc:24", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:08:28 +0000 UTC Type:0 Mac:52:54:00:87:dc:24 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:minikube Clientid:01:52:54:00:87:dc:24}
	I0920 17:08:36.170519   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:36.170690   27962 main.go:141] libmachine: (ha-135993-m02) DBG | Using SSH client type: external
	I0920 17:08:36.170712   27962 main.go:141] libmachine: (ha-135993-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993-m02/id_rsa (-rw-------)
	I0920 17:08:36.170731   27962 main.go:141] libmachine: (ha-135993-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.227 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 17:08:36.170745   27962 main.go:141] libmachine: (ha-135993-m02) DBG | About to run SSH command:
	I0920 17:08:36.170753   27962 main.go:141] libmachine: (ha-135993-m02) DBG | exit 0
	I0920 17:08:36.294607   27962 main.go:141] libmachine: (ha-135993-m02) DBG | SSH cmd err, output: <nil>: 
	I0920 17:08:36.294933   27962 main.go:141] libmachine: (ha-135993-m02) KVM machine creation complete!
	I0920 17:08:36.295321   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetConfigRaw
	I0920 17:08:36.295951   27962 main.go:141] libmachine: (ha-135993-m02) Calling .DriverName
	I0920 17:08:36.296272   27962 main.go:141] libmachine: (ha-135993-m02) Calling .DriverName
	I0920 17:08:36.296483   27962 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0920 17:08:36.296509   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetState
	I0920 17:08:36.298367   27962 main.go:141] libmachine: Detecting operating system of created instance...
	I0920 17:08:36.298385   27962 main.go:141] libmachine: Waiting for SSH to be available...
	I0920 17:08:36.298392   27962 main.go:141] libmachine: Getting to WaitForSSH function...
	I0920 17:08:36.298400   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHHostname
	I0920 17:08:36.301173   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:36.301568   27962 main.go:141] libmachine: (ha-135993-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:dc:24", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:08:28 +0000 UTC Type:0 Mac:52:54:00:87:dc:24 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-135993-m02 Clientid:01:52:54:00:87:dc:24}
	I0920 17:08:36.301596   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:36.301712   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHPort
	I0920 17:08:36.301889   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHKeyPath
	I0920 17:08:36.302037   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHKeyPath
	I0920 17:08:36.302163   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHUsername
	I0920 17:08:36.302363   27962 main.go:141] libmachine: Using SSH client type: native
	I0920 17:08:36.302570   27962 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0920 17:08:36.302587   27962 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0920 17:08:36.409296   27962 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 17:08:36.409321   27962 main.go:141] libmachine: Detecting the provisioner...
	I0920 17:08:36.409329   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHHostname
	I0920 17:08:36.412054   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:36.412453   27962 main.go:141] libmachine: (ha-135993-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:dc:24", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:08:28 +0000 UTC Type:0 Mac:52:54:00:87:dc:24 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-135993-m02 Clientid:01:52:54:00:87:dc:24}
	I0920 17:08:36.412473   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:36.412680   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHPort
	I0920 17:08:36.412859   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHKeyPath
	I0920 17:08:36.413003   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHKeyPath
	I0920 17:08:36.413158   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHUsername
	I0920 17:08:36.413299   27962 main.go:141] libmachine: Using SSH client type: native
	I0920 17:08:36.413464   27962 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0920 17:08:36.413474   27962 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0920 17:08:36.522550   27962 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0920 17:08:36.522639   27962 main.go:141] libmachine: found compatible host: buildroot
	I0920 17:08:36.522653   27962 main.go:141] libmachine: Provisioning with buildroot...
	I0920 17:08:36.522668   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetMachineName
	I0920 17:08:36.522875   27962 buildroot.go:166] provisioning hostname "ha-135993-m02"
	I0920 17:08:36.522896   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetMachineName
	I0920 17:08:36.523039   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHHostname
	I0920 17:08:36.525697   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:36.526081   27962 main.go:141] libmachine: (ha-135993-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:dc:24", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:08:28 +0000 UTC Type:0 Mac:52:54:00:87:dc:24 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-135993-m02 Clientid:01:52:54:00:87:dc:24}
	I0920 17:08:36.526108   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:36.526279   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHPort
	I0920 17:08:36.526447   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHKeyPath
	I0920 17:08:36.526596   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHKeyPath
	I0920 17:08:36.526717   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHUsername
	I0920 17:08:36.526893   27962 main.go:141] libmachine: Using SSH client type: native
	I0920 17:08:36.527091   27962 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0920 17:08:36.527103   27962 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-135993-m02 && echo "ha-135993-m02" | sudo tee /etc/hostname
	I0920 17:08:36.648108   27962 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-135993-m02
	
	I0920 17:08:36.648139   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHHostname
	I0920 17:08:36.651735   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:36.652103   27962 main.go:141] libmachine: (ha-135993-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:dc:24", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:08:28 +0000 UTC Type:0 Mac:52:54:00:87:dc:24 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-135993-m02 Clientid:01:52:54:00:87:dc:24}
	I0920 17:08:36.652141   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:36.652372   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHPort
	I0920 17:08:36.652553   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHKeyPath
	I0920 17:08:36.652726   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHKeyPath
	I0920 17:08:36.652907   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHUsername
	I0920 17:08:36.653066   27962 main.go:141] libmachine: Using SSH client type: native
	I0920 17:08:36.653241   27962 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0920 17:08:36.653262   27962 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-135993-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-135993-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-135993-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 17:08:36.767084   27962 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 17:08:36.767120   27962 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19672-8777/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-8777/.minikube}
	I0920 17:08:36.767142   27962 buildroot.go:174] setting up certificates
	I0920 17:08:36.767150   27962 provision.go:84] configureAuth start
	I0920 17:08:36.767159   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetMachineName
	I0920 17:08:36.767459   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetIP
	I0920 17:08:36.770189   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:36.770520   27962 main.go:141] libmachine: (ha-135993-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:dc:24", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:08:28 +0000 UTC Type:0 Mac:52:54:00:87:dc:24 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-135993-m02 Clientid:01:52:54:00:87:dc:24}
	I0920 17:08:36.770547   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:36.770672   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHHostname
	I0920 17:08:36.772567   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:36.772866   27962 main.go:141] libmachine: (ha-135993-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:dc:24", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:08:28 +0000 UTC Type:0 Mac:52:54:00:87:dc:24 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-135993-m02 Clientid:01:52:54:00:87:dc:24}
	I0920 17:08:36.772893   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:36.773001   27962 provision.go:143] copyHostCerts
	I0920 17:08:36.773032   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem
	I0920 17:08:36.773066   27962 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem, removing ...
	I0920 17:08:36.773075   27962 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem
	I0920 17:08:36.773139   27962 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem (1082 bytes)
	I0920 17:08:36.773212   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem
	I0920 17:08:36.773230   27962 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem, removing ...
	I0920 17:08:36.773237   27962 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem
	I0920 17:08:36.773260   27962 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem (1123 bytes)
	I0920 17:08:36.773312   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem
	I0920 17:08:36.773331   27962 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem, removing ...
	I0920 17:08:36.773337   27962 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem
	I0920 17:08:36.773357   27962 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem (1675 bytes)
	I0920 17:08:36.773424   27962 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem org=jenkins.ha-135993-m02 san=[127.0.0.1 192.168.39.227 ha-135993-m02 localhost minikube]
	I0920 17:08:36.941019   27962 provision.go:177] copyRemoteCerts
	I0920 17:08:36.941075   27962 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 17:08:36.941096   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHHostname
	I0920 17:08:36.943678   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:36.944038   27962 main.go:141] libmachine: (ha-135993-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:dc:24", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:08:28 +0000 UTC Type:0 Mac:52:54:00:87:dc:24 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-135993-m02 Clientid:01:52:54:00:87:dc:24}
	I0920 17:08:36.944072   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:36.944262   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHPort
	I0920 17:08:36.944447   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHKeyPath
	I0920 17:08:36.944600   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHUsername
	I0920 17:08:36.944758   27962 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993-m02/id_rsa Username:docker}
	I0920 17:08:37.028603   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0920 17:08:37.028690   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0920 17:08:37.052665   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0920 17:08:37.052750   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0920 17:08:37.077892   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0920 17:08:37.077976   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0920 17:08:37.100815   27962 provision.go:87] duration metric: took 333.648023ms to configureAuth
	I0920 17:08:37.100849   27962 buildroot.go:189] setting minikube options for container-runtime
	I0920 17:08:37.101060   27962 config.go:182] Loaded profile config "ha-135993": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 17:08:37.101132   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHHostname
	I0920 17:08:37.103680   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:37.104025   27962 main.go:141] libmachine: (ha-135993-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:dc:24", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:08:28 +0000 UTC Type:0 Mac:52:54:00:87:dc:24 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-135993-m02 Clientid:01:52:54:00:87:dc:24}
	I0920 17:08:37.104065   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:37.104260   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHPort
	I0920 17:08:37.104442   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHKeyPath
	I0920 17:08:37.104572   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHKeyPath
	I0920 17:08:37.104716   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHUsername
	I0920 17:08:37.104930   27962 main.go:141] libmachine: Using SSH client type: native
	I0920 17:08:37.105131   27962 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0920 17:08:37.105151   27962 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 17:08:37.328322   27962 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 17:08:37.328359   27962 main.go:141] libmachine: Checking connection to Docker...
	I0920 17:08:37.328371   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetURL
	I0920 17:08:37.329623   27962 main.go:141] libmachine: (ha-135993-m02) DBG | Using libvirt version 6000000
	I0920 17:08:37.331823   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:37.332143   27962 main.go:141] libmachine: (ha-135993-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:dc:24", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:08:28 +0000 UTC Type:0 Mac:52:54:00:87:dc:24 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-135993-m02 Clientid:01:52:54:00:87:dc:24}
	I0920 17:08:37.332167   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:37.332339   27962 main.go:141] libmachine: Docker is up and running!
	I0920 17:08:37.332353   27962 main.go:141] libmachine: Reticulating splines...
	I0920 17:08:37.332361   27962 client.go:171] duration metric: took 23.847807748s to LocalClient.Create
	I0920 17:08:37.332387   27962 start.go:167] duration metric: took 23.84786362s to libmachine.API.Create "ha-135993"
	I0920 17:08:37.332399   27962 start.go:293] postStartSetup for "ha-135993-m02" (driver="kvm2")
	I0920 17:08:37.332415   27962 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 17:08:37.332439   27962 main.go:141] libmachine: (ha-135993-m02) Calling .DriverName
	I0920 17:08:37.332705   27962 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 17:08:37.332736   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHHostname
	I0920 17:08:37.334802   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:37.335108   27962 main.go:141] libmachine: (ha-135993-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:dc:24", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:08:28 +0000 UTC Type:0 Mac:52:54:00:87:dc:24 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-135993-m02 Clientid:01:52:54:00:87:dc:24}
	I0920 17:08:37.335134   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:37.335218   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHPort
	I0920 17:08:37.335362   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHKeyPath
	I0920 17:08:37.335477   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHUsername
	I0920 17:08:37.335595   27962 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993-m02/id_rsa Username:docker}
	I0920 17:08:37.416843   27962 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 17:08:37.421359   27962 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 17:08:37.421384   27962 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8777/.minikube/addons for local assets ...
	I0920 17:08:37.421448   27962 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8777/.minikube/files for local assets ...
	I0920 17:08:37.421538   27962 filesync.go:149] local asset: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem -> 159732.pem in /etc/ssl/certs
	I0920 17:08:37.421549   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem -> /etc/ssl/certs/159732.pem
	I0920 17:08:37.421657   27962 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 17:08:37.431863   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem --> /etc/ssl/certs/159732.pem (1708 bytes)
	I0920 17:08:37.454586   27962 start.go:296] duration metric: took 122.170431ms for postStartSetup
	I0920 17:08:37.454638   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetConfigRaw
	I0920 17:08:37.455188   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetIP
	I0920 17:08:37.457599   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:37.457923   27962 main.go:141] libmachine: (ha-135993-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:dc:24", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:08:28 +0000 UTC Type:0 Mac:52:54:00:87:dc:24 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-135993-m02 Clientid:01:52:54:00:87:dc:24}
	I0920 17:08:37.457945   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:37.458188   27962 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/config.json ...
	I0920 17:08:37.458382   27962 start.go:128] duration metric: took 23.993921825s to createHost
	I0920 17:08:37.458410   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHHostname
	I0920 17:08:37.460848   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:37.461348   27962 main.go:141] libmachine: (ha-135993-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:dc:24", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:08:28 +0000 UTC Type:0 Mac:52:54:00:87:dc:24 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-135993-m02 Clientid:01:52:54:00:87:dc:24}
	I0920 17:08:37.461378   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:37.461561   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHPort
	I0920 17:08:37.461755   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHKeyPath
	I0920 17:08:37.461935   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHKeyPath
	I0920 17:08:37.462069   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHUsername
	I0920 17:08:37.462223   27962 main.go:141] libmachine: Using SSH client type: native
	I0920 17:08:37.462383   27962 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0920 17:08:37.462392   27962 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 17:08:37.570351   27962 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726852117.546992904
	
	I0920 17:08:37.570372   27962 fix.go:216] guest clock: 1726852117.546992904
	I0920 17:08:37.570379   27962 fix.go:229] Guest: 2024-09-20 17:08:37.546992904 +0000 UTC Remote: 2024-09-20 17:08:37.458395452 +0000 UTC m=+69.269105040 (delta=88.597452ms)
	I0920 17:08:37.570394   27962 fix.go:200] guest clock delta is within tolerance: 88.597452ms
	I0920 17:08:37.570398   27962 start.go:83] releasing machines lock for "ha-135993-m02", held for 24.10605904s
	I0920 17:08:37.570419   27962 main.go:141] libmachine: (ha-135993-m02) Calling .DriverName
	I0920 17:08:37.570730   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetIP
	I0920 17:08:37.573185   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:37.573501   27962 main.go:141] libmachine: (ha-135993-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:dc:24", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:08:28 +0000 UTC Type:0 Mac:52:54:00:87:dc:24 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-135993-m02 Clientid:01:52:54:00:87:dc:24}
	I0920 17:08:37.573529   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:37.576260   27962 out.go:177] * Found network options:
	I0920 17:08:37.577727   27962 out.go:177]   - NO_PROXY=192.168.39.60
	W0920 17:08:37.578902   27962 proxy.go:119] fail to check proxy env: Error ip not in block
	I0920 17:08:37.578937   27962 main.go:141] libmachine: (ha-135993-m02) Calling .DriverName
	I0920 17:08:37.579631   27962 main.go:141] libmachine: (ha-135993-m02) Calling .DriverName
	I0920 17:08:37.579801   27962 main.go:141] libmachine: (ha-135993-m02) Calling .DriverName
	I0920 17:08:37.579884   27962 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 17:08:37.579926   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHHostname
	W0920 17:08:37.580027   27962 proxy.go:119] fail to check proxy env: Error ip not in block
	I0920 17:08:37.580105   27962 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 17:08:37.580127   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHHostname
	I0920 17:08:37.582896   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:37.583131   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:37.583396   27962 main.go:141] libmachine: (ha-135993-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:dc:24", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:08:28 +0000 UTC Type:0 Mac:52:54:00:87:dc:24 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-135993-m02 Clientid:01:52:54:00:87:dc:24}
	I0920 17:08:37.583422   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:37.583562   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHPort
	I0920 17:08:37.583712   27962 main.go:141] libmachine: (ha-135993-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:dc:24", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:08:28 +0000 UTC Type:0 Mac:52:54:00:87:dc:24 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-135993-m02 Clientid:01:52:54:00:87:dc:24}
	I0920 17:08:37.583731   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:37.583738   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHKeyPath
	I0920 17:08:37.583921   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHUsername
	I0920 17:08:37.583953   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHPort
	I0920 17:08:37.584099   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHKeyPath
	I0920 17:08:37.584097   27962 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993-m02/id_rsa Username:docker}
	I0920 17:08:37.584246   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHUsername
	I0920 17:08:37.584390   27962 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993-m02/id_rsa Username:docker}
	I0920 17:08:37.841918   27962 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 17:08:37.847702   27962 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 17:08:37.847782   27962 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 17:08:37.865314   27962 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 17:08:37.865341   27962 start.go:495] detecting cgroup driver to use...
	I0920 17:08:37.865402   27962 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 17:08:37.882395   27962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 17:08:37.898199   27962 docker.go:217] disabling cri-docker service (if available) ...
	I0920 17:08:37.898256   27962 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 17:08:37.914375   27962 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 17:08:37.929731   27962 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 17:08:38.054897   27962 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 17:08:38.213720   27962 docker.go:233] disabling docker service ...
	I0920 17:08:38.213781   27962 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 17:08:38.228604   27962 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 17:08:38.241927   27962 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 17:08:38.372497   27962 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 17:08:38.492012   27962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 17:08:38.505545   27962 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 17:08:38.522859   27962 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 17:08:38.522917   27962 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:08:38.533670   27962 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 17:08:38.533742   27962 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:08:38.543534   27962 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:08:38.553115   27962 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:08:38.563278   27962 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 17:08:38.573734   27962 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:08:38.585820   27962 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:08:38.602582   27962 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:08:38.612986   27962 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 17:08:38.625878   27962 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 17:08:38.625952   27962 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 17:08:38.640746   27962 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 17:08:38.650259   27962 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 17:08:38.774025   27962 ssh_runner.go:195] Run: sudo systemctl restart crio
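The block above switches the node's runtime configuration over to CRI-O before the restart: crictl is pointed at the CRI-O socket, the pause image and the cgroupfs cgroup manager are set in the 02-crio.conf drop-in, unprivileged low ports are allowed via default_sysctls, br_netfilter is loaded and IPv4 forwarding is enabled. A rough way to confirm the result on the guest after the restart (standard crictl/grep/sysctl invocations, not part of the test itself):

    # check the runtime answers on the configured socket and the edited settings took effect
    crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward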
	I0920 17:08:38.868968   27962 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 17:08:38.869037   27962 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 17:08:38.873544   27962 start.go:563] Will wait 60s for crictl version
	I0920 17:08:38.873611   27962 ssh_runner.go:195] Run: which crictl
	I0920 17:08:38.877199   27962 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 17:08:38.914545   27962 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 17:08:38.914652   27962 ssh_runner.go:195] Run: crio --version
	I0920 17:08:38.942570   27962 ssh_runner.go:195] Run: crio --version
	I0920 17:08:38.974013   27962 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 17:08:38.975371   27962 out.go:177]   - env NO_PROXY=192.168.39.60
	I0920 17:08:38.976693   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetIP
	I0920 17:08:38.979315   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:38.979662   27962 main.go:141] libmachine: (ha-135993-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:dc:24", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:08:28 +0000 UTC Type:0 Mac:52:54:00:87:dc:24 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-135993-m02 Clientid:01:52:54:00:87:dc:24}
	I0920 17:08:38.979686   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:38.979928   27962 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0920 17:08:38.984450   27962 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 17:08:38.996637   27962 mustload.go:65] Loading cluster: ha-135993
	I0920 17:08:38.996863   27962 config.go:182] Loaded profile config "ha-135993": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 17:08:38.997116   27962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:08:38.997144   27962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:08:39.011615   27962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36791
	I0920 17:08:39.012110   27962 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:08:39.012595   27962 main.go:141] libmachine: Using API Version  1
	I0920 17:08:39.012618   27962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:08:39.012951   27962 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:08:39.013120   27962 main.go:141] libmachine: (ha-135993) Calling .GetState
	I0920 17:08:39.014524   27962 host.go:66] Checking if "ha-135993" exists ...
	I0920 17:08:39.014807   27962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:08:39.014829   27962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:08:39.028965   27962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39851
	I0920 17:08:39.029376   27962 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:08:39.029829   27962 main.go:141] libmachine: Using API Version  1
	I0920 17:08:39.029863   27962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:08:39.030149   27962 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:08:39.030299   27962 main.go:141] libmachine: (ha-135993) Calling .DriverName
	I0920 17:08:39.030433   27962 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993 for IP: 192.168.39.227
	I0920 17:08:39.030445   27962 certs.go:194] generating shared ca certs ...
	I0920 17:08:39.030462   27962 certs.go:226] acquiring lock for ca certs: {Name:mkc7ef6c737c6bdc3fdd9dcff8f57029c020d8f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:08:39.030587   27962 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key
	I0920 17:08:39.030622   27962 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key
	I0920 17:08:39.030631   27962 certs.go:256] generating profile certs ...
	I0920 17:08:39.030698   27962 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/client.key
	I0920 17:08:39.030722   27962 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.key.8c3b1447
	I0920 17:08:39.030736   27962 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.crt.8c3b1447 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.60 192.168.39.227 192.168.39.254]
	I0920 17:08:39.095051   27962 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.crt.8c3b1447 ...
	I0920 17:08:39.095081   27962 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.crt.8c3b1447: {Name:mke080ae3589481bb1ac84166b67a86b0862deca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:08:39.095299   27962 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.key.8c3b1447 ...
	I0920 17:08:39.095313   27962 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.key.8c3b1447: {Name:mk0aaeb424c58a29d9543a386b9ebefcbd99d38f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:08:39.095401   27962 certs.go:381] copying /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.crt.8c3b1447 -> /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.crt
	I0920 17:08:39.095524   27962 certs.go:385] copying /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.key.8c3b1447 -> /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.key
	I0920 17:08:39.095653   27962 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/proxy-client.key
	I0920 17:08:39.095667   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0920 17:08:39.095679   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0920 17:08:39.095689   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0920 17:08:39.095702   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0920 17:08:39.095712   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0920 17:08:39.095724   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0920 17:08:39.095736   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0920 17:08:39.095749   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0920 17:08:39.095802   27962 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973.pem (1338 bytes)
	W0920 17:08:39.095830   27962 certs.go:480] ignoring /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973_empty.pem, impossibly tiny 0 bytes
	I0920 17:08:39.095839   27962 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 17:08:39.095858   27962 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem (1082 bytes)
	I0920 17:08:39.095878   27962 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem (1123 bytes)
	I0920 17:08:39.095901   27962 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem (1675 bytes)
	I0920 17:08:39.095936   27962 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem (1708 bytes)
	I0920 17:08:39.095961   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:08:39.095977   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973.pem -> /usr/share/ca-certificates/15973.pem
	I0920 17:08:39.095989   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem -> /usr/share/ca-certificates/159732.pem
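The profile's apiserver certificate has just been re-signed so its SANs cover the first control-plane (192.168.39.60), the new node (192.168.39.227) and the shared VIP (192.168.39.254); the asset list above is what gets copied onto m02. Assuming the certificate lands at the path shown, the SANs can be double-checked on a control-plane guest with a standard openssl call:

    # print the Subject Alternative Name extension of the regenerated apiserver cert
    openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt \
        | grep -A1 'Subject Alternative Name'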
	I0920 17:08:39.096019   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHHostname
	I0920 17:08:39.099130   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:08:39.099635   27962 main.go:141] libmachine: (ha-135993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:09", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:07:43 +0000 UTC Type:0 Mac:52:54:00:99:26:09 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-135993 Clientid:01:52:54:00:99:26:09}
	I0920 17:08:39.099664   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined IP address 192.168.39.60 and MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:08:39.099789   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHPort
	I0920 17:08:39.100010   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHKeyPath
	I0920 17:08:39.100156   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHUsername
	I0920 17:08:39.100302   27962 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993/id_rsa Username:docker}
	I0920 17:08:39.178198   27962 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0920 17:08:39.183212   27962 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0920 17:08:39.194269   27962 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0920 17:08:39.198144   27962 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0920 17:08:39.207842   27962 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0920 17:08:39.212563   27962 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0920 17:08:39.225008   27962 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0920 17:08:39.228957   27962 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0920 17:08:39.240966   27962 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0920 17:08:39.244710   27962 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0920 17:08:39.255704   27962 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0920 17:08:39.261179   27962 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0920 17:08:39.272522   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 17:08:39.298671   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 17:08:39.323122   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 17:08:39.347904   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0920 17:08:39.372895   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0920 17:08:39.396433   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 17:08:39.420958   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 17:08:39.444600   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 17:08:39.468099   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 17:08:39.492182   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973.pem --> /usr/share/ca-certificates/15973.pem (1338 bytes)
	I0920 17:08:39.516275   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem --> /usr/share/ca-certificates/159732.pem (1708 bytes)
	I0920 17:08:39.538881   27962 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0920 17:08:39.554623   27962 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0920 17:08:39.569829   27962 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0920 17:08:39.585133   27962 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0920 17:08:39.601137   27962 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0920 17:08:39.617605   27962 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0920 17:08:39.633667   27962 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0920 17:08:39.650104   27962 ssh_runner.go:195] Run: openssl version
	I0920 17:08:39.656001   27962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 17:08:39.667261   27962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:08:39.671479   27962 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:08:39.671552   27962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:08:39.677168   27962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 17:08:39.687694   27962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15973.pem && ln -fs /usr/share/ca-certificates/15973.pem /etc/ssl/certs/15973.pem"
	I0920 17:08:39.697763   27962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15973.pem
	I0920 17:08:39.702178   27962 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 17:04 /usr/share/ca-certificates/15973.pem
	I0920 17:08:39.702233   27962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15973.pem
	I0920 17:08:39.708012   27962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15973.pem /etc/ssl/certs/51391683.0"
	I0920 17:08:39.718526   27962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/159732.pem && ln -fs /usr/share/ca-certificates/159732.pem /etc/ssl/certs/159732.pem"
	I0920 17:08:39.729775   27962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/159732.pem
	I0920 17:08:39.734571   27962 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 17:04 /usr/share/ca-certificates/159732.pem
	I0920 17:08:39.734627   27962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/159732.pem
	I0920 17:08:39.740342   27962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/159732.pem /etc/ssl/certs/3ec20f2e.0"
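The three test -L … || ln -fs commands above do what c_rehash would: each CA in /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named after its OpenSSL subject hash (b5213941.0, 51391683.0 and 3ec20f2e.0 in this run), which is how OpenSSL-based clients find their trust anchors. The same link for the minikube CA could be produced by hand like this (illustrative only):

    # derive the subject hash and create the hash-named trust-store symlink
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"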
	I0920 17:08:39.751136   27962 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 17:08:39.755553   27962 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0920 17:08:39.755646   27962 kubeadm.go:934] updating node {m02 192.168.39.227 8443 v1.31.1 crio true true} ...
	I0920 17:08:39.755760   27962 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-135993-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.227
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-135993 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
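The unit drop-in above overrides ExecStart so the kubelet on m02 runs the freshly transferred v1.31.1 binary with a bootstrap kubeconfig, --hostname-override=ha-135993-m02 and --node-ip=192.168.39.227. Once it has been written out as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the scp a few lines below), the effective unit can be inspected on the guest with:

    # show the kubelet unit together with the 10-kubeadm.conf override
    systemctl cat kubelet.service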
	I0920 17:08:39.755800   27962 kube-vip.go:115] generating kube-vip config ...
	I0920 17:08:39.755854   27962 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0920 17:08:39.773764   27962 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0920 17:08:39.773847   27962 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
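This generated manifest is dropped into /etc/kubernetes/manifests as a static pod on each control-plane node: with cp_enable and vip_leaderelection set, the kube-vip instances compete for the plndr-cp-lock lease in kube-system, and the winner answers ARP for the VIP 192.168.39.254 and (with lb_enable) balances API traffic on port 8443. Once the cluster is reachable, the current VIP holder can be read off that lease, e.g.:

    # which control-plane node currently owns the kube-vip leader lease
    kubectl -n kube-system get lease plndr-cp-lock \
        -o jsonpath='{.spec.holderIdentity}{"\n"}'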
	I0920 17:08:39.773905   27962 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 17:08:39.783942   27962 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0920 17:08:39.784007   27962 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0920 17:08:39.793636   27962 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0920 17:08:39.793672   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0920 17:08:39.793735   27962 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0920 17:08:39.793780   27962 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19672-8777/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I0920 17:08:39.793842   27962 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19672-8777/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I0920 17:08:39.798080   27962 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0920 17:08:39.798118   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0920 17:08:40.867820   27962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 17:08:40.882080   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0920 17:08:40.882178   27962 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0920 17:08:40.886572   27962 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0920 17:08:40.886607   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I0920 17:08:41.226998   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0920 17:08:41.227076   27962 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0920 17:08:41.238040   27962 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0920 17:08:41.238078   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
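None of the v1.31.1 binaries are cached on the new machine, so kubectl, kubelet and kubeadm are fetched from dl.k8s.io with their published .sha256 files and copied into /var/lib/minikube/binaries/v1.31.1. A roughly equivalent manual download-and-verify for one of them (kubeadm shown; the other two follow the same pattern) would be:

    # download the binary, verify it against the published checksum, then install it
    v=v1.31.1
    curl -fsSLO "https://dl.k8s.io/release/${v}/bin/linux/amd64/kubeadm"
    echo "$(curl -fsSL "https://dl.k8s.io/release/${v}/bin/linux/amd64/kubeadm.sha256")  kubeadm" | sha256sum --check
    sudo install -m 0755 kubeadm "/var/lib/minikube/binaries/${v}/kubeadm"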
	I0920 17:08:41.520778   27962 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0920 17:08:41.530138   27962 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0920 17:08:41.546031   27962 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 17:08:41.561648   27962 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0920 17:08:41.577512   27962 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0920 17:08:41.581127   27962 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
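As with host.minikube.internal earlier, the control-plane VIP gets a fixed /etc/hosts entry so the node can reach the API server by name before cluster DNS exists. After both edits the guest's hosts file should contain entries along these lines (addresses from this run):

    grep 'minikube.internal' /etc/hosts
    # 192.168.39.1     host.minikube.internal
    # 192.168.39.254   control-plane.minikube.internal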
	I0920 17:08:41.593044   27962 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 17:08:41.727078   27962 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 17:08:41.743823   27962 host.go:66] Checking if "ha-135993" exists ...
	I0920 17:08:41.744278   27962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:08:41.744326   27962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:08:41.759319   27962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43037
	I0920 17:08:41.759806   27962 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:08:41.760334   27962 main.go:141] libmachine: Using API Version  1
	I0920 17:08:41.760365   27962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:08:41.760710   27962 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:08:41.760950   27962 main.go:141] libmachine: (ha-135993) Calling .DriverName
	I0920 17:08:41.761092   27962 start.go:317] joinCluster: &{Name:ha-135993 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-135993 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.60 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 17:08:41.761208   27962 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0920 17:08:41.761228   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHHostname
	I0920 17:08:41.764476   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:08:41.765051   27962 main.go:141] libmachine: (ha-135993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:09", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:07:43 +0000 UTC Type:0 Mac:52:54:00:99:26:09 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-135993 Clientid:01:52:54:00:99:26:09}
	I0920 17:08:41.765084   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined IP address 192.168.39.60 and MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:08:41.765229   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHPort
	I0920 17:08:41.765376   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHKeyPath
	I0920 17:08:41.765547   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHUsername
	I0920 17:08:41.765689   27962 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993/id_rsa Username:docker}
	I0920 17:08:41.915104   27962 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 17:08:41.915146   27962 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 76lnz4.6r1ezgurod2l1q25 --discovery-token-ca-cert-hash sha256:736f3fb521855813decb4a7214f2b8beff5f81c3971b60decfa20a8807626a2d --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-135993-m02 --control-plane --apiserver-advertise-address=192.168.39.227 --apiserver-bind-port=8443"
	I0920 17:09:04.881318   27962 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 76lnz4.6r1ezgurod2l1q25 --discovery-token-ca-cert-hash sha256:736f3fb521855813decb4a7214f2b8beff5f81c3971b60decfa20a8807626a2d --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-135993-m02 --control-plane --apiserver-advertise-address=192.168.39.227 --apiserver-bind-port=8443": (22.966149697s)
	I0920 17:09:04.881355   27962 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0920 17:09:05.471754   27962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-135993-m02 minikube.k8s.io/updated_at=2024_09_20T17_09_05_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=0626f22cf0d915d75e291a5bce701f94395056e1 minikube.k8s.io/name=ha-135993 minikube.k8s.io/primary=false
	I0920 17:09:05.593812   27962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-135993-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0920 17:09:05.743557   27962 start.go:319] duration metric: took 23.982457966s to joinCluster
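The kubeadm join … --control-plane run took roughly 23s, after which the new node is labeled and its control-plane NoSchedule taint is removed so it can also run workloads. Assuming the kubeconfig context is named after the profile (as minikube normally does), a quick sanity check from the host would be:

    # the second control-plane node and its etcd member should now be visible
    kubectl --context ha-135993 get nodes -o wide
    kubectl --context ha-135993 -n kube-system get pods -l component=etcd -o wide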
	I0920 17:09:05.743641   27962 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 17:09:05.743939   27962 config.go:182] Loaded profile config "ha-135993": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 17:09:05.745782   27962 out.go:177] * Verifying Kubernetes components...
	I0920 17:09:05.747592   27962 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 17:09:06.068898   27962 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 17:09:06.098222   27962 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19672-8777/kubeconfig
	I0920 17:09:06.098478   27962 kapi.go:59] client config for ha-135993: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/client.crt", KeyFile:"/home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/client.key", CAFile:"/home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f67ea0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0920 17:09:06.098546   27962 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.60:8443
	I0920 17:09:06.098829   27962 node_ready.go:35] waiting up to 6m0s for node "ha-135993-m02" to be "Ready" ...
	I0920 17:09:06.098967   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:06.098980   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:06.098991   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:06.098997   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:06.110154   27962 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0920 17:09:06.599028   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:06.599058   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:06.599068   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:06.599080   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:06.607526   27962 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0920 17:09:07.100044   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:07.100066   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:07.100080   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:07.100088   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:07.104606   27962 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 17:09:07.599532   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:07.599561   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:07.599573   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:07.599592   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:07.603898   27962 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 17:09:08.099892   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:08.099925   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:08.099936   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:08.099939   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:08.104089   27962 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 17:09:08.104669   27962 node_ready.go:53] node "ha-135993-m02" has status "Ready":"False"
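From here the test simply polls GET /api/v1/nodes/ha-135993-m02 roughly every 500ms (for up to 6m) until the node's Ready condition turns True; the 200 responses below are that loop. The condition being waited on is the same one this one-off kubectl query would print:

    # prints "True" once ha-135993-m02 reports Ready
    kubectl --context ha-135993 get node ha-135993-m02 \
        -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'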
	I0920 17:09:08.599188   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:08.599216   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:08.599232   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:08.599237   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:08.602674   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:09.099543   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:09.099573   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:09.099590   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:09.099595   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:09.103157   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:09.599047   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:09.599068   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:09.599079   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:09.599083   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:09.602661   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:10.099869   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:10.099898   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:10.099910   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:10.099917   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:10.104382   27962 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 17:09:10.105025   27962 node_ready.go:53] node "ha-135993-m02" has status "Ready":"False"
	I0920 17:09:10.599990   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:10.600015   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:10.600025   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:10.600040   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:10.604181   27962 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 17:09:11.100016   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:11.100036   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:11.100044   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:11.100048   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:11.104486   27962 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 17:09:11.599135   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:11.599157   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:11.599167   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:11.599172   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:11.603466   27962 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 17:09:12.099094   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:12.099116   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:12.099124   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:12.099128   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:12.102631   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:12.600054   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:12.600077   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:12.600087   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:12.600091   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:12.603960   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:12.604540   27962 node_ready.go:53] node "ha-135993-m02" has status "Ready":"False"
	I0920 17:09:13.099920   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:13.099940   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:13.099947   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:13.099951   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:13.104962   27962 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 17:09:13.599362   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:13.599385   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:13.599392   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:13.599397   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:13.602694   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:14.099536   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:14.099555   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:14.099563   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:14.099566   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:14.110011   27962 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0920 17:09:14.600088   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:14.600116   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:14.600127   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:14.600132   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:14.603733   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:15.099810   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:15.099833   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:15.099842   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:15.099847   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:15.103493   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:15.106748   27962 node_ready.go:53] node "ha-135993-m02" has status "Ready":"False"
	I0920 17:09:15.599114   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:15.599137   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:15.599145   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:15.599149   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:15.602587   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:16.099797   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:16.099819   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:16.099836   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:16.099841   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:16.104385   27962 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 17:09:16.599221   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:16.599261   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:16.599273   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:16.599281   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:16.602198   27962 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 17:09:17.099641   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:17.099665   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:17.099674   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:17.099679   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:17.102538   27962 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 17:09:17.599451   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:17.599479   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:17.599488   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:17.599493   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:17.604108   27962 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 17:09:17.604651   27962 node_ready.go:53] node "ha-135993-m02" has status "Ready":"False"
	I0920 17:09:18.099653   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:18.099682   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:18.099694   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:18.099698   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:18.103414   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:18.599738   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:18.599765   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:18.599774   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:18.599781   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:18.603208   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:19.100125   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:19.100153   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:19.100166   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:19.100175   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:19.184153   27962 round_trippers.go:574] Response Status: 200 OK in 83 milliseconds
	I0920 17:09:19.600050   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:19.600072   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:19.600080   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:19.600085   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:19.603736   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:20.099655   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:20.099677   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:20.099685   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:20.099689   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:20.103774   27962 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 17:09:20.104534   27962 node_ready.go:53] node "ha-135993-m02" has status "Ready":"False"
	I0920 17:09:20.599975   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:20.599999   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:20.600008   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:20.600012   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:20.603324   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:21.099118   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:21.099157   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:21.099168   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:21.099174   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:21.102835   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:21.599923   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:21.599950   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:21.599959   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:21.599963   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:21.604036   27962 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 17:09:22.099740   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:22.099765   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:22.099774   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:22.099779   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:22.103432   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:22.599193   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:22.599216   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:22.599225   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:22.599230   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:22.602523   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:22.603230   27962 node_ready.go:53] node "ha-135993-m02" has status "Ready":"False"
	I0920 17:09:23.099535   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:23.099562   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:23.099571   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:23.099575   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:23.103060   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:23.600005   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:23.600028   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:23.600037   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:23.600042   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:23.602925   27962 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 17:09:24.099721   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:24.099748   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:24.099760   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:24.099768   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:24.103420   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:24.599142   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:24.599163   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:24.599171   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:24.599175   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:24.601879   27962 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 17:09:25.099978   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:25.100008   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:25.100020   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:25.100025   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:25.103311   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:25.104017   27962 node_ready.go:49] node "ha-135993-m02" has status "Ready":"True"
	I0920 17:09:25.104039   27962 node_ready.go:38] duration metric: took 19.005166756s for node "ha-135993-m02" to be "Ready" ...
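For reference, the half-second polling loop recorded above (repeated GETs of /api/v1/nodes/ha-135993-m02 until the node reports Ready) can be reproduced with client-go roughly as follows. This is a minimal sketch for context, not minikube's own node_ready helper; the kubeconfig path and the 500ms interval are assumptions taken from the cadence visible in the log.

```go
// Minimal sketch (not minikube's code): poll a node until its Ready condition is True.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // roughly the interval between GETs in the log above
	}
	return fmt.Errorf("node %q did not become Ready within %s", name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitNodeReady(cs, "ha-135993-m02", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("node is Ready")
}
```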
	I0920 17:09:25.104051   27962 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 17:09:25.104149   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods
	I0920 17:09:25.104165   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:25.104177   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:25.104185   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:25.108765   27962 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 17:09:25.115719   27962 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-gcvg4" in "kube-system" namespace to be "Ready" ...
	I0920 17:09:25.115809   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-gcvg4
	I0920 17:09:25.115817   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:25.115832   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:25.115839   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:25.118912   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:25.119515   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993
	I0920 17:09:25.119530   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:25.119545   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:25.119553   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:25.122165   27962 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 17:09:25.123205   27962 pod_ready.go:93] pod "coredns-7c65d6cfc9-gcvg4" in "kube-system" namespace has status "Ready":"True"
	I0920 17:09:25.123229   27962 pod_ready.go:82] duration metric: took 7.483763ms for pod "coredns-7c65d6cfc9-gcvg4" in "kube-system" namespace to be "Ready" ...
	I0920 17:09:25.123245   27962 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-kpbhk" in "kube-system" namespace to be "Ready" ...
	I0920 17:09:25.123328   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-kpbhk
	I0920 17:09:25.123336   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:25.123346   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:25.123362   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:25.127621   27962 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 17:09:25.128286   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993
	I0920 17:09:25.128301   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:25.128309   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:25.128312   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:25.130781   27962 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 17:09:25.131328   27962 pod_ready.go:93] pod "coredns-7c65d6cfc9-kpbhk" in "kube-system" namespace has status "Ready":"True"
	I0920 17:09:25.131344   27962 pod_ready.go:82] duration metric: took 8.091385ms for pod "coredns-7c65d6cfc9-kpbhk" in "kube-system" namespace to be "Ready" ...
	I0920 17:09:25.131353   27962 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-135993" in "kube-system" namespace to be "Ready" ...
	I0920 17:09:25.131430   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/etcd-ha-135993
	I0920 17:09:25.131441   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:25.131447   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:25.131452   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:25.133900   27962 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 17:09:25.134469   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993
	I0920 17:09:25.134482   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:25.134489   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:25.134491   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:25.136541   27962 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 17:09:25.137016   27962 pod_ready.go:93] pod "etcd-ha-135993" in "kube-system" namespace has status "Ready":"True"
	I0920 17:09:25.137035   27962 pod_ready.go:82] duration metric: took 5.675303ms for pod "etcd-ha-135993" in "kube-system" namespace to be "Ready" ...
	I0920 17:09:25.137046   27962 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-135993-m02" in "kube-system" namespace to be "Ready" ...
	I0920 17:09:25.137099   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/etcd-ha-135993-m02
	I0920 17:09:25.137110   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:25.137120   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:25.137129   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:25.139596   27962 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 17:09:25.140245   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:25.140261   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:25.140268   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:25.140275   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:25.143653   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:25.144087   27962 pod_ready.go:93] pod "etcd-ha-135993-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 17:09:25.144104   27962 pod_ready.go:82] duration metric: took 7.049824ms for pod "etcd-ha-135993-m02" in "kube-system" namespace to be "Ready" ...
	I0920 17:09:25.144123   27962 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-135993" in "kube-system" namespace to be "Ready" ...
	I0920 17:09:25.300530   27962 request.go:632] Waited for 156.341043ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-135993
	I0920 17:09:25.300600   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-135993
	I0920 17:09:25.300608   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:25.300615   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:25.300619   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:25.303926   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:25.500905   27962 request.go:632] Waited for 196.365656ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes/ha-135993
	I0920 17:09:25.500972   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993
	I0920 17:09:25.500979   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:25.500991   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:25.501002   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:25.504242   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:25.504741   27962 pod_ready.go:93] pod "kube-apiserver-ha-135993" in "kube-system" namespace has status "Ready":"True"
	I0920 17:09:25.504761   27962 pod_ready.go:82] duration metric: took 360.627268ms for pod "kube-apiserver-ha-135993" in "kube-system" namespace to be "Ready" ...
	I0920 17:09:25.504775   27962 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-135993-m02" in "kube-system" namespace to be "Ready" ...
	I0920 17:09:25.700017   27962 request.go:632] Waited for 195.167851ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-135993-m02
	I0920 17:09:25.700099   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-135993-m02
	I0920 17:09:25.700105   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:25.700111   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:25.700116   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:25.703342   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:25.900444   27962 request.go:632] Waited for 196.370493ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:25.900528   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:25.900536   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:25.900546   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:25.900556   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:25.904185   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:25.904729   27962 pod_ready.go:93] pod "kube-apiserver-ha-135993-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 17:09:25.904749   27962 pod_ready.go:82] duration metric: took 399.965762ms for pod "kube-apiserver-ha-135993-m02" in "kube-system" namespace to be "Ready" ...
	I0920 17:09:25.904762   27962 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-135993" in "kube-system" namespace to be "Ready" ...
	I0920 17:09:26.100837   27962 request.go:632] Waited for 195.996544ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-135993
	I0920 17:09:26.100911   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-135993
	I0920 17:09:26.100922   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:26.100930   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:26.100934   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:26.104514   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:26.300664   27962 request.go:632] Waited for 195.385658ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes/ha-135993
	I0920 17:09:26.300743   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993
	I0920 17:09:26.300751   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:26.300761   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:26.300767   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:26.304576   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:26.305216   27962 pod_ready.go:93] pod "kube-controller-manager-ha-135993" in "kube-system" namespace has status "Ready":"True"
	I0920 17:09:26.305236   27962 pod_ready.go:82] duration metric: took 400.465668ms for pod "kube-controller-manager-ha-135993" in "kube-system" namespace to be "Ready" ...
	I0920 17:09:26.305250   27962 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-135993-m02" in "kube-system" namespace to be "Ready" ...
	I0920 17:09:26.500476   27962 request.go:632] Waited for 195.132114ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-135993-m02
	I0920 17:09:26.500563   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-135993-m02
	I0920 17:09:26.500573   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:26.500585   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:26.500595   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:26.503974   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:26.700109   27962 request.go:632] Waited for 195.31021ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:26.700178   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:26.700184   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:26.700192   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:26.700197   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:26.703786   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:26.704325   27962 pod_ready.go:93] pod "kube-controller-manager-ha-135993-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 17:09:26.704346   27962 pod_ready.go:82] duration metric: took 399.089711ms for pod "kube-controller-manager-ha-135993-m02" in "kube-system" namespace to be "Ready" ...
	I0920 17:09:26.704359   27962 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-52r49" in "kube-system" namespace to be "Ready" ...
	I0920 17:09:26.900914   27962 request.go:632] Waited for 196.454204ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-proxy-52r49
	I0920 17:09:26.900979   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-proxy-52r49
	I0920 17:09:26.900988   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:26.900999   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:26.901008   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:26.904465   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:27.100636   27962 request.go:632] Waited for 195.370556ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes/ha-135993
	I0920 17:09:27.100694   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993
	I0920 17:09:27.100700   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:27.100707   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:27.100713   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:27.104136   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:27.104731   27962 pod_ready.go:93] pod "kube-proxy-52r49" in "kube-system" namespace has status "Ready":"True"
	I0920 17:09:27.104752   27962 pod_ready.go:82] duration metric: took 400.38236ms for pod "kube-proxy-52r49" in "kube-system" namespace to be "Ready" ...
	I0920 17:09:27.104762   27962 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-z6xqt" in "kube-system" namespace to be "Ready" ...
	I0920 17:09:27.300919   27962 request.go:632] Waited for 196.074087ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-proxy-z6xqt
	I0920 17:09:27.300987   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-proxy-z6xqt
	I0920 17:09:27.300993   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:27.301002   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:27.301038   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:27.304315   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:27.500226   27962 request.go:632] Waited for 195.315282ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:27.500323   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:27.500337   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:27.500347   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:27.500353   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:27.503809   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:27.504585   27962 pod_ready.go:93] pod "kube-proxy-z6xqt" in "kube-system" namespace has status "Ready":"True"
	I0920 17:09:27.504607   27962 pod_ready.go:82] duration metric: took 399.833703ms for pod "kube-proxy-z6xqt" in "kube-system" namespace to be "Ready" ...
	I0920 17:09:27.504623   27962 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-135993" in "kube-system" namespace to be "Ready" ...
	I0920 17:09:27.700599   27962 request.go:632] Waited for 195.904246ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-135993
	I0920 17:09:27.700671   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-135993
	I0920 17:09:27.700677   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:27.700684   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:27.700691   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:27.704470   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:27.900633   27962 request.go:632] Waited for 195.387225ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes/ha-135993
	I0920 17:09:27.900688   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993
	I0920 17:09:27.900695   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:27.900708   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:27.900716   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:27.903956   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:27.904541   27962 pod_ready.go:93] pod "kube-scheduler-ha-135993" in "kube-system" namespace has status "Ready":"True"
	I0920 17:09:27.904563   27962 pod_ready.go:82] duration metric: took 399.932453ms for pod "kube-scheduler-ha-135993" in "kube-system" namespace to be "Ready" ...
	I0920 17:09:27.904573   27962 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-135993-m02" in "kube-system" namespace to be "Ready" ...
	I0920 17:09:28.100547   27962 request.go:632] Waited for 195.899157ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-135993-m02
	I0920 17:09:28.100623   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-135993-m02
	I0920 17:09:28.100628   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:28.100637   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:28.100642   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:28.104043   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:28.299961   27962 request.go:632] Waited for 195.327445ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:28.300029   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:28.300037   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:28.300046   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:28.300054   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:28.303288   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:28.303968   27962 pod_ready.go:93] pod "kube-scheduler-ha-135993-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 17:09:28.303986   27962 pod_ready.go:82] duration metric: took 399.402915ms for pod "kube-scheduler-ha-135993-m02" in "kube-system" namespace to be "Ready" ...
	I0920 17:09:28.304000   27962 pod_ready.go:39] duration metric: took 3.199931535s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
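The repeated "Waited for ... due to client-side throttling, not priority and fairness" lines in the pod-readiness checks above come from client-go's default token-bucket rate limiter (QPS=5, Burst=10 unless overridden). As a hedged sketch, not minikube's actual configuration, raising those limits on the rest.Config is what removes such waits for bursty polling:

```go
// Minimal sketch (assumption, not minikube's code): build a clientset with a
// higher client-side rate limit so back-to-back GETs are not delayed.
package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func newClientWithHigherLimits(kubeconfig string) (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	cfg.QPS = 50    // client-go default is 5 requests per second
	cfg.Burst = 100 // client-go default burst is 10
	return kubernetes.NewForConfig(cfg)
}
```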
	I0920 17:09:28.304019   27962 api_server.go:52] waiting for apiserver process to appear ...
	I0920 17:09:28.304077   27962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 17:09:28.320006   27962 api_server.go:72] duration metric: took 22.576329593s to wait for apiserver process to appear ...
	I0920 17:09:28.320037   27962 api_server.go:88] waiting for apiserver healthz status ...
	I0920 17:09:28.320064   27962 api_server.go:253] Checking apiserver healthz at https://192.168.39.60:8443/healthz ...
	I0920 17:09:28.324668   27962 api_server.go:279] https://192.168.39.60:8443/healthz returned 200:
	ok
	I0920 17:09:28.324734   27962 round_trippers.go:463] GET https://192.168.39.60:8443/version
	I0920 17:09:28.324739   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:28.324747   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:28.324752   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:28.325606   27962 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0920 17:09:28.325696   27962 api_server.go:141] control plane version: v1.31.1
	I0920 17:09:28.325719   27962 api_server.go:131] duration metric: took 5.673918ms to wait for apiserver health ...
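The healthz probe logged just above ("Checking apiserver healthz ... returned 200: ok", followed by a GET of /version) can be expressed with an authenticated client-go REST client. A minimal sketch for context, not minikube's api_server helper; the kubeconfig path is an assumption.

```go
// Minimal sketch (not minikube's code): probe the apiserver's /healthz endpoint.
package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func apiserverHealthz(cs *kubernetes.Clientset) error {
	body, err := cs.CoreV1().RESTClient().Get().AbsPath("/healthz").DoRaw(context.TODO())
	if err != nil {
		return err
	}
	if string(body) != "ok" {
		return fmt.Errorf("unexpected /healthz body: %q", string(body))
	}
	return nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := apiserverHealthz(cs); err != nil {
		panic(err)
	}
	fmt.Println("apiserver /healthz: ok")
}
```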
	I0920 17:09:28.325728   27962 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 17:09:28.500898   27962 request.go:632] Waited for 175.10825ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods
	I0920 17:09:28.500971   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods
	I0920 17:09:28.500978   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:28.500986   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:28.500995   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:28.506063   27962 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0920 17:09:28.510476   27962 system_pods.go:59] 17 kube-system pods found
	I0920 17:09:28.510506   27962 system_pods.go:61] "coredns-7c65d6cfc9-gcvg4" [899b2b8c-9009-46c0-816b-781e85eb8b19] Running
	I0920 17:09:28.510512   27962 system_pods.go:61] "coredns-7c65d6cfc9-kpbhk" [0dfd9f1a-148c-4dba-884a-8618b74f82d0] Running
	I0920 17:09:28.510516   27962 system_pods.go:61] "etcd-ha-135993" [d5e44451-ab41-4bdf-8f34-c8202bc33cda] Running
	I0920 17:09:28.510520   27962 system_pods.go:61] "etcd-ha-135993-m02" [415d8f47-3bc4-42fc-ae75-b8217f5a731d] Running
	I0920 17:09:28.510524   27962 system_pods.go:61] "kindnet-5m4r8" [c799ac5a-69f7-45d5-a291-61edfd753404] Running
	I0920 17:09:28.510528   27962 system_pods.go:61] "kindnet-6clt2" [d73a0817-d84f-4269-9de0-1532287a07db] Running
	I0920 17:09:28.510532   27962 system_pods.go:61] "kube-apiserver-ha-135993" [0c2adef2-0b98-4752-ba6c-0719b723c93c] Running
	I0920 17:09:28.510536   27962 system_pods.go:61] "kube-apiserver-ha-135993-m02" [e74438ac-347a-4f38-ba96-c7687763c669] Running
	I0920 17:09:28.510539   27962 system_pods.go:61] "kube-controller-manager-ha-135993" [ba82afa0-d0a1-4515-bd0f-574f03c2a5a5] Running
	I0920 17:09:28.510543   27962 system_pods.go:61] "kube-controller-manager-ha-135993-m02" [ba4a59ae-5348-43b7-b80d-96d5d63afea2] Running
	I0920 17:09:28.510548   27962 system_pods.go:61] "kube-proxy-52r49" [8d1124bd-e7cb-4239-a29d-c1d5b8870aff] Running
	I0920 17:09:28.510551   27962 system_pods.go:61] "kube-proxy-z6xqt" [70b8c8ab-77cc-4681-a9b1-3c28fc0f2674] Running
	I0920 17:09:28.510555   27962 system_pods.go:61] "kube-scheduler-ha-135993" [25eaf632-035a-44d9-8ce0-e6c3c0e4e0c4] Running
	I0920 17:09:28.510558   27962 system_pods.go:61] "kube-scheduler-ha-135993-m02" [6daee23a-d627-41e8-81b1-ceb4fa70ec3e] Running
	I0920 17:09:28.510563   27962 system_pods.go:61] "kube-vip-ha-135993" [6aa396e1-76b2-4911-bc93-660c51cef03d] Running
	I0920 17:09:28.510566   27962 system_pods.go:61] "kube-vip-ha-135993-m02" [2e9d1f8a-1ce9-46f9-b58b-f13302832062] Running
	I0920 17:09:28.510571   27962 system_pods.go:61] "storage-provisioner" [57137bee-9a7b-4659-a855-0da82d137cb0] Running
	I0920 17:09:28.510576   27962 system_pods.go:74] duration metric: took 184.843309ms to wait for pod list to return data ...
	I0920 17:09:28.510583   27962 default_sa.go:34] waiting for default service account to be created ...
	I0920 17:09:28.701010   27962 request.go:632] Waited for 190.33295ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/default/serviceaccounts
	I0920 17:09:28.701070   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/default/serviceaccounts
	I0920 17:09:28.701075   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:28.701082   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:28.701086   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:28.704833   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:28.705046   27962 default_sa.go:45] found service account: "default"
	I0920 17:09:28.705060   27962 default_sa.go:55] duration metric: took 194.471281ms for default service account to be created ...
	I0920 17:09:28.705068   27962 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 17:09:28.900520   27962 request.go:632] Waited for 195.386336ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods
	I0920 17:09:28.900601   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods
	I0920 17:09:28.900607   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:28.900614   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:28.900622   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:28.905157   27962 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 17:09:28.910152   27962 system_pods.go:86] 17 kube-system pods found
	I0920 17:09:28.910177   27962 system_pods.go:89] "coredns-7c65d6cfc9-gcvg4" [899b2b8c-9009-46c0-816b-781e85eb8b19] Running
	I0920 17:09:28.910183   27962 system_pods.go:89] "coredns-7c65d6cfc9-kpbhk" [0dfd9f1a-148c-4dba-884a-8618b74f82d0] Running
	I0920 17:09:28.910188   27962 system_pods.go:89] "etcd-ha-135993" [d5e44451-ab41-4bdf-8f34-c8202bc33cda] Running
	I0920 17:09:28.910193   27962 system_pods.go:89] "etcd-ha-135993-m02" [415d8f47-3bc4-42fc-ae75-b8217f5a731d] Running
	I0920 17:09:28.910197   27962 system_pods.go:89] "kindnet-5m4r8" [c799ac5a-69f7-45d5-a291-61edfd753404] Running
	I0920 17:09:28.910200   27962 system_pods.go:89] "kindnet-6clt2" [d73a0817-d84f-4269-9de0-1532287a07db] Running
	I0920 17:09:28.910204   27962 system_pods.go:89] "kube-apiserver-ha-135993" [0c2adef2-0b98-4752-ba6c-0719b723c93c] Running
	I0920 17:09:28.910210   27962 system_pods.go:89] "kube-apiserver-ha-135993-m02" [e74438ac-347a-4f38-ba96-c7687763c669] Running
	I0920 17:09:28.910216   27962 system_pods.go:89] "kube-controller-manager-ha-135993" [ba82afa0-d0a1-4515-bd0f-574f03c2a5a5] Running
	I0920 17:09:28.910221   27962 system_pods.go:89] "kube-controller-manager-ha-135993-m02" [ba4a59ae-5348-43b7-b80d-96d5d63afea2] Running
	I0920 17:09:28.910224   27962 system_pods.go:89] "kube-proxy-52r49" [8d1124bd-e7cb-4239-a29d-c1d5b8870aff] Running
	I0920 17:09:28.910232   27962 system_pods.go:89] "kube-proxy-z6xqt" [70b8c8ab-77cc-4681-a9b1-3c28fc0f2674] Running
	I0920 17:09:28.910236   27962 system_pods.go:89] "kube-scheduler-ha-135993" [25eaf632-035a-44d9-8ce0-e6c3c0e4e0c4] Running
	I0920 17:09:28.910240   27962 system_pods.go:89] "kube-scheduler-ha-135993-m02" [6daee23a-d627-41e8-81b1-ceb4fa70ec3e] Running
	I0920 17:09:28.910243   27962 system_pods.go:89] "kube-vip-ha-135993" [6aa396e1-76b2-4911-bc93-660c51cef03d] Running
	I0920 17:09:28.910246   27962 system_pods.go:89] "kube-vip-ha-135993-m02" [2e9d1f8a-1ce9-46f9-b58b-f13302832062] Running
	I0920 17:09:28.910249   27962 system_pods.go:89] "storage-provisioner" [57137bee-9a7b-4659-a855-0da82d137cb0] Running
	I0920 17:09:28.910257   27962 system_pods.go:126] duration metric: took 205.181263ms to wait for k8s-apps to be running ...
	I0920 17:09:28.910266   27962 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 17:09:28.910308   27962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 17:09:28.926895   27962 system_svc.go:56] duration metric: took 16.618557ms WaitForService to wait for kubelet
	I0920 17:09:28.926931   27962 kubeadm.go:582] duration metric: took 23.18325481s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 17:09:28.926955   27962 node_conditions.go:102] verifying NodePressure condition ...
	I0920 17:09:29.100293   27962 request.go:632] Waited for 173.230558ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes
	I0920 17:09:29.100347   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes
	I0920 17:09:29.100351   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:29.100362   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:29.100368   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:29.104004   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:29.104756   27962 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 17:09:29.104780   27962 node_conditions.go:123] node cpu capacity is 2
	I0920 17:09:29.104790   27962 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 17:09:29.104794   27962 node_conditions.go:123] node cpu capacity is 2
	I0920 17:09:29.104798   27962 node_conditions.go:105] duration metric: took 177.838136ms to run NodePressure ...
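The NodePressure figures printed above (ephemeral storage 17734596Ki and 2 CPUs per node) are read from each node's status. A short, hedged sketch of reading the same capacity fields with client-go; it is illustrative only and not minikube's node_conditions code.

```go
// Minimal sketch (not minikube's code): print per-node ephemeral-storage and CPU capacity.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
	}
}
```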
	I0920 17:09:29.104811   27962 start.go:241] waiting for startup goroutines ...
	I0920 17:09:29.104835   27962 start.go:255] writing updated cluster config ...
	I0920 17:09:29.107129   27962 out.go:201] 
	I0920 17:09:29.108641   27962 config.go:182] Loaded profile config "ha-135993": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 17:09:29.108741   27962 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/config.json ...
	I0920 17:09:29.110401   27962 out.go:177] * Starting "ha-135993-m03" control-plane node in "ha-135993" cluster
	I0920 17:09:29.111695   27962 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 17:09:29.111718   27962 cache.go:56] Caching tarball of preloaded images
	I0920 17:09:29.111819   27962 preload.go:172] Found /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 17:09:29.111832   27962 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0920 17:09:29.111919   27962 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/config.json ...
	I0920 17:09:29.112087   27962 start.go:360] acquireMachinesLock for ha-135993-m03: {Name:mkfeedb385cf08b5d2aa00913e85815d02a180c2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 17:09:29.112125   27962 start.go:364] duration metric: took 21.568µs to acquireMachinesLock for "ha-135993-m03"
	I0920 17:09:29.112142   27962 start.go:93] Provisioning new machine with config: &{Name:ha-135993 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-135993 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.60 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 17:09:29.112230   27962 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0920 17:09:29.114039   27962 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0920 17:09:29.114124   27962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:09:29.114159   27962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:09:29.130067   27962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37683
	I0920 17:09:29.130534   27962 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:09:29.131025   27962 main.go:141] libmachine: Using API Version  1
	I0920 17:09:29.131052   27962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:09:29.131373   27962 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:09:29.131541   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetMachineName
	I0920 17:09:29.131727   27962 main.go:141] libmachine: (ha-135993-m03) Calling .DriverName
	I0920 17:09:29.131887   27962 start.go:159] libmachine.API.Create for "ha-135993" (driver="kvm2")
	I0920 17:09:29.131918   27962 client.go:168] LocalClient.Create starting
	I0920 17:09:29.131956   27962 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem
	I0920 17:09:29.131998   27962 main.go:141] libmachine: Decoding PEM data...
	I0920 17:09:29.132021   27962 main.go:141] libmachine: Parsing certificate...
	I0920 17:09:29.132086   27962 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem
	I0920 17:09:29.132115   27962 main.go:141] libmachine: Decoding PEM data...
	I0920 17:09:29.132130   27962 main.go:141] libmachine: Parsing certificate...
	I0920 17:09:29.132158   27962 main.go:141] libmachine: Running pre-create checks...
	I0920 17:09:29.132169   27962 main.go:141] libmachine: (ha-135993-m03) Calling .PreCreateCheck
	I0920 17:09:29.132361   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetConfigRaw
	I0920 17:09:29.132775   27962 main.go:141] libmachine: Creating machine...
	I0920 17:09:29.132791   27962 main.go:141] libmachine: (ha-135993-m03) Calling .Create
	I0920 17:09:29.132937   27962 main.go:141] libmachine: (ha-135993-m03) Creating KVM machine...
	I0920 17:09:29.134340   27962 main.go:141] libmachine: (ha-135993-m03) DBG | found existing default KVM network
	I0920 17:09:29.134482   27962 main.go:141] libmachine: (ha-135993-m03) DBG | found existing private KVM network mk-ha-135993
	I0920 17:09:29.134586   27962 main.go:141] libmachine: (ha-135993-m03) Setting up store path in /home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993-m03 ...
	I0920 17:09:29.134610   27962 main.go:141] libmachine: (ha-135993-m03) Building disk image from file:///home/jenkins/minikube-integration/19672-8777/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso
	I0920 17:09:29.134709   27962 main.go:141] libmachine: (ha-135993-m03) DBG | I0920 17:09:29.134570   28745 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19672-8777/.minikube
	I0920 17:09:29.134788   27962 main.go:141] libmachine: (ha-135993-m03) Downloading /home/jenkins/minikube-integration/19672-8777/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19672-8777/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso...
	I0920 17:09:29.623687   27962 main.go:141] libmachine: (ha-135993-m03) DBG | I0920 17:09:29.623559   28745 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993-m03/id_rsa...
	I0920 17:09:29.849339   27962 main.go:141] libmachine: (ha-135993-m03) DBG | I0920 17:09:29.849213   28745 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993-m03/ha-135993-m03.rawdisk...
	I0920 17:09:29.849379   27962 main.go:141] libmachine: (ha-135993-m03) DBG | Writing magic tar header
	I0920 17:09:29.849390   27962 main.go:141] libmachine: (ha-135993-m03) DBG | Writing SSH key tar header
	I0920 17:09:29.849398   27962 main.go:141] libmachine: (ha-135993-m03) DBG | I0920 17:09:29.849332   28745 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993-m03 ...
	I0920 17:09:29.849416   27962 main.go:141] libmachine: (ha-135993-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993-m03
	I0920 17:09:29.849450   27962 main.go:141] libmachine: (ha-135993-m03) Setting executable bit set on /home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993-m03 (perms=drwx------)
	I0920 17:09:29.849472   27962 main.go:141] libmachine: (ha-135993-m03) Setting executable bit set on /home/jenkins/minikube-integration/19672-8777/.minikube/machines (perms=drwxr-xr-x)
	I0920 17:09:29.849487   27962 main.go:141] libmachine: (ha-135993-m03) Setting executable bit set on /home/jenkins/minikube-integration/19672-8777/.minikube (perms=drwxr-xr-x)
	I0920 17:09:29.849501   27962 main.go:141] libmachine: (ha-135993-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-8777/.minikube/machines
	I0920 17:09:29.849511   27962 main.go:141] libmachine: (ha-135993-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-8777/.minikube
	I0920 17:09:29.849524   27962 main.go:141] libmachine: (ha-135993-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-8777
	I0920 17:09:29.849537   27962 main.go:141] libmachine: (ha-135993-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0920 17:09:29.849559   27962 main.go:141] libmachine: (ha-135993-m03) Setting executable bit set on /home/jenkins/minikube-integration/19672-8777 (perms=drwxrwxr-x)
	I0920 17:09:29.849572   27962 main.go:141] libmachine: (ha-135993-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0920 17:09:29.849581   27962 main.go:141] libmachine: (ha-135993-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0920 17:09:29.849589   27962 main.go:141] libmachine: (ha-135993-m03) DBG | Checking permissions on dir: /home/jenkins
	I0920 17:09:29.849596   27962 main.go:141] libmachine: (ha-135993-m03) DBG | Checking permissions on dir: /home
	I0920 17:09:29.849612   27962 main.go:141] libmachine: (ha-135993-m03) DBG | Skipping /home - not owner
	I0920 17:09:29.849623   27962 main.go:141] libmachine: (ha-135993-m03) Creating domain...
	I0920 17:09:29.850674   27962 main.go:141] libmachine: (ha-135993-m03) define libvirt domain using xml: 
	I0920 17:09:29.850697   27962 main.go:141] libmachine: (ha-135993-m03) <domain type='kvm'>
	I0920 17:09:29.850706   27962 main.go:141] libmachine: (ha-135993-m03)   <name>ha-135993-m03</name>
	I0920 17:09:29.850718   27962 main.go:141] libmachine: (ha-135993-m03)   <memory unit='MiB'>2200</memory>
	I0920 17:09:29.850725   27962 main.go:141] libmachine: (ha-135993-m03)   <vcpu>2</vcpu>
	I0920 17:09:29.850730   27962 main.go:141] libmachine: (ha-135993-m03)   <features>
	I0920 17:09:29.850737   27962 main.go:141] libmachine: (ha-135993-m03)     <acpi/>
	I0920 17:09:29.850744   27962 main.go:141] libmachine: (ha-135993-m03)     <apic/>
	I0920 17:09:29.850757   27962 main.go:141] libmachine: (ha-135993-m03)     <pae/>
	I0920 17:09:29.850769   27962 main.go:141] libmachine: (ha-135993-m03)     
	I0920 17:09:29.850776   27962 main.go:141] libmachine: (ha-135993-m03)   </features>
	I0920 17:09:29.850783   27962 main.go:141] libmachine: (ha-135993-m03)   <cpu mode='host-passthrough'>
	I0920 17:09:29.850803   27962 main.go:141] libmachine: (ha-135993-m03)   
	I0920 17:09:29.850826   27962 main.go:141] libmachine: (ha-135993-m03)   </cpu>
	I0920 17:09:29.850834   27962 main.go:141] libmachine: (ha-135993-m03)   <os>
	I0920 17:09:29.850839   27962 main.go:141] libmachine: (ha-135993-m03)     <type>hvm</type>
	I0920 17:09:29.850844   27962 main.go:141] libmachine: (ha-135993-m03)     <boot dev='cdrom'/>
	I0920 17:09:29.850850   27962 main.go:141] libmachine: (ha-135993-m03)     <boot dev='hd'/>
	I0920 17:09:29.850855   27962 main.go:141] libmachine: (ha-135993-m03)     <bootmenu enable='no'/>
	I0920 17:09:29.850866   27962 main.go:141] libmachine: (ha-135993-m03)   </os>
	I0920 17:09:29.850873   27962 main.go:141] libmachine: (ha-135993-m03)   <devices>
	I0920 17:09:29.850878   27962 main.go:141] libmachine: (ha-135993-m03)     <disk type='file' device='cdrom'>
	I0920 17:09:29.850887   27962 main.go:141] libmachine: (ha-135993-m03)       <source file='/home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993-m03/boot2docker.iso'/>
	I0920 17:09:29.850894   27962 main.go:141] libmachine: (ha-135993-m03)       <target dev='hdc' bus='scsi'/>
	I0920 17:09:29.850925   27962 main.go:141] libmachine: (ha-135993-m03)       <readonly/>
	I0920 17:09:29.850951   27962 main.go:141] libmachine: (ha-135993-m03)     </disk>
	I0920 17:09:29.850962   27962 main.go:141] libmachine: (ha-135993-m03)     <disk type='file' device='disk'>
	I0920 17:09:29.850974   27962 main.go:141] libmachine: (ha-135993-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0920 17:09:29.850990   27962 main.go:141] libmachine: (ha-135993-m03)       <source file='/home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993-m03/ha-135993-m03.rawdisk'/>
	I0920 17:09:29.851010   27962 main.go:141] libmachine: (ha-135993-m03)       <target dev='hda' bus='virtio'/>
	I0920 17:09:29.851030   27962 main.go:141] libmachine: (ha-135993-m03)     </disk>
	I0920 17:09:29.851045   27962 main.go:141] libmachine: (ha-135993-m03)     <interface type='network'>
	I0920 17:09:29.851055   27962 main.go:141] libmachine: (ha-135993-m03)       <source network='mk-ha-135993'/>
	I0920 17:09:29.851062   27962 main.go:141] libmachine: (ha-135993-m03)       <model type='virtio'/>
	I0920 17:09:29.851069   27962 main.go:141] libmachine: (ha-135993-m03)     </interface>
	I0920 17:09:29.851077   27962 main.go:141] libmachine: (ha-135993-m03)     <interface type='network'>
	I0920 17:09:29.851085   27962 main.go:141] libmachine: (ha-135993-m03)       <source network='default'/>
	I0920 17:09:29.851090   27962 main.go:141] libmachine: (ha-135993-m03)       <model type='virtio'/>
	I0920 17:09:29.851095   27962 main.go:141] libmachine: (ha-135993-m03)     </interface>
	I0920 17:09:29.851101   27962 main.go:141] libmachine: (ha-135993-m03)     <serial type='pty'>
	I0920 17:09:29.851109   27962 main.go:141] libmachine: (ha-135993-m03)       <target port='0'/>
	I0920 17:09:29.851115   27962 main.go:141] libmachine: (ha-135993-m03)     </serial>
	I0920 17:09:29.851133   27962 main.go:141] libmachine: (ha-135993-m03)     <console type='pty'>
	I0920 17:09:29.851153   27962 main.go:141] libmachine: (ha-135993-m03)       <target type='serial' port='0'/>
	I0920 17:09:29.851165   27962 main.go:141] libmachine: (ha-135993-m03)     </console>
	I0920 17:09:29.851172   27962 main.go:141] libmachine: (ha-135993-m03)     <rng model='virtio'>
	I0920 17:09:29.851184   27962 main.go:141] libmachine: (ha-135993-m03)       <backend model='random'>/dev/random</backend>
	I0920 17:09:29.851194   27962 main.go:141] libmachine: (ha-135993-m03)     </rng>
	I0920 17:09:29.851201   27962 main.go:141] libmachine: (ha-135993-m03)     
	I0920 17:09:29.851209   27962 main.go:141] libmachine: (ha-135993-m03)     
	I0920 17:09:29.851215   27962 main.go:141] libmachine: (ha-135993-m03)   </devices>
	I0920 17:09:29.851224   27962 main.go:141] libmachine: (ha-135993-m03) </domain>
	I0920 17:09:29.851251   27962 main.go:141] libmachine: (ha-135993-m03) 
	I0920 17:09:29.858905   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined MAC address 52:54:00:e3:0b:70 in network default
	I0920 17:09:29.859443   27962 main.go:141] libmachine: (ha-135993-m03) Ensuring networks are active...
	I0920 17:09:29.859461   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:29.860217   27962 main.go:141] libmachine: (ha-135993-m03) Ensuring network default is active
	I0920 17:09:29.860531   27962 main.go:141] libmachine: (ha-135993-m03) Ensuring network mk-ha-135993 is active
	I0920 17:09:29.860904   27962 main.go:141] libmachine: (ha-135993-m03) Getting domain xml...
	I0920 17:09:29.861590   27962 main.go:141] libmachine: (ha-135993-m03) Creating domain...
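The kvm2 driver has just printed the libvirt domain XML for ha-135993-m03 and asked libvirt to create it. As a hedged sketch (not the driver's actual code), the same define-and-start flow can be written against the libvirt.org/go/libvirt bindings; the XML file name here is a hypothetical placeholder for the <domain> document shown above.

```go
// Minimal sketch (assumption, not the kvm2 driver's code): define and boot a
// libvirt domain from an XML description, using qemu:///system as in the profile config.
package main

import (
	"fmt"
	"os"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	xml, err := os.ReadFile("ha-135993-m03.xml") // hypothetical file holding the <domain> XML above
	if err != nil {
		panic(err)
	}
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	dom, err := conn.DomainDefineXML(string(xml)) // persistently define the domain
	if err != nil {
		panic(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil { // boot the defined domain
		panic(err)
	}
	name, _ := dom.GetName()
	fmt.Printf("domain %s defined and started\n", name)
}
```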
	I0920 17:09:31.187018   27962 main.go:141] libmachine: (ha-135993-m03) Waiting to get IP...
	I0920 17:09:31.187715   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:31.188084   27962 main.go:141] libmachine: (ha-135993-m03) DBG | unable to find current IP address of domain ha-135993-m03 in network mk-ha-135993
	I0920 17:09:31.188106   27962 main.go:141] libmachine: (ha-135993-m03) DBG | I0920 17:09:31.188068   28745 retry.go:31] will retry after 213.512063ms: waiting for machine to come up
	I0920 17:09:31.403627   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:31.404039   27962 main.go:141] libmachine: (ha-135993-m03) DBG | unable to find current IP address of domain ha-135993-m03 in network mk-ha-135993
	I0920 17:09:31.404070   27962 main.go:141] libmachine: (ha-135993-m03) DBG | I0920 17:09:31.403991   28745 retry.go:31] will retry after 361.212458ms: waiting for machine to come up
	I0920 17:09:31.766642   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:31.767089   27962 main.go:141] libmachine: (ha-135993-m03) DBG | unable to find current IP address of domain ha-135993-m03 in network mk-ha-135993
	I0920 17:09:31.767116   27962 main.go:141] libmachine: (ha-135993-m03) DBG | I0920 17:09:31.767037   28745 retry.go:31] will retry after 376.833715ms: waiting for machine to come up
	I0920 17:09:32.145427   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:32.145898   27962 main.go:141] libmachine: (ha-135993-m03) DBG | unable to find current IP address of domain ha-135993-m03 in network mk-ha-135993
	I0920 17:09:32.145947   27962 main.go:141] libmachine: (ha-135993-m03) DBG | I0920 17:09:32.145871   28745 retry.go:31] will retry after 557.65015ms: waiting for machine to come up
	I0920 17:09:32.705540   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:32.705975   27962 main.go:141] libmachine: (ha-135993-m03) DBG | unable to find current IP address of domain ha-135993-m03 in network mk-ha-135993
	I0920 17:09:32.706023   27962 main.go:141] libmachine: (ha-135993-m03) DBG | I0920 17:09:32.705956   28745 retry.go:31] will retry after 695.507494ms: waiting for machine to come up
	I0920 17:09:33.402909   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:33.403356   27962 main.go:141] libmachine: (ha-135993-m03) DBG | unable to find current IP address of domain ha-135993-m03 in network mk-ha-135993
	I0920 17:09:33.403389   27962 main.go:141] libmachine: (ha-135993-m03) DBG | I0920 17:09:33.403304   28745 retry.go:31] will retry after 645.712565ms: waiting for machine to come up
	I0920 17:09:34.051477   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:34.052378   27962 main.go:141] libmachine: (ha-135993-m03) DBG | unable to find current IP address of domain ha-135993-m03 in network mk-ha-135993
	I0920 17:09:34.052405   27962 main.go:141] libmachine: (ha-135993-m03) DBG | I0920 17:09:34.052280   28745 retry.go:31] will retry after 770.593421ms: waiting for machine to come up
	I0920 17:09:34.824986   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:34.825490   27962 main.go:141] libmachine: (ha-135993-m03) DBG | unable to find current IP address of domain ha-135993-m03 in network mk-ha-135993
	I0920 17:09:34.825514   27962 main.go:141] libmachine: (ha-135993-m03) DBG | I0920 17:09:34.825451   28745 retry.go:31] will retry after 1.327368797s: waiting for machine to come up
	I0920 17:09:36.154205   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:36.154624   27962 main.go:141] libmachine: (ha-135993-m03) DBG | unable to find current IP address of domain ha-135993-m03 in network mk-ha-135993
	I0920 17:09:36.154646   27962 main.go:141] libmachine: (ha-135993-m03) DBG | I0920 17:09:36.154579   28745 retry.go:31] will retry after 1.581269715s: waiting for machine to come up
	I0920 17:09:37.738322   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:37.738736   27962 main.go:141] libmachine: (ha-135993-m03) DBG | unable to find current IP address of domain ha-135993-m03 in network mk-ha-135993
	I0920 17:09:37.738762   27962 main.go:141] libmachine: (ha-135993-m03) DBG | I0920 17:09:37.738689   28745 retry.go:31] will retry after 1.459267896s: waiting for machine to come up
	I0920 17:09:39.199274   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:39.199678   27962 main.go:141] libmachine: (ha-135993-m03) DBG | unable to find current IP address of domain ha-135993-m03 in network mk-ha-135993
	I0920 17:09:39.199706   27962 main.go:141] libmachine: (ha-135993-m03) DBG | I0920 17:09:39.199627   28745 retry.go:31] will retry after 2.386585249s: waiting for machine to come up
	I0920 17:09:41.588281   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:41.588804   27962 main.go:141] libmachine: (ha-135993-m03) DBG | unable to find current IP address of domain ha-135993-m03 in network mk-ha-135993
	I0920 17:09:41.588834   27962 main.go:141] libmachine: (ha-135993-m03) DBG | I0920 17:09:41.588752   28745 retry.go:31] will retry after 2.639705596s: waiting for machine to come up
	I0920 17:09:44.229971   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:44.230371   27962 main.go:141] libmachine: (ha-135993-m03) DBG | unable to find current IP address of domain ha-135993-m03 in network mk-ha-135993
	I0920 17:09:44.230422   27962 main.go:141] libmachine: (ha-135993-m03) DBG | I0920 17:09:44.230347   28745 retry.go:31] will retry after 3.819742823s: waiting for machine to come up
	I0920 17:09:48.054340   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:48.054705   27962 main.go:141] libmachine: (ha-135993-m03) DBG | unable to find current IP address of domain ha-135993-m03 in network mk-ha-135993
	I0920 17:09:48.054731   27962 main.go:141] libmachine: (ha-135993-m03) DBG | I0920 17:09:48.054671   28745 retry.go:31] will retry after 4.961691445s: waiting for machine to come up
	I0920 17:09:53.018825   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:53.019259   27962 main.go:141] libmachine: (ha-135993-m03) Found IP for machine: 192.168.39.133
	I0920 17:09:53.019281   27962 main.go:141] libmachine: (ha-135993-m03) Reserving static IP address...
	I0920 17:09:53.019295   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has current primary IP address 192.168.39.133 and MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:53.019682   27962 main.go:141] libmachine: (ha-135993-m03) DBG | unable to find host DHCP lease matching {name: "ha-135993-m03", mac: "52:54:00:4a:49:98", ip: "192.168.39.133"} in network mk-ha-135993
	I0920 17:09:53.093855   27962 main.go:141] libmachine: (ha-135993-m03) DBG | Getting to WaitForSSH function...
	I0920 17:09:53.093888   27962 main.go:141] libmachine: (ha-135993-m03) Reserved static IP address: 192.168.39.133
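The retry lines above are minikube's backoff loop: it repeatedly asks libvirt for a DHCP lease matching the guest's MAC on the mk-ha-135993 network, sleeping a little longer each round, until an address shows up. A hedged shell equivalent (fixed 2-second polling instead of the growing backoff seen in the log):

    # wait until 52:54:00:4a:49:98 appears in the network's DHCP leases
    until virsh --connect qemu:///system net-dhcp-leases mk-ha-135993 | grep -q '52:54:00:4a:49:98'; do
        sleep 2
    done
    virsh --connect qemu:///system net-dhcp-leases mk-ha-135993 | grep '52:54:00:4a:49:98'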
	I0920 17:09:53.093913   27962 main.go:141] libmachine: (ha-135993-m03) Waiting for SSH to be available...
	I0920 17:09:53.096549   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:53.096917   27962 main.go:141] libmachine: (ha-135993-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:49:98", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:09:45 +0000 UTC Type:0 Mac:52:54:00:4a:49:98 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:minikube Clientid:01:52:54:00:4a:49:98}
	I0920 17:09:53.096942   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:53.097072   27962 main.go:141] libmachine: (ha-135993-m03) DBG | Using SSH client type: external
	I0920 17:09:53.097099   27962 main.go:141] libmachine: (ha-135993-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993-m03/id_rsa (-rw-------)
	I0920 17:09:53.097137   27962 main.go:141] libmachine: (ha-135993-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.133 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 17:09:53.097159   27962 main.go:141] libmachine: (ha-135993-m03) DBG | About to run SSH command:
	I0920 17:09:53.097174   27962 main.go:141] libmachine: (ha-135993-m03) DBG | exit 0
	I0920 17:09:53.225462   27962 main.go:141] libmachine: (ha-135993-m03) DBG | SSH cmd err, output: <nil>: 
	I0920 17:09:53.225738   27962 main.go:141] libmachine: (ha-135993-m03) KVM machine creation complete!
	I0920 17:09:53.226079   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetConfigRaw
	I0920 17:09:53.226700   27962 main.go:141] libmachine: (ha-135993-m03) Calling .DriverName
	I0920 17:09:53.226858   27962 main.go:141] libmachine: (ha-135993-m03) Calling .DriverName
	I0920 17:09:53.226985   27962 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0920 17:09:53.226999   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetState
	I0920 17:09:53.228014   27962 main.go:141] libmachine: Detecting operating system of created instance...
	I0920 17:09:53.228026   27962 main.go:141] libmachine: Waiting for SSH to be available...
	I0920 17:09:53.228031   27962 main.go:141] libmachine: Getting to WaitForSSH function...
	I0920 17:09:53.228038   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHHostname
	I0920 17:09:53.230141   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:53.230494   27962 main.go:141] libmachine: (ha-135993-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:49:98", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:09:45 +0000 UTC Type:0 Mac:52:54:00:4a:49:98 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-135993-m03 Clientid:01:52:54:00:4a:49:98}
	I0920 17:09:53.230517   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:53.230669   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHPort
	I0920 17:09:53.230844   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHKeyPath
	I0920 17:09:53.230948   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHKeyPath
	I0920 17:09:53.231082   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHUsername
	I0920 17:09:53.231200   27962 main.go:141] libmachine: Using SSH client type: native
	I0920 17:09:53.231420   27962 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.133 22 <nil> <nil>}
	I0920 17:09:53.231435   27962 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0920 17:09:53.341375   27962 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 17:09:53.341396   27962 main.go:141] libmachine: Detecting the provisioner...
	I0920 17:09:53.341403   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHHostname
	I0920 17:09:53.344112   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:53.344480   27962 main.go:141] libmachine: (ha-135993-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:49:98", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:09:45 +0000 UTC Type:0 Mac:52:54:00:4a:49:98 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-135993-m03 Clientid:01:52:54:00:4a:49:98}
	I0920 17:09:53.344511   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:53.344666   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHPort
	I0920 17:09:53.344839   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHKeyPath
	I0920 17:09:53.344987   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHKeyPath
	I0920 17:09:53.345174   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHUsername
	I0920 17:09:53.345354   27962 main.go:141] libmachine: Using SSH client type: native
	I0920 17:09:53.345510   27962 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.133 22 <nil> <nil>}
	I0920 17:09:53.345521   27962 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0920 17:09:53.458337   27962 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0920 17:09:53.458388   27962 main.go:141] libmachine: found compatible host: buildroot
	I0920 17:09:53.458394   27962 main.go:141] libmachine: Provisioning with buildroot...
	I0920 17:09:53.458407   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetMachineName
	I0920 17:09:53.458649   27962 buildroot.go:166] provisioning hostname "ha-135993-m03"
	I0920 17:09:53.458675   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetMachineName
	I0920 17:09:53.458849   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHHostname
	I0920 17:09:53.461596   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:53.461987   27962 main.go:141] libmachine: (ha-135993-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:49:98", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:09:45 +0000 UTC Type:0 Mac:52:54:00:4a:49:98 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-135993-m03 Clientid:01:52:54:00:4a:49:98}
	I0920 17:09:53.462013   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:53.462204   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHPort
	I0920 17:09:53.462360   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHKeyPath
	I0920 17:09:53.462538   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHKeyPath
	I0920 17:09:53.462693   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHUsername
	I0920 17:09:53.462836   27962 main.go:141] libmachine: Using SSH client type: native
	I0920 17:09:53.463061   27962 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.133 22 <nil> <nil>}
	I0920 17:09:53.463079   27962 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-135993-m03 && echo "ha-135993-m03" | sudo tee /etc/hostname
	I0920 17:09:53.590131   27962 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-135993-m03
	
	I0920 17:09:53.590160   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHHostname
	I0920 17:09:53.592877   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:53.593210   27962 main.go:141] libmachine: (ha-135993-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:49:98", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:09:45 +0000 UTC Type:0 Mac:52:54:00:4a:49:98 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-135993-m03 Clientid:01:52:54:00:4a:49:98}
	I0920 17:09:53.593257   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:53.593412   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHPort
	I0920 17:09:53.593615   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHKeyPath
	I0920 17:09:53.593758   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHKeyPath
	I0920 17:09:53.593944   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHUsername
	I0920 17:09:53.594124   27962 main.go:141] libmachine: Using SSH client type: native
	I0920 17:09:53.594335   27962 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.133 22 <nil> <nil>}
	I0920 17:09:53.594356   27962 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-135993-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-135993-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-135993-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 17:09:53.715013   27962 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 17:09:53.715044   27962 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19672-8777/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-8777/.minikube}
	I0920 17:09:53.715074   27962 buildroot.go:174] setting up certificates
	I0920 17:09:53.715086   27962 provision.go:84] configureAuth start
	I0920 17:09:53.715098   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetMachineName
	I0920 17:09:53.715402   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetIP
	I0920 17:09:53.718102   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:53.718382   27962 main.go:141] libmachine: (ha-135993-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:49:98", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:09:45 +0000 UTC Type:0 Mac:52:54:00:4a:49:98 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-135993-m03 Clientid:01:52:54:00:4a:49:98}
	I0920 17:09:53.718400   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:53.718579   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHHostname
	I0920 17:09:53.720967   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:53.721315   27962 main.go:141] libmachine: (ha-135993-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:49:98", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:09:45 +0000 UTC Type:0 Mac:52:54:00:4a:49:98 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-135993-m03 Clientid:01:52:54:00:4a:49:98}
	I0920 17:09:53.721341   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:53.721476   27962 provision.go:143] copyHostCerts
	I0920 17:09:53.721506   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem
	I0920 17:09:53.721536   27962 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem, removing ...
	I0920 17:09:53.721544   27962 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem
	I0920 17:09:53.721632   27962 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem (1082 bytes)
	I0920 17:09:53.721706   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem
	I0920 17:09:53.721728   27962 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem, removing ...
	I0920 17:09:53.721734   27962 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem
	I0920 17:09:53.721757   27962 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem (1123 bytes)
	I0920 17:09:53.721801   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem
	I0920 17:09:53.721822   27962 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem, removing ...
	I0920 17:09:53.721828   27962 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem
	I0920 17:09:53.721880   27962 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem (1675 bytes)
	I0920 17:09:53.721951   27962 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem org=jenkins.ha-135993-m03 san=[127.0.0.1 192.168.39.133 ha-135993-m03 localhost minikube]
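Here provision.go signs a per-machine server certificate against the shared minikube CA, with the SAN list shown above so the endpoint validates whether it is reached as 127.0.0.1, 192.168.39.133, ha-135993-m03, localhost or minikube. A minimal openssl sketch of the same idea (file names and the 397-day lifetime are illustrative, not what minikube uses internally):

    openssl genrsa -out server-key.pem 2048
    openssl req -new -key server-key.pem -subj "/O=jenkins.ha-135993-m03" -out server.csr
    printf 'subjectAltName=IP:127.0.0.1,IP:192.168.39.133,DNS:ha-135993-m03,DNS:localhost,DNS:minikube\n' > san.cnf
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
        -days 397 -out server.pem -extfile san.cnf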
	I0920 17:09:53.848713   27962 provision.go:177] copyRemoteCerts
	I0920 17:09:53.848773   27962 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 17:09:53.848800   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHHostname
	I0920 17:09:53.851795   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:53.852202   27962 main.go:141] libmachine: (ha-135993-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:49:98", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:09:45 +0000 UTC Type:0 Mac:52:54:00:4a:49:98 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-135993-m03 Clientid:01:52:54:00:4a:49:98}
	I0920 17:09:53.852234   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:53.852521   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHPort
	I0920 17:09:53.852708   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHKeyPath
	I0920 17:09:53.852882   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHUsername
	I0920 17:09:53.853058   27962 sshutil.go:53] new ssh client: &{IP:192.168.39.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993-m03/id_rsa Username:docker}
	I0920 17:09:53.939365   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0920 17:09:53.939433   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0920 17:09:53.962495   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0920 17:09:53.962567   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0920 17:09:53.985499   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0920 17:09:53.985574   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0920 17:09:54.008320   27962 provision.go:87] duration metric: took 293.220585ms to configureAuth
	I0920 17:09:54.008349   27962 buildroot.go:189] setting minikube options for container-runtime
	I0920 17:09:54.008604   27962 config.go:182] Loaded profile config "ha-135993": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 17:09:54.008700   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHHostname
	I0920 17:09:54.011605   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:54.011968   27962 main.go:141] libmachine: (ha-135993-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:49:98", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:09:45 +0000 UTC Type:0 Mac:52:54:00:4a:49:98 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-135993-m03 Clientid:01:52:54:00:4a:49:98}
	I0920 17:09:54.012001   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:54.012140   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHPort
	I0920 17:09:54.012318   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHKeyPath
	I0920 17:09:54.012493   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHKeyPath
	I0920 17:09:54.012609   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHUsername
	I0920 17:09:54.012754   27962 main.go:141] libmachine: Using SSH client type: native
	I0920 17:09:54.012956   27962 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.133 22 <nil> <nil>}
	I0920 17:09:54.012972   27962 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 17:09:54.245416   27962 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 17:09:54.245443   27962 main.go:141] libmachine: Checking connection to Docker...
	I0920 17:09:54.245453   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetURL
	I0920 17:09:54.246780   27962 main.go:141] libmachine: (ha-135993-m03) DBG | Using libvirt version 6000000
	I0920 17:09:54.249527   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:54.249947   27962 main.go:141] libmachine: (ha-135993-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:49:98", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:09:45 +0000 UTC Type:0 Mac:52:54:00:4a:49:98 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-135993-m03 Clientid:01:52:54:00:4a:49:98}
	I0920 17:09:54.249971   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:54.250158   27962 main.go:141] libmachine: Docker is up and running!
	I0920 17:09:54.250187   27962 main.go:141] libmachine: Reticulating splines...
	I0920 17:09:54.250195   27962 client.go:171] duration metric: took 25.118268806s to LocalClient.Create
	I0920 17:09:54.250222   27962 start.go:167] duration metric: took 25.118338101s to libmachine.API.Create "ha-135993"
	I0920 17:09:54.250241   27962 start.go:293] postStartSetup for "ha-135993-m03" (driver="kvm2")
	I0920 17:09:54.250252   27962 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 17:09:54.250268   27962 main.go:141] libmachine: (ha-135993-m03) Calling .DriverName
	I0920 17:09:54.250588   27962 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 17:09:54.250617   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHHostname
	I0920 17:09:54.252892   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:54.253325   27962 main.go:141] libmachine: (ha-135993-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:49:98", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:09:45 +0000 UTC Type:0 Mac:52:54:00:4a:49:98 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-135993-m03 Clientid:01:52:54:00:4a:49:98}
	I0920 17:09:54.253360   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:54.253498   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHPort
	I0920 17:09:54.253673   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHKeyPath
	I0920 17:09:54.253825   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHUsername
	I0920 17:09:54.253986   27962 sshutil.go:53] new ssh client: &{IP:192.168.39.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993-m03/id_rsa Username:docker}
	I0920 17:09:54.339595   27962 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 17:09:54.343490   27962 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 17:09:54.343513   27962 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8777/.minikube/addons for local assets ...
	I0920 17:09:54.343594   27962 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8777/.minikube/files for local assets ...
	I0920 17:09:54.343690   27962 filesync.go:149] local asset: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem -> 159732.pem in /etc/ssl/certs
	I0920 17:09:54.343700   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem -> /etc/ssl/certs/159732.pem
	I0920 17:09:54.343811   27962 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 17:09:54.352574   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem --> /etc/ssl/certs/159732.pem (1708 bytes)
	I0920 17:09:54.376021   27962 start.go:296] duration metric: took 125.763298ms for postStartSetup
	I0920 17:09:54.376085   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetConfigRaw
	I0920 17:09:54.376726   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetIP
	I0920 17:09:54.379455   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:54.379860   27962 main.go:141] libmachine: (ha-135993-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:49:98", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:09:45 +0000 UTC Type:0 Mac:52:54:00:4a:49:98 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-135993-m03 Clientid:01:52:54:00:4a:49:98}
	I0920 17:09:54.379889   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:54.380133   27962 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/config.json ...
	I0920 17:09:54.380334   27962 start.go:128] duration metric: took 25.268094288s to createHost
	I0920 17:09:54.380356   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHHostname
	I0920 17:09:54.382551   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:54.382926   27962 main.go:141] libmachine: (ha-135993-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:49:98", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:09:45 +0000 UTC Type:0 Mac:52:54:00:4a:49:98 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-135993-m03 Clientid:01:52:54:00:4a:49:98}
	I0920 17:09:54.382948   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:54.383118   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHPort
	I0920 17:09:54.383308   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHKeyPath
	I0920 17:09:54.383448   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHKeyPath
	I0920 17:09:54.383614   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHUsername
	I0920 17:09:54.383768   27962 main.go:141] libmachine: Using SSH client type: native
	I0920 17:09:54.383925   27962 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.133 22 <nil> <nil>}
	I0920 17:09:54.383934   27962 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 17:09:54.498180   27962 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726852194.467876031
	
	I0920 17:09:54.498204   27962 fix.go:216] guest clock: 1726852194.467876031
	I0920 17:09:54.498211   27962 fix.go:229] Guest: 2024-09-20 17:09:54.467876031 +0000 UTC Remote: 2024-09-20 17:09:54.38034625 +0000 UTC m=+146.191055828 (delta=87.529781ms)
	I0920 17:09:54.498227   27962 fix.go:200] guest clock delta is within tolerance: 87.529781ms
	I0920 17:09:54.498231   27962 start.go:83] releasing machines lock for "ha-135993-m03", held for 25.386097949s
	I0920 17:09:54.498253   27962 main.go:141] libmachine: (ha-135993-m03) Calling .DriverName
	I0920 17:09:54.498534   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetIP
	I0920 17:09:54.501028   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:54.501386   27962 main.go:141] libmachine: (ha-135993-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:49:98", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:09:45 +0000 UTC Type:0 Mac:52:54:00:4a:49:98 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-135993-m03 Clientid:01:52:54:00:4a:49:98}
	I0920 17:09:54.501414   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:54.503574   27962 out.go:177] * Found network options:
	I0920 17:09:54.504800   27962 out.go:177]   - NO_PROXY=192.168.39.60,192.168.39.227
	W0920 17:09:54.505950   27962 proxy.go:119] fail to check proxy env: Error ip not in block
	W0920 17:09:54.505970   27962 proxy.go:119] fail to check proxy env: Error ip not in block
	I0920 17:09:54.505986   27962 main.go:141] libmachine: (ha-135993-m03) Calling .DriverName
	I0920 17:09:54.506533   27962 main.go:141] libmachine: (ha-135993-m03) Calling .DriverName
	I0920 17:09:54.506677   27962 main.go:141] libmachine: (ha-135993-m03) Calling .DriverName
	I0920 17:09:54.506748   27962 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 17:09:54.506777   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHHostname
	W0920 17:09:54.506811   27962 proxy.go:119] fail to check proxy env: Error ip not in block
	W0920 17:09:54.506837   27962 proxy.go:119] fail to check proxy env: Error ip not in block
	I0920 17:09:54.506918   27962 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 17:09:54.506942   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHHostname
	I0920 17:09:54.510430   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:54.510572   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:54.510840   27962 main.go:141] libmachine: (ha-135993-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:49:98", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:09:45 +0000 UTC Type:0 Mac:52:54:00:4a:49:98 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-135993-m03 Clientid:01:52:54:00:4a:49:98}
	I0920 17:09:54.510857   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:54.511009   27962 main.go:141] libmachine: (ha-135993-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:49:98", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:09:45 +0000 UTC Type:0 Mac:52:54:00:4a:49:98 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-135993-m03 Clientid:01:52:54:00:4a:49:98}
	I0920 17:09:54.511022   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHPort
	I0920 17:09:54.511025   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:54.511158   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHPort
	I0920 17:09:54.511238   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHKeyPath
	I0920 17:09:54.511306   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHKeyPath
	I0920 17:09:54.511366   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHUsername
	I0920 17:09:54.511419   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHUsername
	I0920 17:09:54.511477   27962 sshutil.go:53] new ssh client: &{IP:192.168.39.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993-m03/id_rsa Username:docker}
	I0920 17:09:54.511516   27962 sshutil.go:53] new ssh client: &{IP:192.168.39.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993-m03/id_rsa Username:docker}
	I0920 17:09:54.752778   27962 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 17:09:54.758470   27962 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 17:09:54.758545   27962 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 17:09:54.777293   27962 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 17:09:54.777314   27962 start.go:495] detecting cgroup driver to use...
	I0920 17:09:54.777373   27962 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 17:09:54.794867   27962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 17:09:54.812379   27962 docker.go:217] disabling cri-docker service (if available) ...
	I0920 17:09:54.812435   27962 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 17:09:54.829513   27962 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 17:09:54.844058   27962 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 17:09:54.965032   27962 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 17:09:55.105410   27962 docker.go:233] disabling docker service ...
	I0920 17:09:55.105473   27962 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 17:09:55.119024   27962 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 17:09:55.131474   27962 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 17:09:55.280550   27962 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 17:09:55.424589   27962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 17:09:55.438591   27962 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 17:09:55.457023   27962 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 17:09:55.457079   27962 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:09:55.469113   27962 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 17:09:55.469204   27962 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:09:55.480768   27962 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:09:55.491997   27962 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:09:55.503252   27962 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 17:09:55.515007   27962 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:09:55.527072   27962 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:09:55.544868   27962 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
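Taken together, the sed edits above converge on a small set of CRI-O settings in the 02-crio.conf drop-in: the pause image, the cgroupfs cgroup manager, conmon's cgroup, and an unprivileged-port sysctl. A rough sketch of the end state (section headers assumed from CRI-O's stock layout; the real drop-in carries more options than shown):

    # approximate effect of the edits on /etc/crio/crio.conf.d/02-crio.conf
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]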
	I0920 17:09:55.556070   27962 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 17:09:55.566274   27962 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 17:09:55.566347   27962 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 17:09:55.579815   27962 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
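The modprobe and ip_forward writes above cover the usual bridged-traffic and forwarding prerequisites for kube-proxy and the CNI, but only for the current boot. The log does not show minikube persisting them (the ISO handles that on its own); on a regular host the persistent form would typically look like:

    # load br_netfilter at boot and persist the sysctls
    echo br_netfilter | sudo tee /etc/modules-load.d/k8s.conf
    sudo tee /etc/sysctl.d/k8s.conf <<'EOF'
    net.bridge.bridge-nf-call-iptables = 1
    net.ipv4.ip_forward = 1
    EOF
    sudo sysctl --system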
	I0920 17:09:55.591271   27962 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 17:09:55.721172   27962 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 17:09:55.816671   27962 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 17:09:55.816750   27962 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 17:09:55.821593   27962 start.go:563] Will wait 60s for crictl version
	I0920 17:09:55.821670   27962 ssh_runner.go:195] Run: which crictl
	I0920 17:09:55.825326   27962 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 17:09:55.861139   27962 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 17:09:55.861214   27962 ssh_runner.go:195] Run: crio --version
	I0920 17:09:55.889848   27962 ssh_runner.go:195] Run: crio --version
	I0920 17:09:55.919422   27962 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 17:09:55.920775   27962 out.go:177]   - env NO_PROXY=192.168.39.60
	I0920 17:09:55.922083   27962 out.go:177]   - env NO_PROXY=192.168.39.60,192.168.39.227
	I0920 17:09:55.923747   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetIP
	I0920 17:09:55.926252   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:55.926556   27962 main.go:141] libmachine: (ha-135993-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:49:98", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:09:45 +0000 UTC Type:0 Mac:52:54:00:4a:49:98 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-135993-m03 Clientid:01:52:54:00:4a:49:98}
	I0920 17:09:55.926586   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:55.926743   27962 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0920 17:09:55.930814   27962 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 17:09:55.943504   27962 mustload.go:65] Loading cluster: ha-135993
	I0920 17:09:55.943748   27962 config.go:182] Loaded profile config "ha-135993": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 17:09:55.944067   27962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:09:55.944109   27962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:09:55.959177   27962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34285
	I0920 17:09:55.959707   27962 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:09:55.960208   27962 main.go:141] libmachine: Using API Version  1
	I0920 17:09:55.960231   27962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:09:55.960549   27962 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:09:55.960794   27962 main.go:141] libmachine: (ha-135993) Calling .GetState
	I0920 17:09:55.962489   27962 host.go:66] Checking if "ha-135993" exists ...
	I0920 17:09:55.962798   27962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:09:55.962843   27962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:09:55.977302   27962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45181
	I0920 17:09:55.977710   27962 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:09:55.978227   27962 main.go:141] libmachine: Using API Version  1
	I0920 17:09:55.978253   27962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:09:55.978558   27962 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:09:55.978742   27962 main.go:141] libmachine: (ha-135993) Calling .DriverName
	I0920 17:09:55.978879   27962 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993 for IP: 192.168.39.133
	I0920 17:09:55.978893   27962 certs.go:194] generating shared ca certs ...
	I0920 17:09:55.978913   27962 certs.go:226] acquiring lock for ca certs: {Name:mkc7ef6c737c6bdc3fdd9dcff8f57029c020d8f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:09:55.979064   27962 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key
	I0920 17:09:55.979123   27962 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key
	I0920 17:09:55.979137   27962 certs.go:256] generating profile certs ...
	I0920 17:09:55.979252   27962 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/client.key
	I0920 17:09:55.979287   27962 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.key.9a1e7345
	I0920 17:09:55.979305   27962 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.crt.9a1e7345 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.60 192.168.39.227 192.168.39.133 192.168.39.254]
	I0920 17:09:56.205622   27962 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.crt.9a1e7345 ...
	I0920 17:09:56.205652   27962 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.crt.9a1e7345: {Name:mk741001df891368c2b48ce6ca33636b00474c7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:09:56.205862   27962 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.key.9a1e7345 ...
	I0920 17:09:56.205885   27962 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.key.9a1e7345: {Name:mka8bfccee8c9e3909ae2b3c3cb9e59688362565 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:09:56.206039   27962 certs.go:381] copying /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.crt.9a1e7345 -> /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.crt
	I0920 17:09:56.206211   27962 certs.go:385] copying /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.key.9a1e7345 -> /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.key
	I0920 17:09:56.206388   27962 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/proxy-client.key
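The apiserver certificate has to be regenerated at this point because its SAN list must now cover the new control-plane node (192.168.39.133) in addition to the service IPs, localhost, the existing control-plane IPs and the 192.168.39.254 cluster VIP; the client and proxy-client certs, by contrast, are reused as-is. A quick way to confirm which SANs ended up on the resulting cert (path taken from the log):

    openssl x509 -noout -text \
        -in /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.crt \
        | grep -A1 'Subject Alternative Name'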
	I0920 17:09:56.206407   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0920 17:09:56.206426   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0920 17:09:56.206446   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0920 17:09:56.206464   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0920 17:09:56.206480   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0920 17:09:56.206494   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0920 17:09:56.206511   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0920 17:09:56.225918   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0920 17:09:56.225997   27962 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973.pem (1338 bytes)
	W0920 17:09:56.226041   27962 certs.go:480] ignoring /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973_empty.pem, impossibly tiny 0 bytes
	I0920 17:09:56.226052   27962 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 17:09:56.226073   27962 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem (1082 bytes)
	I0920 17:09:56.226113   27962 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem (1123 bytes)
	I0920 17:09:56.226142   27962 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem (1675 bytes)
	I0920 17:09:56.226194   27962 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem (1708 bytes)
	I0920 17:09:56.226220   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem -> /usr/share/ca-certificates/159732.pem
	I0920 17:09:56.226236   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:09:56.226256   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973.pem -> /usr/share/ca-certificates/15973.pem
	I0920 17:09:56.226300   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHHostname
	I0920 17:09:56.229337   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:09:56.229721   27962 main.go:141] libmachine: (ha-135993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:09", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:07:43 +0000 UTC Type:0 Mac:52:54:00:99:26:09 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-135993 Clientid:01:52:54:00:99:26:09}
	I0920 17:09:56.229749   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined IP address 192.168.39.60 and MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:09:56.229930   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHPort
	I0920 17:09:56.230128   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHKeyPath
	I0920 17:09:56.230302   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHUsername
	I0920 17:09:56.230392   27962 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993/id_rsa Username:docker}
	I0920 17:09:56.306176   27962 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0920 17:09:56.311850   27962 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0920 17:09:56.324295   27962 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0920 17:09:56.330346   27962 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0920 17:09:56.342029   27962 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0920 17:09:56.345907   27962 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0920 17:09:56.356185   27962 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0920 17:09:56.360478   27962 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0920 17:09:56.372648   27962 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0920 17:09:56.377310   27962 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0920 17:09:56.392310   27962 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0920 17:09:56.398873   27962 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0920 17:09:56.416705   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 17:09:56.442036   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 17:09:56.465893   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 17:09:56.491259   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0920 17:09:56.515541   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0920 17:09:56.538762   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0920 17:09:56.561229   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 17:09:56.583847   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 17:09:56.607936   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem --> /usr/share/ca-certificates/159732.pem (1708 bytes)
	I0920 17:09:56.634323   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 17:09:56.662363   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973.pem --> /usr/share/ca-certificates/15973.pem (1338 bytes)
	I0920 17:09:56.687040   27962 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0920 17:09:56.702914   27962 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0920 17:09:56.719096   27962 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0920 17:09:56.735043   27962 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0920 17:09:56.751375   27962 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0920 17:09:56.767907   27962 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0920 17:09:56.785247   27962 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0920 17:09:56.800819   27962 ssh_runner.go:195] Run: openssl version
	I0920 17:09:56.807059   27962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15973.pem && ln -fs /usr/share/ca-certificates/15973.pem /etc/ssl/certs/15973.pem"
	I0920 17:09:56.819325   27962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15973.pem
	I0920 17:09:56.823881   27962 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 17:04 /usr/share/ca-certificates/15973.pem
	I0920 17:09:56.823942   27962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15973.pem
	I0920 17:09:56.829735   27962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15973.pem /etc/ssl/certs/51391683.0"
	I0920 17:09:56.840229   27962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/159732.pem && ln -fs /usr/share/ca-certificates/159732.pem /etc/ssl/certs/159732.pem"
	I0920 17:09:56.850295   27962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/159732.pem
	I0920 17:09:56.854454   27962 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 17:04 /usr/share/ca-certificates/159732.pem
	I0920 17:09:56.854516   27962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/159732.pem
	I0920 17:09:56.859987   27962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/159732.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 17:09:56.870869   27962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 17:09:56.881683   27962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:09:56.886087   27962 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:09:56.886162   27962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:09:56.891826   27962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
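The hash-named links created above (51391683.0, 3ec20f2e.0, b5213941.0) follow the standard OpenSSL c_rehash convention: each link name is the certificate's subject hash plus a ".0" suffix. A minimal sketch of the same step done by hand, using the minikubeCA.pem path from this run (illustrative only, not part of the test):

# compute the subject hash of the CA and link /etc/ssl/certs/<hash>.0 to it
h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"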
	I0920 17:09:56.902542   27962 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 17:09:56.906493   27962 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0920 17:09:56.906563   27962 kubeadm.go:934] updating node {m03 192.168.39.133 8443 v1.31.1 crio true true} ...
	I0920 17:09:56.906662   27962 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-135993-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.133
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-135993 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 17:09:56.906694   27962 kube-vip.go:115] generating kube-vip config ...
	I0920 17:09:56.906737   27962 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0920 17:09:56.924849   27962 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0920 17:09:56.924928   27962 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
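This manifest is what lands in /etc/kubernetes/manifests/kube-vip.yaml a few lines below; kube-vip then advertises the control-plane VIP 192.168.39.254 and load-balances port 8443 across the control-plane nodes. A hedged spot-check once the node has joined, assuming the kubectl context carries the profile name and that the default anonymous access to /version is still in place:

# the VIP should answer on the API-server port
curl -k https://192.168.39.254:8443/version
# and a kube-vip static pod should be running on each control-plane node
kubectl --context ha-135993 -n kube-system get pods -o wide | grep kube-vip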
	I0920 17:09:56.924987   27962 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 17:09:56.935083   27962 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0920 17:09:56.935139   27962 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0920 17:09:56.944640   27962 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I0920 17:09:56.944640   27962 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I0920 17:09:56.944675   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0920 17:09:56.944710   27962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 17:09:56.944648   27962 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0920 17:09:56.944785   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0920 17:09:56.944830   27962 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0920 17:09:56.944765   27962 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0920 17:09:56.962033   27962 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0920 17:09:56.962071   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0920 17:09:56.962074   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0920 17:09:56.962167   27962 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0920 17:09:56.962114   27962 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0920 17:09:56.962188   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0920 17:09:56.995038   27962 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0920 17:09:56.995085   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
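The kubeadm, kubelet and kubectl binaries are fetched via the checksum-pinned dl.k8s.io URLs logged above and copied into /var/lib/minikube/binaries/v1.31.1 on the new node. Purely as an illustration of the same download-and-verify pattern (identical URLs, checked against the published .sha256 file):

# fetch kubelet and its published checksum, then verify before installing
curl -LO https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet
curl -LO https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check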
	I0920 17:09:57.877062   27962 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0920 17:09:57.886499   27962 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0920 17:09:57.902951   27962 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 17:09:57.919648   27962 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0920 17:09:57.936776   27962 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0920 17:09:57.940394   27962 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 17:09:57.952344   27962 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 17:09:58.086995   27962 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 17:09:58.104838   27962 host.go:66] Checking if "ha-135993" exists ...
	I0920 17:09:58.105202   27962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:09:58.105252   27962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:09:58.121702   27962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37739
	I0920 17:09:58.122199   27962 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:09:58.122665   27962 main.go:141] libmachine: Using API Version  1
	I0920 17:09:58.122690   27962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:09:58.123042   27962 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:09:58.123222   27962 main.go:141] libmachine: (ha-135993) Calling .DriverName
	I0920 17:09:58.123436   27962 start.go:317] joinCluster: &{Name:ha-135993 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-135993 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.60 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.133 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 17:09:58.123567   27962 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0920 17:09:58.123585   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHHostname
	I0920 17:09:58.126769   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:09:58.127177   27962 main.go:141] libmachine: (ha-135993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:09", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:07:43 +0000 UTC Type:0 Mac:52:54:00:99:26:09 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-135993 Clientid:01:52:54:00:99:26:09}
	I0920 17:09:58.127198   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined IP address 192.168.39.60 and MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:09:58.127380   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHPort
	I0920 17:09:58.127561   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHKeyPath
	I0920 17:09:58.127676   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHUsername
	I0920 17:09:58.127807   27962 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993/id_rsa Username:docker}
	I0920 17:09:58.304684   27962 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.133 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 17:09:58.304742   27962 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token xem78t.sac8uh54rhaf5wj8 --discovery-token-ca-cert-hash sha256:736f3fb521855813decb4a7214f2b8beff5f81c3971b60decfa20a8807626a2d --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-135993-m03 --control-plane --apiserver-advertise-address=192.168.39.133 --apiserver-bind-port=8443"
	I0920 17:10:20.782828   27962 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token xem78t.sac8uh54rhaf5wj8 --discovery-token-ca-cert-hash sha256:736f3fb521855813decb4a7214f2b8beff5f81c3971b60decfa20a8807626a2d --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-135993-m03 --control-plane --apiserver-advertise-address=192.168.39.133 --apiserver-bind-port=8443": (22.478064097s)
	I0920 17:10:20.782862   27962 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0920 17:10:21.369579   27962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-135993-m03 minikube.k8s.io/updated_at=2024_09_20T17_10_21_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=0626f22cf0d915d75e291a5bce701f94395056e1 minikube.k8s.io/name=ha-135993 minikube.k8s.io/primary=false
	I0920 17:10:21.545661   27962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-135993-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0920 17:10:21.676455   27962 start.go:319] duration metric: took 23.553017419s to joinCluster
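At this point kubeadm join took about 22.5s (about 23.6s for the whole joinCluster step), the node was labelled with the minikube metadata and the control-plane NoSchedule taint was removed so the node can also carry workloads. A minimal sketch for checking the result afterwards, assuming the kubectl context is named after the profile:

kubectl --context ha-135993 get node ha-135993-m03 --show-labels
kubectl --context ha-135993 describe node ha-135993-m03 | grep -A2 Taints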
	I0920 17:10:21.676541   27962 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.133 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 17:10:21.676981   27962 config.go:182] Loaded profile config "ha-135993": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 17:10:21.678497   27962 out.go:177] * Verifying Kubernetes components...
	I0920 17:10:21.679903   27962 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 17:10:21.961073   27962 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 17:10:21.996476   27962 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19672-8777/kubeconfig
	I0920 17:10:21.996707   27962 kapi.go:59] client config for ha-135993: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/client.crt", KeyFile:"/home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/client.key", CAFile:"/home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f67ea0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0920 17:10:21.996765   27962 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.60:8443
	I0920 17:10:21.996997   27962 node_ready.go:35] waiting up to 6m0s for node "ha-135993-m03" to be "Ready" ...
	I0920 17:10:21.997072   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:21.997080   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:21.997090   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:21.997095   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:22.001181   27962 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 17:10:22.497463   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:22.497485   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:22.497495   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:22.497507   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:22.502449   27962 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 17:10:22.997389   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:22.997418   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:22.997429   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:22.997438   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:23.001501   27962 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 17:10:23.497533   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:23.497557   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:23.497566   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:23.497570   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:23.500839   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:23.997331   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:23.997361   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:23.997370   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:23.997375   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:24.001172   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:24.001662   27962 node_ready.go:53] node "ha-135993-m03" has status "Ready":"False"
	I0920 17:10:24.497248   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:24.497270   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:24.497279   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:24.497284   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:24.501584   27962 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 17:10:24.997441   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:24.997461   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:24.997474   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:24.997479   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:25.001314   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:25.497255   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:25.497284   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:25.497297   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:25.497302   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:25.500828   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:25.997812   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:25.997877   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:25.997892   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:25.997897   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:26.001955   27962 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 17:10:26.002456   27962 node_ready.go:53] node "ha-135993-m03" has status "Ready":"False"
	I0920 17:10:26.497957   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:26.497985   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:26.498009   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:26.498014   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:26.505329   27962 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0920 17:10:26.997635   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:26.997665   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:26.997677   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:26.997681   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:27.001531   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:27.497548   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:27.497572   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:27.497582   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:27.497587   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:27.501038   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:27.998155   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:27.998184   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:27.998196   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:27.998201   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:28.002255   27962 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 17:10:28.002946   27962 node_ready.go:53] node "ha-135993-m03" has status "Ready":"False"
	I0920 17:10:28.497717   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:28.497741   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:28.497752   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:28.497759   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:28.501375   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:28.997522   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:28.997548   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:28.997556   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:28.997562   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:29.002576   27962 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 17:10:29.498184   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:29.498217   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:29.498230   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:29.498237   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:29.502043   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:29.998000   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:29.998032   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:29.998044   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:29.998050   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:30.001668   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:30.497469   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:30.497508   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:30.497521   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:30.497530   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:30.500913   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:30.501381   27962 node_ready.go:53] node "ha-135993-m03" has status "Ready":"False"
	I0920 17:10:30.997662   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:30.997683   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:30.997692   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:30.997696   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:31.001443   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:31.497374   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:31.497396   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:31.497406   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:31.497411   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:31.500970   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:31.998212   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:31.998237   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:31.998245   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:31.998250   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:32.005715   27962 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0920 17:10:32.497621   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:32.497644   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:32.497652   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:32.497656   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:32.501947   27962 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 17:10:32.502498   27962 node_ready.go:53] node "ha-135993-m03" has status "Ready":"False"
	I0920 17:10:32.998138   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:32.998162   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:32.998170   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:32.998174   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:33.002736   27962 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 17:10:33.497634   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:33.497655   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:33.497663   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:33.497669   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:33.501049   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:33.997307   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:33.997332   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:33.997340   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:33.997343   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:34.001271   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:34.497449   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:34.497471   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:34.497479   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:34.497483   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:34.501394   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:34.997478   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:34.997503   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:34.997512   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:34.997518   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:35.001257   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:35.001994   27962 node_ready.go:53] node "ha-135993-m03" has status "Ready":"False"
	I0920 17:10:35.497192   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:35.497221   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:35.497238   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:35.497244   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:35.501544   27962 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 17:10:35.997358   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:35.997383   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:35.997390   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:35.997394   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:36.000988   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:36.498031   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:36.498054   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:36.498064   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:36.498069   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:36.501887   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:36.997545   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:36.997568   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:36.997576   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:36.997579   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:37.001444   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:37.002042   27962 node_ready.go:53] node "ha-135993-m03" has status "Ready":"False"
	I0920 17:10:37.497312   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:37.497339   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:37.497347   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:37.497352   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:37.500690   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:37.997364   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:37.997392   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:37.997402   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:37.997406   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:38.000903   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:38.498015   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:38.498036   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:38.498046   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:38.498053   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:38.501382   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:38.997276   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:38.997298   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:38.997307   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:38.997311   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:39.000962   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:39.497287   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:39.497313   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:39.497323   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:39.497329   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:39.501180   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:39.501915   27962 node_ready.go:53] node "ha-135993-m03" has status "Ready":"False"
	I0920 17:10:39.997251   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:39.997274   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:39.997285   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:39.997291   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:40.000356   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:40.000916   27962 node_ready.go:49] node "ha-135993-m03" has status "Ready":"True"
	I0920 17:10:40.000937   27962 node_ready.go:38] duration metric: took 18.003923058s for node "ha-135993-m03" to be "Ready" ...
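The 18s wait above is the harness polling GET /api/v1/nodes/ha-135993-m03 roughly every 500ms until the Ready condition flips to True. Roughly the same gate expressed as a single command (a sketch, context name assumed to match the profile):

kubectl --context ha-135993 wait --for=condition=Ready node/ha-135993-m03 --timeout=6m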
	I0920 17:10:40.000949   27962 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 17:10:40.001029   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods
	I0920 17:10:40.001041   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:40.001051   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:40.001059   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:40.007086   27962 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0920 17:10:40.013456   27962 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-gcvg4" in "kube-system" namespace to be "Ready" ...
	I0920 17:10:40.013531   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-gcvg4
	I0920 17:10:40.013539   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:40.013547   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:40.013551   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:40.016217   27962 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 17:10:40.016928   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993
	I0920 17:10:40.016944   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:40.016951   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:40.016954   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:40.019552   27962 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 17:10:40.020302   27962 pod_ready.go:93] pod "coredns-7c65d6cfc9-gcvg4" in "kube-system" namespace has status "Ready":"True"
	I0920 17:10:40.020321   27962 pod_ready.go:82] duration metric: took 6.8416ms for pod "coredns-7c65d6cfc9-gcvg4" in "kube-system" namespace to be "Ready" ...
	I0920 17:10:40.020329   27962 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-kpbhk" in "kube-system" namespace to be "Ready" ...
	I0920 17:10:40.020387   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-kpbhk
	I0920 17:10:40.020395   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:40.020402   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:40.020405   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:40.022739   27962 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 17:10:40.023876   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993
	I0920 17:10:40.023897   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:40.023907   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:40.023914   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:40.026180   27962 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 17:10:40.026617   27962 pod_ready.go:93] pod "coredns-7c65d6cfc9-kpbhk" in "kube-system" namespace has status "Ready":"True"
	I0920 17:10:40.026633   27962 pod_ready.go:82] duration metric: took 6.291183ms for pod "coredns-7c65d6cfc9-kpbhk" in "kube-system" namespace to be "Ready" ...
	I0920 17:10:40.026644   27962 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-135993" in "kube-system" namespace to be "Ready" ...
	I0920 17:10:40.026708   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/etcd-ha-135993
	I0920 17:10:40.026721   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:40.026729   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:40.026733   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:40.029955   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:40.030688   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993
	I0920 17:10:40.030707   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:40.030717   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:40.030724   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:40.033291   27962 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 17:10:40.033722   27962 pod_ready.go:93] pod "etcd-ha-135993" in "kube-system" namespace has status "Ready":"True"
	I0920 17:10:40.033740   27962 pod_ready.go:82] duration metric: took 7.086877ms for pod "etcd-ha-135993" in "kube-system" namespace to be "Ready" ...
	I0920 17:10:40.033752   27962 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-135993-m02" in "kube-system" namespace to be "Ready" ...
	I0920 17:10:40.033808   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/etcd-ha-135993-m02
	I0920 17:10:40.033816   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:40.033823   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:40.033827   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:40.036180   27962 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 17:10:40.036735   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:10:40.036750   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:40.036757   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:40.036761   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:40.039148   27962 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 17:10:40.039672   27962 pod_ready.go:93] pod "etcd-ha-135993-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 17:10:40.039690   27962 pod_ready.go:82] duration metric: took 5.930508ms for pod "etcd-ha-135993-m02" in "kube-system" namespace to be "Ready" ...
	I0920 17:10:40.039699   27962 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-135993-m03" in "kube-system" namespace to be "Ready" ...
	I0920 17:10:40.198080   27962 request.go:632] Waited for 158.310883ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/etcd-ha-135993-m03
	I0920 17:10:40.198142   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/etcd-ha-135993-m03
	I0920 17:10:40.198147   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:40.198156   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:40.198165   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:40.201559   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:40.397955   27962 request.go:632] Waited for 195.344828ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:40.398036   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:40.398047   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:40.398057   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:40.398064   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:40.401572   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:40.402144   27962 pod_ready.go:93] pod "etcd-ha-135993-m03" in "kube-system" namespace has status "Ready":"True"
	I0920 17:10:40.402168   27962 pod_ready.go:82] duration metric: took 362.461912ms for pod "etcd-ha-135993-m03" in "kube-system" namespace to be "Ready" ...
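The same Ready gate now repeats for each system-critical pod: etcd above, then kube-apiserver, kube-controller-manager, kube-proxy and kube-scheduler below. An equivalent hedged one-liner per component would look like:

kubectl --context ha-135993 -n kube-system wait --for=condition=Ready pod -l component=etcd --timeout=6m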
	I0920 17:10:40.402191   27962 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-135993" in "kube-system" namespace to be "Ready" ...
	I0920 17:10:40.598190   27962 request.go:632] Waited for 195.924651ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-135993
	I0920 17:10:40.598265   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-135993
	I0920 17:10:40.598274   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:40.598282   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:40.598292   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:40.601449   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:40.797361   27962 request.go:632] Waited for 195.295556ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes/ha-135993
	I0920 17:10:40.797452   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993
	I0920 17:10:40.797463   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:40.797474   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:40.797479   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:40.800725   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:40.801428   27962 pod_ready.go:93] pod "kube-apiserver-ha-135993" in "kube-system" namespace has status "Ready":"True"
	I0920 17:10:40.801448   27962 pod_ready.go:82] duration metric: took 399.249989ms for pod "kube-apiserver-ha-135993" in "kube-system" namespace to be "Ready" ...
	I0920 17:10:40.801457   27962 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-135993-m02" in "kube-system" namespace to be "Ready" ...
	I0920 17:10:40.997409   27962 request.go:632] Waited for 195.878449ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-135993-m02
	I0920 17:10:40.997467   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-135993-m02
	I0920 17:10:40.997472   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:40.997479   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:40.997488   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:41.001457   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:41.197787   27962 request.go:632] Waited for 195.349078ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:10:41.197860   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:10:41.197871   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:41.197879   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:41.197882   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:41.201485   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:41.202105   27962 pod_ready.go:93] pod "kube-apiserver-ha-135993-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 17:10:41.202124   27962 pod_ready.go:82] duration metric: took 400.661085ms for pod "kube-apiserver-ha-135993-m02" in "kube-system" namespace to be "Ready" ...
	I0920 17:10:41.202133   27962 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-135993-m03" in "kube-system" namespace to be "Ready" ...
	I0920 17:10:41.398233   27962 request.go:632] Waited for 195.997178ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-135993-m03
	I0920 17:10:41.398303   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-135993-m03
	I0920 17:10:41.398378   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:41.398394   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:41.398400   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:41.402317   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:41.597319   27962 request.go:632] Waited for 194.299169ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:41.597378   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:41.597383   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:41.597411   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:41.597417   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:41.600918   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:41.601672   27962 pod_ready.go:93] pod "kube-apiserver-ha-135993-m03" in "kube-system" namespace has status "Ready":"True"
	I0920 17:10:41.601692   27962 pod_ready.go:82] duration metric: took 399.551518ms for pod "kube-apiserver-ha-135993-m03" in "kube-system" namespace to be "Ready" ...
	I0920 17:10:41.601704   27962 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-135993" in "kube-system" namespace to be "Ready" ...
	I0920 17:10:41.797255   27962 request.go:632] Waited for 195.471307ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-135993
	I0920 17:10:41.797312   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-135993
	I0920 17:10:41.797318   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:41.797325   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:41.797329   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:41.801261   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:41.997269   27962 request.go:632] Waited for 195.294616ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes/ha-135993
	I0920 17:10:41.997363   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993
	I0920 17:10:41.997371   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:41.997382   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:41.997392   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:42.001363   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:42.002111   27962 pod_ready.go:93] pod "kube-controller-manager-ha-135993" in "kube-system" namespace has status "Ready":"True"
	I0920 17:10:42.002135   27962 pod_ready.go:82] duration metric: took 400.422144ms for pod "kube-controller-manager-ha-135993" in "kube-system" namespace to be "Ready" ...
	I0920 17:10:42.002152   27962 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-135993-m02" in "kube-system" namespace to be "Ready" ...
	I0920 17:10:42.198137   27962 request.go:632] Waited for 195.883622ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-135993-m02
	I0920 17:10:42.198204   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-135993-m02
	I0920 17:10:42.198211   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:42.198224   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:42.198233   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:42.201776   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:42.397933   27962 request.go:632] Waited for 195.390844ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:10:42.397990   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:10:42.397996   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:42.398003   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:42.398008   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:42.401639   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:42.402402   27962 pod_ready.go:93] pod "kube-controller-manager-ha-135993-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 17:10:42.402423   27962 pod_ready.go:82] duration metric: took 400.260074ms for pod "kube-controller-manager-ha-135993-m02" in "kube-system" namespace to be "Ready" ...
	I0920 17:10:42.402438   27962 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-135993-m03" in "kube-system" namespace to be "Ready" ...
	I0920 17:10:42.597289   27962 request.go:632] Waited for 194.763978ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-135993-m03
	I0920 17:10:42.597371   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-135993-m03
	I0920 17:10:42.597384   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:42.597393   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:42.597400   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:42.601014   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:42.797863   27962 request.go:632] Waited for 195.944092ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:42.797944   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:42.797955   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:42.797965   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:42.797974   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:42.801609   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:42.802166   27962 pod_ready.go:93] pod "kube-controller-manager-ha-135993-m03" in "kube-system" namespace has status "Ready":"True"
	I0920 17:10:42.802184   27962 pod_ready.go:82] duration metric: took 399.739056ms for pod "kube-controller-manager-ha-135993-m03" in "kube-system" namespace to be "Ready" ...
	I0920 17:10:42.802194   27962 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-45c9m" in "kube-system" namespace to be "Ready" ...
	I0920 17:10:42.997304   27962 request.go:632] Waited for 195.040269ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-proxy-45c9m
	I0920 17:10:42.997408   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-proxy-45c9m
	I0920 17:10:42.997421   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:42.997432   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:42.997437   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:43.001257   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:43.198020   27962 request.go:632] Waited for 196.102413ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:43.198085   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:43.198092   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:43.198100   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:43.198106   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:43.201658   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:43.202252   27962 pod_ready.go:93] pod "kube-proxy-45c9m" in "kube-system" namespace has status "Ready":"True"
	I0920 17:10:43.202273   27962 pod_ready.go:82] duration metric: took 400.072197ms for pod "kube-proxy-45c9m" in "kube-system" namespace to be "Ready" ...
	I0920 17:10:43.202287   27962 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-52r49" in "kube-system" namespace to be "Ready" ...
	I0920 17:10:43.397914   27962 request.go:632] Waited for 195.445037ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-proxy-52r49
	I0920 17:10:43.397992   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-proxy-52r49
	I0920 17:10:43.397998   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:43.398005   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:43.398011   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:43.401788   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:43.597874   27962 request.go:632] Waited for 195.37712ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes/ha-135993
	I0920 17:10:43.597952   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993
	I0920 17:10:43.597964   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:43.597978   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:43.597989   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:43.600840   27962 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 17:10:43.601662   27962 pod_ready.go:93] pod "kube-proxy-52r49" in "kube-system" namespace has status "Ready":"True"
	I0920 17:10:43.601684   27962 pod_ready.go:82] duration metric: took 399.386758ms for pod "kube-proxy-52r49" in "kube-system" namespace to be "Ready" ...
	I0920 17:10:43.601693   27962 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-z6xqt" in "kube-system" namespace to be "Ready" ...
	I0920 17:10:43.797664   27962 request.go:632] Waited for 195.909482ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-proxy-z6xqt
	I0920 17:10:43.797730   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-proxy-z6xqt
	I0920 17:10:43.797738   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:43.797745   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:43.797750   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:43.801166   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:43.998193   27962 request.go:632] Waited for 196.396377ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:10:43.998304   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:10:43.998313   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:43.998325   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:43.998334   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:44.001971   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:44.002756   27962 pod_ready.go:93] pod "kube-proxy-z6xqt" in "kube-system" namespace has status "Ready":"True"
	I0920 17:10:44.002782   27962 pod_ready.go:82] duration metric: took 401.080699ms for pod "kube-proxy-z6xqt" in "kube-system" namespace to be "Ready" ...
	I0920 17:10:44.002795   27962 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-135993" in "kube-system" namespace to be "Ready" ...
	I0920 17:10:44.198129   27962 request.go:632] Waited for 195.259225ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-135993
	I0920 17:10:44.198208   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-135993
	I0920 17:10:44.198217   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:44.198225   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:44.198229   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:44.202058   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:44.398232   27962 request.go:632] Waited for 195.373668ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes/ha-135993
	I0920 17:10:44.398304   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993
	I0920 17:10:44.398313   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:44.398322   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:44.398336   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:44.402177   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:44.402890   27962 pod_ready.go:93] pod "kube-scheduler-ha-135993" in "kube-system" namespace has status "Ready":"True"
	I0920 17:10:44.402910   27962 pod_ready.go:82] duration metric: took 400.107134ms for pod "kube-scheduler-ha-135993" in "kube-system" namespace to be "Ready" ...
	I0920 17:10:44.402920   27962 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-135993-m02" in "kube-system" namespace to be "Ready" ...
	I0920 17:10:44.598018   27962 request.go:632] Waited for 195.007589ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-135993-m02
	I0920 17:10:44.598096   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-135993-m02
	I0920 17:10:44.598103   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:44.598114   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:44.598131   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:44.601458   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:44.797367   27962 request.go:632] Waited for 195.276041ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:10:44.797421   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:10:44.797426   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:44.797434   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:44.797437   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:44.800953   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:44.801547   27962 pod_ready.go:93] pod "kube-scheduler-ha-135993-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 17:10:44.801566   27962 pod_ready.go:82] duration metric: took 398.637509ms for pod "kube-scheduler-ha-135993-m02" in "kube-system" namespace to be "Ready" ...
	I0920 17:10:44.801580   27962 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-135993-m03" in "kube-system" namespace to be "Ready" ...
	I0920 17:10:44.997661   27962 request.go:632] Waited for 195.986647ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-135993-m03
	I0920 17:10:44.997741   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-135993-m03
	I0920 17:10:44.997749   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:44.997760   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:44.997769   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:45.001737   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:45.197777   27962 request.go:632] Waited for 195.358869ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:45.197842   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:45.197848   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:45.197858   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:45.197867   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:45.201296   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:45.201689   27962 pod_ready.go:93] pod "kube-scheduler-ha-135993-m03" in "kube-system" namespace has status "Ready":"True"
	I0920 17:10:45.201707   27962 pod_ready.go:82] duration metric: took 400.119509ms for pod "kube-scheduler-ha-135993-m03" in "kube-system" namespace to be "Ready" ...
	I0920 17:10:45.201719   27962 pod_ready.go:39] duration metric: took 5.200758265s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 17:10:45.201733   27962 api_server.go:52] waiting for apiserver process to appear ...
	I0920 17:10:45.201783   27962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 17:10:45.218374   27962 api_server.go:72] duration metric: took 23.541794087s to wait for apiserver process to appear ...
	I0920 17:10:45.218402   27962 api_server.go:88] waiting for apiserver healthz status ...
	I0920 17:10:45.218421   27962 api_server.go:253] Checking apiserver healthz at https://192.168.39.60:8443/healthz ...
	I0920 17:10:45.222904   27962 api_server.go:279] https://192.168.39.60:8443/healthz returned 200:
	ok
	I0920 17:10:45.222982   27962 round_trippers.go:463] GET https://192.168.39.60:8443/version
	I0920 17:10:45.222994   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:45.223006   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:45.223010   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:45.224049   27962 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0920 17:10:45.224222   27962 api_server.go:141] control plane version: v1.31.1
	I0920 17:10:45.224245   27962 api_server.go:131] duration metric: took 5.83633ms to wait for apiserver health ...
	I0920 17:10:45.224256   27962 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 17:10:45.397714   27962 request.go:632] Waited for 173.358789ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods
	I0920 17:10:45.397793   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods
	I0920 17:10:45.397805   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:45.397818   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:45.397824   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:45.404937   27962 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0920 17:10:45.411424   27962 system_pods.go:59] 24 kube-system pods found
	I0920 17:10:45.411457   27962 system_pods.go:61] "coredns-7c65d6cfc9-gcvg4" [899b2b8c-9009-46c0-816b-781e85eb8b19] Running
	I0920 17:10:45.411462   27962 system_pods.go:61] "coredns-7c65d6cfc9-kpbhk" [0dfd9f1a-148c-4dba-884a-8618b74f82d0] Running
	I0920 17:10:45.411466   27962 system_pods.go:61] "etcd-ha-135993" [d5e44451-ab41-4bdf-8f34-c8202bc33cda] Running
	I0920 17:10:45.411470   27962 system_pods.go:61] "etcd-ha-135993-m02" [415d8f47-3bc4-42fc-ae75-b8217f5a731d] Running
	I0920 17:10:45.411473   27962 system_pods.go:61] "etcd-ha-135993-m03" [e5b78345-b51c-4e26-871e-76e190b209b1] Running
	I0920 17:10:45.411476   27962 system_pods.go:61] "kindnet-5m4r8" [c799ac5a-69f7-45d5-a291-61edfd753404] Running
	I0920 17:10:45.411479   27962 system_pods.go:61] "kindnet-6clt2" [d73a0817-d84f-4269-9de0-1532287a07db] Running
	I0920 17:10:45.411483   27962 system_pods.go:61] "kindnet-hcqf8" [727437d8-e050-4785-8bb7-90f8a496a2cb] Running
	I0920 17:10:45.411485   27962 system_pods.go:61] "kube-apiserver-ha-135993" [0c2adef2-0b98-4752-ba6c-0719b723c93c] Running
	I0920 17:10:45.411489   27962 system_pods.go:61] "kube-apiserver-ha-135993-m02" [e74438ac-347a-4f38-ba96-c7687763c669] Running
	I0920 17:10:45.411492   27962 system_pods.go:61] "kube-apiserver-ha-135993-m03" [9b8aa964-0aee-4301-94be-5c3f4e6f67bb] Running
	I0920 17:10:45.411495   27962 system_pods.go:61] "kube-controller-manager-ha-135993" [ba82afa0-d0a1-4515-bd0f-574f03c2a5a5] Running
	I0920 17:10:45.411498   27962 system_pods.go:61] "kube-controller-manager-ha-135993-m02" [ba4a59ae-5348-43b7-b80d-96d5d63afea2] Running
	I0920 17:10:45.411501   27962 system_pods.go:61] "kube-controller-manager-ha-135993-m03" [d8ae5058-1dd2-4012-90ce-75dc09f57a92] Running
	I0920 17:10:45.411504   27962 system_pods.go:61] "kube-proxy-45c9m" [c04f91a8-5bae-4e79-bfca-66b941217821] Running
	I0920 17:10:45.411507   27962 system_pods.go:61] "kube-proxy-52r49" [8d1124bd-e7cb-4239-a29d-c1d5b8870aff] Running
	I0920 17:10:45.411510   27962 system_pods.go:61] "kube-proxy-z6xqt" [70b8c8ab-77cc-4681-a9b1-3c28fc0f2674] Running
	I0920 17:10:45.411514   27962 system_pods.go:61] "kube-scheduler-ha-135993" [25eaf632-035a-44d9-8ce0-e6c3c0e4e0c4] Running
	I0920 17:10:45.411520   27962 system_pods.go:61] "kube-scheduler-ha-135993-m02" [6daee23a-d627-41e8-81b1-ceb4fa70ec3e] Running
	I0920 17:10:45.411522   27962 system_pods.go:61] "kube-scheduler-ha-135993-m03" [3ae2b99d-898a-448c-8e87-2b1e4b5fbae9] Running
	I0920 17:10:45.411525   27962 system_pods.go:61] "kube-vip-ha-135993" [6aa396e1-76b2-4911-bc93-660c51cef03d] Running
	I0920 17:10:45.411528   27962 system_pods.go:61] "kube-vip-ha-135993-m02" [2e9d1f8a-1ce9-46f9-b58b-f13302832062] Running
	I0920 17:10:45.411531   27962 system_pods.go:61] "kube-vip-ha-135993-m03" [ccf084e2-8b4f-4dde-a290-e644d7a0dde3] Running
	I0920 17:10:45.411536   27962 system_pods.go:61] "storage-provisioner" [57137bee-9a7b-4659-a855-0da82d137cb0] Running
	I0920 17:10:45.411542   27962 system_pods.go:74] duration metric: took 187.277251ms to wait for pod list to return data ...
	I0920 17:10:45.411551   27962 default_sa.go:34] waiting for default service account to be created ...
	I0920 17:10:45.597901   27962 request.go:632] Waited for 186.270484ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/default/serviceaccounts
	I0920 17:10:45.597955   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/default/serviceaccounts
	I0920 17:10:45.597961   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:45.597969   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:45.597974   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:45.601352   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:45.601480   27962 default_sa.go:45] found service account: "default"
	I0920 17:10:45.601500   27962 default_sa.go:55] duration metric: took 189.941966ms for default service account to be created ...
	I0920 17:10:45.601512   27962 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 17:10:45.797900   27962 request.go:632] Waited for 196.315857ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods
	I0920 17:10:45.797971   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods
	I0920 17:10:45.797976   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:45.797983   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:45.797988   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:45.805414   27962 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0920 17:10:45.812236   27962 system_pods.go:86] 24 kube-system pods found
	I0920 17:10:45.812269   27962 system_pods.go:89] "coredns-7c65d6cfc9-gcvg4" [899b2b8c-9009-46c0-816b-781e85eb8b19] Running
	I0920 17:10:45.812275   27962 system_pods.go:89] "coredns-7c65d6cfc9-kpbhk" [0dfd9f1a-148c-4dba-884a-8618b74f82d0] Running
	I0920 17:10:45.812279   27962 system_pods.go:89] "etcd-ha-135993" [d5e44451-ab41-4bdf-8f34-c8202bc33cda] Running
	I0920 17:10:45.812282   27962 system_pods.go:89] "etcd-ha-135993-m02" [415d8f47-3bc4-42fc-ae75-b8217f5a731d] Running
	I0920 17:10:45.812287   27962 system_pods.go:89] "etcd-ha-135993-m03" [e5b78345-b51c-4e26-871e-76e190b209b1] Running
	I0920 17:10:45.812290   27962 system_pods.go:89] "kindnet-5m4r8" [c799ac5a-69f7-45d5-a291-61edfd753404] Running
	I0920 17:10:45.812294   27962 system_pods.go:89] "kindnet-6clt2" [d73a0817-d84f-4269-9de0-1532287a07db] Running
	I0920 17:10:45.812297   27962 system_pods.go:89] "kindnet-hcqf8" [727437d8-e050-4785-8bb7-90f8a496a2cb] Running
	I0920 17:10:45.812301   27962 system_pods.go:89] "kube-apiserver-ha-135993" [0c2adef2-0b98-4752-ba6c-0719b723c93c] Running
	I0920 17:10:45.812304   27962 system_pods.go:89] "kube-apiserver-ha-135993-m02" [e74438ac-347a-4f38-ba96-c7687763c669] Running
	I0920 17:10:45.812308   27962 system_pods.go:89] "kube-apiserver-ha-135993-m03" [9b8aa964-0aee-4301-94be-5c3f4e6f67bb] Running
	I0920 17:10:45.812311   27962 system_pods.go:89] "kube-controller-manager-ha-135993" [ba82afa0-d0a1-4515-bd0f-574f03c2a5a5] Running
	I0920 17:10:45.812314   27962 system_pods.go:89] "kube-controller-manager-ha-135993-m02" [ba4a59ae-5348-43b7-b80d-96d5d63afea2] Running
	I0920 17:10:45.812319   27962 system_pods.go:89] "kube-controller-manager-ha-135993-m03" [d8ae5058-1dd2-4012-90ce-75dc09f57a92] Running
	I0920 17:10:45.812324   27962 system_pods.go:89] "kube-proxy-45c9m" [c04f91a8-5bae-4e79-bfca-66b941217821] Running
	I0920 17:10:45.812328   27962 system_pods.go:89] "kube-proxy-52r49" [8d1124bd-e7cb-4239-a29d-c1d5b8870aff] Running
	I0920 17:10:45.812333   27962 system_pods.go:89] "kube-proxy-z6xqt" [70b8c8ab-77cc-4681-a9b1-3c28fc0f2674] Running
	I0920 17:10:45.812336   27962 system_pods.go:89] "kube-scheduler-ha-135993" [25eaf632-035a-44d9-8ce0-e6c3c0e4e0c4] Running
	I0920 17:10:45.812340   27962 system_pods.go:89] "kube-scheduler-ha-135993-m02" [6daee23a-d627-41e8-81b1-ceb4fa70ec3e] Running
	I0920 17:10:45.812344   27962 system_pods.go:89] "kube-scheduler-ha-135993-m03" [3ae2b99d-898a-448c-8e87-2b1e4b5fbae9] Running
	I0920 17:10:45.812348   27962 system_pods.go:89] "kube-vip-ha-135993" [6aa396e1-76b2-4911-bc93-660c51cef03d] Running
	I0920 17:10:45.812351   27962 system_pods.go:89] "kube-vip-ha-135993-m02" [2e9d1f8a-1ce9-46f9-b58b-f13302832062] Running
	I0920 17:10:45.812354   27962 system_pods.go:89] "kube-vip-ha-135993-m03" [ccf084e2-8b4f-4dde-a290-e644d7a0dde3] Running
	I0920 17:10:45.812360   27962 system_pods.go:89] "storage-provisioner" [57137bee-9a7b-4659-a855-0da82d137cb0] Running
	I0920 17:10:45.812366   27962 system_pods.go:126] duration metric: took 210.848794ms to wait for k8s-apps to be running ...
	I0920 17:10:45.812375   27962 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 17:10:45.812419   27962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 17:10:45.827985   27962 system_svc.go:56] duration metric: took 15.600828ms WaitForService to wait for kubelet
	I0920 17:10:45.828023   27962 kubeadm.go:582] duration metric: took 24.151442817s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 17:10:45.828047   27962 node_conditions.go:102] verifying NodePressure condition ...
	I0920 17:10:45.998195   27962 request.go:632] Waited for 170.064742ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes
	I0920 17:10:45.998254   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes
	I0920 17:10:45.998260   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:45.998267   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:45.998275   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:46.002746   27962 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 17:10:46.003936   27962 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 17:10:46.003959   27962 node_conditions.go:123] node cpu capacity is 2
	I0920 17:10:46.003973   27962 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 17:10:46.003983   27962 node_conditions.go:123] node cpu capacity is 2
	I0920 17:10:46.003987   27962 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 17:10:46.003992   27962 node_conditions.go:123] node cpu capacity is 2
	I0920 17:10:46.004000   27962 node_conditions.go:105] duration metric: took 175.947788ms to run NodePressure ...
	I0920 17:10:46.004016   27962 start.go:241] waiting for startup goroutines ...
	I0920 17:10:46.004041   27962 start.go:255] writing updated cluster config ...
	I0920 17:10:46.004403   27962 ssh_runner.go:195] Run: rm -f paused
	I0920 17:10:46.058462   27962 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 17:10:46.060232   27962 out.go:177] * Done! kubectl is now configured to use "ha-135993" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 20 17:14:26 ha-135993 crio[661]: time="2024-09-20 17:14:26.968897328Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726852466968868072,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=af027159-30ec-49c5-95d9-37483e0ff1e2 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 17:14:26 ha-135993 crio[661]: time="2024-09-20 17:14:26.969425082Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0031aa7d-c571-4254-9e23-a0032e7a8abb name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:14:26 ha-135993 crio[661]: time="2024-09-20 17:14:26.969492507Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0031aa7d-c571-4254-9e23-a0032e7a8abb name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:14:26 ha-135993 crio[661]: time="2024-09-20 17:14:26.969794522Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d2a30264a8299336cfb5a4338bab42f8ef663f285ffa7fcb290de88635658a97,PodSandboxId:afa282bba6347297904c2a6423e39984da1a95a2caf8b23dd6857518cc0bb05d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726852250916004479,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-df429,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ef8ed1eb-a7c6-4c49-9e46-924bad4d9577,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36f3e8a4356ff70d6ac1d50a79bc078c32683201ca9c0787097b28f3417fdc90,PodSandboxId:c1cd70ce60a8324767003f8d8ab2c4e9bbafba1edc0b62b8ad54d09201e8b82c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726852104314815295,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57137bee-9a7b-4659-a855-0da82d137cb0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c668f6376655011c61ac0bac2456f3e751f265e08a925a427f7ecfca3d54d97,PodSandboxId:6e8ccc1edc7282309966f52b57c94684aa08f468f2023c8e5a61044327ad0cdb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726852104355492838,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kpbhk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dfd9f1a-148c-4dba-884a-8618b74f82d0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5054778f39bbb435a191b7cbdabfca614d8b6d64b7bce122294d50415d601787,PodSandboxId:6fda3c09e12fee3d9bfbdf163c989e0242d8464e853282bca12154b468cf1a1d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726852104287014426,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gcvg4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 899b2b8c-90
09-46c0-816b-781e85eb8b19,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8792a3b1249ffb1a157682b07dec9a0edf11b81c44e53ea068e0fc6f4548be22,PodSandboxId:ed014d23a111faa2c65032e35b2ef554b33cd4cb8eddbc89e3b77efabb53a5ce,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17268520
92541411891,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6clt2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d73a0817-d84f-4269-9de0-1532287a07db,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4b462c3efaa10b1790ebacd5fe8845d6eb91643cb6db9b2830f33afe1744d57,PodSandboxId:1971096e9fdaaef4dfb856b23a044da810f3ea4c5d05511c7fc53105be9d664b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726852092275090369,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-52r49,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d1124bd-e7cb-4239-a29d-c1d5b8870aff,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a56cd54bb369a6e7651ec4ac3a33220fcfa19226c4ca75ab535aeddbf2811a9,PodSandboxId:bd7dad5ca0acddfe976ec11fd8f4a2cd641ca2b663a812a1209683ee830b901e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726852084813198134,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a641242a9f304152af48553fabd7a110,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b48cf1f03207bb258f12f5506b5e60fd4bb742d7b2ced222664cc2b996ff15d,PodSandboxId:f3f5771528b9cb4aa1d625ba84f09a7820fe3964ceb281b04f73fe5e13f6f894,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726852081654759731,Labels:map[string]string{io.kubernetes.container.
name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fa5fbf3465d3154576a11474b7b8548,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f5eb92cf36b033e765f0681f8a6634251dfd6dc0db3410efd3ef6e8580a8b2f,PodSandboxId:b0a0c7068266ae4cdca9368f9606704a9cf02c9b4e905e2d6c179562e88cf8ba,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726852081628521436,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bce625d942177d9274bb431f7a7012b2,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db80f5e2505940b81bc20c39e98d873b35eff2e71aac748d8b406e21f5435fca,PodSandboxId:77a9434f5f03e0fce6c021d0cb97ce2bbe8dbb697371c561907dc8656adca49c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726852081504080347,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.
kubernetes.pod.name: kube-scheduler-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 086260df87dea88f53b3b3ca08d61864,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e70d74afe0f7f7c1e0f084aba99e9c488a1f999e94866b12072fdcbc24b0dad7,PodSandboxId:74a0a0888b0f65cfdbc3321666aaf8f377feaae2a4e3c132c5ed40ccd4468689,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726852081538334631,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-135993,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f0e3932935569a5c49edd0da5a87eba,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0031aa7d-c571-4254-9e23-a0032e7a8abb name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:14:27 ha-135993 crio[661]: time="2024-09-20 17:14:27.004486966Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a81a4476-1781-4208-8068-5240893e94d0 name=/runtime.v1.RuntimeService/Version
	Sep 20 17:14:27 ha-135993 crio[661]: time="2024-09-20 17:14:27.004772737Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a81a4476-1781-4208-8068-5240893e94d0 name=/runtime.v1.RuntimeService/Version
	Sep 20 17:14:27 ha-135993 crio[661]: time="2024-09-20 17:14:27.005874527Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5e4a7722-3f51-412a-b31c-0c0d0423e264 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 17:14:27 ha-135993 crio[661]: time="2024-09-20 17:14:27.006347858Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726852467006325559,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5e4a7722-3f51-412a-b31c-0c0d0423e264 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 17:14:27 ha-135993 crio[661]: time="2024-09-20 17:14:27.006849754Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7493a184-a6e4-4324-b0b7-9e1d8f95aba2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:14:27 ha-135993 crio[661]: time="2024-09-20 17:14:27.006916692Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7493a184-a6e4-4324-b0b7-9e1d8f95aba2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:14:27 ha-135993 crio[661]: time="2024-09-20 17:14:27.007259789Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d2a30264a8299336cfb5a4338bab42f8ef663f285ffa7fcb290de88635658a97,PodSandboxId:afa282bba6347297904c2a6423e39984da1a95a2caf8b23dd6857518cc0bb05d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726852250916004479,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-df429,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ef8ed1eb-a7c6-4c49-9e46-924bad4d9577,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36f3e8a4356ff70d6ac1d50a79bc078c32683201ca9c0787097b28f3417fdc90,PodSandboxId:c1cd70ce60a8324767003f8d8ab2c4e9bbafba1edc0b62b8ad54d09201e8b82c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726852104314815295,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57137bee-9a7b-4659-a855-0da82d137cb0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c668f6376655011c61ac0bac2456f3e751f265e08a925a427f7ecfca3d54d97,PodSandboxId:6e8ccc1edc7282309966f52b57c94684aa08f468f2023c8e5a61044327ad0cdb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726852104355492838,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kpbhk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dfd9f1a-148c-4dba-884a-8618b74f82d0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5054778f39bbb435a191b7cbdabfca614d8b6d64b7bce122294d50415d601787,PodSandboxId:6fda3c09e12fee3d9bfbdf163c989e0242d8464e853282bca12154b468cf1a1d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726852104287014426,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gcvg4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 899b2b8c-90
09-46c0-816b-781e85eb8b19,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8792a3b1249ffb1a157682b07dec9a0edf11b81c44e53ea068e0fc6f4548be22,PodSandboxId:ed014d23a111faa2c65032e35b2ef554b33cd4cb8eddbc89e3b77efabb53a5ce,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17268520
92541411891,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6clt2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d73a0817-d84f-4269-9de0-1532287a07db,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4b462c3efaa10b1790ebacd5fe8845d6eb91643cb6db9b2830f33afe1744d57,PodSandboxId:1971096e9fdaaef4dfb856b23a044da810f3ea4c5d05511c7fc53105be9d664b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726852092275090369,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-52r49,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d1124bd-e7cb-4239-a29d-c1d5b8870aff,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a56cd54bb369a6e7651ec4ac3a33220fcfa19226c4ca75ab535aeddbf2811a9,PodSandboxId:bd7dad5ca0acddfe976ec11fd8f4a2cd641ca2b663a812a1209683ee830b901e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726852084813198134,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a641242a9f304152af48553fabd7a110,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b48cf1f03207bb258f12f5506b5e60fd4bb742d7b2ced222664cc2b996ff15d,PodSandboxId:f3f5771528b9cb4aa1d625ba84f09a7820fe3964ceb281b04f73fe5e13f6f894,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726852081654759731,Labels:map[string]string{io.kubernetes.container.
name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fa5fbf3465d3154576a11474b7b8548,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f5eb92cf36b033e765f0681f8a6634251dfd6dc0db3410efd3ef6e8580a8b2f,PodSandboxId:b0a0c7068266ae4cdca9368f9606704a9cf02c9b4e905e2d6c179562e88cf8ba,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726852081628521436,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bce625d942177d9274bb431f7a7012b2,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db80f5e2505940b81bc20c39e98d873b35eff2e71aac748d8b406e21f5435fca,PodSandboxId:77a9434f5f03e0fce6c021d0cb97ce2bbe8dbb697371c561907dc8656adca49c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726852081504080347,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.
kubernetes.pod.name: kube-scheduler-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 086260df87dea88f53b3b3ca08d61864,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e70d74afe0f7f7c1e0f084aba99e9c488a1f999e94866b12072fdcbc24b0dad7,PodSandboxId:74a0a0888b0f65cfdbc3321666aaf8f377feaae2a4e3c132c5ed40ccd4468689,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726852081538334631,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-135993,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f0e3932935569a5c49edd0da5a87eba,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7493a184-a6e4-4324-b0b7-9e1d8f95aba2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:14:27 ha-135993 crio[661]: time="2024-09-20 17:14:27.045480690Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4ff62ec0-b92c-4d8b-b3a2-59bb3527ea0b name=/runtime.v1.RuntimeService/Version
	Sep 20 17:14:27 ha-135993 crio[661]: time="2024-09-20 17:14:27.045575730Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4ff62ec0-b92c-4d8b-b3a2-59bb3527ea0b name=/runtime.v1.RuntimeService/Version
	Sep 20 17:14:27 ha-135993 crio[661]: time="2024-09-20 17:14:27.046514013Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1dccc01d-c42b-475f-9e5a-66f437202438 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 17:14:27 ha-135993 crio[661]: time="2024-09-20 17:14:27.046970839Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726852467046946990,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1dccc01d-c42b-475f-9e5a-66f437202438 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 17:14:27 ha-135993 crio[661]: time="2024-09-20 17:14:27.047482959Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3f95b512-4452-423a-9e76-5c2d9f9f3b00 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:14:27 ha-135993 crio[661]: time="2024-09-20 17:14:27.047547959Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3f95b512-4452-423a-9e76-5c2d9f9f3b00 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:14:27 ha-135993 crio[661]: time="2024-09-20 17:14:27.048455945Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d2a30264a8299336cfb5a4338bab42f8ef663f285ffa7fcb290de88635658a97,PodSandboxId:afa282bba6347297904c2a6423e39984da1a95a2caf8b23dd6857518cc0bb05d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726852250916004479,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-df429,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ef8ed1eb-a7c6-4c49-9e46-924bad4d9577,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36f3e8a4356ff70d6ac1d50a79bc078c32683201ca9c0787097b28f3417fdc90,PodSandboxId:c1cd70ce60a8324767003f8d8ab2c4e9bbafba1edc0b62b8ad54d09201e8b82c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726852104314815295,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57137bee-9a7b-4659-a855-0da82d137cb0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c668f6376655011c61ac0bac2456f3e751f265e08a925a427f7ecfca3d54d97,PodSandboxId:6e8ccc1edc7282309966f52b57c94684aa08f468f2023c8e5a61044327ad0cdb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726852104355492838,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kpbhk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dfd9f1a-148c-4dba-884a-8618b74f82d0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5054778f39bbb435a191b7cbdabfca614d8b6d64b7bce122294d50415d601787,PodSandboxId:6fda3c09e12fee3d9bfbdf163c989e0242d8464e853282bca12154b468cf1a1d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726852104287014426,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gcvg4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 899b2b8c-90
09-46c0-816b-781e85eb8b19,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8792a3b1249ffb1a157682b07dec9a0edf11b81c44e53ea068e0fc6f4548be22,PodSandboxId:ed014d23a111faa2c65032e35b2ef554b33cd4cb8eddbc89e3b77efabb53a5ce,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17268520
92541411891,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6clt2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d73a0817-d84f-4269-9de0-1532287a07db,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4b462c3efaa10b1790ebacd5fe8845d6eb91643cb6db9b2830f33afe1744d57,PodSandboxId:1971096e9fdaaef4dfb856b23a044da810f3ea4c5d05511c7fc53105be9d664b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726852092275090369,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-52r49,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d1124bd-e7cb-4239-a29d-c1d5b8870aff,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a56cd54bb369a6e7651ec4ac3a33220fcfa19226c4ca75ab535aeddbf2811a9,PodSandboxId:bd7dad5ca0acddfe976ec11fd8f4a2cd641ca2b663a812a1209683ee830b901e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726852084813198134,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a641242a9f304152af48553fabd7a110,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b48cf1f03207bb258f12f5506b5e60fd4bb742d7b2ced222664cc2b996ff15d,PodSandboxId:f3f5771528b9cb4aa1d625ba84f09a7820fe3964ceb281b04f73fe5e13f6f894,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726852081654759731,Labels:map[string]string{io.kubernetes.container.
name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fa5fbf3465d3154576a11474b7b8548,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f5eb92cf36b033e765f0681f8a6634251dfd6dc0db3410efd3ef6e8580a8b2f,PodSandboxId:b0a0c7068266ae4cdca9368f9606704a9cf02c9b4e905e2d6c179562e88cf8ba,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726852081628521436,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bce625d942177d9274bb431f7a7012b2,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db80f5e2505940b81bc20c39e98d873b35eff2e71aac748d8b406e21f5435fca,PodSandboxId:77a9434f5f03e0fce6c021d0cb97ce2bbe8dbb697371c561907dc8656adca49c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726852081504080347,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.
kubernetes.pod.name: kube-scheduler-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 086260df87dea88f53b3b3ca08d61864,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e70d74afe0f7f7c1e0f084aba99e9c488a1f999e94866b12072fdcbc24b0dad7,PodSandboxId:74a0a0888b0f65cfdbc3321666aaf8f377feaae2a4e3c132c5ed40ccd4468689,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726852081538334631,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-135993,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f0e3932935569a5c49edd0da5a87eba,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3f95b512-4452-423a-9e76-5c2d9f9f3b00 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:14:27 ha-135993 crio[661]: time="2024-09-20 17:14:27.088433523Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e24596f5-5127-44ec-be3c-076646e65d79 name=/runtime.v1.RuntimeService/Version
	Sep 20 17:14:27 ha-135993 crio[661]: time="2024-09-20 17:14:27.088507667Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e24596f5-5127-44ec-be3c-076646e65d79 name=/runtime.v1.RuntimeService/Version
	Sep 20 17:14:27 ha-135993 crio[661]: time="2024-09-20 17:14:27.089345610Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f0897fe1-21eb-4595-adb2-252a8a584f17 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 17:14:27 ha-135993 crio[661]: time="2024-09-20 17:14:27.089791160Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726852467089768298,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f0897fe1-21eb-4595-adb2-252a8a584f17 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 17:14:27 ha-135993 crio[661]: time="2024-09-20 17:14:27.090351421Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=97c99f58-74d6-495d-a4d6-71e0acd31808 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:14:27 ha-135993 crio[661]: time="2024-09-20 17:14:27.090410049Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=97c99f58-74d6-495d-a4d6-71e0acd31808 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:14:27 ha-135993 crio[661]: time="2024-09-20 17:14:27.090701335Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d2a30264a8299336cfb5a4338bab42f8ef663f285ffa7fcb290de88635658a97,PodSandboxId:afa282bba6347297904c2a6423e39984da1a95a2caf8b23dd6857518cc0bb05d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726852250916004479,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-df429,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ef8ed1eb-a7c6-4c49-9e46-924bad4d9577,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36f3e8a4356ff70d6ac1d50a79bc078c32683201ca9c0787097b28f3417fdc90,PodSandboxId:c1cd70ce60a8324767003f8d8ab2c4e9bbafba1edc0b62b8ad54d09201e8b82c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726852104314815295,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57137bee-9a7b-4659-a855-0da82d137cb0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c668f6376655011c61ac0bac2456f3e751f265e08a925a427f7ecfca3d54d97,PodSandboxId:6e8ccc1edc7282309966f52b57c94684aa08f468f2023c8e5a61044327ad0cdb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726852104355492838,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kpbhk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dfd9f1a-148c-4dba-884a-8618b74f82d0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5054778f39bbb435a191b7cbdabfca614d8b6d64b7bce122294d50415d601787,PodSandboxId:6fda3c09e12fee3d9bfbdf163c989e0242d8464e853282bca12154b468cf1a1d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726852104287014426,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gcvg4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 899b2b8c-90
09-46c0-816b-781e85eb8b19,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8792a3b1249ffb1a157682b07dec9a0edf11b81c44e53ea068e0fc6f4548be22,PodSandboxId:ed014d23a111faa2c65032e35b2ef554b33cd4cb8eddbc89e3b77efabb53a5ce,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17268520
92541411891,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6clt2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d73a0817-d84f-4269-9de0-1532287a07db,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4b462c3efaa10b1790ebacd5fe8845d6eb91643cb6db9b2830f33afe1744d57,PodSandboxId:1971096e9fdaaef4dfb856b23a044da810f3ea4c5d05511c7fc53105be9d664b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726852092275090369,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-52r49,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d1124bd-e7cb-4239-a29d-c1d5b8870aff,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a56cd54bb369a6e7651ec4ac3a33220fcfa19226c4ca75ab535aeddbf2811a9,PodSandboxId:bd7dad5ca0acddfe976ec11fd8f4a2cd641ca2b663a812a1209683ee830b901e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726852084813198134,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a641242a9f304152af48553fabd7a110,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b48cf1f03207bb258f12f5506b5e60fd4bb742d7b2ced222664cc2b996ff15d,PodSandboxId:f3f5771528b9cb4aa1d625ba84f09a7820fe3964ceb281b04f73fe5e13f6f894,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726852081654759731,Labels:map[string]string{io.kubernetes.container.
name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fa5fbf3465d3154576a11474b7b8548,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f5eb92cf36b033e765f0681f8a6634251dfd6dc0db3410efd3ef6e8580a8b2f,PodSandboxId:b0a0c7068266ae4cdca9368f9606704a9cf02c9b4e905e2d6c179562e88cf8ba,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726852081628521436,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bce625d942177d9274bb431f7a7012b2,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db80f5e2505940b81bc20c39e98d873b35eff2e71aac748d8b406e21f5435fca,PodSandboxId:77a9434f5f03e0fce6c021d0cb97ce2bbe8dbb697371c561907dc8656adca49c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726852081504080347,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.
kubernetes.pod.name: kube-scheduler-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 086260df87dea88f53b3b3ca08d61864,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e70d74afe0f7f7c1e0f084aba99e9c488a1f999e94866b12072fdcbc24b0dad7,PodSandboxId:74a0a0888b0f65cfdbc3321666aaf8f377feaae2a4e3c132c5ed40ccd4468689,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726852081538334631,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-135993,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f0e3932935569a5c49edd0da5a87eba,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=97c99f58-74d6-495d-a4d6-71e0acd31808 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d2a30264a8299       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   afa282bba6347       busybox-7dff88458-df429
	7c668f6376655       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   6e8ccc1edc728       coredns-7c65d6cfc9-kpbhk
	36f3e8a4356ff       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   c1cd70ce60a83       storage-provisioner
	5054778f39bbb       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   6fda3c09e12fe       coredns-7c65d6cfc9-gcvg4
	8792a3b1249ff       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      6 minutes ago       Running             kindnet-cni               0                   ed014d23a111f       kindnet-6clt2
	e4b462c3efaa1       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      6 minutes ago       Running             kube-proxy                0                   1971096e9fdaa       kube-proxy-52r49
	1a56cd54bb369       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago       Running             kube-vip                  0                   bd7dad5ca0acd       kube-vip-ha-135993
	2b48cf1f03207       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      6 minutes ago       Running             kube-controller-manager   0                   f3f5771528b9c       kube-controller-manager-ha-135993
	1f5eb92cf36b0       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      6 minutes ago       Running             kube-apiserver            0                   b0a0c7068266a       kube-apiserver-ha-135993
	e70d74afe0f7f       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   74a0a0888b0f6       etcd-ha-135993
	db80f5e250594       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      6 minutes ago       Running             kube-scheduler            0                   77a9434f5f03e       kube-scheduler-ha-135993
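	A container listing like the table above can also be pulled directly on the node with the CRI CLI. A minimal sketch, assuming crictl is installed and pointed at the CRI-O socket (the endpoint flag below is illustrative, not taken from the test harness):
	
	    # list all CRI-O managed containers on the node, including exited ones
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a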
	
	
	==> coredns [5054778f39bbb435a191b7cbdabfca614d8b6d64b7bce122294d50415d601787] <==
	[INFO] 10.244.0.4:37855 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001838356s
	[INFO] 10.244.0.4:49834 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000062858s
	[INFO] 10.244.0.4:37202 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001240214s
	[INFO] 10.244.0.4:56343 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000095387s
	[INFO] 10.244.0.4:41974 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000080526s
	[INFO] 10.244.2.2:50089 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000170402s
	[INFO] 10.244.2.2:41205 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001201877s
	[INFO] 10.244.2.2:49094 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000154615s
	[INFO] 10.244.2.2:54226 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000116561s
	[INFO] 10.244.2.2:56885 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000137064s
	[INFO] 10.244.1.2:43199 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000133082s
	[INFO] 10.244.1.2:54300 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000122573s
	[INFO] 10.244.1.2:57535 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000095892s
	[INFO] 10.244.1.2:45845 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000088385s
	[INFO] 10.244.0.4:53452 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000193594s
	[INFO] 10.244.0.4:46571 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000075164s
	[INFO] 10.244.2.2:44125 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000166147s
	[INFO] 10.244.2.2:59364 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000113432s
	[INFO] 10.244.2.2:54562 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000112311s
	[INFO] 10.244.1.2:60066 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132637s
	[INFO] 10.244.1.2:43717 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00017413s
	[INFO] 10.244.1.2:51684 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000156522s
	[INFO] 10.244.0.4:56213 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000141144s
	[INFO] 10.244.2.2:56175 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000117658s
	[INFO] 10.244.2.2:59810 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000111868s
	
	
	==> coredns [7c668f6376655011c61ac0bac2456f3e751f265e08a925a427f7ecfca3d54d97] <==
	[INFO] 10.244.0.4:48619 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00021775s
	[INFO] 10.244.0.4:46660 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000082726s
	[INFO] 10.244.2.2:38551 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.001366629s
	[INFO] 10.244.2.2:52956 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001396555s
	[INFO] 10.244.1.2:37231 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000279388s
	[INFO] 10.244.1.2:48508 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000280908s
	[INFO] 10.244.1.2:47714 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004766702s
	[INFO] 10.244.1.2:42041 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000169898s
	[INFO] 10.244.1.2:35115 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000212804s
	[INFO] 10.244.1.2:39956 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000247275s
	[INFO] 10.244.0.4:46191 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134745s
	[INFO] 10.244.0.4:49235 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000135262s
	[INFO] 10.244.0.4:33483 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000051965s
	[INFO] 10.244.2.2:40337 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000151683s
	[INFO] 10.244.2.2:54318 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001827239s
	[INFO] 10.244.2.2:58127 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000121998s
	[INFO] 10.244.0.4:54582 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000104228s
	[INFO] 10.244.0.4:57447 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000174115s
	[INFO] 10.244.2.2:39583 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000117382s
	[INFO] 10.244.1.2:55713 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00021321s
	[INFO] 10.244.0.4:57049 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000099997s
	[INFO] 10.244.0.4:39453 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000227319s
	[INFO] 10.244.0.4:46666 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000102501s
	[INFO] 10.244.2.2:49743 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000159057s
	[INFO] 10.244.2.2:55499 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000197724s
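	The CoreDNS query logs above come from the two coredns pods listed in the container table. A minimal sketch for retrieving them from the cluster, assuming the kubectl context matches the ha-135993 profile name (the pod name is taken from this log; the exact invocation is illustrative):
	
	    # dump the query log of one CoreDNS replica
	    kubectl --context ha-135993 -n kube-system logs coredns-7c65d6cfc9-kpbhk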
	
	
	==> describe nodes <==
	Name:               ha-135993
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-135993
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0626f22cf0d915d75e291a5bce701f94395056e1
	                    minikube.k8s.io/name=ha-135993
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_20T17_08_08_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 17:08:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-135993
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 17:14:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 17:11:12 +0000   Fri, 20 Sep 2024 17:08:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 17:11:12 +0000   Fri, 20 Sep 2024 17:08:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 17:11:12 +0000   Fri, 20 Sep 2024 17:08:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 17:11:12 +0000   Fri, 20 Sep 2024 17:08:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.60
	  Hostname:    ha-135993
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6e83ceee6b834466a3a10733ff3c06b4
	  System UUID:                6e83ceee-6b83-4466-a3a1-0733ff3c06b4
	  Boot ID:                    ddcdaa90-2381-4c26-932e-b18d04f91d88
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-df429              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m41s
	  kube-system                 coredns-7c65d6cfc9-gcvg4             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m15s
	  kube-system                 coredns-7c65d6cfc9-kpbhk             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m15s
	  kube-system                 etcd-ha-135993                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m20s
	  kube-system                 kindnet-6clt2                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m16s
	  kube-system                 kube-apiserver-ha-135993             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m20s
	  kube-system                 kube-controller-manager-ha-135993    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m20s
	  kube-system                 kube-proxy-52r49                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m16s
	  kube-system                 kube-scheduler-ha-135993             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m20s
	  kube-system                 kube-vip-ha-135993                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m22s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m14s  kube-proxy       
	  Normal  Starting                 6m20s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m20s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m20s  kubelet          Node ha-135993 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m20s  kubelet          Node ha-135993 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m20s  kubelet          Node ha-135993 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m16s  node-controller  Node ha-135993 event: Registered Node ha-135993 in Controller
	  Normal  NodeReady                6m4s   kubelet          Node ha-135993 status is now: NodeReady
	  Normal  RegisteredNode           5m16s  node-controller  Node ha-135993 event: Registered Node ha-135993 in Controller
	  Normal  RegisteredNode           4m1s   node-controller  Node ha-135993 event: Registered Node ha-135993 in Controller
	
	
	Name:               ha-135993-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-135993-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0626f22cf0d915d75e291a5bce701f94395056e1
	                    minikube.k8s.io/name=ha-135993
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_20T17_09_05_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 17:09:03 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-135993-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 17:11:56 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 20 Sep 2024 17:11:05 +0000   Fri, 20 Sep 2024 17:12:36 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 20 Sep 2024 17:11:05 +0000   Fri, 20 Sep 2024 17:12:36 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 20 Sep 2024 17:11:05 +0000   Fri, 20 Sep 2024 17:12:36 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 20 Sep 2024 17:11:05 +0000   Fri, 20 Sep 2024 17:12:36 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.227
	  Hostname:    ha-135993-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 50c529298e8f4fbb9207cda8fc4b8abe
	  System UUID:                50c52929-8e8f-4fbb-9207-cda8fc4b8abe
	  Boot ID:                    7739b1d1-ac71-4753-b570-c987dc1deaff
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-cw8r4                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m41s
	  kube-system                 etcd-ha-135993-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m22s
	  kube-system                 kindnet-5m4r8                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m24s
	  kube-system                 kube-apiserver-ha-135993-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m22s
	  kube-system                 kube-controller-manager-ha-135993-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m17s
	  kube-system                 kube-proxy-z6xqt                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m24s
	  kube-system                 kube-scheduler-ha-135993-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m22s
	  kube-system                 kube-vip-ha-135993-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m19s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m20s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  5m25s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  CIDRAssignmentFailed     5m24s                  cidrAllocator    Node ha-135993-m02 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientMemory  5m24s (x8 over 5m25s)  kubelet          Node ha-135993-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m24s (x8 over 5m25s)  kubelet          Node ha-135993-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m24s (x7 over 5m25s)  kubelet          Node ha-135993-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m21s                  node-controller  Node ha-135993-m02 event: Registered Node ha-135993-m02 in Controller
	  Normal  RegisteredNode           5m16s                  node-controller  Node ha-135993-m02 event: Registered Node ha-135993-m02 in Controller
	  Normal  RegisteredNode           4m1s                   node-controller  Node ha-135993-m02 event: Registered Node ha-135993-m02 in Controller
	  Normal  NodeNotReady             111s                   node-controller  Node ha-135993-m02 status is now: NodeNotReady
	
	
	Name:               ha-135993-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-135993-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0626f22cf0d915d75e291a5bce701f94395056e1
	                    minikube.k8s.io/name=ha-135993
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_20T17_10_21_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 17:10:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-135993-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 17:14:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 17:11:19 +0000   Fri, 20 Sep 2024 17:10:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 17:11:19 +0000   Fri, 20 Sep 2024 17:10:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 17:11:19 +0000   Fri, 20 Sep 2024 17:10:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 17:11:19 +0000   Fri, 20 Sep 2024 17:10:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.133
	  Hostname:    ha-135993-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a16666848f8545f6bbb9419c97d0a0cd
	  System UUID:                a1666684-8f85-45f6-bbb9-419c97d0a0cd
	  Boot ID:                    fe050582-04ee-4cce-a278-cfc26db3e639
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-ksx56                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m41s
	  kube-system                 etcd-ha-135993-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m7s
	  kube-system                 kindnet-hcqf8                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m9s
	  kube-system                 kube-apiserver-ha-135993-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 kube-controller-manager-ha-135993-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 kube-proxy-45c9m                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 kube-scheduler-ha-135993-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 kube-vip-ha-135993-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m5s                 kube-proxy       
	  Normal  CIDRAssignmentFailed     4m9s                 cidrAllocator    Node ha-135993-m03 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientMemory  4m9s (x8 over 4m9s)  kubelet          Node ha-135993-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m9s (x8 over 4m9s)  kubelet          Node ha-135993-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m9s (x7 over 4m9s)  kubelet          Node ha-135993-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m6s                 node-controller  Node ha-135993-m03 event: Registered Node ha-135993-m03 in Controller
	  Normal  RegisteredNode           4m6s                 node-controller  Node ha-135993-m03 event: Registered Node ha-135993-m03 in Controller
	  Normal  RegisteredNode           4m1s                 node-controller  Node ha-135993-m03 event: Registered Node ha-135993-m03 in Controller
	
	
	Name:               ha-135993-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-135993-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0626f22cf0d915d75e291a5bce701f94395056e1
	                    minikube.k8s.io/name=ha-135993
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_20T17_11_21_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 17:11:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-135993-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 17:14:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 17:11:51 +0000   Fri, 20 Sep 2024 17:11:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 17:11:51 +0000   Fri, 20 Sep 2024 17:11:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 17:11:51 +0000   Fri, 20 Sep 2024 17:11:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 17:11:51 +0000   Fri, 20 Sep 2024 17:11:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.101
	  Hostname:    ha-135993-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 16a282b7a18241dba73a5c13e70f4f98
	  System UUID:                16a282b7-a182-41db-a73a-5c13e70f4f98
	  Boot ID:                    57ea2493-1758-4be8-813f-bc554e901359
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.4.0/24
	PodCIDRs:                     10.244.4.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-88sbs       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m6s
	  kube-system                 kube-proxy-2q8mx    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m1s                 kube-proxy       
	  Normal  RegisteredNode           3m6s                 node-controller  Node ha-135993-m04 event: Registered Node ha-135993-m04 in Controller
	  Normal  RegisteredNode           3m6s                 node-controller  Node ha-135993-m04 event: Registered Node ha-135993-m04 in Controller
	  Normal  CIDRAssignmentFailed     3m6s                 cidrAllocator    Node ha-135993-m04 status is now: CIDRAssignmentFailed
	  Normal  CIDRAssignmentFailed     3m6s                 cidrAllocator    Node ha-135993-m04 status is now: CIDRAssignmentFailed
	  Normal  RegisteredNode           3m6s                 node-controller  Node ha-135993-m04 event: Registered Node ha-135993-m04 in Controller
	  Normal  NodeHasSufficientMemory  3m6s (x2 over 3m7s)  kubelet          Node ha-135993-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m6s (x2 over 3m7s)  kubelet          Node ha-135993-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m6s (x2 over 3m7s)  kubelet          Node ha-135993-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m6s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m47s                kubelet          Node ha-135993-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Sep20 17:07] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051754] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036812] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.151587] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.924820] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.564513] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.722394] systemd-fstab-generator[585]: Ignoring "noauto" option for root device
	[  +0.057997] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064240] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.169257] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.120861] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.270464] systemd-fstab-generator[652]: Ignoring "noauto" option for root device
	[  +4.125709] systemd-fstab-generator[747]: Ignoring "noauto" option for root device
	[Sep20 17:08] systemd-fstab-generator[882]: Ignoring "noauto" option for root device
	[  +0.057676] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.984086] systemd-fstab-generator[1298]: Ignoring "noauto" option for root device
	[  +0.083524] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.134244] kauditd_printk_skb: 33 callbacks suppressed
	[ +11.488548] kauditd_printk_skb: 26 callbacks suppressed
	[Sep20 17:09] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [e70d74afe0f7f7c1e0f084aba99e9c488a1f999e94866b12072fdcbc24b0dad7] <==
	{"level":"warn","ts":"2024-09-20T17:14:27.320623Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"a8909dfe06e7dd47","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T17:14:27.348674Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"a8909dfe06e7dd47","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T17:14:27.355082Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"a8909dfe06e7dd47","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T17:14:27.355170Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"a8909dfe06e7dd47","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T17:14:27.361246Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"a8909dfe06e7dd47","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T17:14:27.372705Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"a8909dfe06e7dd47","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T17:14:27.379907Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"a8909dfe06e7dd47","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T17:14:27.387161Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"a8909dfe06e7dd47","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T17:14:27.390604Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"a8909dfe06e7dd47","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T17:14:27.393784Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"a8909dfe06e7dd47","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T17:14:27.399512Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"a8909dfe06e7dd47","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T17:14:27.406985Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"a8909dfe06e7dd47","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T17:14:27.413632Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"a8909dfe06e7dd47","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T17:14:27.416852Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"a8909dfe06e7dd47","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T17:14:27.419798Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"a8909dfe06e7dd47","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T17:14:27.429484Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"a8909dfe06e7dd47","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T17:14:27.436336Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"a8909dfe06e7dd47","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T17:14:27.444149Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"a8909dfe06e7dd47","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T17:14:27.448642Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"a8909dfe06e7dd47","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T17:14:27.451349Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"a8909dfe06e7dd47","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T17:14:27.454887Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"a8909dfe06e7dd47","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T17:14:27.455083Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"a8909dfe06e7dd47","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T17:14:27.460879Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"a8909dfe06e7dd47","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T17:14:27.467151Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"a8909dfe06e7dd47","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T17:14:27.481887Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"a8909dfe06e7dd47","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 17:14:27 up 6 min,  0 users,  load average: 0.12, 0.26, 0.16
	Linux ha-135993 5.10.207 #1 SMP Fri Sep 20 03:13:51 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [8792a3b1249ffb1a157682b07dec9a0edf11b81c44e53ea068e0fc6f4548be22] <==
	I0920 17:13:53.582999       1 main.go:322] Node ha-135993-m04 has CIDR [10.244.4.0/24] 
	I0920 17:14:03.590376       1 main.go:295] Handling node with IPs: map[192.168.39.60:{}]
	I0920 17:14:03.590443       1 main.go:299] handling current node
	I0920 17:14:03.590471       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0920 17:14:03.590480       1 main.go:322] Node ha-135993-m02 has CIDR [10.244.1.0/24] 
	I0920 17:14:03.590676       1 main.go:295] Handling node with IPs: map[192.168.39.133:{}]
	I0920 17:14:03.590706       1 main.go:322] Node ha-135993-m03 has CIDR [10.244.2.0/24] 
	I0920 17:14:03.590816       1 main.go:295] Handling node with IPs: map[192.168.39.101:{}]
	I0920 17:14:03.590843       1 main.go:322] Node ha-135993-m04 has CIDR [10.244.4.0/24] 
	I0920 17:14:13.583195       1 main.go:295] Handling node with IPs: map[192.168.39.60:{}]
	I0920 17:14:13.583335       1 main.go:299] handling current node
	I0920 17:14:13.583420       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0920 17:14:13.583466       1 main.go:322] Node ha-135993-m02 has CIDR [10.244.1.0/24] 
	I0920 17:14:13.583620       1 main.go:295] Handling node with IPs: map[192.168.39.133:{}]
	I0920 17:14:13.583644       1 main.go:322] Node ha-135993-m03 has CIDR [10.244.2.0/24] 
	I0920 17:14:13.583702       1 main.go:295] Handling node with IPs: map[192.168.39.101:{}]
	I0920 17:14:13.583720       1 main.go:322] Node ha-135993-m04 has CIDR [10.244.4.0/24] 
	I0920 17:14:23.591621       1 main.go:295] Handling node with IPs: map[192.168.39.60:{}]
	I0920 17:14:23.591791       1 main.go:299] handling current node
	I0920 17:14:23.591830       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0920 17:14:23.591879       1 main.go:322] Node ha-135993-m02 has CIDR [10.244.1.0/24] 
	I0920 17:14:23.592101       1 main.go:295] Handling node with IPs: map[192.168.39.133:{}]
	I0920 17:14:23.592144       1 main.go:322] Node ha-135993-m03 has CIDR [10.244.2.0/24] 
	I0920 17:14:23.592330       1 main.go:295] Handling node with IPs: map[192.168.39.101:{}]
	I0920 17:14:23.592360       1 main.go:322] Node ha-135993-m04 has CIDR [10.244.4.0/24] 
	
	
	==> kube-apiserver [1f5eb92cf36b033e765f0681f8a6634251dfd6dc0db3410efd3ef6e8580a8b2f] <==
	I0920 17:08:07.820550       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0920 17:08:07.842885       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0920 17:08:07.862886       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0920 17:08:11.804724       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0920 17:08:12.220544       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0920 17:09:03.875074       1 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E0920 17:09:03.875307       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 9.525µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E0920 17:09:03.876629       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E0920 17:09:03.877931       1 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E0920 17:09:03.879420       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="4.477542ms" method="POST" path="/api/v1/namespaces/kube-system/events" result=null
	E0920 17:10:52.052815       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53414: use of closed network connection
	E0920 17:10:52.239817       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53432: use of closed network connection
	E0920 17:10:52.430950       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53454: use of closed network connection
	E0920 17:10:52.630448       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53478: use of closed network connection
	E0920 17:10:52.817389       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53506: use of closed network connection
	E0920 17:10:52.989544       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53526: use of closed network connection
	E0920 17:10:53.190104       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53554: use of closed network connection
	E0920 17:10:53.362503       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53570: use of closed network connection
	E0920 17:10:53.531925       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53576: use of closed network connection
	E0920 17:10:53.828718       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53614: use of closed network connection
	E0920 17:10:53.999814       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53638: use of closed network connection
	E0920 17:10:54.192818       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53650: use of closed network connection
	E0920 17:10:54.370009       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53670: use of closed network connection
	E0920 17:10:54.550881       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53696: use of closed network connection
	E0920 17:10:54.730661       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53720: use of closed network connection
	
	
	==> kube-controller-manager [2b48cf1f03207bb258f12f5506b5e60fd4bb742d7b2ced222664cc2b996ff15d] <==
	E0920 17:11:20.808313       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-vs5cl failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-vs5cl\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E0920 17:11:20.822359       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-vs5cl failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-vs5cl\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0920 17:11:21.218667       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-135993-m04\" does not exist"
	I0920 17:11:21.266531       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-135993-m04" podCIDRs=["10.244.4.0/24"]
	I0920 17:11:21.268323       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-135993-m04"
	I0920 17:11:21.270125       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-135993-m04"
	I0920 17:11:21.352675       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-135993-m04"
	I0920 17:11:21.402439       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-135993-m04"
	I0920 17:11:21.449183       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-135993-m04"
	I0920 17:11:21.529576       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-135993-m04"
	I0920 17:11:21.640943       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-135993-m04"
	I0920 17:11:21.919088       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-135993-m04"
	I0920 17:11:31.476194       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-135993-m04"
	I0920 17:11:40.764702       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-135993-m04"
	I0920 17:11:40.765063       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-135993-m04"
	I0920 17:11:40.780191       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-135993-m04"
	I0920 17:11:41.285623       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-135993-m04"
	I0920 17:11:51.690173       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-135993-m04"
	I0920 17:12:36.378745       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-135993-m04"
	I0920 17:12:36.380639       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-135993-m02"
	I0920 17:12:36.411090       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-135993-m02"
	I0920 17:12:36.576962       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="32.12946ms"
	I0920 17:12:36.577066       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="57.179µs"
	I0920 17:12:36.637966       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-135993-m02"
	I0920 17:12:41.581669       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-135993-m02"
	
	
	==> kube-proxy [e4b462c3efaa10b1790ebacd5fe8845d6eb91643cb6db9b2830f33afe1744d57] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0920 17:08:12.692616       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0920 17:08:12.737645       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.60"]
	E0920 17:08:12.737744       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 17:08:12.838388       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0920 17:08:12.838464       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0920 17:08:12.838491       1 server_linux.go:169] "Using iptables Proxier"
	I0920 17:08:12.844425       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 17:08:12.846303       1 server.go:483] "Version info" version="v1.31.1"
	I0920 17:08:12.846331       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 17:08:12.851490       1 config.go:199] "Starting service config controller"
	I0920 17:08:12.851939       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 17:08:12.853474       1 config.go:105] "Starting endpoint slice config controller"
	I0920 17:08:12.855057       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 17:08:12.854368       1 config.go:328] "Starting node config controller"
	I0920 17:08:12.883844       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 17:08:12.954338       1 shared_informer.go:320] Caches are synced for service config
	I0920 17:08:12.955452       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0920 17:08:12.985151       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [db80f5e2505940b81bc20c39e98d873b35eff2e71aac748d8b406e21f5435fca] <==
	W0920 17:08:05.980455       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0920 17:08:05.980516       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0920 17:08:05.980456       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0920 17:08:07.504058       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0920 17:10:18.405414       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-hcqf8\": pod kindnet-hcqf8 is already assigned to node \"ha-135993-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-hcqf8" node="ha-135993-m03"
	E0920 17:10:18.405548       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-45c9m\": pod kube-proxy-45c9m is already assigned to node \"ha-135993-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-45c9m" node="ha-135993-m03"
	E0920 17:10:18.409425       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-45c9m\": pod kube-proxy-45c9m is already assigned to node \"ha-135993-m03\"" pod="kube-system/kube-proxy-45c9m"
	E0920 17:10:18.411700       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-hcqf8\": pod kindnet-hcqf8 is already assigned to node \"ha-135993-m03\"" pod="kube-system/kindnet-hcqf8"
	I0920 17:10:18.416087       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-hcqf8" node="ha-135993-m03"
	E0920 17:10:46.972562       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-ksx56\": pod busybox-7dff88458-ksx56 is already assigned to node \"ha-135993-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-ksx56" node="ha-135993-m03"
	E0920 17:10:46.972640       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod f499b34f-4e98-4ebc-90b5-90b1b13d26c7(default/busybox-7dff88458-ksx56) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-ksx56"
	E0920 17:10:46.972665       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-ksx56\": pod busybox-7dff88458-ksx56 is already assigned to node \"ha-135993-m03\"" pod="default/busybox-7dff88458-ksx56"
	I0920 17:10:46.972689       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-ksx56" node="ha-135993-m03"
	E0920 17:11:21.276134       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-w6gf8\": pod kube-proxy-w6gf8 is already assigned to node \"ha-135993-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-w6gf8" node="ha-135993-m04"
	E0920 17:11:21.276387       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 344e8822-62e5-4678-9654-381b97c31527(kube-system/kube-proxy-w6gf8) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-w6gf8"
	E0920 17:11:21.277109       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-w6gf8\": pod kube-proxy-w6gf8 is already assigned to node \"ha-135993-m04\"" pod="kube-system/kube-proxy-w6gf8"
	I0920 17:11:21.277247       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-w6gf8" node="ha-135993-m04"
	E0920 17:11:21.344572       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-n6xl6\": pod kindnet-n6xl6 is already assigned to node \"ha-135993-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-n6xl6" node="ha-135993-m04"
	E0920 17:11:21.344755       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-n6xl6\": pod kindnet-n6xl6 is already assigned to node \"ha-135993-m04\"" pod="kube-system/kindnet-n6xl6"
	E0920 17:11:21.388481       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-jfsxq\": pod kube-proxy-jfsxq is already assigned to node \"ha-135993-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-jfsxq" node="ha-135993-m04"
	E0920 17:11:21.388679       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-jfsxq\": pod kube-proxy-jfsxq is already assigned to node \"ha-135993-m04\"" pod="kube-system/kube-proxy-jfsxq"
	E0920 17:11:21.399720       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-svxp4\": pod kindnet-svxp4 is already assigned to node \"ha-135993-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-svxp4" node="ha-135993-m04"
	E0920 17:11:21.401135       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod a758ff76-3e8c-40c1-9742-2fbcddd4aa87(kube-system/kindnet-svxp4) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-svxp4"
	E0920 17:11:21.401322       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-svxp4\": pod kindnet-svxp4 is already assigned to node \"ha-135993-m04\"" pod="kube-system/kindnet-svxp4"
	I0920 17:11:21.401439       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-svxp4" node="ha-135993-m04"
	
	
	==> kubelet <==
	Sep 20 17:13:07 ha-135993 kubelet[1305]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 20 17:13:07 ha-135993 kubelet[1305]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 20 17:13:07 ha-135993 kubelet[1305]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 20 17:13:07 ha-135993 kubelet[1305]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 20 17:13:07 ha-135993 kubelet[1305]: E0920 17:13:07.854081    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726852387853510446,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:13:07 ha-135993 kubelet[1305]: E0920 17:13:07.854113    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726852387853510446,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:13:17 ha-135993 kubelet[1305]: E0920 17:13:17.855865    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726852397855363761,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:13:17 ha-135993 kubelet[1305]: E0920 17:13:17.856405    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726852397855363761,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:13:27 ha-135993 kubelet[1305]: E0920 17:13:27.859417    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726852407858772404,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:13:27 ha-135993 kubelet[1305]: E0920 17:13:27.859469    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726852407858772404,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:13:37 ha-135993 kubelet[1305]: E0920 17:13:37.861128    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726852417860446509,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:13:37 ha-135993 kubelet[1305]: E0920 17:13:37.861168    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726852417860446509,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:13:47 ha-135993 kubelet[1305]: E0920 17:13:47.864331    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726852427863997805,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:13:47 ha-135993 kubelet[1305]: E0920 17:13:47.864372    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726852427863997805,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:13:57 ha-135993 kubelet[1305]: E0920 17:13:57.866952    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726852437866556596,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:13:57 ha-135993 kubelet[1305]: E0920 17:13:57.866977    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726852437866556596,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:14:07 ha-135993 kubelet[1305]: E0920 17:14:07.772947    1305 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 20 17:14:07 ha-135993 kubelet[1305]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 20 17:14:07 ha-135993 kubelet[1305]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 20 17:14:07 ha-135993 kubelet[1305]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 20 17:14:07 ha-135993 kubelet[1305]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 20 17:14:07 ha-135993 kubelet[1305]: E0920 17:14:07.869325    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726852447868022083,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:14:07 ha-135993 kubelet[1305]: E0920 17:14:07.869353    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726852447868022083,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:14:17 ha-135993 kubelet[1305]: E0920 17:14:17.871289    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726852457870862251,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:14:17 ha-135993 kubelet[1305]: E0920 17:14:17.871679    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726852457870862251,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-135993 -n ha-135993
helpers_test.go:261: (dbg) Run:  kubectl --context ha-135993 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (5.60s)
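As a side note on reproducing the post-mortem pod check above by hand: the harness shells out to kubectl with a field selector that excludes pods in the Running phase (helpers_test.go:261). The Go sketch below mirrors that exact invocation; it is a minimal illustration only, assuming kubectl is on PATH and the ha-135993 context exists in the local kubeconfig, and is not the actual helpers_test.go implementation.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listNotRunning mirrors the post-mortem check from the transcript above:
	// it asks kubectl for every pod whose status.phase is not Running, across
	// all namespaces, and returns the pod names. Illustrative sketch only.
	func listNotRunning(context string) ([]string, error) {
		out, err := exec.Command("kubectl",
			"--context", context,
			"get", "po",
			"-o=jsonpath={.items[*].metadata.name}",
			"-A",
			"--field-selector=status.phase!=Running",
		).Output()
		if err != nil {
			return nil, fmt.Errorf("kubectl get po: %w", err)
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		pods, err := listNotRunning("ha-135993")
		if err != nil {
			fmt.Println("error:", err)
			return
		}
		if len(pods) == 0 {
			fmt.Println("all pods are Running")
			return
		}
		fmt.Println("pods not in Running phase:", pods)
	}

An empty result from this check means every pod reported status.phase=Running at the time it ran; any names it prints correspond to pods the post-mortem would flag.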

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (6.58s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-135993 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-135993 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-amd64 -p ha-135993 status -v=7 --alsologtostderr: (4.229396713s)
ha_test.go:435: status says not all three control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-135993 status -v=7 --alsologtostderr": 
ha_test.go:438: status says not all four hosts are running: args "out/minikube-linux-amd64 -p ha-135993 status -v=7 --alsologtostderr": 
ha_test.go:441: status says not all four kubelets are running: args "out/minikube-linux-amd64 -p ha-135993 status -v=7 --alsologtostderr": 
ha_test.go:444: status says not all three apiservers are running: args "out/minikube-linux-amd64 -p ha-135993 status -v=7 --alsologtostderr": 
ha_test.go:448: (dbg) Run:  kubectl get nodes
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-135993 -n ha-135993
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-135993 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-135993 logs -n 25: (1.360456776s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-135993 ssh -n                                                                 | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:11 UTC | 20 Sep 24 17:11 UTC |
	|         | ha-135993-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-135993 cp ha-135993-m03:/home/docker/cp-test.txt                              | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:11 UTC | 20 Sep 24 17:11 UTC |
	|         | ha-135993:/home/docker/cp-test_ha-135993-m03_ha-135993.txt                       |           |         |         |                     |                     |
	| ssh     | ha-135993 ssh -n                                                                 | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:11 UTC | 20 Sep 24 17:11 UTC |
	|         | ha-135993-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-135993 ssh -n ha-135993 sudo cat                                              | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:11 UTC | 20 Sep 24 17:11 UTC |
	|         | /home/docker/cp-test_ha-135993-m03_ha-135993.txt                                 |           |         |         |                     |                     |
	| cp      | ha-135993 cp ha-135993-m03:/home/docker/cp-test.txt                              | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:11 UTC | 20 Sep 24 17:11 UTC |
	|         | ha-135993-m02:/home/docker/cp-test_ha-135993-m03_ha-135993-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-135993 ssh -n                                                                 | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:11 UTC | 20 Sep 24 17:11 UTC |
	|         | ha-135993-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-135993 ssh -n ha-135993-m02 sudo cat                                          | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:11 UTC | 20 Sep 24 17:11 UTC |
	|         | /home/docker/cp-test_ha-135993-m03_ha-135993-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-135993 cp ha-135993-m03:/home/docker/cp-test.txt                              | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:11 UTC | 20 Sep 24 17:11 UTC |
	|         | ha-135993-m04:/home/docker/cp-test_ha-135993-m03_ha-135993-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-135993 ssh -n                                                                 | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:11 UTC | 20 Sep 24 17:11 UTC |
	|         | ha-135993-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-135993 ssh -n ha-135993-m04 sudo cat                                          | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:11 UTC | 20 Sep 24 17:11 UTC |
	|         | /home/docker/cp-test_ha-135993-m03_ha-135993-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-135993 cp testdata/cp-test.txt                                                | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:11 UTC | 20 Sep 24 17:11 UTC |
	|         | ha-135993-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-135993 ssh -n                                                                 | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:11 UTC | 20 Sep 24 17:11 UTC |
	|         | ha-135993-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-135993 cp ha-135993-m04:/home/docker/cp-test.txt                              | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:11 UTC | 20 Sep 24 17:11 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2672215621/001/cp-test_ha-135993-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-135993 ssh -n                                                                 | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:11 UTC | 20 Sep 24 17:11 UTC |
	|         | ha-135993-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-135993 cp ha-135993-m04:/home/docker/cp-test.txt                              | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:11 UTC | 20 Sep 24 17:11 UTC |
	|         | ha-135993:/home/docker/cp-test_ha-135993-m04_ha-135993.txt                       |           |         |         |                     |                     |
	| ssh     | ha-135993 ssh -n                                                                 | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:11 UTC | 20 Sep 24 17:11 UTC |
	|         | ha-135993-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-135993 ssh -n ha-135993 sudo cat                                              | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:11 UTC | 20 Sep 24 17:11 UTC |
	|         | /home/docker/cp-test_ha-135993-m04_ha-135993.txt                                 |           |         |         |                     |                     |
	| cp      | ha-135993 cp ha-135993-m04:/home/docker/cp-test.txt                              | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:11 UTC | 20 Sep 24 17:12 UTC |
	|         | ha-135993-m02:/home/docker/cp-test_ha-135993-m04_ha-135993-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-135993 ssh -n                                                                 | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:12 UTC | 20 Sep 24 17:12 UTC |
	|         | ha-135993-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-135993 ssh -n ha-135993-m02 sudo cat                                          | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:12 UTC | 20 Sep 24 17:12 UTC |
	|         | /home/docker/cp-test_ha-135993-m04_ha-135993-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-135993 cp ha-135993-m04:/home/docker/cp-test.txt                              | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:12 UTC | 20 Sep 24 17:12 UTC |
	|         | ha-135993-m03:/home/docker/cp-test_ha-135993-m04_ha-135993-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-135993 ssh -n                                                                 | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:12 UTC | 20 Sep 24 17:12 UTC |
	|         | ha-135993-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-135993 ssh -n ha-135993-m03 sudo cat                                          | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:12 UTC | 20 Sep 24 17:12 UTC |
	|         | /home/docker/cp-test_ha-135993-m04_ha-135993-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-135993 node stop m02 -v=7                                                     | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:12 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-135993 node start m02 -v=7                                                    | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:14 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 17:07:28
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 17:07:28.224109   27962 out.go:345] Setting OutFile to fd 1 ...
	I0920 17:07:28.224206   27962 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:07:28.224213   27962 out.go:358] Setting ErrFile to fd 2...
	I0920 17:07:28.224218   27962 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:07:28.224387   27962 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-8777/.minikube/bin
	I0920 17:07:28.224982   27962 out.go:352] Setting JSON to false
	I0920 17:07:28.225784   27962 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2991,"bootTime":1726849057,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 17:07:28.225901   27962 start.go:139] virtualization: kvm guest
	I0920 17:07:28.228074   27962 out.go:177] * [ha-135993] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 17:07:28.229408   27962 notify.go:220] Checking for updates...
	I0920 17:07:28.229444   27962 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 17:07:28.230821   27962 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 17:07:28.231979   27962 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19672-8777/kubeconfig
	I0920 17:07:28.233045   27962 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-8777/.minikube
	I0920 17:07:28.234136   27962 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 17:07:28.235151   27962 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 17:07:28.236602   27962 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 17:07:28.271877   27962 out.go:177] * Using the kvm2 driver based on user configuration
	I0920 17:07:28.273222   27962 start.go:297] selected driver: kvm2
	I0920 17:07:28.273240   27962 start.go:901] validating driver "kvm2" against <nil>
	I0920 17:07:28.273253   27962 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 17:07:28.274045   27962 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 17:07:28.274154   27962 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19672-8777/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 17:07:28.289424   27962 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 17:07:28.289473   27962 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 17:07:28.289714   27962 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 17:07:28.289743   27962 cni.go:84] Creating CNI manager for ""
	I0920 17:07:28.289789   27962 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0920 17:07:28.289814   27962 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0920 17:07:28.289902   27962 start.go:340] cluster config:
	{Name:ha-135993 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-135993 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 17:07:28.290006   27962 iso.go:125] acquiring lock: {Name:mkba95ef0488e46f622333e9f317f43def93040b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 17:07:28.291840   27962 out.go:177] * Starting "ha-135993" primary control-plane node in "ha-135993" cluster
	I0920 17:07:28.292971   27962 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 17:07:28.293012   27962 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0920 17:07:28.293022   27962 cache.go:56] Caching tarball of preloaded images
	I0920 17:07:28.293121   27962 preload.go:172] Found /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 17:07:28.293135   27962 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0920 17:07:28.293509   27962 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/config.json ...
	I0920 17:07:28.293532   27962 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/config.json: {Name:mk8c38de8f77a94cd04edafc97e1e3e5f16f67aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:07:28.293702   27962 start.go:360] acquireMachinesLock for ha-135993: {Name:mkfeedb385cf08b5d2aa00913e85815d02a180c2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 17:07:28.293739   27962 start.go:364] duration metric: took 21.191µs to acquireMachinesLock for "ha-135993"
	I0920 17:07:28.293762   27962 start.go:93] Provisioning new machine with config: &{Name:ha-135993 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-135993 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 17:07:28.293816   27962 start.go:125] createHost starting for "" (driver="kvm2")
	I0920 17:07:28.295606   27962 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0920 17:07:28.295844   27962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:07:28.295897   27962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:07:28.310515   27962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38263
	I0920 17:07:28.311021   27962 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:07:28.311565   27962 main.go:141] libmachine: Using API Version  1
	I0920 17:07:28.311587   27962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:07:28.311884   27962 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:07:28.312062   27962 main.go:141] libmachine: (ha-135993) Calling .GetMachineName
	I0920 17:07:28.312230   27962 main.go:141] libmachine: (ha-135993) Calling .DriverName
	I0920 17:07:28.312390   27962 start.go:159] libmachine.API.Create for "ha-135993" (driver="kvm2")
	I0920 17:07:28.312423   27962 client.go:168] LocalClient.Create starting
	I0920 17:07:28.312451   27962 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem
	I0920 17:07:28.312493   27962 main.go:141] libmachine: Decoding PEM data...
	I0920 17:07:28.312531   27962 main.go:141] libmachine: Parsing certificate...
	I0920 17:07:28.312583   27962 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem
	I0920 17:07:28.312603   27962 main.go:141] libmachine: Decoding PEM data...
	I0920 17:07:28.312616   27962 main.go:141] libmachine: Parsing certificate...
	I0920 17:07:28.312634   27962 main.go:141] libmachine: Running pre-create checks...
	I0920 17:07:28.312641   27962 main.go:141] libmachine: (ha-135993) Calling .PreCreateCheck
	I0920 17:07:28.313012   27962 main.go:141] libmachine: (ha-135993) Calling .GetConfigRaw
	I0920 17:07:28.313345   27962 main.go:141] libmachine: Creating machine...
	I0920 17:07:28.313358   27962 main.go:141] libmachine: (ha-135993) Calling .Create
	I0920 17:07:28.313496   27962 main.go:141] libmachine: (ha-135993) Creating KVM machine...
	I0920 17:07:28.314784   27962 main.go:141] libmachine: (ha-135993) DBG | found existing default KVM network
	I0920 17:07:28.315382   27962 main.go:141] libmachine: (ha-135993) DBG | I0920 17:07:28.315245   27985 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002211e0}
	I0920 17:07:28.315406   27962 main.go:141] libmachine: (ha-135993) DBG | created network xml: 
	I0920 17:07:28.315419   27962 main.go:141] libmachine: (ha-135993) DBG | <network>
	I0920 17:07:28.315429   27962 main.go:141] libmachine: (ha-135993) DBG |   <name>mk-ha-135993</name>
	I0920 17:07:28.315440   27962 main.go:141] libmachine: (ha-135993) DBG |   <dns enable='no'/>
	I0920 17:07:28.315450   27962 main.go:141] libmachine: (ha-135993) DBG |   
	I0920 17:07:28.315469   27962 main.go:141] libmachine: (ha-135993) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0920 17:07:28.315477   27962 main.go:141] libmachine: (ha-135993) DBG |     <dhcp>
	I0920 17:07:28.315483   27962 main.go:141] libmachine: (ha-135993) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0920 17:07:28.315496   27962 main.go:141] libmachine: (ha-135993) DBG |     </dhcp>
	I0920 17:07:28.315507   27962 main.go:141] libmachine: (ha-135993) DBG |   </ip>
	I0920 17:07:28.315519   27962 main.go:141] libmachine: (ha-135993) DBG |   
	I0920 17:07:28.315530   27962 main.go:141] libmachine: (ha-135993) DBG | </network>
	I0920 17:07:28.315542   27962 main.go:141] libmachine: (ha-135993) DBG | 
	I0920 17:07:28.320907   27962 main.go:141] libmachine: (ha-135993) DBG | trying to create private KVM network mk-ha-135993 192.168.39.0/24...
	I0920 17:07:28.387245   27962 main.go:141] libmachine: (ha-135993) DBG | private KVM network mk-ha-135993 192.168.39.0/24 created
	I0920 17:07:28.387277   27962 main.go:141] libmachine: (ha-135993) DBG | I0920 17:07:28.387214   27985 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19672-8777/.minikube
	I0920 17:07:28.387292   27962 main.go:141] libmachine: (ha-135993) Setting up store path in /home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993 ...
	I0920 17:07:28.387307   27962 main.go:141] libmachine: (ha-135993) Building disk image from file:///home/jenkins/minikube-integration/19672-8777/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso
	I0920 17:07:28.387375   27962 main.go:141] libmachine: (ha-135993) Downloading /home/jenkins/minikube-integration/19672-8777/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19672-8777/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso...
	I0920 17:07:28.647940   27962 main.go:141] libmachine: (ha-135993) DBG | I0920 17:07:28.647805   27985 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993/id_rsa...
	I0920 17:07:28.842374   27962 main.go:141] libmachine: (ha-135993) DBG | I0920 17:07:28.842220   27985 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993/ha-135993.rawdisk...
	I0920 17:07:28.842416   27962 main.go:141] libmachine: (ha-135993) DBG | Writing magic tar header
	I0920 17:07:28.842425   27962 main.go:141] libmachine: (ha-135993) DBG | Writing SSH key tar header
	I0920 17:07:28.842433   27962 main.go:141] libmachine: (ha-135993) DBG | I0920 17:07:28.842377   27985 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993 ...
	I0920 17:07:28.842562   27962 main.go:141] libmachine: (ha-135993) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993
	I0920 17:07:28.842579   27962 main.go:141] libmachine: (ha-135993) Setting executable bit set on /home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993 (perms=drwx------)
	I0920 17:07:28.842585   27962 main.go:141] libmachine: (ha-135993) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-8777/.minikube/machines
	I0920 17:07:28.842594   27962 main.go:141] libmachine: (ha-135993) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-8777/.minikube
	I0920 17:07:28.842600   27962 main.go:141] libmachine: (ha-135993) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-8777
	I0920 17:07:28.842608   27962 main.go:141] libmachine: (ha-135993) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0920 17:07:28.842615   27962 main.go:141] libmachine: (ha-135993) Setting executable bit set on /home/jenkins/minikube-integration/19672-8777/.minikube/machines (perms=drwxr-xr-x)
	I0920 17:07:28.842628   27962 main.go:141] libmachine: (ha-135993) Setting executable bit set on /home/jenkins/minikube-integration/19672-8777/.minikube (perms=drwxr-xr-x)
	I0920 17:07:28.842634   27962 main.go:141] libmachine: (ha-135993) Setting executable bit set on /home/jenkins/minikube-integration/19672-8777 (perms=drwxrwxr-x)
	I0920 17:07:28.842641   27962 main.go:141] libmachine: (ha-135993) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0920 17:07:28.842659   27962 main.go:141] libmachine: (ha-135993) DBG | Checking permissions on dir: /home/jenkins
	I0920 17:07:28.842667   27962 main.go:141] libmachine: (ha-135993) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0920 17:07:28.842678   27962 main.go:141] libmachine: (ha-135993) Creating domain...
	I0920 17:07:28.842684   27962 main.go:141] libmachine: (ha-135993) DBG | Checking permissions on dir: /home
	I0920 17:07:28.842691   27962 main.go:141] libmachine: (ha-135993) DBG | Skipping /home - not owner
	I0920 17:07:28.843894   27962 main.go:141] libmachine: (ha-135993) define libvirt domain using xml: 
	I0920 17:07:28.843929   27962 main.go:141] libmachine: (ha-135993) <domain type='kvm'>
	I0920 17:07:28.843939   27962 main.go:141] libmachine: (ha-135993)   <name>ha-135993</name>
	I0920 17:07:28.843946   27962 main.go:141] libmachine: (ha-135993)   <memory unit='MiB'>2200</memory>
	I0920 17:07:28.843953   27962 main.go:141] libmachine: (ha-135993)   <vcpu>2</vcpu>
	I0920 17:07:28.843960   27962 main.go:141] libmachine: (ha-135993)   <features>
	I0920 17:07:28.843968   27962 main.go:141] libmachine: (ha-135993)     <acpi/>
	I0920 17:07:28.843974   27962 main.go:141] libmachine: (ha-135993)     <apic/>
	I0920 17:07:28.843981   27962 main.go:141] libmachine: (ha-135993)     <pae/>
	I0920 17:07:28.844000   27962 main.go:141] libmachine: (ha-135993)     
	I0920 17:07:28.844009   27962 main.go:141] libmachine: (ha-135993)   </features>
	I0920 17:07:28.844018   27962 main.go:141] libmachine: (ha-135993)   <cpu mode='host-passthrough'>
	I0920 17:07:28.844024   27962 main.go:141] libmachine: (ha-135993)   
	I0920 17:07:28.844044   27962 main.go:141] libmachine: (ha-135993)   </cpu>
	I0920 17:07:28.844054   27962 main.go:141] libmachine: (ha-135993)   <os>
	I0920 17:07:28.844083   27962 main.go:141] libmachine: (ha-135993)     <type>hvm</type>
	I0920 17:07:28.844103   27962 main.go:141] libmachine: (ha-135993)     <boot dev='cdrom'/>
	I0920 17:07:28.844109   27962 main.go:141] libmachine: (ha-135993)     <boot dev='hd'/>
	I0920 17:07:28.844113   27962 main.go:141] libmachine: (ha-135993)     <bootmenu enable='no'/>
	I0920 17:07:28.844118   27962 main.go:141] libmachine: (ha-135993)   </os>
	I0920 17:07:28.844121   27962 main.go:141] libmachine: (ha-135993)   <devices>
	I0920 17:07:28.844128   27962 main.go:141] libmachine: (ha-135993)     <disk type='file' device='cdrom'>
	I0920 17:07:28.844137   27962 main.go:141] libmachine: (ha-135993)       <source file='/home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993/boot2docker.iso'/>
	I0920 17:07:28.844142   27962 main.go:141] libmachine: (ha-135993)       <target dev='hdc' bus='scsi'/>
	I0920 17:07:28.844146   27962 main.go:141] libmachine: (ha-135993)       <readonly/>
	I0920 17:07:28.844151   27962 main.go:141] libmachine: (ha-135993)     </disk>
	I0920 17:07:28.844157   27962 main.go:141] libmachine: (ha-135993)     <disk type='file' device='disk'>
	I0920 17:07:28.844164   27962 main.go:141] libmachine: (ha-135993)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0920 17:07:28.844172   27962 main.go:141] libmachine: (ha-135993)       <source file='/home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993/ha-135993.rawdisk'/>
	I0920 17:07:28.844194   27962 main.go:141] libmachine: (ha-135993)       <target dev='hda' bus='virtio'/>
	I0920 17:07:28.844214   27962 main.go:141] libmachine: (ha-135993)     </disk>
	I0920 17:07:28.844234   27962 main.go:141] libmachine: (ha-135993)     <interface type='network'>
	I0920 17:07:28.844247   27962 main.go:141] libmachine: (ha-135993)       <source network='mk-ha-135993'/>
	I0920 17:07:28.844256   27962 main.go:141] libmachine: (ha-135993)       <model type='virtio'/>
	I0920 17:07:28.844274   27962 main.go:141] libmachine: (ha-135993)     </interface>
	I0920 17:07:28.844298   27962 main.go:141] libmachine: (ha-135993)     <interface type='network'>
	I0920 17:07:28.844316   27962 main.go:141] libmachine: (ha-135993)       <source network='default'/>
	I0920 17:07:28.844331   27962 main.go:141] libmachine: (ha-135993)       <model type='virtio'/>
	I0920 17:07:28.844342   27962 main.go:141] libmachine: (ha-135993)     </interface>
	I0920 17:07:28.844351   27962 main.go:141] libmachine: (ha-135993)     <serial type='pty'>
	I0920 17:07:28.844360   27962 main.go:141] libmachine: (ha-135993)       <target port='0'/>
	I0920 17:07:28.844366   27962 main.go:141] libmachine: (ha-135993)     </serial>
	I0920 17:07:28.844373   27962 main.go:141] libmachine: (ha-135993)     <console type='pty'>
	I0920 17:07:28.844381   27962 main.go:141] libmachine: (ha-135993)       <target type='serial' port='0'/>
	I0920 17:07:28.844400   27962 main.go:141] libmachine: (ha-135993)     </console>
	I0920 17:07:28.844411   27962 main.go:141] libmachine: (ha-135993)     <rng model='virtio'>
	I0920 17:07:28.844423   27962 main.go:141] libmachine: (ha-135993)       <backend model='random'>/dev/random</backend>
	I0920 17:07:28.844437   27962 main.go:141] libmachine: (ha-135993)     </rng>
	I0920 17:07:28.844445   27962 main.go:141] libmachine: (ha-135993)     
	I0920 17:07:28.844456   27962 main.go:141] libmachine: (ha-135993)     
	I0920 17:07:28.844462   27962 main.go:141] libmachine: (ha-135993)   </devices>
	I0920 17:07:28.844471   27962 main.go:141] libmachine: (ha-135993) </domain>
	I0920 17:07:28.844477   27962 main.go:141] libmachine: (ha-135993) 
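	The XML above is the complete libvirt domain definition the kvm2 driver hands to libvirt before booting the VM. A minimal sketch of turning such a definition into a running domain with the libvirt Go bindings is shown below; the import path libvirt.org/go/libvirt, the file name, and the error handling are illustrative assumptions, not minikube's exact code.
	
	package main
	
	import (
		"log"
		"os"
	
		libvirt "libvirt.org/go/libvirt"
	)
	
	func main() {
		// Domain XML like the <domain type='kvm'> document logged above.
		xmlConfig, err := os.ReadFile("ha-135993.xml")
		if err != nil {
			log.Fatal(err)
		}
		// Same URI as KVMQemuURI in the cluster config.
		conn, err := libvirt.NewConnect("qemu:///system")
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()
		// "define libvirt domain using xml"
		dom, err := conn.DomainDefineXML(string(xmlConfig))
		if err != nil {
			log.Fatal(err)
		}
		defer dom.Free()
		// "Creating domain..." - boots the freshly defined VM.
		if err := dom.Create(); err != nil {
			log.Fatal(err)
		}
	}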
	I0920 17:07:28.849080   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:80:85:3f in network default
	I0920 17:07:28.849710   27962 main.go:141] libmachine: (ha-135993) Ensuring networks are active...
	I0920 17:07:28.849730   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:28.850712   27962 main.go:141] libmachine: (ha-135993) Ensuring network default is active
	I0920 17:07:28.850972   27962 main.go:141] libmachine: (ha-135993) Ensuring network mk-ha-135993 is active
	I0920 17:07:28.851547   27962 main.go:141] libmachine: (ha-135993) Getting domain xml...
	I0920 17:07:28.852218   27962 main.go:141] libmachine: (ha-135993) Creating domain...
	I0920 17:07:30.058549   27962 main.go:141] libmachine: (ha-135993) Waiting to get IP...
	I0920 17:07:30.059436   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:30.059857   27962 main.go:141] libmachine: (ha-135993) DBG | unable to find current IP address of domain ha-135993 in network mk-ha-135993
	I0920 17:07:30.059875   27962 main.go:141] libmachine: (ha-135993) DBG | I0920 17:07:30.059831   27985 retry.go:31] will retry after 273.871147ms: waiting for machine to come up
	I0920 17:07:30.335232   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:30.335705   27962 main.go:141] libmachine: (ha-135993) DBG | unable to find current IP address of domain ha-135993 in network mk-ha-135993
	I0920 17:07:30.335727   27962 main.go:141] libmachine: (ha-135993) DBG | I0920 17:07:30.335673   27985 retry.go:31] will retry after 312.261403ms: waiting for machine to come up
	I0920 17:07:30.649140   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:30.649587   27962 main.go:141] libmachine: (ha-135993) DBG | unable to find current IP address of domain ha-135993 in network mk-ha-135993
	I0920 17:07:30.649616   27962 main.go:141] libmachine: (ha-135993) DBG | I0920 17:07:30.649539   27985 retry.go:31] will retry after 394.960563ms: waiting for machine to come up
	I0920 17:07:31.046134   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:31.046737   27962 main.go:141] libmachine: (ha-135993) DBG | unable to find current IP address of domain ha-135993 in network mk-ha-135993
	I0920 17:07:31.046803   27962 main.go:141] libmachine: (ha-135993) DBG | I0920 17:07:31.046706   27985 retry.go:31] will retry after 406.180853ms: waiting for machine to come up
	I0920 17:07:31.454086   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:31.454470   27962 main.go:141] libmachine: (ha-135993) DBG | unable to find current IP address of domain ha-135993 in network mk-ha-135993
	I0920 17:07:31.454493   27962 main.go:141] libmachine: (ha-135993) DBG | I0920 17:07:31.454441   27985 retry.go:31] will retry after 507.991566ms: waiting for machine to come up
	I0920 17:07:31.964134   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:31.964550   27962 main.go:141] libmachine: (ha-135993) DBG | unable to find current IP address of domain ha-135993 in network mk-ha-135993
	I0920 17:07:31.964579   27962 main.go:141] libmachine: (ha-135993) DBG | I0920 17:07:31.964520   27985 retry.go:31] will retry after 921.386836ms: waiting for machine to come up
	I0920 17:07:32.887074   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:32.887532   27962 main.go:141] libmachine: (ha-135993) DBG | unable to find current IP address of domain ha-135993 in network mk-ha-135993
	I0920 17:07:32.887576   27962 main.go:141] libmachine: (ha-135993) DBG | I0920 17:07:32.887477   27985 retry.go:31] will retry after 836.533379ms: waiting for machine to come up
	I0920 17:07:33.725040   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:33.725632   27962 main.go:141] libmachine: (ha-135993) DBG | unable to find current IP address of domain ha-135993 in network mk-ha-135993
	I0920 17:07:33.725663   27962 main.go:141] libmachine: (ha-135993) DBG | I0920 17:07:33.725548   27985 retry.go:31] will retry after 1.249731704s: waiting for machine to come up
	I0920 17:07:34.976928   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:34.977332   27962 main.go:141] libmachine: (ha-135993) DBG | unable to find current IP address of domain ha-135993 in network mk-ha-135993
	I0920 17:07:34.977363   27962 main.go:141] libmachine: (ha-135993) DBG | I0920 17:07:34.977281   27985 retry.go:31] will retry after 1.538905112s: waiting for machine to come up
	I0920 17:07:36.517997   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:36.518523   27962 main.go:141] libmachine: (ha-135993) DBG | unable to find current IP address of domain ha-135993 in network mk-ha-135993
	I0920 17:07:36.518558   27962 main.go:141] libmachine: (ha-135993) DBG | I0920 17:07:36.518494   27985 retry.go:31] will retry after 1.90472576s: waiting for machine to come up
	I0920 17:07:38.424570   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:38.424987   27962 main.go:141] libmachine: (ha-135993) DBG | unable to find current IP address of domain ha-135993 in network mk-ha-135993
	I0920 17:07:38.425014   27962 main.go:141] libmachine: (ha-135993) DBG | I0920 17:07:38.424942   27985 retry.go:31] will retry after 2.741058611s: waiting for machine to come up
	I0920 17:07:41.169975   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:41.170341   27962 main.go:141] libmachine: (ha-135993) DBG | unable to find current IP address of domain ha-135993 in network mk-ha-135993
	I0920 17:07:41.170384   27962 main.go:141] libmachine: (ha-135993) DBG | I0920 17:07:41.170291   27985 retry.go:31] will retry after 3.268233116s: waiting for machine to come up
	I0920 17:07:44.440089   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:44.440457   27962 main.go:141] libmachine: (ha-135993) DBG | unable to find current IP address of domain ha-135993 in network mk-ha-135993
	I0920 17:07:44.440479   27962 main.go:141] libmachine: (ha-135993) DBG | I0920 17:07:44.440421   27985 retry.go:31] will retry after 4.54359632s: waiting for machine to come up
	I0920 17:07:48.986065   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:48.986437   27962 main.go:141] libmachine: (ha-135993) Found IP for machine: 192.168.39.60
	I0920 17:07:48.986462   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has current primary IP address 192.168.39.60 and MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:48.986471   27962 main.go:141] libmachine: (ha-135993) Reserving static IP address...
	I0920 17:07:48.986867   27962 main.go:141] libmachine: (ha-135993) DBG | unable to find host DHCP lease matching {name: "ha-135993", mac: "52:54:00:99:26:09", ip: "192.168.39.60"} in network mk-ha-135993
	I0920 17:07:49.060367   27962 main.go:141] libmachine: (ha-135993) DBG | Getting to WaitForSSH function...
	I0920 17:07:49.060399   27962 main.go:141] libmachine: (ha-135993) Reserved static IP address: 192.168.39.60
	I0920 17:07:49.060416   27962 main.go:141] libmachine: (ha-135993) Waiting for SSH to be available...
	I0920 17:07:49.063301   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:49.063688   27962 main.go:141] libmachine: (ha-135993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:09", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:07:43 +0000 UTC Type:0 Mac:52:54:00:99:26:09 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:minikube Clientid:01:52:54:00:99:26:09}
	I0920 17:07:49.063720   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined IP address 192.168.39.60 and MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:49.063827   27962 main.go:141] libmachine: (ha-135993) DBG | Using SSH client type: external
	I0920 17:07:49.063851   27962 main.go:141] libmachine: (ha-135993) DBG | Using SSH private key: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993/id_rsa (-rw-------)
	I0920 17:07:49.063904   27962 main.go:141] libmachine: (ha-135993) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.60 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 17:07:49.063928   27962 main.go:141] libmachine: (ha-135993) DBG | About to run SSH command:
	I0920 17:07:49.063942   27962 main.go:141] libmachine: (ha-135993) DBG | exit 0
	I0920 17:07:49.193721   27962 main.go:141] libmachine: (ha-135993) DBG | SSH cmd err, output: <nil>: 
	I0920 17:07:49.194050   27962 main.go:141] libmachine: (ha-135993) KVM machine creation complete!
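	The "will retry after ..." lines above come from minikube's retry helper, which keeps polling the libvirt DHCP leases with a growing delay until the new VM reports an address. A self-contained sketch of that wait-with-backoff pattern follows; the helper name, starting delay, and cap are illustrative, not minikube's actual values.
	
	package main
	
	import (
		"fmt"
		"time"
	)
	
	// waitForIP polls lookup until it yields a non-empty address or the deadline
	// passes, roughly doubling the sleep between attempts like the retries above.
	func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := 250 * time.Millisecond
		for {
			if ip, err := lookup(); err == nil && ip != "" {
				return ip, nil
			}
			if time.Now().After(deadline) {
				return "", fmt.Errorf("timed out after %s waiting for machine IP", timeout)
			}
			time.Sleep(delay)
			if delay < 5*time.Second {
				delay *= 2
			}
		}
	}
	
	func main() {
		// Example: a lookup that only succeeds on the third attempt.
		attempts := 0
		ip, err := waitForIP(func() (string, error) {
			attempts++
			if attempts < 3 {
				return "", fmt.Errorf("unable to find current IP address")
			}
			return "192.168.39.60", nil
		}, 30*time.Second)
		fmt.Println(ip, err)
	}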
	I0920 17:07:49.194374   27962 main.go:141] libmachine: (ha-135993) Calling .GetConfigRaw
	I0920 17:07:49.195018   27962 main.go:141] libmachine: (ha-135993) Calling .DriverName
	I0920 17:07:49.195196   27962 main.go:141] libmachine: (ha-135993) Calling .DriverName
	I0920 17:07:49.195368   27962 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0920 17:07:49.195383   27962 main.go:141] libmachine: (ha-135993) Calling .GetState
	I0920 17:07:49.196554   27962 main.go:141] libmachine: Detecting operating system of created instance...
	I0920 17:07:49.196568   27962 main.go:141] libmachine: Waiting for SSH to be available...
	I0920 17:07:49.196573   27962 main.go:141] libmachine: Getting to WaitForSSH function...
	I0920 17:07:49.196578   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHHostname
	I0920 17:07:49.199132   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:49.199593   27962 main.go:141] libmachine: (ha-135993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:09", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:07:43 +0000 UTC Type:0 Mac:52:54:00:99:26:09 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-135993 Clientid:01:52:54:00:99:26:09}
	I0920 17:07:49.199612   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined IP address 192.168.39.60 and MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:49.199789   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHPort
	I0920 17:07:49.199931   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHKeyPath
	I0920 17:07:49.200061   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHKeyPath
	I0920 17:07:49.200187   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHUsername
	I0920 17:07:49.200332   27962 main.go:141] libmachine: Using SSH client type: native
	I0920 17:07:49.200544   27962 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0920 17:07:49.200555   27962 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0920 17:07:49.309150   27962 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 17:07:49.309171   27962 main.go:141] libmachine: Detecting the provisioner...
	I0920 17:07:49.309178   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHHostname
	I0920 17:07:49.311937   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:49.312313   27962 main.go:141] libmachine: (ha-135993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:09", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:07:43 +0000 UTC Type:0 Mac:52:54:00:99:26:09 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-135993 Clientid:01:52:54:00:99:26:09}
	I0920 17:07:49.312340   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined IP address 192.168.39.60 and MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:49.312539   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHPort
	I0920 17:07:49.312760   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHKeyPath
	I0920 17:07:49.312905   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHKeyPath
	I0920 17:07:49.313028   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHUsername
	I0920 17:07:49.313214   27962 main.go:141] libmachine: Using SSH client type: native
	I0920 17:07:49.313445   27962 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0920 17:07:49.313459   27962 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0920 17:07:49.422616   27962 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0920 17:07:49.422713   27962 main.go:141] libmachine: found compatible host: buildroot
	I0920 17:07:49.422725   27962 main.go:141] libmachine: Provisioning with buildroot...
	I0920 17:07:49.422735   27962 main.go:141] libmachine: (ha-135993) Calling .GetMachineName
	I0920 17:07:49.422993   27962 buildroot.go:166] provisioning hostname "ha-135993"
	I0920 17:07:49.423024   27962 main.go:141] libmachine: (ha-135993) Calling .GetMachineName
	I0920 17:07:49.423217   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHHostname
	I0920 17:07:49.425983   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:49.426356   27962 main.go:141] libmachine: (ha-135993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:09", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:07:43 +0000 UTC Type:0 Mac:52:54:00:99:26:09 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-135993 Clientid:01:52:54:00:99:26:09}
	I0920 17:07:49.426386   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined IP address 192.168.39.60 and MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:49.426537   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHPort
	I0920 17:07:49.426731   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHKeyPath
	I0920 17:07:49.426884   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHKeyPath
	I0920 17:07:49.427002   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHUsername
	I0920 17:07:49.427182   27962 main.go:141] libmachine: Using SSH client type: native
	I0920 17:07:49.427358   27962 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0920 17:07:49.427369   27962 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-135993 && echo "ha-135993" | sudo tee /etc/hostname
	I0920 17:07:49.546887   27962 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-135993
	
	I0920 17:07:49.546939   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHHostname
	I0920 17:07:49.549688   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:49.550074   27962 main.go:141] libmachine: (ha-135993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:09", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:07:43 +0000 UTC Type:0 Mac:52:54:00:99:26:09 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-135993 Clientid:01:52:54:00:99:26:09}
	I0920 17:07:49.550101   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined IP address 192.168.39.60 and MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:49.550275   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHPort
	I0920 17:07:49.550460   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHKeyPath
	I0920 17:07:49.550617   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHKeyPath
	I0920 17:07:49.550748   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHUsername
	I0920 17:07:49.550889   27962 main.go:141] libmachine: Using SSH client type: native
	I0920 17:07:49.551094   27962 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0920 17:07:49.551110   27962 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-135993' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-135993/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-135993' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 17:07:49.666876   27962 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 17:07:49.666908   27962 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19672-8777/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-8777/.minikube}
	I0920 17:07:49.666933   27962 buildroot.go:174] setting up certificates
	I0920 17:07:49.666946   27962 provision.go:84] configureAuth start
	I0920 17:07:49.666956   27962 main.go:141] libmachine: (ha-135993) Calling .GetMachineName
	I0920 17:07:49.667278   27962 main.go:141] libmachine: (ha-135993) Calling .GetIP
	I0920 17:07:49.670314   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:49.670647   27962 main.go:141] libmachine: (ha-135993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:09", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:07:43 +0000 UTC Type:0 Mac:52:54:00:99:26:09 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-135993 Clientid:01:52:54:00:99:26:09}
	I0920 17:07:49.670670   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined IP address 192.168.39.60 and MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:49.670822   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHHostname
	I0920 17:07:49.672840   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:49.673146   27962 main.go:141] libmachine: (ha-135993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:09", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:07:43 +0000 UTC Type:0 Mac:52:54:00:99:26:09 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-135993 Clientid:01:52:54:00:99:26:09}
	I0920 17:07:49.673169   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined IP address 192.168.39.60 and MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:49.673340   27962 provision.go:143] copyHostCerts
	I0920 17:07:49.673366   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem
	I0920 17:07:49.673396   27962 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem, removing ...
	I0920 17:07:49.673411   27962 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem
	I0920 17:07:49.673481   27962 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem (1082 bytes)
	I0920 17:07:49.673583   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem
	I0920 17:07:49.673609   27962 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem, removing ...
	I0920 17:07:49.673619   27962 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem
	I0920 17:07:49.673659   27962 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem (1123 bytes)
	I0920 17:07:49.673727   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem
	I0920 17:07:49.673743   27962 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem, removing ...
	I0920 17:07:49.673749   27962 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem
	I0920 17:07:49.673771   27962 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem (1675 bytes)
	I0920 17:07:49.673820   27962 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem org=jenkins.ha-135993 san=[127.0.0.1 192.168.39.60 ha-135993 localhost minikube]
	I0920 17:07:49.869795   27962 provision.go:177] copyRemoteCerts
	I0920 17:07:49.869886   27962 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 17:07:49.869910   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHHostname
	I0920 17:07:49.872957   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:49.873263   27962 main.go:141] libmachine: (ha-135993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:09", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:07:43 +0000 UTC Type:0 Mac:52:54:00:99:26:09 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-135993 Clientid:01:52:54:00:99:26:09}
	I0920 17:07:49.873287   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined IP address 192.168.39.60 and MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:49.873619   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHPort
	I0920 17:07:49.874014   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHKeyPath
	I0920 17:07:49.874211   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHUsername
	I0920 17:07:49.874372   27962 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993/id_rsa Username:docker}
	I0920 17:07:49.959921   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0920 17:07:49.960005   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0920 17:07:49.984738   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0920 17:07:49.984817   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0920 17:07:50.008778   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0920 17:07:50.008846   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0920 17:07:50.031838   27962 provision.go:87] duration metric: took 364.880224ms to configureAuth
	I0920 17:07:50.031867   27962 buildroot.go:189] setting minikube options for container-runtime
	I0920 17:07:50.032039   27962 config.go:182] Loaded profile config "ha-135993": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 17:07:50.032140   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHHostname
	I0920 17:07:50.034890   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:50.035323   27962 main.go:141] libmachine: (ha-135993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:09", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:07:43 +0000 UTC Type:0 Mac:52:54:00:99:26:09 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-135993 Clientid:01:52:54:00:99:26:09}
	I0920 17:07:50.035358   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined IP address 192.168.39.60 and MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:50.035520   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHPort
	I0920 17:07:50.035689   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHKeyPath
	I0920 17:07:50.035831   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHKeyPath
	I0920 17:07:50.035997   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHUsername
	I0920 17:07:50.036173   27962 main.go:141] libmachine: Using SSH client type: native
	I0920 17:07:50.036358   27962 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0920 17:07:50.036378   27962 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 17:07:50.251754   27962 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 17:07:50.251780   27962 main.go:141] libmachine: Checking connection to Docker...
	I0920 17:07:50.251789   27962 main.go:141] libmachine: (ha-135993) Calling .GetURL
	I0920 17:07:50.253114   27962 main.go:141] libmachine: (ha-135993) DBG | Using libvirt version 6000000
	I0920 17:07:50.254998   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:50.255262   27962 main.go:141] libmachine: (ha-135993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:09", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:07:43 +0000 UTC Type:0 Mac:52:54:00:99:26:09 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-135993 Clientid:01:52:54:00:99:26:09}
	I0920 17:07:50.255284   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined IP address 192.168.39.60 and MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:50.255431   27962 main.go:141] libmachine: Docker is up and running!
	I0920 17:07:50.255453   27962 main.go:141] libmachine: Reticulating splines...
	I0920 17:07:50.255462   27962 client.go:171] duration metric: took 21.943029238s to LocalClient.Create
	I0920 17:07:50.255485   27962 start.go:167] duration metric: took 21.94309612s to libmachine.API.Create "ha-135993"
	I0920 17:07:50.255496   27962 start.go:293] postStartSetup for "ha-135993" (driver="kvm2")
	I0920 17:07:50.255512   27962 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 17:07:50.255535   27962 main.go:141] libmachine: (ha-135993) Calling .DriverName
	I0920 17:07:50.255798   27962 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 17:07:50.255830   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHHostname
	I0920 17:07:50.258006   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:50.258354   27962 main.go:141] libmachine: (ha-135993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:09", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:07:43 +0000 UTC Type:0 Mac:52:54:00:99:26:09 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-135993 Clientid:01:52:54:00:99:26:09}
	I0920 17:07:50.258386   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined IP address 192.168.39.60 and MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:50.258536   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHPort
	I0920 17:07:50.258726   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHKeyPath
	I0920 17:07:50.258853   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHUsername
	I0920 17:07:50.259008   27962 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993/id_rsa Username:docker}
	I0920 17:07:50.343779   27962 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 17:07:50.347644   27962 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 17:07:50.347675   27962 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8777/.minikube/addons for local assets ...
	I0920 17:07:50.347738   27962 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8777/.minikube/files for local assets ...
	I0920 17:07:50.347830   27962 filesync.go:149] local asset: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem -> 159732.pem in /etc/ssl/certs
	I0920 17:07:50.347842   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem -> /etc/ssl/certs/159732.pem
	I0920 17:07:50.347940   27962 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 17:07:50.356818   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem --> /etc/ssl/certs/159732.pem (1708 bytes)
	I0920 17:07:50.380005   27962 start.go:296] duration metric: took 124.491428ms for postStartSetup
	I0920 17:07:50.380073   27962 main.go:141] libmachine: (ha-135993) Calling .GetConfigRaw
	I0920 17:07:50.380667   27962 main.go:141] libmachine: (ha-135993) Calling .GetIP
	I0920 17:07:50.383411   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:50.383719   27962 main.go:141] libmachine: (ha-135993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:09", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:07:43 +0000 UTC Type:0 Mac:52:54:00:99:26:09 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-135993 Clientid:01:52:54:00:99:26:09}
	I0920 17:07:50.383749   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined IP address 192.168.39.60 and MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:50.384003   27962 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/config.json ...
	I0920 17:07:50.384196   27962 start.go:128] duration metric: took 22.090370371s to createHost
	I0920 17:07:50.384222   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHHostname
	I0920 17:07:50.386519   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:50.386950   27962 main.go:141] libmachine: (ha-135993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:09", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:07:43 +0000 UTC Type:0 Mac:52:54:00:99:26:09 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-135993 Clientid:01:52:54:00:99:26:09}
	I0920 17:07:50.386966   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined IP address 192.168.39.60 and MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:50.387165   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHPort
	I0920 17:07:50.387336   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHKeyPath
	I0920 17:07:50.387480   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHKeyPath
	I0920 17:07:50.387623   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHUsername
	I0920 17:07:50.387744   27962 main.go:141] libmachine: Using SSH client type: native
	I0920 17:07:50.387905   27962 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0920 17:07:50.387916   27962 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 17:07:50.498520   27962 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726852070.471027061
	
	I0920 17:07:50.498552   27962 fix.go:216] guest clock: 1726852070.471027061
	I0920 17:07:50.498562   27962 fix.go:229] Guest: 2024-09-20 17:07:50.471027061 +0000 UTC Remote: 2024-09-20 17:07:50.384207902 +0000 UTC m=+22.194917586 (delta=86.819159ms)
	I0920 17:07:50.498623   27962 fix.go:200] guest clock delta is within tolerance: 86.819159ms
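
The fix.go lines above compare the guest's "date +%s.%N" output against the host-side wall clock and accept the machine only if the difference is small. A minimal Go sketch of that arithmetic, using the two timestamps from the log; the 2-second tolerance is an assumption for illustration, not necessarily the value fix.go applies:

    package main

    import (
        "fmt"
        "strconv"
        "time"
    )

    func main() {
        // stdout of "date +%s.%N" on the guest, copied from the log above
        guestOut := "1726852070.471027061"
        secs, err := strconv.ParseFloat(guestOut, 64)
        if err != nil {
            panic(err)
        }
        guest := time.Unix(0, int64(secs*float64(time.Second))) // float rounding is fine for a sketch

        // host-side timestamp recorded when the SSH command returned
        remote := time.Date(2024, 9, 20, 17, 7, 50, 384207902, time.UTC)

        delta := guest.Sub(remote)
        const tolerance = 2 * time.Second // assumed threshold
        fmt.Printf("delta=%v within tolerance: %v\n", delta, delta > -tolerance && delta < tolerance)
    }
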
	I0920 17:07:50.498637   27962 start.go:83] releasing machines lock for "ha-135993", held for 22.204885202s
	I0920 17:07:50.498672   27962 main.go:141] libmachine: (ha-135993) Calling .DriverName
	I0920 17:07:50.498937   27962 main.go:141] libmachine: (ha-135993) Calling .GetIP
	I0920 17:07:50.501692   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:50.502068   27962 main.go:141] libmachine: (ha-135993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:09", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:07:43 +0000 UTC Type:0 Mac:52:54:00:99:26:09 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-135993 Clientid:01:52:54:00:99:26:09}
	I0920 17:07:50.502095   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined IP address 192.168.39.60 and MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:50.502251   27962 main.go:141] libmachine: (ha-135993) Calling .DriverName
	I0920 17:07:50.502720   27962 main.go:141] libmachine: (ha-135993) Calling .DriverName
	I0920 17:07:50.502881   27962 main.go:141] libmachine: (ha-135993) Calling .DriverName
	I0920 17:07:50.502969   27962 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 17:07:50.503024   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHHostname
	I0920 17:07:50.503115   27962 ssh_runner.go:195] Run: cat /version.json
	I0920 17:07:50.503135   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHHostname
	I0920 17:07:50.505769   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:50.506399   27962 main.go:141] libmachine: (ha-135993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:09", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:07:43 +0000 UTC Type:0 Mac:52:54:00:99:26:09 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-135993 Clientid:01:52:54:00:99:26:09}
	I0920 17:07:50.506780   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:50.506810   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined IP address 192.168.39.60 and MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:50.507015   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHPort
	I0920 17:07:50.507188   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHKeyPath
	I0920 17:07:50.507286   27962 main.go:141] libmachine: (ha-135993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:09", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:07:43 +0000 UTC Type:0 Mac:52:54:00:99:26:09 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-135993 Clientid:01:52:54:00:99:26:09}
	I0920 17:07:50.507312   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined IP address 192.168.39.60 and MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:50.507447   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHUsername
	I0920 17:07:50.507463   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHPort
	I0920 17:07:50.507586   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHKeyPath
	I0920 17:07:50.507587   27962 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993/id_rsa Username:docker}
	I0920 17:07:50.507682   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHUsername
	I0920 17:07:50.507776   27962 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993/id_rsa Username:docker}
	I0920 17:07:50.586773   27962 ssh_runner.go:195] Run: systemctl --version
	I0920 17:07:50.621546   27962 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 17:07:50.780598   27962 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 17:07:50.786517   27962 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 17:07:50.786583   27962 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 17:07:50.802071   27962 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
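
The find/mv invocation above sidelines any competing bridge or podman CNI config so only the CNI that minikube installs stays active. A rough Go equivalent of that rename pass, as a sketch (point it at a scratch directory when trying it out; /etc/cni/net.d is the path from the log):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        dir := "/etc/cni/net.d"
        entries, err := os.ReadDir(dir)
        if err != nil {
            panic(err)
        }
        for _, e := range entries {
            name := e.Name()
            if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
                continue // already disabled or not a plain config file
            }
            if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
                old := filepath.Join(dir, name)
                if err := os.Rename(old, old+".mk_disabled"); err != nil {
                    panic(err)
                }
                fmt.Println("disabled", old)
            }
        }
    }
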
	I0920 17:07:50.802094   27962 start.go:495] detecting cgroup driver to use...
	I0920 17:07:50.802161   27962 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 17:07:50.818377   27962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 17:07:50.832630   27962 docker.go:217] disabling cri-docker service (if available) ...
	I0920 17:07:50.832707   27962 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 17:07:50.846087   27962 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 17:07:50.860151   27962 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 17:07:50.975426   27962 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 17:07:51.126213   27962 docker.go:233] disabling docker service ...
	I0920 17:07:51.126291   27962 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 17:07:51.140089   27962 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 17:07:51.152679   27962 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 17:07:51.283500   27962 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 17:07:51.390304   27962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 17:07:51.403627   27962 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 17:07:51.421174   27962 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 17:07:51.421242   27962 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:07:51.431235   27962 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 17:07:51.431310   27962 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:07:51.442561   27962 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:07:51.452862   27962 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:07:51.463189   27962 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 17:07:51.473283   27962 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:07:51.483302   27962 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:07:51.500456   27962 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
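
The sed commands above rewrite CRI-O's 02-crio.conf drop-in in place: pin the pause image, switch the cgroup manager to cgroupfs, re-add conmon_cgroup = "pod" under it, and then insert the unprivileged-port sysctl. A small Go sketch of the first few rewrites; the starting file content below is invented so the effect is visible, since the log never shows the original drop-in:

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        conf := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.9\"\n[crio.runtime]\ncgroup_manager = \"systemd\"\nconmon_cgroup = \"system.slice\"\n"

        conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
        conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
        conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
        conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")

        fmt.Print(conf)
    }

Printing conf shows the end state the later crio restart picks up: pause:3.10, cgroupfs, and conmon_cgroup = "pod" directly below the cgroup_manager line.
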
	I0920 17:07:51.510444   27962 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 17:07:51.519365   27962 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 17:07:51.519445   27962 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 17:07:51.532282   27962 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 17:07:51.541316   27962 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 17:07:51.653648   27962 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 17:07:51.739658   27962 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 17:07:51.739747   27962 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 17:07:51.744441   27962 start.go:563] Will wait 60s for crictl version
	I0920 17:07:51.744510   27962 ssh_runner.go:195] Run: which crictl
	I0920 17:07:51.747928   27962 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 17:07:51.785033   27962 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 17:07:51.785130   27962 ssh_runner.go:195] Run: crio --version
	I0920 17:07:51.813367   27962 ssh_runner.go:195] Run: crio --version
	I0920 17:07:51.843606   27962 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 17:07:51.844877   27962 main.go:141] libmachine: (ha-135993) Calling .GetIP
	I0920 17:07:51.847711   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:51.848041   27962 main.go:141] libmachine: (ha-135993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:09", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:07:43 +0000 UTC Type:0 Mac:52:54:00:99:26:09 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-135993 Clientid:01:52:54:00:99:26:09}
	I0920 17:07:51.848067   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined IP address 192.168.39.60 and MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:07:51.848302   27962 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0920 17:07:51.852330   27962 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
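
The bash one-liner above rewrites /etc/hosts without sed: it filters out any existing host.minikube.internal entry, appends the current mapping, and then copies the temp file back with sudo. The same filtering as a standalone Go sketch operating on an in-memory copy of the file:

    package main

    import (
        "fmt"
        "strings"
    )

    // upsertHost drops any line that already maps name and appends a fresh entry.
    func upsertHost(hosts, ip, name string) string {
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+name) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+name)
        return strings.Join(kept, "\n") + "\n"
    }

    func main() {
        hosts := "127.0.0.1\tlocalhost\n192.168.39.1\thost.minikube.internal\n"
        fmt.Print(upsertHost(hosts, "192.168.39.1", "host.minikube.internal"))
    }
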
	I0920 17:07:51.865291   27962 kubeadm.go:883] updating cluster {Name:ha-135993 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:ha-135993 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.60 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 17:07:51.865398   27962 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 17:07:51.865449   27962 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 17:07:51.899883   27962 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0920 17:07:51.899943   27962 ssh_runner.go:195] Run: which lz4
	I0920 17:07:51.903807   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0920 17:07:51.903901   27962 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 17:07:51.907726   27962 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 17:07:51.907767   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0920 17:07:53.234059   27962 crio.go:462] duration metric: took 1.330180344s to copy over tarball
	I0920 17:07:53.234125   27962 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0920 17:07:55.407532   27962 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.173354398s)
	I0920 17:07:55.407570   27962 crio.go:469] duration metric: took 2.173487919s to extract the tarball
	I0920 17:07:55.407579   27962 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0920 17:07:55.444916   27962 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 17:07:55.491028   27962 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 17:07:55.491053   27962 cache_images.go:84] Images are preloaded, skipping loading
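
The preload sequence above hinges on one check: parse "sudo crictl images --output json" and look for the kube-apiserver tag of the target Kubernetes version; if it is missing, scp and extract the preload tarball, then re-check. A sketch of that check; the JSON literal is a hand-written stand-in rather than real crictl output, with field names as I understand crictl's JSON format:

    package main

    import (
        "encoding/json"
        "fmt"
        "strings"
    )

    // minimal subset of "crictl images --output json"
    type imageList struct {
        Images []struct {
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    func preloaded(out []byte, want string) bool {
        var list imageList
        if err := json.Unmarshal(out, &list); err != nil {
            return false
        }
        for _, img := range list.Images {
            for _, tag := range img.RepoTags {
                if strings.Contains(tag, want) {
                    return true
                }
            }
        }
        return false
    }

    func main() {
        out := []byte(`{"images":[{"repoTags":["registry.k8s.io/pause:3.10"]}]}`)
        fmt.Println(preloaded(out, "registry.k8s.io/kube-apiserver:v1.31.1")) // false -> copy and extract the tarball
    }
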
	I0920 17:07:55.491061   27962 kubeadm.go:934] updating node { 192.168.39.60 8443 v1.31.1 crio true true} ...
	I0920 17:07:55.491157   27962 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-135993 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.60
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-135993 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 17:07:55.491229   27962 ssh_runner.go:195] Run: crio config
	I0920 17:07:55.542472   27962 cni.go:84] Creating CNI manager for ""
	I0920 17:07:55.542496   27962 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0920 17:07:55.542509   27962 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 17:07:55.542534   27962 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.60 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-135993 NodeName:ha-135993 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.60"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.60 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 17:07:55.542711   27962 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.60
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-135993"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.60
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.60"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 17:07:55.542744   27962 kube-vip.go:115] generating kube-vip config ...
	I0920 17:07:55.542799   27962 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0920 17:07:55.561052   27962 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0920 17:07:55.561147   27962 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
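
The static Pod manifest above is what kube-vip runs on the control plane to hold the 192.168.39.254 VIP; kube-vip.go fills in the per-cluster values (VIP, API port, load-balancing) before the manifest is written to /etc/kubernetes/manifests. A stripped-down sketch of that templating idea using only the standard library; the fragment and field names here are illustrative, not minikube's actual template:

    package main

    import (
        "os"
        "text/template"
    )

    // per-cluster values; everything else in the manifest stays static
    type vipParams struct {
        VIP      string
        Port     string
        LBEnable bool
    }

    const fragment = "    - name: port\n      value: \"{{ .Port }}\"\n    - name: address\n      value: {{ .VIP }}\n    - name: lb_enable\n      value: \"{{ .LBEnable }}\"\n"

    func main() {
        t := template.Must(template.New("kube-vip").Parse(fragment))
        if err := t.Execute(os.Stdout, vipParams{VIP: "192.168.39.254", Port: "8443", LBEnable: true}); err != nil {
            panic(err)
        }
    }
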
	I0920 17:07:55.561195   27962 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 17:07:55.571044   27962 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 17:07:55.571106   27962 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0920 17:07:55.580660   27962 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0920 17:07:55.598713   27962 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 17:07:55.616229   27962 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0920 17:07:55.634067   27962 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0920 17:07:55.651892   27962 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0920 17:07:55.655923   27962 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 17:07:55.667484   27962 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 17:07:55.788088   27962 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 17:07:55.804588   27962 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993 for IP: 192.168.39.60
	I0920 17:07:55.804611   27962 certs.go:194] generating shared ca certs ...
	I0920 17:07:55.804631   27962 certs.go:226] acquiring lock for ca certs: {Name:mkc7ef6c737c6bdc3fdd9dcff8f57029c020d8f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:07:55.804804   27962 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key
	I0920 17:07:55.804860   27962 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key
	I0920 17:07:55.804874   27962 certs.go:256] generating profile certs ...
	I0920 17:07:55.804946   27962 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/client.key
	I0920 17:07:55.804963   27962 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/client.crt with IP's: []
	I0920 17:07:56.041638   27962 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/client.crt ...
	I0920 17:07:56.041670   27962 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/client.crt: {Name:mk77b02a314748d6817683dcddc9e50a9706a3fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:07:56.041866   27962 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/client.key ...
	I0920 17:07:56.041881   27962 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/client.key: {Name:mkce8a68ad81e086e143b0882e17cc856a54adae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:07:56.042064   27962 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.key.be3ca380
	I0920 17:07:56.042085   27962 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.crt.be3ca380 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.60 192.168.39.254]
	I0920 17:07:56.245960   27962 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.crt.be3ca380 ...
	I0920 17:07:56.245992   27962 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.crt.be3ca380: {Name:mka9503983e8ca6a4d05f68e1a88c79ee07a7913 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:07:56.246164   27962 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.key.be3ca380 ...
	I0920 17:07:56.246181   27962 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.key.be3ca380: {Name:mk892756342d52e742959b6836b3a7605e9575d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:07:56.246306   27962 certs.go:381] copying /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.crt.be3ca380 -> /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.crt
	I0920 17:07:56.246416   27962 certs.go:385] copying /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.key.be3ca380 -> /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.key
	I0920 17:07:56.246500   27962 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/proxy-client.key
	I0920 17:07:56.246524   27962 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/proxy-client.crt with IP's: []
	I0920 17:07:56.401234   27962 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/proxy-client.crt ...
	I0920 17:07:56.401270   27962 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/proxy-client.crt: {Name:mk970b226fef3a4347b937972fcb4fd73f00dc38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:07:56.401441   27962 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/proxy-client.key ...
	I0920 17:07:56.401452   27962 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/proxy-client.key: {Name:mke4168ed8a5ff16fb6768d15dd8e4f984e56621 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:07:56.401519   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0920 17:07:56.401536   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0920 17:07:56.401547   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0920 17:07:56.401558   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0920 17:07:56.401568   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0920 17:07:56.401579   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0920 17:07:56.401588   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0920 17:07:56.401600   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0920 17:07:56.401644   27962 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973.pem (1338 bytes)
	W0920 17:07:56.401677   27962 certs.go:480] ignoring /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973_empty.pem, impossibly tiny 0 bytes
	I0920 17:07:56.401684   27962 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 17:07:56.401706   27962 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem (1082 bytes)
	I0920 17:07:56.401730   27962 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem (1123 bytes)
	I0920 17:07:56.401754   27962 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem (1675 bytes)
	I0920 17:07:56.401789   27962 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem (1708 bytes)
	I0920 17:07:56.401817   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973.pem -> /usr/share/ca-certificates/15973.pem
	I0920 17:07:56.401847   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem -> /usr/share/ca-certificates/159732.pem
	I0920 17:07:56.401862   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:07:56.402409   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 17:07:56.427996   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 17:07:56.451855   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 17:07:56.475801   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0920 17:07:56.499662   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0920 17:07:56.522944   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 17:07:56.548908   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 17:07:56.575686   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 17:07:56.604616   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973.pem --> /usr/share/ca-certificates/15973.pem (1338 bytes)
	I0920 17:07:56.627314   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem --> /usr/share/ca-certificates/159732.pem (1708 bytes)
	I0920 17:07:56.649875   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 17:07:56.673591   27962 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 17:07:56.694627   27962 ssh_runner.go:195] Run: openssl version
	I0920 17:07:56.700654   27962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 17:07:56.711864   27962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:07:56.716521   27962 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:07:56.716587   27962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:07:56.722355   27962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 17:07:56.733975   27962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15973.pem && ln -fs /usr/share/ca-certificates/15973.pem /etc/ssl/certs/15973.pem"
	I0920 17:07:56.745449   27962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15973.pem
	I0920 17:07:56.749937   27962 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 17:04 /usr/share/ca-certificates/15973.pem
	I0920 17:07:56.750010   27962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15973.pem
	I0920 17:07:56.755845   27962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15973.pem /etc/ssl/certs/51391683.0"
	I0920 17:07:56.766910   27962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/159732.pem && ln -fs /usr/share/ca-certificates/159732.pem /etc/ssl/certs/159732.pem"
	I0920 17:07:56.777908   27962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/159732.pem
	I0920 17:07:56.782437   27962 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 17:04 /usr/share/ca-certificates/159732.pem
	I0920 17:07:56.782504   27962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/159732.pem
	I0920 17:07:56.788567   27962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/159732.pem /etc/ssl/certs/3ec20f2e.0"
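
The three "ln -fs" steps above follow OpenSSL's CA lookup convention: each certificate in /etc/ssl/certs is reachable through a symlink named after its subject hash (the output of "openssl x509 -hash") plus a ".0" index, which is where b5213941.0, 51391683.0 and 3ec20f2e.0 come from. A sketch of producing one such link; ca.pem is a placeholder for a local copy of the certificate, and the openssl binary must be on PATH:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", "ca.pem").Output()
        if err != nil {
            panic(err)
        }
        link := strings.TrimSpace(string(out)) + ".0" // e.g. b5213941.0
        _ = os.Remove(link)                           // mimic ln -fs (force)
        if err := os.Symlink("ca.pem", link); err != nil {
            panic(err)
        }
        fmt.Println("created", link)
    }
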
	I0920 17:07:56.800002   27962 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 17:07:56.804473   27962 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0920 17:07:56.804532   27962 kubeadm.go:392] StartCluster: {Name:ha-135993 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-135993 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.60 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mount
Type:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 17:07:56.804601   27962 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 17:07:56.804641   27962 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 17:07:56.847709   27962 cri.go:89] found id: ""
	I0920 17:07:56.847785   27962 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 17:07:56.859005   27962 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 17:07:56.869479   27962 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 17:07:56.879263   27962 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 17:07:56.879288   27962 kubeadm.go:157] found existing configuration files:
	
	I0920 17:07:56.879350   27962 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 17:07:56.888673   27962 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 17:07:56.888748   27962 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 17:07:56.898330   27962 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 17:07:56.908293   27962 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 17:07:56.908361   27962 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 17:07:56.918173   27962 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 17:07:56.926869   27962 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 17:07:56.926939   27962 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 17:07:56.935901   27962 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 17:07:56.944708   27962 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 17:07:56.944774   27962 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 17:07:56.954425   27962 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 17:07:57.049417   27962 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0920 17:07:57.049552   27962 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 17:07:57.158652   27962 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 17:07:57.158798   27962 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 17:07:57.158931   27962 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0920 17:07:57.167722   27962 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 17:07:57.313232   27962 out.go:235]   - Generating certificates and keys ...
	I0920 17:07:57.313352   27962 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 17:07:57.313425   27962 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 17:07:57.313486   27962 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0920 17:07:57.601566   27962 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0920 17:07:57.893152   27962 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0920 17:07:58.140227   27962 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0920 17:07:58.556100   27962 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0920 17:07:58.556284   27962 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-135993 localhost] and IPs [192.168.39.60 127.0.0.1 ::1]
	I0920 17:07:58.800301   27962 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0920 17:07:58.800437   27962 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-135993 localhost] and IPs [192.168.39.60 127.0.0.1 ::1]
	I0920 17:07:58.953666   27962 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0920 17:07:59.106407   27962 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0920 17:07:59.233998   27962 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0920 17:07:59.234129   27962 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 17:07:59.525137   27962 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 17:07:59.766968   27962 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0920 17:08:00.120492   27962 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 17:08:00.216832   27962 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 17:08:00.360049   27962 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 17:08:00.360513   27962 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 17:08:00.363304   27962 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 17:08:00.365927   27962 out.go:235]   - Booting up control plane ...
	I0920 17:08:00.366064   27962 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 17:08:00.366181   27962 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 17:08:00.366311   27962 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 17:08:00.379619   27962 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 17:08:00.385661   27962 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 17:08:00.385729   27962 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 17:08:00.519566   27962 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0920 17:08:00.519711   27962 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0920 17:08:01.020357   27962 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.387016ms
	I0920 17:08:01.020471   27962 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0920 17:08:07.015773   27962 kubeadm.go:310] [api-check] The API server is healthy after 5.999233043s
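
Both waits above are plain HTTP health probes: kubeadm polls the kubelet's healthz endpoint on 127.0.0.1:10248 and then the API server's health endpoint until they return 200 or the 4-minute budget runs out. A generic sketch of such a probe loop; the URL comes from the log, while the poll interval is an assumption for illustration:

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        const url = "http://127.0.0.1:10248/healthz" // kubelet healthz endpoint from the log
        deadline := time.Now().Add(4 * time.Minute)  // "This can take up to 4m0s"
        for time.Now().Before(deadline) {
            resp, err := http.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("endpoint is healthy")
                    return
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for healthz")
    }
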
	I0920 17:08:07.031789   27962 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0920 17:08:07.055338   27962 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0920 17:08:07.096965   27962 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0920 17:08:07.097212   27962 kubeadm.go:310] [mark-control-plane] Marking the node ha-135993 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0920 17:08:07.111378   27962 kubeadm.go:310] [bootstrap-token] Using token: xrduw1.53792puohqvk415u
	I0920 17:08:07.112987   27962 out.go:235]   - Configuring RBAC rules ...
	I0920 17:08:07.113105   27962 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0920 17:08:07.126679   27962 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0920 17:08:07.140129   27962 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0920 17:08:07.144364   27962 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0920 17:08:07.148863   27962 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0920 17:08:07.153587   27962 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0920 17:08:07.423299   27962 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0920 17:08:07.856227   27962 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0920 17:08:08.423318   27962 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0920 17:08:08.423341   27962 kubeadm.go:310] 
	I0920 17:08:08.423388   27962 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0920 17:08:08.423393   27962 kubeadm.go:310] 
	I0920 17:08:08.423477   27962 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0920 17:08:08.423485   27962 kubeadm.go:310] 
	I0920 17:08:08.423525   27962 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0920 17:08:08.423586   27962 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0920 17:08:08.423645   27962 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0920 17:08:08.423658   27962 kubeadm.go:310] 
	I0920 17:08:08.423712   27962 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0920 17:08:08.423722   27962 kubeadm.go:310] 
	I0920 17:08:08.423765   27962 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0920 17:08:08.423774   27962 kubeadm.go:310] 
	I0920 17:08:08.423861   27962 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0920 17:08:08.423966   27962 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0920 17:08:08.424052   27962 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0920 17:08:08.424086   27962 kubeadm.go:310] 
	I0920 17:08:08.424207   27962 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0920 17:08:08.424318   27962 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0920 17:08:08.424327   27962 kubeadm.go:310] 
	I0920 17:08:08.424428   27962 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token xrduw1.53792puohqvk415u \
	I0920 17:08:08.424587   27962 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:736f3fb521855813decb4a7214f2b8beff5f81c3971b60decfa20a8807626a2d \
	I0920 17:08:08.424622   27962 kubeadm.go:310] 	--control-plane 
	I0920 17:08:08.424629   27962 kubeadm.go:310] 
	I0920 17:08:08.424753   27962 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0920 17:08:08.424765   27962 kubeadm.go:310] 
	I0920 17:08:08.424873   27962 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token xrduw1.53792puohqvk415u \
	I0920 17:08:08.425013   27962 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:736f3fb521855813decb4a7214f2b8beff5f81c3971b60decfa20a8807626a2d 
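
The discovery-token-ca-cert-hash printed in the join commands above is not arbitrary: it is the SHA-256 of the cluster CA certificate's DER-encoded public key (SPKI), which joining nodes use to pin the CA. A sketch of reproducing it; ca.crt stands in for /var/lib/minikube/certs/ca.crt on the node:

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        pemBytes, err := os.ReadFile("ca.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            panic("no PEM block in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
        if err != nil {
            panic(err)
        }
        fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
    }
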
	I0920 17:08:08.425950   27962 kubeadm.go:310] W0920 17:07:57.025597     832 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 17:08:08.426273   27962 kubeadm.go:310] W0920 17:07:57.026508     832 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 17:08:08.426428   27962 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 17:08:08.426462   27962 cni.go:84] Creating CNI manager for ""
	I0920 17:08:08.426477   27962 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0920 17:08:08.428341   27962 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0920 17:08:08.429841   27962 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0920 17:08:08.435818   27962 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0920 17:08:08.435838   27962 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0920 17:08:08.455244   27962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0920 17:08:08.799287   27962 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 17:08:08.799381   27962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:08:08.799436   27962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-135993 minikube.k8s.io/updated_at=2024_09_20T17_08_08_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=0626f22cf0d915d75e291a5bce701f94395056e1 minikube.k8s.io/name=ha-135993 minikube.k8s.io/primary=true
	I0920 17:08:08.948517   27962 ops.go:34] apiserver oom_adj: -16
	I0920 17:08:08.948664   27962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:08:09.449228   27962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:08:09.949041   27962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:08:10.449579   27962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:08:10.949086   27962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:08:11.449011   27962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:08:11.949120   27962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:08:12.448969   27962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:08:12.581415   27962 kubeadm.go:1113] duration metric: took 3.782097256s to wait for elevateKubeSystemPrivileges
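
The burst of identical kubectl calls above is a simple readiness poll: minikube retries "kubectl get sa default" roughly every half second (matching the timestamps) until the default ServiceAccount exists, and only then declares the elevateKubeSystemPrivileges step done. A sketch of the same loop; the command is taken from the log, while the interval and timeout are assumptions for illustration:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            if err := exec.Command("kubectl", "get", "sa", "default").Run(); err == nil {
                fmt.Println("default service account exists")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for the default service account")
    }
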
	I0920 17:08:12.581460   27962 kubeadm.go:394] duration metric: took 15.776931504s to StartCluster
	I0920 17:08:12.581484   27962 settings.go:142] acquiring lock: {Name:mk2a13c58cdc0faf8cddca5d6716175d45db9bfd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:08:12.581582   27962 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19672-8777/kubeconfig
	I0920 17:08:12.582546   27962 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/kubeconfig: {Name:mkf32a4c736808e023459b2f0e40188618a38db1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:08:12.582827   27962 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0920 17:08:12.582838   27962 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.60 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 17:08:12.582868   27962 start.go:241] waiting for startup goroutines ...
	I0920 17:08:12.582877   27962 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0920 17:08:12.582961   27962 addons.go:69] Setting storage-provisioner=true in profile "ha-135993"
	I0920 17:08:12.582983   27962 addons.go:234] Setting addon storage-provisioner=true in "ha-135993"
	I0920 17:08:12.582992   27962 addons.go:69] Setting default-storageclass=true in profile "ha-135993"
	I0920 17:08:12.583015   27962 host.go:66] Checking if "ha-135993" exists ...
	I0920 17:08:12.583021   27962 config.go:182] Loaded profile config "ha-135993": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 17:08:12.583016   27962 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-135993"
	I0920 17:08:12.583508   27962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:08:12.583545   27962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:08:12.583546   27962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:08:12.583578   27962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:08:12.598612   27962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36417
	I0920 17:08:12.598702   27962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37497
	I0920 17:08:12.599159   27962 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:08:12.599205   27962 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:08:12.599708   27962 main.go:141] libmachine: Using API Version  1
	I0920 17:08:12.599711   27962 main.go:141] libmachine: Using API Version  1
	I0920 17:08:12.599730   27962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:08:12.599732   27962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:08:12.600086   27962 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:08:12.600096   27962 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:08:12.600272   27962 main.go:141] libmachine: (ha-135993) Calling .GetState
	I0920 17:08:12.600654   27962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:08:12.600687   27962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:08:12.602399   27962 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19672-8777/kubeconfig
	I0920 17:08:12.602624   27962 kapi.go:59] client config for ha-135993: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/client.crt", KeyFile:"/home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/client.key", CAFile:"/home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f67ea0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0920 17:08:12.603002   27962 cert_rotation.go:140] Starting client certificate rotation controller
	I0920 17:08:12.603197   27962 addons.go:234] Setting addon default-storageclass=true in "ha-135993"
	I0920 17:08:12.603229   27962 host.go:66] Checking if "ha-135993" exists ...
	I0920 17:08:12.603512   27962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:08:12.603547   27962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:08:12.615990   27962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34795
	I0920 17:08:12.616508   27962 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:08:12.617237   27962 main.go:141] libmachine: Using API Version  1
	I0920 17:08:12.617264   27962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:08:12.617610   27962 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:08:12.617796   27962 main.go:141] libmachine: (ha-135993) Calling .GetState
	I0920 17:08:12.619399   27962 main.go:141] libmachine: (ha-135993) Calling .DriverName
	I0920 17:08:12.621713   27962 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 17:08:12.623141   27962 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 17:08:12.623157   27962 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 17:08:12.623178   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHHostname
	I0920 17:08:12.623273   27962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33981
	I0920 17:08:12.623802   27962 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:08:12.624342   27962 main.go:141] libmachine: Using API Version  1
	I0920 17:08:12.624366   27962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:08:12.624828   27962 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:08:12.625480   27962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:08:12.625530   27962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:08:12.626097   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:08:12.626527   27962 main.go:141] libmachine: (ha-135993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:09", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:07:43 +0000 UTC Type:0 Mac:52:54:00:99:26:09 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-135993 Clientid:01:52:54:00:99:26:09}
	I0920 17:08:12.626552   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined IP address 192.168.39.60 and MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:08:12.626807   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHPort
	I0920 17:08:12.626980   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHKeyPath
	I0920 17:08:12.627125   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHUsername
	I0920 17:08:12.627264   27962 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993/id_rsa Username:docker}
	I0920 17:08:12.642774   27962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36005
	I0920 17:08:12.643262   27962 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:08:12.643818   27962 main.go:141] libmachine: Using API Version  1
	I0920 17:08:12.643841   27962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:08:12.644239   27962 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:08:12.644440   27962 main.go:141] libmachine: (ha-135993) Calling .GetState
	I0920 17:08:12.645924   27962 main.go:141] libmachine: (ha-135993) Calling .DriverName
	I0920 17:08:12.646117   27962 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 17:08:12.646130   27962 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 17:08:12.646144   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHHostname
	I0920 17:08:12.649003   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:08:12.649483   27962 main.go:141] libmachine: (ha-135993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:09", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:07:43 +0000 UTC Type:0 Mac:52:54:00:99:26:09 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-135993 Clientid:01:52:54:00:99:26:09}
	I0920 17:08:12.649502   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined IP address 192.168.39.60 and MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:08:12.649607   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHPort
	I0920 17:08:12.649789   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHKeyPath
	I0920 17:08:12.649942   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHUsername
	I0920 17:08:12.650098   27962 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993/id_rsa Username:docker}
	I0920 17:08:12.744585   27962 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0920 17:08:12.762429   27962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 17:08:12.828758   27962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 17:08:13.268354   27962 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
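The pipeline at 17:08:12.744585 above edits the coredns ConfigMap in place: it inserts a hosts block that resolves host.minikube.internal to 192.168.39.1 ahead of the forward plugin, then replaces the ConfigMap, which is what the "host record injected" line confirms. An equivalent illustration with client-go follows; the exact Corefile indentation is an assumption, and minikube itself does this with the sed-and-replace command shown above rather than this code:

package main

import (
	"context"
	"errors"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// injectHostRecord adds a hosts {} stanza for host.minikube.internal to the
// CoreDNS Corefile, in front of the forward plugin, then updates the ConfigMap.
func injectHostRecord(kubeconfig, hostIP string) error {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return err
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	ctx := context.Background()
	cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		return err
	}
	corefile, ok := cm.Data["Corefile"]
	if !ok {
		return errors.New("coredns ConfigMap has no Corefile key")
	}
	hosts := "    hosts {\n       " + hostIP + " host.minikube.internal\n       fallthrough\n    }\n"
	cm.Data["Corefile"] = strings.Replace(corefile,
		"    forward . /etc/resolv.conf", hosts+"    forward . /etc/resolv.conf", 1)
	_, err = cs.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{})
	return err
}

func main() {
	_ = injectHostRecord("/var/lib/minikube/kubeconfig", "192.168.39.1")
}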
	I0920 17:08:13.434438   27962 main.go:141] libmachine: Making call to close driver server
	I0920 17:08:13.434476   27962 main.go:141] libmachine: (ha-135993) Calling .Close
	I0920 17:08:13.434519   27962 main.go:141] libmachine: Making call to close driver server
	I0920 17:08:13.434543   27962 main.go:141] libmachine: (ha-135993) Calling .Close
	I0920 17:08:13.434773   27962 main.go:141] libmachine: (ha-135993) DBG | Closing plugin on server side
	I0920 17:08:13.434818   27962 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:08:13.434827   27962 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:08:13.434838   27962 main.go:141] libmachine: Making call to close driver server
	I0920 17:08:13.434847   27962 main.go:141] libmachine: (ha-135993) Calling .Close
	I0920 17:08:13.434882   27962 main.go:141] libmachine: (ha-135993) DBG | Closing plugin on server side
	I0920 17:08:13.434897   27962 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:08:13.434914   27962 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:08:13.434931   27962 main.go:141] libmachine: Making call to close driver server
	I0920 17:08:13.434943   27962 main.go:141] libmachine: (ha-135993) Calling .Close
	I0920 17:08:13.435090   27962 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:08:13.435107   27962 main.go:141] libmachine: (ha-135993) DBG | Closing plugin on server side
	I0920 17:08:13.435115   27962 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:08:13.435168   27962 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:08:13.435183   27962 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:08:13.435240   27962 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0920 17:08:13.435265   27962 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0920 17:08:13.435361   27962 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0920 17:08:13.435370   27962 round_trippers.go:469] Request Headers:
	I0920 17:08:13.435380   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:08:13.435388   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:08:13.451251   27962 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0920 17:08:13.451915   27962 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0920 17:08:13.451933   27962 round_trippers.go:469] Request Headers:
	I0920 17:08:13.451945   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:08:13.451951   27962 round_trippers.go:473]     Content-Type: application/json
	I0920 17:08:13.451959   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:08:13.455819   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:08:13.456046   27962 main.go:141] libmachine: Making call to close driver server
	I0920 17:08:13.456063   27962 main.go:141] libmachine: (ha-135993) Calling .Close
	I0920 17:08:13.456328   27962 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:08:13.456345   27962 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:08:13.457999   27962 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0920 17:08:13.459046   27962 addons.go:510] duration metric: took 876.16629ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0920 17:08:13.459075   27962 start.go:246] waiting for cluster config update ...
	I0920 17:08:13.459086   27962 start.go:255] writing updated cluster config ...
	I0920 17:08:13.460310   27962 out.go:201] 
	I0920 17:08:13.461415   27962 config.go:182] Loaded profile config "ha-135993": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 17:08:13.461487   27962 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/config.json ...
	I0920 17:08:13.462998   27962 out.go:177] * Starting "ha-135993-m02" control-plane node in "ha-135993" cluster
	I0920 17:08:13.463913   27962 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 17:08:13.463932   27962 cache.go:56] Caching tarball of preloaded images
	I0920 17:08:13.464013   27962 preload.go:172] Found /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 17:08:13.464026   27962 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0920 17:08:13.464094   27962 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/config.json ...
	I0920 17:08:13.464275   27962 start.go:360] acquireMachinesLock for ha-135993-m02: {Name:mkfeedb385cf08b5d2aa00913e85815d02a180c2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 17:08:13.464329   27962 start.go:364] duration metric: took 31.835µs to acquireMachinesLock for "ha-135993-m02"
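acquireMachinesLock above serializes machine creation for the profile: only one provisioning goroutine may hold the lock at a time, with a 500ms retry delay and a 13-minute timeout as shown in the lock spec. minikube uses its own named-lock helper for this; the snippet below is only a generic illustration of the same acquire-with-retry idea using an exclusive file lock (the lock path and helper name are invented):

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/sys/unix"
)

// acquireFileLock takes an exclusive flock on path, retrying every delay
// until timeout expires. Purely illustrative; minikube's machine lock is a
// named lock with similar delay/timeout semantics, not an flock on a file.
func acquireFileLock(path string, delay, timeout time.Duration) (*os.File, error) {
	f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0o600)
	if err != nil {
		return nil, err
	}
	deadline := time.Now().Add(timeout)
	for {
		if err := unix.Flock(int(f.Fd()), unix.LOCK_EX|unix.LOCK_NB); err == nil {
			return f, nil // caller unlocks and closes when provisioning is done
		}
		if time.Now().After(deadline) {
			f.Close()
			return nil, fmt.Errorf("timed out acquiring %s", path)
		}
		time.Sleep(delay)
	}
}

func main() {
	f, err := acquireFileLock("/tmp/minikube-machines.lock", 500*time.Millisecond, 13*time.Minute)
	fmt.Println(f != nil, err)
}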
	I0920 17:08:13.464351   27962 start.go:93] Provisioning new machine with config: &{Name:ha-135993 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-135993 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.60 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 17:08:13.464449   27962 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0920 17:08:13.466601   27962 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0920 17:08:13.466688   27962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:08:13.466714   27962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:08:13.482616   27962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46331
	I0920 17:08:13.483161   27962 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:08:13.483661   27962 main.go:141] libmachine: Using API Version  1
	I0920 17:08:13.483682   27962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:08:13.484002   27962 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:08:13.484185   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetMachineName
	I0920 17:08:13.484325   27962 main.go:141] libmachine: (ha-135993-m02) Calling .DriverName
	I0920 17:08:13.484522   27962 start.go:159] libmachine.API.Create for "ha-135993" (driver="kvm2")
	I0920 17:08:13.484544   27962 client.go:168] LocalClient.Create starting
	I0920 17:08:13.484569   27962 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem
	I0920 17:08:13.484600   27962 main.go:141] libmachine: Decoding PEM data...
	I0920 17:08:13.484614   27962 main.go:141] libmachine: Parsing certificate...
	I0920 17:08:13.484662   27962 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem
	I0920 17:08:13.484680   27962 main.go:141] libmachine: Decoding PEM data...
	I0920 17:08:13.484691   27962 main.go:141] libmachine: Parsing certificate...
	I0920 17:08:13.484704   27962 main.go:141] libmachine: Running pre-create checks...
	I0920 17:08:13.484711   27962 main.go:141] libmachine: (ha-135993-m02) Calling .PreCreateCheck
	I0920 17:08:13.484853   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetConfigRaw
	I0920 17:08:13.485217   27962 main.go:141] libmachine: Creating machine...
	I0920 17:08:13.485230   27962 main.go:141] libmachine: (ha-135993-m02) Calling .Create
	I0920 17:08:13.485333   27962 main.go:141] libmachine: (ha-135993-m02) Creating KVM machine...
	I0920 17:08:13.486545   27962 main.go:141] libmachine: (ha-135993-m02) DBG | found existing default KVM network
	I0920 17:08:13.486700   27962 main.go:141] libmachine: (ha-135993-m02) DBG | found existing private KVM network mk-ha-135993
	I0920 17:08:13.486822   27962 main.go:141] libmachine: (ha-135993-m02) Setting up store path in /home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993-m02 ...
	I0920 17:08:13.486843   27962 main.go:141] libmachine: (ha-135993-m02) Building disk image from file:///home/jenkins/minikube-integration/19672-8777/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso
	I0920 17:08:13.486907   27962 main.go:141] libmachine: (ha-135993-m02) DBG | I0920 17:08:13.486794   28324 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19672-8777/.minikube
	I0920 17:08:13.486988   27962 main.go:141] libmachine: (ha-135993-m02) Downloading /home/jenkins/minikube-integration/19672-8777/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19672-8777/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso...
	I0920 17:08:13.739935   27962 main.go:141] libmachine: (ha-135993-m02) DBG | I0920 17:08:13.739800   28324 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993-m02/id_rsa...
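The driver generates a fresh RSA key pair for the new VM (the id_rsa path above) so the docker user can later be reached over SSH. A compact sketch of generating such a pair with the standard library and golang.org/x/crypto/ssh (the key size and output file names are assumptions):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"os"

	"golang.org/x/crypto/ssh"
)

// generateSSHKeyPair writes an RSA private key and the matching
// authorized_keys-format public key, similar in spirit to the id_rsa
// the machine driver creates above.
func generateSSHKeyPair(privPath, pubPath string) error {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return err
	}
	privPEM := pem.EncodeToMemory(&pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(key),
	})
	if err := os.WriteFile(privPath, privPEM, 0o600); err != nil {
		return err
	}
	pub, err := ssh.NewPublicKey(&key.PublicKey)
	if err != nil {
		return err
	}
	return os.WriteFile(pubPath, ssh.MarshalAuthorizedKey(pub), 0o644)
}

func main() {
	_ = generateSSHKeyPair("id_rsa", "id_rsa.pub")
}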
	I0920 17:08:13.830603   27962 main.go:141] libmachine: (ha-135993-m02) DBG | I0920 17:08:13.830462   28324 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993-m02/ha-135993-m02.rawdisk...
	I0920 17:08:13.830640   27962 main.go:141] libmachine: (ha-135993-m02) DBG | Writing magic tar header
	I0920 17:08:13.830656   27962 main.go:141] libmachine: (ha-135993-m02) DBG | Writing SSH key tar header
	I0920 17:08:13.830668   27962 main.go:141] libmachine: (ha-135993-m02) DBG | I0920 17:08:13.830608   28324 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993-m02 ...
	I0920 17:08:13.830709   27962 main.go:141] libmachine: (ha-135993-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993-m02
	I0920 17:08:13.830748   27962 main.go:141] libmachine: (ha-135993-m02) Setting executable bit set on /home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993-m02 (perms=drwx------)
	I0920 17:08:13.830769   27962 main.go:141] libmachine: (ha-135993-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-8777/.minikube/machines
	I0920 17:08:13.830782   27962 main.go:141] libmachine: (ha-135993-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-8777/.minikube
	I0920 17:08:13.830799   27962 main.go:141] libmachine: (ha-135993-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-8777
	I0920 17:08:13.830811   27962 main.go:141] libmachine: (ha-135993-m02) Setting executable bit set on /home/jenkins/minikube-integration/19672-8777/.minikube/machines (perms=drwxr-xr-x)
	I0920 17:08:13.830822   27962 main.go:141] libmachine: (ha-135993-m02) Setting executable bit set on /home/jenkins/minikube-integration/19672-8777/.minikube (perms=drwxr-xr-x)
	I0920 17:08:13.830830   27962 main.go:141] libmachine: (ha-135993-m02) Setting executable bit set on /home/jenkins/minikube-integration/19672-8777 (perms=drwxrwxr-x)
	I0920 17:08:13.830839   27962 main.go:141] libmachine: (ha-135993-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0920 17:08:13.830852   27962 main.go:141] libmachine: (ha-135993-m02) DBG | Checking permissions on dir: /home/jenkins
	I0920 17:08:13.830862   27962 main.go:141] libmachine: (ha-135993-m02) DBG | Checking permissions on dir: /home
	I0920 17:08:13.830873   27962 main.go:141] libmachine: (ha-135993-m02) DBG | Skipping /home - not owner
	I0920 17:08:13.830885   27962 main.go:141] libmachine: (ha-135993-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0920 17:08:13.830900   27962 main.go:141] libmachine: (ha-135993-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0920 17:08:13.830909   27962 main.go:141] libmachine: (ha-135993-m02) Creating domain...
	I0920 17:08:13.831832   27962 main.go:141] libmachine: (ha-135993-m02) define libvirt domain using xml: 
	I0920 17:08:13.831858   27962 main.go:141] libmachine: (ha-135993-m02) <domain type='kvm'>
	I0920 17:08:13.831868   27962 main.go:141] libmachine: (ha-135993-m02)   <name>ha-135993-m02</name>
	I0920 17:08:13.831879   27962 main.go:141] libmachine: (ha-135993-m02)   <memory unit='MiB'>2200</memory>
	I0920 17:08:13.831891   27962 main.go:141] libmachine: (ha-135993-m02)   <vcpu>2</vcpu>
	I0920 17:08:13.831897   27962 main.go:141] libmachine: (ha-135993-m02)   <features>
	I0920 17:08:13.831904   27962 main.go:141] libmachine: (ha-135993-m02)     <acpi/>
	I0920 17:08:13.831913   27962 main.go:141] libmachine: (ha-135993-m02)     <apic/>
	I0920 17:08:13.831922   27962 main.go:141] libmachine: (ha-135993-m02)     <pae/>
	I0920 17:08:13.831931   27962 main.go:141] libmachine: (ha-135993-m02)     
	I0920 17:08:13.831943   27962 main.go:141] libmachine: (ha-135993-m02)   </features>
	I0920 17:08:13.831953   27962 main.go:141] libmachine: (ha-135993-m02)   <cpu mode='host-passthrough'>
	I0920 17:08:13.831960   27962 main.go:141] libmachine: (ha-135993-m02)   
	I0920 17:08:13.831967   27962 main.go:141] libmachine: (ha-135993-m02)   </cpu>
	I0920 17:08:13.831975   27962 main.go:141] libmachine: (ha-135993-m02)   <os>
	I0920 17:08:13.831983   27962 main.go:141] libmachine: (ha-135993-m02)     <type>hvm</type>
	I0920 17:08:13.831995   27962 main.go:141] libmachine: (ha-135993-m02)     <boot dev='cdrom'/>
	I0920 17:08:13.832003   27962 main.go:141] libmachine: (ha-135993-m02)     <boot dev='hd'/>
	I0920 17:08:13.832013   27962 main.go:141] libmachine: (ha-135993-m02)     <bootmenu enable='no'/>
	I0920 17:08:13.832023   27962 main.go:141] libmachine: (ha-135993-m02)   </os>
	I0920 17:08:13.832038   27962 main.go:141] libmachine: (ha-135993-m02)   <devices>
	I0920 17:08:13.832051   27962 main.go:141] libmachine: (ha-135993-m02)     <disk type='file' device='cdrom'>
	I0920 17:08:13.832071   27962 main.go:141] libmachine: (ha-135993-m02)       <source file='/home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993-m02/boot2docker.iso'/>
	I0920 17:08:13.832084   27962 main.go:141] libmachine: (ha-135993-m02)       <target dev='hdc' bus='scsi'/>
	I0920 17:08:13.832095   27962 main.go:141] libmachine: (ha-135993-m02)       <readonly/>
	I0920 17:08:13.832104   27962 main.go:141] libmachine: (ha-135993-m02)     </disk>
	I0920 17:08:13.832113   27962 main.go:141] libmachine: (ha-135993-m02)     <disk type='file' device='disk'>
	I0920 17:08:13.832122   27962 main.go:141] libmachine: (ha-135993-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0920 17:08:13.832133   27962 main.go:141] libmachine: (ha-135993-m02)       <source file='/home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993-m02/ha-135993-m02.rawdisk'/>
	I0920 17:08:13.832144   27962 main.go:141] libmachine: (ha-135993-m02)       <target dev='hda' bus='virtio'/>
	I0920 17:08:13.832153   27962 main.go:141] libmachine: (ha-135993-m02)     </disk>
	I0920 17:08:13.832164   27962 main.go:141] libmachine: (ha-135993-m02)     <interface type='network'>
	I0920 17:08:13.832173   27962 main.go:141] libmachine: (ha-135993-m02)       <source network='mk-ha-135993'/>
	I0920 17:08:13.832186   27962 main.go:141] libmachine: (ha-135993-m02)       <model type='virtio'/>
	I0920 17:08:13.832197   27962 main.go:141] libmachine: (ha-135993-m02)     </interface>
	I0920 17:08:13.832209   27962 main.go:141] libmachine: (ha-135993-m02)     <interface type='network'>
	I0920 17:08:13.832217   27962 main.go:141] libmachine: (ha-135993-m02)       <source network='default'/>
	I0920 17:08:13.832232   27962 main.go:141] libmachine: (ha-135993-m02)       <model type='virtio'/>
	I0920 17:08:13.832243   27962 main.go:141] libmachine: (ha-135993-m02)     </interface>
	I0920 17:08:13.832253   27962 main.go:141] libmachine: (ha-135993-m02)     <serial type='pty'>
	I0920 17:08:13.832261   27962 main.go:141] libmachine: (ha-135993-m02)       <target port='0'/>
	I0920 17:08:13.832270   27962 main.go:141] libmachine: (ha-135993-m02)     </serial>
	I0920 17:08:13.832278   27962 main.go:141] libmachine: (ha-135993-m02)     <console type='pty'>
	I0920 17:08:13.832288   27962 main.go:141] libmachine: (ha-135993-m02)       <target type='serial' port='0'/>
	I0920 17:08:13.832293   27962 main.go:141] libmachine: (ha-135993-m02)     </console>
	I0920 17:08:13.832301   27962 main.go:141] libmachine: (ha-135993-m02)     <rng model='virtio'>
	I0920 17:08:13.832311   27962 main.go:141] libmachine: (ha-135993-m02)       <backend model='random'>/dev/random</backend>
	I0920 17:08:13.832320   27962 main.go:141] libmachine: (ha-135993-m02)     </rng>
	I0920 17:08:13.832333   27962 main.go:141] libmachine: (ha-135993-m02)     
	I0920 17:08:13.832354   27962 main.go:141] libmachine: (ha-135993-m02)     
	I0920 17:08:13.832409   27962 main.go:141] libmachine: (ha-135993-m02)   </devices>
	I0920 17:08:13.832434   27962 main.go:141] libmachine: (ha-135993-m02) </domain>
	I0920 17:08:13.832443   27962 main.go:141] libmachine: (ha-135993-m02) 
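The XML document printed above is handed to libvirt to define the VM; the later "Creating domain..." line is where the defined domain is actually started. A minimal sketch using the libvirt Go bindings (the module path and reading the XML from a file are assumptions; the qemu:///system URI matches KVMQemuURI in the machine config earlier in this log):

package main

import (
	"log"
	"os"

	libvirt "libvirt.org/go/libvirt"
)

// defineAndStart feeds a domain XML document (like the one printed above)
// to libvirt and then boots the resulting domain.
func defineAndStart(xmlPath string) error {
	xml, err := os.ReadFile(xmlPath)
	if err != nil {
		return err
	}
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		return err
	}
	defer conn.Close()

	dom, err := conn.DomainDefineXML(string(xml)) // "define libvirt domain using xml"
	if err != nil {
		return err
	}
	defer dom.Free()

	return dom.Create() // "Creating domain..." — starts the VM
}

func main() {
	if err := defineAndStart("ha-135993-m02.xml"); err != nil {
		log.Fatal(err)
	}
}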
	I0920 17:08:13.839347   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined MAC address 52:54:00:40:3b:17 in network default
	I0920 17:08:13.839981   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:13.840002   27962 main.go:141] libmachine: (ha-135993-m02) Ensuring networks are active...
	I0920 17:08:13.840774   27962 main.go:141] libmachine: (ha-135993-m02) Ensuring network default is active
	I0920 17:08:13.841013   27962 main.go:141] libmachine: (ha-135993-m02) Ensuring network mk-ha-135993 is active
	I0920 17:08:13.841381   27962 main.go:141] libmachine: (ha-135993-m02) Getting domain xml...
	I0920 17:08:13.842134   27962 main.go:141] libmachine: (ha-135993-m02) Creating domain...
	I0920 17:08:15.062497   27962 main.go:141] libmachine: (ha-135993-m02) Waiting to get IP...
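The "will retry after ..." lines that follow come from the driver polling the network for a DHCP lease on the new MAC address, sleeping a steadily growing, jittered interval between attempts. A generic sketch of that retry shape (the probe function, starting delay and overall budget are illustrative, not the driver's own retry helper):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff keeps calling probe until it succeeds or maxWait is
// exhausted, roughly doubling a jittered delay between attempts, which is
// the shape of the "will retry after ..." messages below.
func retryWithBackoff(probe func() error, initial, maxWait time.Duration) error {
	delay := initial
	deadline := time.Now().Add(maxWait)
	for attempt := 1; ; attempt++ {
		if err := probe(); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("gave up waiting for machine to come up")
		}
		jittered := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("attempt %d failed, will retry after %v\n", attempt, jittered)
		time.Sleep(jittered)
		delay *= 2
	}
}

func main() {
	calls := 0
	_ = retryWithBackoff(func() error {
		calls++
		if calls < 5 {
			return errors.New("no IP yet") // stand-in for "unable to find current IP address"
		}
		return nil
	}, 200*time.Millisecond, 2*time.Minute)
}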
	I0920 17:08:15.063280   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:15.063771   27962 main.go:141] libmachine: (ha-135993-m02) DBG | unable to find current IP address of domain ha-135993-m02 in network mk-ha-135993
	I0920 17:08:15.063837   27962 main.go:141] libmachine: (ha-135993-m02) DBG | I0920 17:08:15.063776   28324 retry.go:31] will retry after 209.317935ms: waiting for machine to come up
	I0920 17:08:15.275351   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:15.275800   27962 main.go:141] libmachine: (ha-135993-m02) DBG | unable to find current IP address of domain ha-135993-m02 in network mk-ha-135993
	I0920 17:08:15.275825   27962 main.go:141] libmachine: (ha-135993-m02) DBG | I0920 17:08:15.275759   28324 retry.go:31] will retry after 321.648558ms: waiting for machine to come up
	I0920 17:08:15.599294   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:15.599955   27962 main.go:141] libmachine: (ha-135993-m02) DBG | unable to find current IP address of domain ha-135993-m02 in network mk-ha-135993
	I0920 17:08:15.599981   27962 main.go:141] libmachine: (ha-135993-m02) DBG | I0920 17:08:15.599902   28324 retry.go:31] will retry after 379.94005ms: waiting for machine to come up
	I0920 17:08:15.981649   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:15.982207   27962 main.go:141] libmachine: (ha-135993-m02) DBG | unable to find current IP address of domain ha-135993-m02 in network mk-ha-135993
	I0920 17:08:15.982258   27962 main.go:141] libmachine: (ha-135993-m02) DBG | I0920 17:08:15.982185   28324 retry.go:31] will retry after 407.2672ms: waiting for machine to come up
	I0920 17:08:16.390723   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:16.391164   27962 main.go:141] libmachine: (ha-135993-m02) DBG | unable to find current IP address of domain ha-135993-m02 in network mk-ha-135993
	I0920 17:08:16.391190   27962 main.go:141] libmachine: (ha-135993-m02) DBG | I0920 17:08:16.391121   28324 retry.go:31] will retry after 540.634265ms: waiting for machine to come up
	I0920 17:08:16.933924   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:16.934354   27962 main.go:141] libmachine: (ha-135993-m02) DBG | unable to find current IP address of domain ha-135993-m02 in network mk-ha-135993
	I0920 17:08:16.934380   27962 main.go:141] libmachine: (ha-135993-m02) DBG | I0920 17:08:16.934280   28324 retry.go:31] will retry after 944.239732ms: waiting for machine to come up
	I0920 17:08:17.880458   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:17.880905   27962 main.go:141] libmachine: (ha-135993-m02) DBG | unable to find current IP address of domain ha-135993-m02 in network mk-ha-135993
	I0920 17:08:17.880937   27962 main.go:141] libmachine: (ha-135993-m02) DBG | I0920 17:08:17.880855   28324 retry.go:31] will retry after 1.092727798s: waiting for machine to come up
	I0920 17:08:18.975422   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:18.975784   27962 main.go:141] libmachine: (ha-135993-m02) DBG | unable to find current IP address of domain ha-135993-m02 in network mk-ha-135993
	I0920 17:08:18.975813   27962 main.go:141] libmachine: (ha-135993-m02) DBG | I0920 17:08:18.975727   28324 retry.go:31] will retry after 1.481134943s: waiting for machine to come up
	I0920 17:08:20.459346   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:20.459802   27962 main.go:141] libmachine: (ha-135993-m02) DBG | unable to find current IP address of domain ha-135993-m02 in network mk-ha-135993
	I0920 17:08:20.459819   27962 main.go:141] libmachine: (ha-135993-m02) DBG | I0920 17:08:20.459747   28324 retry.go:31] will retry after 1.808510088s: waiting for machine to come up
	I0920 17:08:22.270788   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:22.271210   27962 main.go:141] libmachine: (ha-135993-m02) DBG | unable to find current IP address of domain ha-135993-m02 in network mk-ha-135993
	I0920 17:08:22.271239   27962 main.go:141] libmachine: (ha-135993-m02) DBG | I0920 17:08:22.271135   28324 retry.go:31] will retry after 1.59499674s: waiting for machine to come up
	I0920 17:08:23.868039   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:23.868429   27962 main.go:141] libmachine: (ha-135993-m02) DBG | unable to find current IP address of domain ha-135993-m02 in network mk-ha-135993
	I0920 17:08:23.868456   27962 main.go:141] libmachine: (ha-135993-m02) DBG | I0920 17:08:23.868389   28324 retry.go:31] will retry after 2.718058875s: waiting for machine to come up
	I0920 17:08:26.587523   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:26.588013   27962 main.go:141] libmachine: (ha-135993-m02) DBG | unable to find current IP address of domain ha-135993-m02 in network mk-ha-135993
	I0920 17:08:26.588042   27962 main.go:141] libmachine: (ha-135993-m02) DBG | I0920 17:08:26.587966   28324 retry.go:31] will retry after 2.496735484s: waiting for machine to come up
	I0920 17:08:29.085932   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:29.086306   27962 main.go:141] libmachine: (ha-135993-m02) DBG | unable to find current IP address of domain ha-135993-m02 in network mk-ha-135993
	I0920 17:08:29.086335   27962 main.go:141] libmachine: (ha-135993-m02) DBG | I0920 17:08:29.086239   28324 retry.go:31] will retry after 2.750361097s: waiting for machine to come up
	I0920 17:08:31.838828   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:31.839392   27962 main.go:141] libmachine: (ha-135993-m02) DBG | unable to find current IP address of domain ha-135993-m02 in network mk-ha-135993
	I0920 17:08:31.839414   27962 main.go:141] libmachine: (ha-135993-m02) DBG | I0920 17:08:31.839344   28324 retry.go:31] will retry after 4.254809645s: waiting for machine to come up
	I0920 17:08:36.096360   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:36.096729   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has current primary IP address 192.168.39.227 and MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:36.096746   27962 main.go:141] libmachine: (ha-135993-m02) Found IP for machine: 192.168.39.227
	I0920 17:08:36.096755   27962 main.go:141] libmachine: (ha-135993-m02) Reserving static IP address...
	I0920 17:08:36.097098   27962 main.go:141] libmachine: (ha-135993-m02) DBG | unable to find host DHCP lease matching {name: "ha-135993-m02", mac: "52:54:00:87:dc:24", ip: "192.168.39.227"} in network mk-ha-135993
	I0920 17:08:36.167513   27962 main.go:141] libmachine: (ha-135993-m02) DBG | Getting to WaitForSSH function...
	I0920 17:08:36.167545   27962 main.go:141] libmachine: (ha-135993-m02) Reserved static IP address: 192.168.39.227
	I0920 17:08:36.167558   27962 main.go:141] libmachine: (ha-135993-m02) Waiting for SSH to be available...
	I0920 17:08:36.170087   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:36.170491   27962 main.go:141] libmachine: (ha-135993-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:dc:24", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:08:28 +0000 UTC Type:0 Mac:52:54:00:87:dc:24 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:minikube Clientid:01:52:54:00:87:dc:24}
	I0920 17:08:36.170519   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:36.170690   27962 main.go:141] libmachine: (ha-135993-m02) DBG | Using SSH client type: external
	I0920 17:08:36.170712   27962 main.go:141] libmachine: (ha-135993-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993-m02/id_rsa (-rw-------)
	I0920 17:08:36.170731   27962 main.go:141] libmachine: (ha-135993-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.227 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 17:08:36.170745   27962 main.go:141] libmachine: (ha-135993-m02) DBG | About to run SSH command:
	I0920 17:08:36.170753   27962 main.go:141] libmachine: (ha-135993-m02) DBG | exit 0
	I0920 17:08:36.294607   27962 main.go:141] libmachine: (ha-135993-m02) DBG | SSH cmd err, output: <nil>: 
	I0920 17:08:36.294933   27962 main.go:141] libmachine: (ha-135993-m02) KVM machine creation complete!
	I0920 17:08:36.295321   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetConfigRaw
	I0920 17:08:36.295951   27962 main.go:141] libmachine: (ha-135993-m02) Calling .DriverName
	I0920 17:08:36.296272   27962 main.go:141] libmachine: (ha-135993-m02) Calling .DriverName
	I0920 17:08:36.296483   27962 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0920 17:08:36.296509   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetState
	I0920 17:08:36.298367   27962 main.go:141] libmachine: Detecting operating system of created instance...
	I0920 17:08:36.298385   27962 main.go:141] libmachine: Waiting for SSH to be available...
	I0920 17:08:36.298392   27962 main.go:141] libmachine: Getting to WaitForSSH function...
	I0920 17:08:36.298400   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHHostname
	I0920 17:08:36.301173   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:36.301568   27962 main.go:141] libmachine: (ha-135993-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:dc:24", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:08:28 +0000 UTC Type:0 Mac:52:54:00:87:dc:24 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-135993-m02 Clientid:01:52:54:00:87:dc:24}
	I0920 17:08:36.301596   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:36.301712   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHPort
	I0920 17:08:36.301889   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHKeyPath
	I0920 17:08:36.302037   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHKeyPath
	I0920 17:08:36.302163   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHUsername
	I0920 17:08:36.302363   27962 main.go:141] libmachine: Using SSH client type: native
	I0920 17:08:36.302570   27962 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0920 17:08:36.302587   27962 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0920 17:08:36.409296   27962 main.go:141] libmachine: SSH cmd err, output: <nil>: 
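Once the VM answers on port 22, the native SSH client dials it with the generated id_rsa and runs "exit 0" as a liveness probe, which is what just succeeded above. A self-contained sketch of that probe with golang.org/x/crypto/ssh; ignoring host keys mirrors the StrictHostKeyChecking=no option passed to the external client earlier:

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// sshExitZero dials the new VM with the generated private key and runs
// `exit 0`, the same liveness probe the log shows for the native client.
func sshExitZero(addr, user, keyPath string) error {
	keyBytes, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
		Timeout:         10 * time.Second,
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return err
	}
	defer session.Close()
	return session.Run("exit 0")
}

func main() {
	fmt.Println(sshExitZero("192.168.39.227:22", "docker",
		"/home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993-m02/id_rsa"))
}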
	I0920 17:08:36.409321   27962 main.go:141] libmachine: Detecting the provisioner...
	I0920 17:08:36.409329   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHHostname
	I0920 17:08:36.412054   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:36.412453   27962 main.go:141] libmachine: (ha-135993-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:dc:24", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:08:28 +0000 UTC Type:0 Mac:52:54:00:87:dc:24 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-135993-m02 Clientid:01:52:54:00:87:dc:24}
	I0920 17:08:36.412473   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:36.412680   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHPort
	I0920 17:08:36.412859   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHKeyPath
	I0920 17:08:36.413003   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHKeyPath
	I0920 17:08:36.413158   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHUsername
	I0920 17:08:36.413299   27962 main.go:141] libmachine: Using SSH client type: native
	I0920 17:08:36.413464   27962 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0920 17:08:36.413474   27962 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0920 17:08:36.522550   27962 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0920 17:08:36.522639   27962 main.go:141] libmachine: found compatible host: buildroot
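Detecting the provisioner comes down to reading /etc/os-release over SSH and matching the ID field, which is buildroot in this run. A small sketch of parsing that file's KEY=VALUE format (reading it locally here for simplicity):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// parseOSRelease reads /etc/os-release-style KEY=VALUE lines; the ID field
// ("buildroot" in the output above) identifies the compatible provisioner.
func parseOSRelease(path string) (map[string]string, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()
	info := map[string]string{}
	s := bufio.NewScanner(f)
	for s.Scan() {
		line := strings.TrimSpace(s.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		k, v, ok := strings.Cut(line, "=")
		if !ok {
			continue
		}
		info[k] = strings.Trim(v, `"`)
	}
	return info, s.Err()
}

func main() {
	info, err := parseOSRelease("/etc/os-release")
	if err == nil && info["ID"] == "buildroot" {
		fmt.Println("found compatible host: buildroot")
	}
}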
	I0920 17:08:36.522653   27962 main.go:141] libmachine: Provisioning with buildroot...
	I0920 17:08:36.522668   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetMachineName
	I0920 17:08:36.522875   27962 buildroot.go:166] provisioning hostname "ha-135993-m02"
	I0920 17:08:36.522896   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetMachineName
	I0920 17:08:36.523039   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHHostname
	I0920 17:08:36.525697   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:36.526081   27962 main.go:141] libmachine: (ha-135993-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:dc:24", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:08:28 +0000 UTC Type:0 Mac:52:54:00:87:dc:24 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-135993-m02 Clientid:01:52:54:00:87:dc:24}
	I0920 17:08:36.526108   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:36.526279   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHPort
	I0920 17:08:36.526447   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHKeyPath
	I0920 17:08:36.526596   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHKeyPath
	I0920 17:08:36.526717   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHUsername
	I0920 17:08:36.526893   27962 main.go:141] libmachine: Using SSH client type: native
	I0920 17:08:36.527091   27962 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0920 17:08:36.527103   27962 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-135993-m02 && echo "ha-135993-m02" | sudo tee /etc/hostname
	I0920 17:08:36.648108   27962 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-135993-m02
	
	I0920 17:08:36.648139   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHHostname
	I0920 17:08:36.651735   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:36.652103   27962 main.go:141] libmachine: (ha-135993-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:dc:24", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:08:28 +0000 UTC Type:0 Mac:52:54:00:87:dc:24 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-135993-m02 Clientid:01:52:54:00:87:dc:24}
	I0920 17:08:36.652141   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:36.652372   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHPort
	I0920 17:08:36.652553   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHKeyPath
	I0920 17:08:36.652726   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHKeyPath
	I0920 17:08:36.652907   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHUsername
	I0920 17:08:36.653066   27962 main.go:141] libmachine: Using SSH client type: native
	I0920 17:08:36.653241   27962 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0920 17:08:36.653262   27962 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-135993-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-135993-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-135993-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 17:08:36.767084   27962 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 17:08:36.767120   27962 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19672-8777/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-8777/.minikube}
	I0920 17:08:36.767142   27962 buildroot.go:174] setting up certificates
	I0920 17:08:36.767150   27962 provision.go:84] configureAuth start
	I0920 17:08:36.767159   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetMachineName
	I0920 17:08:36.767459   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetIP
	I0920 17:08:36.770189   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:36.770520   27962 main.go:141] libmachine: (ha-135993-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:dc:24", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:08:28 +0000 UTC Type:0 Mac:52:54:00:87:dc:24 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-135993-m02 Clientid:01:52:54:00:87:dc:24}
	I0920 17:08:36.770547   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:36.770672   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHHostname
	I0920 17:08:36.772567   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:36.772866   27962 main.go:141] libmachine: (ha-135993-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:dc:24", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:08:28 +0000 UTC Type:0 Mac:52:54:00:87:dc:24 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-135993-m02 Clientid:01:52:54:00:87:dc:24}
	I0920 17:08:36.772893   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:36.773001   27962 provision.go:143] copyHostCerts
	I0920 17:08:36.773032   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem
	I0920 17:08:36.773066   27962 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem, removing ...
	I0920 17:08:36.773075   27962 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem
	I0920 17:08:36.773139   27962 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem (1082 bytes)
	I0920 17:08:36.773212   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem
	I0920 17:08:36.773230   27962 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem, removing ...
	I0920 17:08:36.773237   27962 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem
	I0920 17:08:36.773260   27962 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem (1123 bytes)
	I0920 17:08:36.773312   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem
	I0920 17:08:36.773331   27962 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem, removing ...
	I0920 17:08:36.773337   27962 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem
	I0920 17:08:36.773357   27962 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem (1675 bytes)
	I0920 17:08:36.773424   27962 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem org=jenkins.ha-135993-m02 san=[127.0.0.1 192.168.39.227 ha-135993-m02 localhost minikube]
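configureAuth generates a server certificate whose SANs are exactly the list above (127.0.0.1, the node IP, the node name, localhost, minikube), signed by the CA under .minikube/certs. A compact illustration of issuing such a certificate with crypto/x509; it assumes a PKCS#1 RSA CA key, and the 26280h validity is taken from the CertExpiration value in the machine config, so minikube's own generator may differ in detail:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"errors"
	"math/big"
	"net"
	"os"
	"time"
)

// newServerCert signs a server certificate for the SAN list shown above
// with an existing CA certificate and private key (both PEM-encoded).
func newServerCert(caCertPEM, caKeyPEM []byte, org string, sans []string) (certPEM, keyPEM []byte, err error) {
	caBlock, _ := pem.Decode(caCertPEM)
	keyBlock, _ := pem.Decode(caKeyPEM)
	if caBlock == nil || keyBlock == nil {
		return nil, nil, errors.New("invalid CA PEM input")
	}
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	if err != nil {
		return nil, nil, err
	}
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes an RSA (PKCS#1) CA key
	if err != nil {
		return nil, nil, err
	}
	priv, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{org}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the machine config
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	for _, san := range sans {
		if ip := net.ParseIP(san); ip != nil {
			tmpl.IPAddresses = append(tmpl.IPAddresses, ip)
		} else {
			tmpl.DNSNames = append(tmpl.DNSNames, san)
		}
	}
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, caCert, &priv.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	certPEM = pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	keyPEM = pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(priv)})
	return certPEM, keyPEM, nil
}

func main() {
	ca, _ := os.ReadFile("ca.pem")
	key, _ := os.ReadFile("ca-key.pem")
	_, _, _ = newServerCert(ca, key, "jenkins.ha-135993-m02",
		[]string{"127.0.0.1", "192.168.39.227", "ha-135993-m02", "localhost", "minikube"})
}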
	I0920 17:08:36.941019   27962 provision.go:177] copyRemoteCerts
	I0920 17:08:36.941075   27962 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 17:08:36.941096   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHHostname
	I0920 17:08:36.943678   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:36.944038   27962 main.go:141] libmachine: (ha-135993-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:dc:24", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:08:28 +0000 UTC Type:0 Mac:52:54:00:87:dc:24 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-135993-m02 Clientid:01:52:54:00:87:dc:24}
	I0920 17:08:36.944072   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:36.944262   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHPort
	I0920 17:08:36.944447   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHKeyPath
	I0920 17:08:36.944600   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHUsername
	I0920 17:08:36.944758   27962 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993-m02/id_rsa Username:docker}
	I0920 17:08:37.028603   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0920 17:08:37.028690   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0920 17:08:37.052665   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0920 17:08:37.052750   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0920 17:08:37.077892   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0920 17:08:37.077976   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0920 17:08:37.100815   27962 provision.go:87] duration metric: took 333.648023ms to configureAuth
	I0920 17:08:37.100849   27962 buildroot.go:189] setting minikube options for container-runtime
	I0920 17:08:37.101060   27962 config.go:182] Loaded profile config "ha-135993": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 17:08:37.101132   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHHostname
	I0920 17:08:37.103680   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:37.104025   27962 main.go:141] libmachine: (ha-135993-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:dc:24", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:08:28 +0000 UTC Type:0 Mac:52:54:00:87:dc:24 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-135993-m02 Clientid:01:52:54:00:87:dc:24}
	I0920 17:08:37.104065   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:37.104260   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHPort
	I0920 17:08:37.104442   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHKeyPath
	I0920 17:08:37.104572   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHKeyPath
	I0920 17:08:37.104716   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHUsername
	I0920 17:08:37.104930   27962 main.go:141] libmachine: Using SSH client type: native
	I0920 17:08:37.105131   27962 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0920 17:08:37.105151   27962 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 17:08:37.328322   27962 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 17:08:37.328359   27962 main.go:141] libmachine: Checking connection to Docker...
	I0920 17:08:37.328371   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetURL
	I0920 17:08:37.329623   27962 main.go:141] libmachine: (ha-135993-m02) DBG | Using libvirt version 6000000
	I0920 17:08:37.331823   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:37.332143   27962 main.go:141] libmachine: (ha-135993-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:dc:24", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:08:28 +0000 UTC Type:0 Mac:52:54:00:87:dc:24 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-135993-m02 Clientid:01:52:54:00:87:dc:24}
	I0920 17:08:37.332167   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:37.332339   27962 main.go:141] libmachine: Docker is up and running!
	I0920 17:08:37.332353   27962 main.go:141] libmachine: Reticulating splines...
	I0920 17:08:37.332361   27962 client.go:171] duration metric: took 23.847807748s to LocalClient.Create
	I0920 17:08:37.332387   27962 start.go:167] duration metric: took 23.84786362s to libmachine.API.Create "ha-135993"
	I0920 17:08:37.332399   27962 start.go:293] postStartSetup for "ha-135993-m02" (driver="kvm2")
	I0920 17:08:37.332415   27962 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 17:08:37.332439   27962 main.go:141] libmachine: (ha-135993-m02) Calling .DriverName
	I0920 17:08:37.332705   27962 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 17:08:37.332736   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHHostname
	I0920 17:08:37.334802   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:37.335108   27962 main.go:141] libmachine: (ha-135993-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:dc:24", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:08:28 +0000 UTC Type:0 Mac:52:54:00:87:dc:24 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-135993-m02 Clientid:01:52:54:00:87:dc:24}
	I0920 17:08:37.335134   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:37.335218   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHPort
	I0920 17:08:37.335362   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHKeyPath
	I0920 17:08:37.335477   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHUsername
	I0920 17:08:37.335595   27962 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993-m02/id_rsa Username:docker}
	I0920 17:08:37.416843   27962 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 17:08:37.421359   27962 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 17:08:37.421384   27962 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8777/.minikube/addons for local assets ...
	I0920 17:08:37.421448   27962 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8777/.minikube/files for local assets ...
	I0920 17:08:37.421538   27962 filesync.go:149] local asset: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem -> 159732.pem in /etc/ssl/certs
	I0920 17:08:37.421549   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem -> /etc/ssl/certs/159732.pem
	I0920 17:08:37.421657   27962 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 17:08:37.431863   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem --> /etc/ssl/certs/159732.pem (1708 bytes)
	I0920 17:08:37.454586   27962 start.go:296] duration metric: took 122.170431ms for postStartSetup
	I0920 17:08:37.454638   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetConfigRaw
	I0920 17:08:37.455188   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetIP
	I0920 17:08:37.457599   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:37.457923   27962 main.go:141] libmachine: (ha-135993-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:dc:24", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:08:28 +0000 UTC Type:0 Mac:52:54:00:87:dc:24 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-135993-m02 Clientid:01:52:54:00:87:dc:24}
	I0920 17:08:37.457945   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:37.458188   27962 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/config.json ...
	I0920 17:08:37.458382   27962 start.go:128] duration metric: took 23.993921825s to createHost
	I0920 17:08:37.458410   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHHostname
	I0920 17:08:37.460848   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:37.461348   27962 main.go:141] libmachine: (ha-135993-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:dc:24", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:08:28 +0000 UTC Type:0 Mac:52:54:00:87:dc:24 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-135993-m02 Clientid:01:52:54:00:87:dc:24}
	I0920 17:08:37.461378   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:37.461561   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHPort
	I0920 17:08:37.461755   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHKeyPath
	I0920 17:08:37.461935   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHKeyPath
	I0920 17:08:37.462069   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHUsername
	I0920 17:08:37.462223   27962 main.go:141] libmachine: Using SSH client type: native
	I0920 17:08:37.462383   27962 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0920 17:08:37.462392   27962 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 17:08:37.570351   27962 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726852117.546992904
	
	I0920 17:08:37.570372   27962 fix.go:216] guest clock: 1726852117.546992904
	I0920 17:08:37.570379   27962 fix.go:229] Guest: 2024-09-20 17:08:37.546992904 +0000 UTC Remote: 2024-09-20 17:08:37.458395452 +0000 UTC m=+69.269105040 (delta=88.597452ms)
	I0920 17:08:37.570394   27962 fix.go:200] guest clock delta is within tolerance: 88.597452ms
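The delta reported by fix.go is simply the absolute difference between the guest clock reading and the host-side timestamp, compared against a tolerance. A tiny Go sketch reproducing the 88.597452ms figure from the two timestamps above (the one-second tolerance here is an assumed threshold for illustration, not necessarily minikube's actual value):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the fix.go lines above.
	guest := time.Date(2024, 9, 20, 17, 8, 37, 546992904, time.UTC)
	remote := time.Date(2024, 9, 20, 17, 8, 37, 458395452, time.UTC)
	tolerance := time.Second // assumed threshold, for illustration only

	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	fmt.Printf("delta=%v, within tolerance: %v\n", delta, delta <= tolerance)
}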
	I0920 17:08:37.570398   27962 start.go:83] releasing machines lock for "ha-135993-m02", held for 24.10605904s
	I0920 17:08:37.570419   27962 main.go:141] libmachine: (ha-135993-m02) Calling .DriverName
	I0920 17:08:37.570730   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetIP
	I0920 17:08:37.573185   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:37.573501   27962 main.go:141] libmachine: (ha-135993-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:dc:24", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:08:28 +0000 UTC Type:0 Mac:52:54:00:87:dc:24 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-135993-m02 Clientid:01:52:54:00:87:dc:24}
	I0920 17:08:37.573529   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:37.576260   27962 out.go:177] * Found network options:
	I0920 17:08:37.577727   27962 out.go:177]   - NO_PROXY=192.168.39.60
	W0920 17:08:37.578902   27962 proxy.go:119] fail to check proxy env: Error ip not in block
	I0920 17:08:37.578937   27962 main.go:141] libmachine: (ha-135993-m02) Calling .DriverName
	I0920 17:08:37.579631   27962 main.go:141] libmachine: (ha-135993-m02) Calling .DriverName
	I0920 17:08:37.579801   27962 main.go:141] libmachine: (ha-135993-m02) Calling .DriverName
	I0920 17:08:37.579884   27962 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 17:08:37.579926   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHHostname
	W0920 17:08:37.580027   27962 proxy.go:119] fail to check proxy env: Error ip not in block
	I0920 17:08:37.580105   27962 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 17:08:37.580127   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHHostname
	I0920 17:08:37.582896   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:37.583131   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:37.583396   27962 main.go:141] libmachine: (ha-135993-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:dc:24", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:08:28 +0000 UTC Type:0 Mac:52:54:00:87:dc:24 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-135993-m02 Clientid:01:52:54:00:87:dc:24}
	I0920 17:08:37.583422   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:37.583562   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHPort
	I0920 17:08:37.583712   27962 main.go:141] libmachine: (ha-135993-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:dc:24", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:08:28 +0000 UTC Type:0 Mac:52:54:00:87:dc:24 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-135993-m02 Clientid:01:52:54:00:87:dc:24}
	I0920 17:08:37.583731   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:37.583738   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHKeyPath
	I0920 17:08:37.583921   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHUsername
	I0920 17:08:37.583953   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHPort
	I0920 17:08:37.584099   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHKeyPath
	I0920 17:08:37.584097   27962 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993-m02/id_rsa Username:docker}
	I0920 17:08:37.584246   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetSSHUsername
	I0920 17:08:37.584390   27962 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993-m02/id_rsa Username:docker}
	I0920 17:08:37.841918   27962 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 17:08:37.847702   27962 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 17:08:37.847782   27962 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 17:08:37.865314   27962 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 17:08:37.865341   27962 start.go:495] detecting cgroup driver to use...
	I0920 17:08:37.865402   27962 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 17:08:37.882395   27962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 17:08:37.898199   27962 docker.go:217] disabling cri-docker service (if available) ...
	I0920 17:08:37.898256   27962 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 17:08:37.914375   27962 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 17:08:37.929731   27962 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 17:08:38.054897   27962 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 17:08:38.213720   27962 docker.go:233] disabling docker service ...
	I0920 17:08:38.213781   27962 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 17:08:38.228604   27962 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 17:08:38.241927   27962 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 17:08:38.372497   27962 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 17:08:38.492012   27962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 17:08:38.505545   27962 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 17:08:38.522859   27962 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 17:08:38.522917   27962 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:08:38.533670   27962 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 17:08:38.533742   27962 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:08:38.543534   27962 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:08:38.553115   27962 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:08:38.563278   27962 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 17:08:38.573734   27962 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:08:38.585820   27962 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:08:38.602582   27962 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
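The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pause_image is pinned to registry.k8s.io/pause:3.10, cgroup_manager is set to cgroupfs with conmon_cgroup = "pod", and net.ipv4.ip_unprivileged_port_start=0 is injected into default_sysctls. A rough Go equivalent of the first two edits, shown only as a sketch (minikube itself performs these edits with sed over SSH; the file path and values are taken from the log):

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	path := "/etc/crio/crio.conf.d/02-crio.conf"
	conf, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// Pin the pause image used for pod sandboxes.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(conf, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
	// Match the kubelet's cgroup driver.
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(conf, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(path, conf, 0o644); err != nil {
		panic(err)
	}
	fmt.Println("rewrote", path)
}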
	I0920 17:08:38.612986   27962 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 17:08:38.625878   27962 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 17:08:38.625952   27962 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 17:08:38.640746   27962 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 17:08:38.650259   27962 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 17:08:38.774025   27962 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 17:08:38.868968   27962 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 17:08:38.869037   27962 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 17:08:38.873544   27962 start.go:563] Will wait 60s for crictl version
	I0920 17:08:38.873611   27962 ssh_runner.go:195] Run: which crictl
	I0920 17:08:38.877199   27962 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 17:08:38.914545   27962 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 17:08:38.914652   27962 ssh_runner.go:195] Run: crio --version
	I0920 17:08:38.942570   27962 ssh_runner.go:195] Run: crio --version
	I0920 17:08:38.974013   27962 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 17:08:38.975371   27962 out.go:177]   - env NO_PROXY=192.168.39.60
	I0920 17:08:38.976693   27962 main.go:141] libmachine: (ha-135993-m02) Calling .GetIP
	I0920 17:08:38.979315   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:38.979662   27962 main.go:141] libmachine: (ha-135993-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:dc:24", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:08:28 +0000 UTC Type:0 Mac:52:54:00:87:dc:24 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-135993-m02 Clientid:01:52:54:00:87:dc:24}
	I0920 17:08:38.979686   27962 main.go:141] libmachine: (ha-135993-m02) DBG | domain ha-135993-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:87:dc:24 in network mk-ha-135993
	I0920 17:08:38.979928   27962 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0920 17:08:38.984450   27962 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 17:08:38.996637   27962 mustload.go:65] Loading cluster: ha-135993
	I0920 17:08:38.996863   27962 config.go:182] Loaded profile config "ha-135993": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 17:08:38.997116   27962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:08:38.997144   27962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:08:39.011615   27962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36791
	I0920 17:08:39.012110   27962 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:08:39.012595   27962 main.go:141] libmachine: Using API Version  1
	I0920 17:08:39.012618   27962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:08:39.012951   27962 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:08:39.013120   27962 main.go:141] libmachine: (ha-135993) Calling .GetState
	I0920 17:08:39.014524   27962 host.go:66] Checking if "ha-135993" exists ...
	I0920 17:08:39.014807   27962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:08:39.014829   27962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:08:39.028965   27962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39851
	I0920 17:08:39.029376   27962 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:08:39.029829   27962 main.go:141] libmachine: Using API Version  1
	I0920 17:08:39.029863   27962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:08:39.030149   27962 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:08:39.030299   27962 main.go:141] libmachine: (ha-135993) Calling .DriverName
	I0920 17:08:39.030433   27962 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993 for IP: 192.168.39.227
	I0920 17:08:39.030445   27962 certs.go:194] generating shared ca certs ...
	I0920 17:08:39.030462   27962 certs.go:226] acquiring lock for ca certs: {Name:mkc7ef6c737c6bdc3fdd9dcff8f57029c020d8f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:08:39.030587   27962 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key
	I0920 17:08:39.030622   27962 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key
	I0920 17:08:39.030631   27962 certs.go:256] generating profile certs ...
	I0920 17:08:39.030698   27962 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/client.key
	I0920 17:08:39.030722   27962 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.key.8c3b1447
	I0920 17:08:39.030736   27962 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.crt.8c3b1447 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.60 192.168.39.227 192.168.39.254]
	I0920 17:08:39.095051   27962 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.crt.8c3b1447 ...
	I0920 17:08:39.095081   27962 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.crt.8c3b1447: {Name:mke080ae3589481bb1ac84166b67a86b0862deca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:08:39.095299   27962 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.key.8c3b1447 ...
	I0920 17:08:39.095313   27962 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.key.8c3b1447: {Name:mk0aaeb424c58a29d9543a386b9ebefcbd99d38f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:08:39.095401   27962 certs.go:381] copying /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.crt.8c3b1447 -> /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.crt
	I0920 17:08:39.095524   27962 certs.go:385] copying /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.key.8c3b1447 -> /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.key
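A quick way to confirm that the regenerated apiserver certificate really covers the HA virtual IP, the new node's IP, and the in-cluster service IP is to parse it and call VerifyHostname for each address. A small Go sketch, using the profile path and IPs from the log above (the path is environment-specific):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("/home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// VerifyHostname matches IP strings against the certificate's IPAddresses SANs.
	for _, ip := range []string{"192.168.39.254", "192.168.39.227", "10.96.0.1"} {
		fmt.Println(ip, "covered:", cert.VerifyHostname(ip) == nil)
	}
}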
	I0920 17:08:39.095653   27962 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/proxy-client.key
	I0920 17:08:39.095667   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0920 17:08:39.095679   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0920 17:08:39.095689   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0920 17:08:39.095702   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0920 17:08:39.095712   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0920 17:08:39.095724   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0920 17:08:39.095736   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0920 17:08:39.095749   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0920 17:08:39.095802   27962 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973.pem (1338 bytes)
	W0920 17:08:39.095830   27962 certs.go:480] ignoring /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973_empty.pem, impossibly tiny 0 bytes
	I0920 17:08:39.095839   27962 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 17:08:39.095858   27962 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem (1082 bytes)
	I0920 17:08:39.095878   27962 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem (1123 bytes)
	I0920 17:08:39.095901   27962 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem (1675 bytes)
	I0920 17:08:39.095936   27962 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem (1708 bytes)
	I0920 17:08:39.095961   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:08:39.095977   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973.pem -> /usr/share/ca-certificates/15973.pem
	I0920 17:08:39.095989   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem -> /usr/share/ca-certificates/159732.pem
	I0920 17:08:39.096019   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHHostname
	I0920 17:08:39.099130   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:08:39.099635   27962 main.go:141] libmachine: (ha-135993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:09", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:07:43 +0000 UTC Type:0 Mac:52:54:00:99:26:09 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-135993 Clientid:01:52:54:00:99:26:09}
	I0920 17:08:39.099664   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined IP address 192.168.39.60 and MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:08:39.099789   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHPort
	I0920 17:08:39.100010   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHKeyPath
	I0920 17:08:39.100156   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHUsername
	I0920 17:08:39.100302   27962 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993/id_rsa Username:docker}
	I0920 17:08:39.178198   27962 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0920 17:08:39.183212   27962 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0920 17:08:39.194269   27962 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0920 17:08:39.198144   27962 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0920 17:08:39.207842   27962 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0920 17:08:39.212563   27962 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0920 17:08:39.225008   27962 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0920 17:08:39.228957   27962 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0920 17:08:39.240966   27962 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0920 17:08:39.244710   27962 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0920 17:08:39.255704   27962 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0920 17:08:39.261179   27962 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0920 17:08:39.272522   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 17:08:39.298671   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 17:08:39.323122   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 17:08:39.347904   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0920 17:08:39.372895   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0920 17:08:39.396433   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 17:08:39.420958   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 17:08:39.444600   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 17:08:39.468099   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 17:08:39.492182   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973.pem --> /usr/share/ca-certificates/15973.pem (1338 bytes)
	I0920 17:08:39.516275   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem --> /usr/share/ca-certificates/159732.pem (1708 bytes)
	I0920 17:08:39.538881   27962 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0920 17:08:39.554623   27962 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0920 17:08:39.569829   27962 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0920 17:08:39.585133   27962 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0920 17:08:39.601137   27962 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0920 17:08:39.617605   27962 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0920 17:08:39.633667   27962 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0920 17:08:39.650104   27962 ssh_runner.go:195] Run: openssl version
	I0920 17:08:39.656001   27962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 17:08:39.667261   27962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:08:39.671479   27962 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:08:39.671552   27962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:08:39.677168   27962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 17:08:39.687694   27962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15973.pem && ln -fs /usr/share/ca-certificates/15973.pem /etc/ssl/certs/15973.pem"
	I0920 17:08:39.697763   27962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15973.pem
	I0920 17:08:39.702178   27962 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 17:04 /usr/share/ca-certificates/15973.pem
	I0920 17:08:39.702233   27962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15973.pem
	I0920 17:08:39.708012   27962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15973.pem /etc/ssl/certs/51391683.0"
	I0920 17:08:39.718526   27962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/159732.pem && ln -fs /usr/share/ca-certificates/159732.pem /etc/ssl/certs/159732.pem"
	I0920 17:08:39.729775   27962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/159732.pem
	I0920 17:08:39.734571   27962 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 17:04 /usr/share/ca-certificates/159732.pem
	I0920 17:08:39.734627   27962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/159732.pem
	I0920 17:08:39.740342   27962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/159732.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 17:08:39.751136   27962 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 17:08:39.755553   27962 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0920 17:08:39.755646   27962 kubeadm.go:934] updating node {m02 192.168.39.227 8443 v1.31.1 crio true true} ...
	I0920 17:08:39.755760   27962 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-135993-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.227
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-135993 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 17:08:39.755800   27962 kube-vip.go:115] generating kube-vip config ...
	I0920 17:08:39.755854   27962 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0920 17:08:39.773764   27962 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0920 17:08:39.773847   27962 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
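The YAML above is the static-pod manifest kube-vip runs from on each control-plane node: it runs ghcr.io/kube-vip/kube-vip:v0.8.0 on the host network with NET_ADMIN/NET_RAW, advertises the virtual IP 192.168.39.254 via ARP on eth0, uses the plndr-cp-lock lease for leader election, and (with lb_enable/lb_port set) load-balances API traffic on 8443 across the control planes. A small Go sketch that just decodes the generated manifest into a corev1.Pod to sanity-check it; it assumes a go.mod pulling in k8s.io/api and sigs.k8s.io/yaml, and reads the path the log writes to:

package main

import (
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	// Decode the generated static-pod manifest before the kubelet picks it up.
	data, err := os.ReadFile("/etc/kubernetes/manifests/kube-vip.yaml")
	if err != nil {
		panic(err)
	}
	var pod corev1.Pod
	if err := yaml.Unmarshal(data, &pod); err != nil {
		panic(err)
	}
	fmt.Println(pod.Name, pod.Spec.Containers[0].Image, "hostNetwork:", pod.Spec.HostNetwork)
}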
	I0920 17:08:39.773905   27962 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 17:08:39.783942   27962 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0920 17:08:39.784007   27962 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0920 17:08:39.793636   27962 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0920 17:08:39.793672   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0920 17:08:39.793735   27962 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0920 17:08:39.793780   27962 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19672-8777/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I0920 17:08:39.793842   27962 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19672-8777/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I0920 17:08:39.798080   27962 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0920 17:08:39.798118   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0920 17:08:40.867820   27962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 17:08:40.882080   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0920 17:08:40.882178   27962 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0920 17:08:40.886572   27962 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0920 17:08:40.886607   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I0920 17:08:41.226998   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0920 17:08:41.227076   27962 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0920 17:08:41.238040   27962 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0920 17:08:41.238078   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
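Each binary above is fetched from dl.k8s.io with a checksum pinned to the matching .sha256 file, cached under .minikube/cache, and copied into /var/lib/minikube/binaries/v1.31.1 once the existence check fails. A self-contained Go sketch of that download-and-verify idea (the published .sha256 files carry just the hex digest; the /tmp destination is illustrative):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetch downloads url to dest and returns the hex SHA-256 of what was written.
func fetch(url, dest string) (string, error) {
	resp, err := http.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	out, err := os.Create(dest)
	if err != nil {
		return "", err
	}
	defer out.Close()
	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		return "", err
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}

func main() {
	base := "https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/"
	for _, bin := range []string{"kubectl", "kubeadm", "kubelet"} {
		sum, err := fetch(base+bin, "/tmp/"+bin)
		if err != nil {
			panic(err)
		}
		// Compare against the published digest file.
		resp, err := http.Get(base + bin + ".sha256")
		if err != nil {
			panic(err)
		}
		want, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		fmt.Println(bin, "checksum ok:", sum == strings.TrimSpace(string(want)))
	}
}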
	I0920 17:08:41.520778   27962 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0920 17:08:41.530138   27962 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0920 17:08:41.546031   27962 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 17:08:41.561648   27962 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0920 17:08:41.577512   27962 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0920 17:08:41.581127   27962 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 17:08:41.593044   27962 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 17:08:41.727078   27962 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 17:08:41.743823   27962 host.go:66] Checking if "ha-135993" exists ...
	I0920 17:08:41.744278   27962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:08:41.744326   27962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:08:41.759319   27962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43037
	I0920 17:08:41.759806   27962 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:08:41.760334   27962 main.go:141] libmachine: Using API Version  1
	I0920 17:08:41.760365   27962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:08:41.760710   27962 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:08:41.760950   27962 main.go:141] libmachine: (ha-135993) Calling .DriverName
	I0920 17:08:41.761092   27962 start.go:317] joinCluster: &{Name:ha-135993 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cluster
Name:ha-135993 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.60 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 17:08:41.761208   27962 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0920 17:08:41.761228   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHHostname
	I0920 17:08:41.764476   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:08:41.765051   27962 main.go:141] libmachine: (ha-135993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:09", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:07:43 +0000 UTC Type:0 Mac:52:54:00:99:26:09 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-135993 Clientid:01:52:54:00:99:26:09}
	I0920 17:08:41.765084   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined IP address 192.168.39.60 and MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:08:41.765229   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHPort
	I0920 17:08:41.765376   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHKeyPath
	I0920 17:08:41.765547   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHUsername
	I0920 17:08:41.765689   27962 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993/id_rsa Username:docker}
	I0920 17:08:41.915104   27962 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 17:08:41.915146   27962 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 76lnz4.6r1ezgurod2l1q25 --discovery-token-ca-cert-hash sha256:736f3fb521855813decb4a7214f2b8beff5f81c3971b60decfa20a8807626a2d --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-135993-m02 --control-plane --apiserver-advertise-address=192.168.39.227 --apiserver-bind-port=8443"
	I0920 17:09:04.881318   27962 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 76lnz4.6r1ezgurod2l1q25 --discovery-token-ca-cert-hash sha256:736f3fb521855813decb4a7214f2b8beff5f81c3971b60decfa20a8807626a2d --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-135993-m02 --control-plane --apiserver-advertise-address=192.168.39.227 --apiserver-bind-port=8443": (22.966149697s)
	I0920 17:09:04.881355   27962 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0920 17:09:05.471754   27962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-135993-m02 minikube.k8s.io/updated_at=2024_09_20T17_09_05_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=0626f22cf0d915d75e291a5bce701f94395056e1 minikube.k8s.io/name=ha-135993 minikube.k8s.io/primary=false
	I0920 17:09:05.593812   27962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-135993-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0920 17:09:05.743557   27962 start.go:319] duration metric: took 23.982457966s to joinCluster
	I0920 17:09:05.743641   27962 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 17:09:05.743939   27962 config.go:182] Loaded profile config "ha-135993": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 17:09:05.745782   27962 out.go:177] * Verifying Kubernetes components...
	I0920 17:09:05.747592   27962 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 17:09:06.068898   27962 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 17:09:06.098222   27962 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19672-8777/kubeconfig
	I0920 17:09:06.098478   27962 kapi.go:59] client config for ha-135993: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/client.crt", KeyFile:"/home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/client.key", CAFile:"/home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f67ea0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0920 17:09:06.098546   27962 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.60:8443
	I0920 17:09:06.098829   27962 node_ready.go:35] waiting up to 6m0s for node "ha-135993-m02" to be "Ready" ...
	I0920 17:09:06.098967   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:06.098980   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:06.098991   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:06.098997   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:06.110154   27962 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0920 17:09:06.599028   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:06.599058   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:06.599068   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:06.599080   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:06.607526   27962 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0920 17:09:07.100044   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:07.100066   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:07.100080   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:07.100088   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:07.104606   27962 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 17:09:07.599532   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:07.599561   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:07.599573   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:07.599592   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:07.603898   27962 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 17:09:08.099892   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:08.099925   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:08.099936   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:08.099939   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:08.104089   27962 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 17:09:08.104669   27962 node_ready.go:53] node "ha-135993-m02" has status "Ready":"False"
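The repeated GETs against /api/v1/nodes/ha-135993-m02 are a readiness poll: roughly every 500ms the node object is fetched and its Ready condition inspected, for up to the 6m0s budget noted above. The same loop written against client-go, as a sketch (kubeconfig path and node name taken from the log; the interval matches the request spacing visible above):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19672-8777/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll the node's Ready condition until it turns True or the timeout expires.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, "ha-135993-m02", metav1.GetOptions{})
			if err != nil {
				return false, nil // retry on transient errors
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	fmt.Println("node ready:", err == nil)
}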
	I0920 17:09:08.599188   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:08.599216   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:08.599232   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:08.599237   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:08.602674   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:09.099543   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:09.099573   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:09.099590   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:09.099595   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:09.103157   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:09.599047   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:09.599068   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:09.599079   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:09.599083   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:09.602661   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:10.099869   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:10.099898   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:10.099910   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:10.099917   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:10.104382   27962 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 17:09:10.105025   27962 node_ready.go:53] node "ha-135993-m02" has status "Ready":"False"
	I0920 17:09:10.599990   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:10.600015   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:10.600025   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:10.600040   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:10.604181   27962 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 17:09:11.100016   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:11.100036   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:11.100044   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:11.100048   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:11.104486   27962 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 17:09:11.599135   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:11.599157   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:11.599167   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:11.599172   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:11.603466   27962 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 17:09:12.099094   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:12.099116   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:12.099124   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:12.099128   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:12.102631   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:12.600054   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:12.600077   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:12.600087   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:12.600091   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:12.603960   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:12.604540   27962 node_ready.go:53] node "ha-135993-m02" has status "Ready":"False"
	I0920 17:09:13.099920   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:13.099940   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:13.099947   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:13.099951   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:13.104962   27962 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 17:09:13.599362   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:13.599385   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:13.599392   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:13.599397   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:13.602694   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:14.099536   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:14.099555   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:14.099563   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:14.099566   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:14.110011   27962 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0920 17:09:14.600088   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:14.600116   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:14.600127   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:14.600132   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:14.603733   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:15.099810   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:15.099833   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:15.099842   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:15.099847   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:15.103493   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:15.106748   27962 node_ready.go:53] node "ha-135993-m02" has status "Ready":"False"
	I0920 17:09:15.599114   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:15.599137   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:15.599145   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:15.599149   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:15.602587   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:16.099797   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:16.099819   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:16.099836   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:16.099841   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:16.104385   27962 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 17:09:16.599221   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:16.599261   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:16.599273   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:16.599281   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:16.602198   27962 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 17:09:17.099641   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:17.099665   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:17.099674   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:17.099679   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:17.102538   27962 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 17:09:17.599451   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:17.599479   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:17.599488   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:17.599493   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:17.604108   27962 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 17:09:17.604651   27962 node_ready.go:53] node "ha-135993-m02" has status "Ready":"False"
	I0920 17:09:18.099653   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:18.099682   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:18.099694   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:18.099698   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:18.103414   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:18.599738   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:18.599765   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:18.599774   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:18.599781   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:18.603208   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:19.100125   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:19.100153   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:19.100166   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:19.100175   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:19.184153   27962 round_trippers.go:574] Response Status: 200 OK in 83 milliseconds
	I0920 17:09:19.600050   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:19.600072   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:19.600080   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:19.600085   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:19.603736   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:20.099655   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:20.099677   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:20.099685   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:20.099689   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:20.103774   27962 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 17:09:20.104534   27962 node_ready.go:53] node "ha-135993-m02" has status "Ready":"False"
	I0920 17:09:20.599975   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:20.599999   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:20.600008   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:20.600012   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:20.603324   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:21.099118   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:21.099157   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:21.099168   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:21.099174   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:21.102835   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:21.599923   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:21.599950   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:21.599959   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:21.599963   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:21.604036   27962 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 17:09:22.099740   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:22.099765   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:22.099774   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:22.099779   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:22.103432   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:22.599193   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:22.599216   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:22.599225   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:22.599230   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:22.602523   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:22.603230   27962 node_ready.go:53] node "ha-135993-m02" has status "Ready":"False"
	I0920 17:09:23.099535   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:23.099562   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:23.099571   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:23.099575   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:23.103060   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:23.600005   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:23.600028   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:23.600037   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:23.600042   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:23.602925   27962 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 17:09:24.099721   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:24.099748   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:24.099760   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:24.099768   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:24.103420   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:24.599142   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:24.599163   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:24.599171   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:24.599175   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:24.601879   27962 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 17:09:25.099978   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:25.100008   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:25.100020   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:25.100025   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:25.103311   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:25.104017   27962 node_ready.go:49] node "ha-135993-m02" has status "Ready":"True"
	I0920 17:09:25.104039   27962 node_ready.go:38] duration metric: took 19.005166756s for node "ha-135993-m02" to be "Ready" ...
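
The node_ready wait above simply polls GET /api/v1/nodes/ha-135993-m02 roughly every 500ms until the node's Ready condition turns True (about 19s here). A minimal client-go sketch of that check, assuming an already-built clientset; the helper name and error text are ours, not minikube's:

package sketch

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady is an illustrative stand-in for the node_ready wait above: it polls
// the node object until its Ready condition reports True or the timeout expires.
func waitNodeReady(ctx context.Context, c kubernetes.Interface, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
					return nil // the log reports this as status "Ready":"True"
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // roughly the polling cadence visible in the log
	}
	return fmt.Errorf("node %q did not become Ready within %v", name, timeout)
}
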
	I0920 17:09:25.104051   27962 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 17:09:25.104149   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods
	I0920 17:09:25.104165   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:25.104177   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:25.104185   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:25.108765   27962 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 17:09:25.115719   27962 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-gcvg4" in "kube-system" namespace to be "Ready" ...
	I0920 17:09:25.115809   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-gcvg4
	I0920 17:09:25.115817   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:25.115832   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:25.115839   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:25.118912   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:25.119515   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993
	I0920 17:09:25.119530   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:25.119545   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:25.119553   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:25.122165   27962 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 17:09:25.123205   27962 pod_ready.go:93] pod "coredns-7c65d6cfc9-gcvg4" in "kube-system" namespace has status "Ready":"True"
	I0920 17:09:25.123229   27962 pod_ready.go:82] duration metric: took 7.483763ms for pod "coredns-7c65d6cfc9-gcvg4" in "kube-system" namespace to be "Ready" ...
	I0920 17:09:25.123245   27962 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-kpbhk" in "kube-system" namespace to be "Ready" ...
	I0920 17:09:25.123328   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-kpbhk
	I0920 17:09:25.123336   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:25.123346   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:25.123362   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:25.127621   27962 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 17:09:25.128286   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993
	I0920 17:09:25.128301   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:25.128309   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:25.128312   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:25.130781   27962 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 17:09:25.131328   27962 pod_ready.go:93] pod "coredns-7c65d6cfc9-kpbhk" in "kube-system" namespace has status "Ready":"True"
	I0920 17:09:25.131344   27962 pod_ready.go:82] duration metric: took 8.091385ms for pod "coredns-7c65d6cfc9-kpbhk" in "kube-system" namespace to be "Ready" ...
	I0920 17:09:25.131353   27962 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-135993" in "kube-system" namespace to be "Ready" ...
	I0920 17:09:25.131430   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/etcd-ha-135993
	I0920 17:09:25.131441   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:25.131447   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:25.131452   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:25.133900   27962 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 17:09:25.134469   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993
	I0920 17:09:25.134482   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:25.134489   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:25.134491   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:25.136541   27962 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 17:09:25.137016   27962 pod_ready.go:93] pod "etcd-ha-135993" in "kube-system" namespace has status "Ready":"True"
	I0920 17:09:25.137035   27962 pod_ready.go:82] duration metric: took 5.675303ms for pod "etcd-ha-135993" in "kube-system" namespace to be "Ready" ...
	I0920 17:09:25.137046   27962 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-135993-m02" in "kube-system" namespace to be "Ready" ...
	I0920 17:09:25.137099   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/etcd-ha-135993-m02
	I0920 17:09:25.137110   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:25.137120   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:25.137129   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:25.139596   27962 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 17:09:25.140245   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:25.140261   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:25.140268   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:25.140275   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:25.143653   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:25.144087   27962 pod_ready.go:93] pod "etcd-ha-135993-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 17:09:25.144104   27962 pod_ready.go:82] duration metric: took 7.049824ms for pod "etcd-ha-135993-m02" in "kube-system" namespace to be "Ready" ...
	I0920 17:09:25.144123   27962 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-135993" in "kube-system" namespace to be "Ready" ...
	I0920 17:09:25.300530   27962 request.go:632] Waited for 156.341043ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-135993
	I0920 17:09:25.300600   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-135993
	I0920 17:09:25.300608   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:25.300615   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:25.300619   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:25.303926   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:25.500905   27962 request.go:632] Waited for 196.365656ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes/ha-135993
	I0920 17:09:25.500972   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993
	I0920 17:09:25.500979   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:25.500991   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:25.501002   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:25.504242   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:25.504741   27962 pod_ready.go:93] pod "kube-apiserver-ha-135993" in "kube-system" namespace has status "Ready":"True"
	I0920 17:09:25.504761   27962 pod_ready.go:82] duration metric: took 360.627268ms for pod "kube-apiserver-ha-135993" in "kube-system" namespace to be "Ready" ...
	I0920 17:09:25.504775   27962 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-135993-m02" in "kube-system" namespace to be "Ready" ...
	I0920 17:09:25.700017   27962 request.go:632] Waited for 195.167851ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-135993-m02
	I0920 17:09:25.700099   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-135993-m02
	I0920 17:09:25.700105   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:25.700111   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:25.700116   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:25.703342   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:25.900444   27962 request.go:632] Waited for 196.370493ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:25.900528   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:25.900536   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:25.900546   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:25.900556   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:25.904185   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:25.904729   27962 pod_ready.go:93] pod "kube-apiserver-ha-135993-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 17:09:25.904749   27962 pod_ready.go:82] duration metric: took 399.965762ms for pod "kube-apiserver-ha-135993-m02" in "kube-system" namespace to be "Ready" ...
	I0920 17:09:25.904762   27962 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-135993" in "kube-system" namespace to be "Ready" ...
	I0920 17:09:26.100837   27962 request.go:632] Waited for 195.996544ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-135993
	I0920 17:09:26.100911   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-135993
	I0920 17:09:26.100922   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:26.100930   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:26.100934   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:26.104514   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:26.300664   27962 request.go:632] Waited for 195.385658ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes/ha-135993
	I0920 17:09:26.300743   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993
	I0920 17:09:26.300751   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:26.300761   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:26.300767   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:26.304576   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:26.305216   27962 pod_ready.go:93] pod "kube-controller-manager-ha-135993" in "kube-system" namespace has status "Ready":"True"
	I0920 17:09:26.305236   27962 pod_ready.go:82] duration metric: took 400.465668ms for pod "kube-controller-manager-ha-135993" in "kube-system" namespace to be "Ready" ...
	I0920 17:09:26.305250   27962 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-135993-m02" in "kube-system" namespace to be "Ready" ...
	I0920 17:09:26.500476   27962 request.go:632] Waited for 195.132114ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-135993-m02
	I0920 17:09:26.500563   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-135993-m02
	I0920 17:09:26.500573   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:26.500585   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:26.500595   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:26.503974   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:26.700109   27962 request.go:632] Waited for 195.31021ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:26.700178   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:26.700184   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:26.700192   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:26.700197   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:26.703786   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:26.704325   27962 pod_ready.go:93] pod "kube-controller-manager-ha-135993-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 17:09:26.704346   27962 pod_ready.go:82] duration metric: took 399.089711ms for pod "kube-controller-manager-ha-135993-m02" in "kube-system" namespace to be "Ready" ...
	I0920 17:09:26.704359   27962 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-52r49" in "kube-system" namespace to be "Ready" ...
	I0920 17:09:26.900914   27962 request.go:632] Waited for 196.454204ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-proxy-52r49
	I0920 17:09:26.900979   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-proxy-52r49
	I0920 17:09:26.900988   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:26.900999   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:26.901008   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:26.904465   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:27.100636   27962 request.go:632] Waited for 195.370556ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes/ha-135993
	I0920 17:09:27.100694   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993
	I0920 17:09:27.100700   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:27.100707   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:27.100713   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:27.104136   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:27.104731   27962 pod_ready.go:93] pod "kube-proxy-52r49" in "kube-system" namespace has status "Ready":"True"
	I0920 17:09:27.104752   27962 pod_ready.go:82] duration metric: took 400.38236ms for pod "kube-proxy-52r49" in "kube-system" namespace to be "Ready" ...
	I0920 17:09:27.104762   27962 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-z6xqt" in "kube-system" namespace to be "Ready" ...
	I0920 17:09:27.300919   27962 request.go:632] Waited for 196.074087ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-proxy-z6xqt
	I0920 17:09:27.300987   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-proxy-z6xqt
	I0920 17:09:27.300993   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:27.301002   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:27.301038   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:27.304315   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:27.500226   27962 request.go:632] Waited for 195.315282ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:27.500323   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:27.500337   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:27.500347   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:27.500353   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:27.503809   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:27.504585   27962 pod_ready.go:93] pod "kube-proxy-z6xqt" in "kube-system" namespace has status "Ready":"True"
	I0920 17:09:27.504607   27962 pod_ready.go:82] duration metric: took 399.833703ms for pod "kube-proxy-z6xqt" in "kube-system" namespace to be "Ready" ...
	I0920 17:09:27.504623   27962 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-135993" in "kube-system" namespace to be "Ready" ...
	I0920 17:09:27.700599   27962 request.go:632] Waited for 195.904246ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-135993
	I0920 17:09:27.700671   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-135993
	I0920 17:09:27.700677   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:27.700684   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:27.700691   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:27.704470   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:27.900633   27962 request.go:632] Waited for 195.387225ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes/ha-135993
	I0920 17:09:27.900688   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993
	I0920 17:09:27.900695   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:27.900708   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:27.900716   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:27.903956   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:27.904541   27962 pod_ready.go:93] pod "kube-scheduler-ha-135993" in "kube-system" namespace has status "Ready":"True"
	I0920 17:09:27.904563   27962 pod_ready.go:82] duration metric: took 399.932453ms for pod "kube-scheduler-ha-135993" in "kube-system" namespace to be "Ready" ...
	I0920 17:09:27.904573   27962 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-135993-m02" in "kube-system" namespace to be "Ready" ...
	I0920 17:09:28.100547   27962 request.go:632] Waited for 195.899157ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-135993-m02
	I0920 17:09:28.100623   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-135993-m02
	I0920 17:09:28.100628   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:28.100637   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:28.100642   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:28.104043   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:28.299961   27962 request.go:632] Waited for 195.327445ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:28.300029   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:09:28.300037   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:28.300046   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:28.300054   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:28.303288   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:28.303968   27962 pod_ready.go:93] pod "kube-scheduler-ha-135993-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 17:09:28.303986   27962 pod_ready.go:82] duration metric: took 399.402915ms for pod "kube-scheduler-ha-135993-m02" in "kube-system" namespace to be "Ready" ...
	I0920 17:09:28.304000   27962 pod_ready.go:39] duration metric: took 3.199931535s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
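
The "Waited for ... due to client-side throttling, not priority and fairness" messages above come from client-go's client-side rate limiter, which kicks in once the pod/node lookups arrive faster than the configured QPS (client-go falls back to small defaults, historically 5 QPS with a burst of 10, when the rest.Config leaves them at zero). A sketch of where those knobs live; the 50/100 values are illustrative, not what this test uses:

package sketch

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// newFasterClient is illustrative only: raising QPS/Burst on the rest.Config is what
// removes the client-side throttling waits seen in the log. Values here are arbitrary.
func newFasterClient(kubeconfigPath string) (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfigPath)
	if err != nil {
		return nil, err
	}
	cfg.QPS = 50    // requests per second the client allows itself
	cfg.Burst = 100 // short-term burst above that rate
	return kubernetes.NewForConfig(cfg)
}
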
	I0920 17:09:28.304019   27962 api_server.go:52] waiting for apiserver process to appear ...
	I0920 17:09:28.304077   27962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 17:09:28.320006   27962 api_server.go:72] duration metric: took 22.576329593s to wait for apiserver process to appear ...
	I0920 17:09:28.320037   27962 api_server.go:88] waiting for apiserver healthz status ...
	I0920 17:09:28.320064   27962 api_server.go:253] Checking apiserver healthz at https://192.168.39.60:8443/healthz ...
	I0920 17:09:28.324668   27962 api_server.go:279] https://192.168.39.60:8443/healthz returned 200:
	ok
	I0920 17:09:28.324734   27962 round_trippers.go:463] GET https://192.168.39.60:8443/version
	I0920 17:09:28.324739   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:28.324747   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:28.324752   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:28.325606   27962 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0920 17:09:28.325696   27962 api_server.go:141] control plane version: v1.31.1
	I0920 17:09:28.325719   27962 api_server.go:131] duration metric: took 5.673918ms to wait for apiserver health ...
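
The healthz probe above is an ordinary HTTPS GET that expects status 200 and the literal body "ok". A minimal sketch, assuming an *http.Client already configured to trust the cluster CA; the function name is ours:

package sketch

import (
	"io"
	"net/http"
)

// apiServerHealthy mirrors the healthz probe above: 200 plus the literal body "ok".
// The *http.Client is assumed to already trust the cluster's CA certificate.
func apiServerHealthy(client *http.Client, endpoint string) (bool, error) {
	resp, err := client.Get(endpoint + "/healthz")
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return false, err
	}
	return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
}
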
	I0920 17:09:28.325728   27962 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 17:09:28.500898   27962 request.go:632] Waited for 175.10825ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods
	I0920 17:09:28.500971   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods
	I0920 17:09:28.500978   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:28.500986   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:28.500995   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:28.506063   27962 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0920 17:09:28.510476   27962 system_pods.go:59] 17 kube-system pods found
	I0920 17:09:28.510506   27962 system_pods.go:61] "coredns-7c65d6cfc9-gcvg4" [899b2b8c-9009-46c0-816b-781e85eb8b19] Running
	I0920 17:09:28.510512   27962 system_pods.go:61] "coredns-7c65d6cfc9-kpbhk" [0dfd9f1a-148c-4dba-884a-8618b74f82d0] Running
	I0920 17:09:28.510516   27962 system_pods.go:61] "etcd-ha-135993" [d5e44451-ab41-4bdf-8f34-c8202bc33cda] Running
	I0920 17:09:28.510520   27962 system_pods.go:61] "etcd-ha-135993-m02" [415d8f47-3bc4-42fc-ae75-b8217f5a731d] Running
	I0920 17:09:28.510524   27962 system_pods.go:61] "kindnet-5m4r8" [c799ac5a-69f7-45d5-a291-61edfd753404] Running
	I0920 17:09:28.510528   27962 system_pods.go:61] "kindnet-6clt2" [d73a0817-d84f-4269-9de0-1532287a07db] Running
	I0920 17:09:28.510532   27962 system_pods.go:61] "kube-apiserver-ha-135993" [0c2adef2-0b98-4752-ba6c-0719b723c93c] Running
	I0920 17:09:28.510536   27962 system_pods.go:61] "kube-apiserver-ha-135993-m02" [e74438ac-347a-4f38-ba96-c7687763c669] Running
	I0920 17:09:28.510539   27962 system_pods.go:61] "kube-controller-manager-ha-135993" [ba82afa0-d0a1-4515-bd0f-574f03c2a5a5] Running
	I0920 17:09:28.510543   27962 system_pods.go:61] "kube-controller-manager-ha-135993-m02" [ba4a59ae-5348-43b7-b80d-96d5d63afea2] Running
	I0920 17:09:28.510548   27962 system_pods.go:61] "kube-proxy-52r49" [8d1124bd-e7cb-4239-a29d-c1d5b8870aff] Running
	I0920 17:09:28.510551   27962 system_pods.go:61] "kube-proxy-z6xqt" [70b8c8ab-77cc-4681-a9b1-3c28fc0f2674] Running
	I0920 17:09:28.510555   27962 system_pods.go:61] "kube-scheduler-ha-135993" [25eaf632-035a-44d9-8ce0-e6c3c0e4e0c4] Running
	I0920 17:09:28.510558   27962 system_pods.go:61] "kube-scheduler-ha-135993-m02" [6daee23a-d627-41e8-81b1-ceb4fa70ec3e] Running
	I0920 17:09:28.510563   27962 system_pods.go:61] "kube-vip-ha-135993" [6aa396e1-76b2-4911-bc93-660c51cef03d] Running
	I0920 17:09:28.510566   27962 system_pods.go:61] "kube-vip-ha-135993-m02" [2e9d1f8a-1ce9-46f9-b58b-f13302832062] Running
	I0920 17:09:28.510571   27962 system_pods.go:61] "storage-provisioner" [57137bee-9a7b-4659-a855-0da82d137cb0] Running
	I0920 17:09:28.510576   27962 system_pods.go:74] duration metric: took 184.843309ms to wait for pod list to return data ...
	I0920 17:09:28.510583   27962 default_sa.go:34] waiting for default service account to be created ...
	I0920 17:09:28.701010   27962 request.go:632] Waited for 190.33295ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/default/serviceaccounts
	I0920 17:09:28.701070   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/default/serviceaccounts
	I0920 17:09:28.701075   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:28.701082   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:28.701086   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:28.704833   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:28.705046   27962 default_sa.go:45] found service account: "default"
	I0920 17:09:28.705060   27962 default_sa.go:55] duration metric: took 194.471281ms for default service account to be created ...
	I0920 17:09:28.705068   27962 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 17:09:28.900520   27962 request.go:632] Waited for 195.386336ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods
	I0920 17:09:28.900601   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods
	I0920 17:09:28.900607   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:28.900614   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:28.900622   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:28.905157   27962 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 17:09:28.910152   27962 system_pods.go:86] 17 kube-system pods found
	I0920 17:09:28.910177   27962 system_pods.go:89] "coredns-7c65d6cfc9-gcvg4" [899b2b8c-9009-46c0-816b-781e85eb8b19] Running
	I0920 17:09:28.910183   27962 system_pods.go:89] "coredns-7c65d6cfc9-kpbhk" [0dfd9f1a-148c-4dba-884a-8618b74f82d0] Running
	I0920 17:09:28.910188   27962 system_pods.go:89] "etcd-ha-135993" [d5e44451-ab41-4bdf-8f34-c8202bc33cda] Running
	I0920 17:09:28.910193   27962 system_pods.go:89] "etcd-ha-135993-m02" [415d8f47-3bc4-42fc-ae75-b8217f5a731d] Running
	I0920 17:09:28.910197   27962 system_pods.go:89] "kindnet-5m4r8" [c799ac5a-69f7-45d5-a291-61edfd753404] Running
	I0920 17:09:28.910200   27962 system_pods.go:89] "kindnet-6clt2" [d73a0817-d84f-4269-9de0-1532287a07db] Running
	I0920 17:09:28.910204   27962 system_pods.go:89] "kube-apiserver-ha-135993" [0c2adef2-0b98-4752-ba6c-0719b723c93c] Running
	I0920 17:09:28.910210   27962 system_pods.go:89] "kube-apiserver-ha-135993-m02" [e74438ac-347a-4f38-ba96-c7687763c669] Running
	I0920 17:09:28.910216   27962 system_pods.go:89] "kube-controller-manager-ha-135993" [ba82afa0-d0a1-4515-bd0f-574f03c2a5a5] Running
	I0920 17:09:28.910221   27962 system_pods.go:89] "kube-controller-manager-ha-135993-m02" [ba4a59ae-5348-43b7-b80d-96d5d63afea2] Running
	I0920 17:09:28.910224   27962 system_pods.go:89] "kube-proxy-52r49" [8d1124bd-e7cb-4239-a29d-c1d5b8870aff] Running
	I0920 17:09:28.910232   27962 system_pods.go:89] "kube-proxy-z6xqt" [70b8c8ab-77cc-4681-a9b1-3c28fc0f2674] Running
	I0920 17:09:28.910236   27962 system_pods.go:89] "kube-scheduler-ha-135993" [25eaf632-035a-44d9-8ce0-e6c3c0e4e0c4] Running
	I0920 17:09:28.910240   27962 system_pods.go:89] "kube-scheduler-ha-135993-m02" [6daee23a-d627-41e8-81b1-ceb4fa70ec3e] Running
	I0920 17:09:28.910243   27962 system_pods.go:89] "kube-vip-ha-135993" [6aa396e1-76b2-4911-bc93-660c51cef03d] Running
	I0920 17:09:28.910246   27962 system_pods.go:89] "kube-vip-ha-135993-m02" [2e9d1f8a-1ce9-46f9-b58b-f13302832062] Running
	I0920 17:09:28.910249   27962 system_pods.go:89] "storage-provisioner" [57137bee-9a7b-4659-a855-0da82d137cb0] Running
	I0920 17:09:28.910257   27962 system_pods.go:126] duration metric: took 205.181263ms to wait for k8s-apps to be running ...
	I0920 17:09:28.910266   27962 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 17:09:28.910308   27962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 17:09:28.926895   27962 system_svc.go:56] duration metric: took 16.618557ms WaitForService to wait for kubelet
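
The kubelet check above runs `sudo systemctl is-active --quiet service kubelet` over SSH and only looks at the exit status. Run locally, the same idea reduces to the sketch below (SSH transport omitted; whether sudo is required depends on the host):

package sketch

import "os/exec"

// kubeletActive reduces the remote check to its core: `systemctl is-active --quiet`
// exits 0 when the unit is active and non-zero otherwise, so the exit code is the answer.
func kubeletActive() bool {
	return exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run() == nil
}
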
	I0920 17:09:28.926931   27962 kubeadm.go:582] duration metric: took 23.18325481s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 17:09:28.926955   27962 node_conditions.go:102] verifying NodePressure condition ...
	I0920 17:09:29.100293   27962 request.go:632] Waited for 173.230558ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes
	I0920 17:09:29.100347   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes
	I0920 17:09:29.100351   27962 round_trippers.go:469] Request Headers:
	I0920 17:09:29.100362   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:09:29.100368   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:09:29.104004   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:09:29.104756   27962 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 17:09:29.104780   27962 node_conditions.go:123] node cpu capacity is 2
	I0920 17:09:29.104790   27962 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 17:09:29.104794   27962 node_conditions.go:123] node cpu capacity is 2
	I0920 17:09:29.104798   27962 node_conditions.go:105] duration metric: took 177.838136ms to run NodePressure ...
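
The NodePressure step lists the nodes once and reads each node's ephemeral-storage and CPU figures, which is where the two "17734596Ki / 2" pairs above come from. A client-go sketch of that read; whether minikube reads Capacity or Allocatable is not visible in the log, so the sketch uses Capacity:

package sketch

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printNodeCapacities lists the nodes once and prints the two quantities the log reports.
func printNodeCapacities(ctx context.Context, c kubernetes.Interface) error {
	nodes, err := c.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("node %s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
	}
	return nil
}
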
	I0920 17:09:29.104811   27962 start.go:241] waiting for startup goroutines ...
	I0920 17:09:29.104835   27962 start.go:255] writing updated cluster config ...
	I0920 17:09:29.107129   27962 out.go:201] 
	I0920 17:09:29.108641   27962 config.go:182] Loaded profile config "ha-135993": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 17:09:29.108741   27962 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/config.json ...
	I0920 17:09:29.110401   27962 out.go:177] * Starting "ha-135993-m03" control-plane node in "ha-135993" cluster
	I0920 17:09:29.111695   27962 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 17:09:29.111718   27962 cache.go:56] Caching tarball of preloaded images
	I0920 17:09:29.111819   27962 preload.go:172] Found /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 17:09:29.111832   27962 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0920 17:09:29.111919   27962 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/config.json ...
	I0920 17:09:29.112087   27962 start.go:360] acquireMachinesLock for ha-135993-m03: {Name:mkfeedb385cf08b5d2aa00913e85815d02a180c2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 17:09:29.112125   27962 start.go:364] duration metric: took 21.568µs to acquireMachinesLock for "ha-135993-m03"
	I0920 17:09:29.112142   27962 start.go:93] Provisioning new machine with config: &{Name:ha-135993 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-135993 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.60 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 17:09:29.112230   27962 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0920 17:09:29.114039   27962 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0920 17:09:29.114124   27962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:09:29.114159   27962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:09:29.130067   27962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37683
	I0920 17:09:29.130534   27962 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:09:29.131025   27962 main.go:141] libmachine: Using API Version  1
	I0920 17:09:29.131052   27962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:09:29.131373   27962 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:09:29.131541   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetMachineName
	I0920 17:09:29.131727   27962 main.go:141] libmachine: (ha-135993-m03) Calling .DriverName
	I0920 17:09:29.131887   27962 start.go:159] libmachine.API.Create for "ha-135993" (driver="kvm2")
	I0920 17:09:29.131918   27962 client.go:168] LocalClient.Create starting
	I0920 17:09:29.131956   27962 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem
	I0920 17:09:29.131998   27962 main.go:141] libmachine: Decoding PEM data...
	I0920 17:09:29.132021   27962 main.go:141] libmachine: Parsing certificate...
	I0920 17:09:29.132086   27962 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem
	I0920 17:09:29.132115   27962 main.go:141] libmachine: Decoding PEM data...
	I0920 17:09:29.132130   27962 main.go:141] libmachine: Parsing certificate...
	I0920 17:09:29.132158   27962 main.go:141] libmachine: Running pre-create checks...
	I0920 17:09:29.132169   27962 main.go:141] libmachine: (ha-135993-m03) Calling .PreCreateCheck
	I0920 17:09:29.132361   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetConfigRaw
	I0920 17:09:29.132775   27962 main.go:141] libmachine: Creating machine...
	I0920 17:09:29.132791   27962 main.go:141] libmachine: (ha-135993-m03) Calling .Create
	I0920 17:09:29.132937   27962 main.go:141] libmachine: (ha-135993-m03) Creating KVM machine...
	I0920 17:09:29.134340   27962 main.go:141] libmachine: (ha-135993-m03) DBG | found existing default KVM network
	I0920 17:09:29.134482   27962 main.go:141] libmachine: (ha-135993-m03) DBG | found existing private KVM network mk-ha-135993
	I0920 17:09:29.134586   27962 main.go:141] libmachine: (ha-135993-m03) Setting up store path in /home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993-m03 ...
	I0920 17:09:29.134610   27962 main.go:141] libmachine: (ha-135993-m03) Building disk image from file:///home/jenkins/minikube-integration/19672-8777/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso
	I0920 17:09:29.134709   27962 main.go:141] libmachine: (ha-135993-m03) DBG | I0920 17:09:29.134570   28745 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19672-8777/.minikube
	I0920 17:09:29.134788   27962 main.go:141] libmachine: (ha-135993-m03) Downloading /home/jenkins/minikube-integration/19672-8777/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19672-8777/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso...
	I0920 17:09:29.623687   27962 main.go:141] libmachine: (ha-135993-m03) DBG | I0920 17:09:29.623559   28745 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993-m03/id_rsa...
	I0920 17:09:29.849339   27962 main.go:141] libmachine: (ha-135993-m03) DBG | I0920 17:09:29.849213   28745 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993-m03/ha-135993-m03.rawdisk...
	I0920 17:09:29.849379   27962 main.go:141] libmachine: (ha-135993-m03) DBG | Writing magic tar header
	I0920 17:09:29.849390   27962 main.go:141] libmachine: (ha-135993-m03) DBG | Writing SSH key tar header
	I0920 17:09:29.849398   27962 main.go:141] libmachine: (ha-135993-m03) DBG | I0920 17:09:29.849332   28745 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993-m03 ...
	I0920 17:09:29.849416   27962 main.go:141] libmachine: (ha-135993-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993-m03
	I0920 17:09:29.849450   27962 main.go:141] libmachine: (ha-135993-m03) Setting executable bit set on /home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993-m03 (perms=drwx------)
	I0920 17:09:29.849472   27962 main.go:141] libmachine: (ha-135993-m03) Setting executable bit set on /home/jenkins/minikube-integration/19672-8777/.minikube/machines (perms=drwxr-xr-x)
	I0920 17:09:29.849487   27962 main.go:141] libmachine: (ha-135993-m03) Setting executable bit set on /home/jenkins/minikube-integration/19672-8777/.minikube (perms=drwxr-xr-x)
	I0920 17:09:29.849501   27962 main.go:141] libmachine: (ha-135993-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-8777/.minikube/machines
	I0920 17:09:29.849511   27962 main.go:141] libmachine: (ha-135993-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-8777/.minikube
	I0920 17:09:29.849524   27962 main.go:141] libmachine: (ha-135993-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-8777
	I0920 17:09:29.849537   27962 main.go:141] libmachine: (ha-135993-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0920 17:09:29.849559   27962 main.go:141] libmachine: (ha-135993-m03) Setting executable bit set on /home/jenkins/minikube-integration/19672-8777 (perms=drwxrwxr-x)
	I0920 17:09:29.849572   27962 main.go:141] libmachine: (ha-135993-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0920 17:09:29.849581   27962 main.go:141] libmachine: (ha-135993-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0920 17:09:29.849589   27962 main.go:141] libmachine: (ha-135993-m03) DBG | Checking permissions on dir: /home/jenkins
	I0920 17:09:29.849596   27962 main.go:141] libmachine: (ha-135993-m03) DBG | Checking permissions on dir: /home
	I0920 17:09:29.849612   27962 main.go:141] libmachine: (ha-135993-m03) DBG | Skipping /home - not owner
	I0920 17:09:29.849623   27962 main.go:141] libmachine: (ha-135993-m03) Creating domain...
	I0920 17:09:29.850674   27962 main.go:141] libmachine: (ha-135993-m03) define libvirt domain using xml: 
	I0920 17:09:29.850697   27962 main.go:141] libmachine: (ha-135993-m03) <domain type='kvm'>
	I0920 17:09:29.850706   27962 main.go:141] libmachine: (ha-135993-m03)   <name>ha-135993-m03</name>
	I0920 17:09:29.850718   27962 main.go:141] libmachine: (ha-135993-m03)   <memory unit='MiB'>2200</memory>
	I0920 17:09:29.850725   27962 main.go:141] libmachine: (ha-135993-m03)   <vcpu>2</vcpu>
	I0920 17:09:29.850730   27962 main.go:141] libmachine: (ha-135993-m03)   <features>
	I0920 17:09:29.850737   27962 main.go:141] libmachine: (ha-135993-m03)     <acpi/>
	I0920 17:09:29.850744   27962 main.go:141] libmachine: (ha-135993-m03)     <apic/>
	I0920 17:09:29.850757   27962 main.go:141] libmachine: (ha-135993-m03)     <pae/>
	I0920 17:09:29.850769   27962 main.go:141] libmachine: (ha-135993-m03)     
	I0920 17:09:29.850776   27962 main.go:141] libmachine: (ha-135993-m03)   </features>
	I0920 17:09:29.850783   27962 main.go:141] libmachine: (ha-135993-m03)   <cpu mode='host-passthrough'>
	I0920 17:09:29.850803   27962 main.go:141] libmachine: (ha-135993-m03)   
	I0920 17:09:29.850826   27962 main.go:141] libmachine: (ha-135993-m03)   </cpu>
	I0920 17:09:29.850834   27962 main.go:141] libmachine: (ha-135993-m03)   <os>
	I0920 17:09:29.850839   27962 main.go:141] libmachine: (ha-135993-m03)     <type>hvm</type>
	I0920 17:09:29.850844   27962 main.go:141] libmachine: (ha-135993-m03)     <boot dev='cdrom'/>
	I0920 17:09:29.850850   27962 main.go:141] libmachine: (ha-135993-m03)     <boot dev='hd'/>
	I0920 17:09:29.850855   27962 main.go:141] libmachine: (ha-135993-m03)     <bootmenu enable='no'/>
	I0920 17:09:29.850866   27962 main.go:141] libmachine: (ha-135993-m03)   </os>
	I0920 17:09:29.850873   27962 main.go:141] libmachine: (ha-135993-m03)   <devices>
	I0920 17:09:29.850878   27962 main.go:141] libmachine: (ha-135993-m03)     <disk type='file' device='cdrom'>
	I0920 17:09:29.850887   27962 main.go:141] libmachine: (ha-135993-m03)       <source file='/home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993-m03/boot2docker.iso'/>
	I0920 17:09:29.850894   27962 main.go:141] libmachine: (ha-135993-m03)       <target dev='hdc' bus='scsi'/>
	I0920 17:09:29.850925   27962 main.go:141] libmachine: (ha-135993-m03)       <readonly/>
	I0920 17:09:29.850951   27962 main.go:141] libmachine: (ha-135993-m03)     </disk>
	I0920 17:09:29.850962   27962 main.go:141] libmachine: (ha-135993-m03)     <disk type='file' device='disk'>
	I0920 17:09:29.850974   27962 main.go:141] libmachine: (ha-135993-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0920 17:09:29.850990   27962 main.go:141] libmachine: (ha-135993-m03)       <source file='/home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993-m03/ha-135993-m03.rawdisk'/>
	I0920 17:09:29.851010   27962 main.go:141] libmachine: (ha-135993-m03)       <target dev='hda' bus='virtio'/>
	I0920 17:09:29.851030   27962 main.go:141] libmachine: (ha-135993-m03)     </disk>
	I0920 17:09:29.851045   27962 main.go:141] libmachine: (ha-135993-m03)     <interface type='network'>
	I0920 17:09:29.851055   27962 main.go:141] libmachine: (ha-135993-m03)       <source network='mk-ha-135993'/>
	I0920 17:09:29.851062   27962 main.go:141] libmachine: (ha-135993-m03)       <model type='virtio'/>
	I0920 17:09:29.851069   27962 main.go:141] libmachine: (ha-135993-m03)     </interface>
	I0920 17:09:29.851077   27962 main.go:141] libmachine: (ha-135993-m03)     <interface type='network'>
	I0920 17:09:29.851085   27962 main.go:141] libmachine: (ha-135993-m03)       <source network='default'/>
	I0920 17:09:29.851090   27962 main.go:141] libmachine: (ha-135993-m03)       <model type='virtio'/>
	I0920 17:09:29.851095   27962 main.go:141] libmachine: (ha-135993-m03)     </interface>
	I0920 17:09:29.851101   27962 main.go:141] libmachine: (ha-135993-m03)     <serial type='pty'>
	I0920 17:09:29.851109   27962 main.go:141] libmachine: (ha-135993-m03)       <target port='0'/>
	I0920 17:09:29.851115   27962 main.go:141] libmachine: (ha-135993-m03)     </serial>
	I0920 17:09:29.851133   27962 main.go:141] libmachine: (ha-135993-m03)     <console type='pty'>
	I0920 17:09:29.851153   27962 main.go:141] libmachine: (ha-135993-m03)       <target type='serial' port='0'/>
	I0920 17:09:29.851165   27962 main.go:141] libmachine: (ha-135993-m03)     </console>
	I0920 17:09:29.851172   27962 main.go:141] libmachine: (ha-135993-m03)     <rng model='virtio'>
	I0920 17:09:29.851184   27962 main.go:141] libmachine: (ha-135993-m03)       <backend model='random'>/dev/random</backend>
	I0920 17:09:29.851194   27962 main.go:141] libmachine: (ha-135993-m03)     </rng>
	I0920 17:09:29.851201   27962 main.go:141] libmachine: (ha-135993-m03)     
	I0920 17:09:29.851209   27962 main.go:141] libmachine: (ha-135993-m03)     
	I0920 17:09:29.851215   27962 main.go:141] libmachine: (ha-135993-m03)   </devices>
	I0920 17:09:29.851224   27962 main.go:141] libmachine: (ha-135993-m03) </domain>
	I0920 17:09:29.851251   27962 main.go:141] libmachine: (ha-135993-m03) 
	I0920 17:09:29.858905   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined MAC address 52:54:00:e3:0b:70 in network default
	I0920 17:09:29.859443   27962 main.go:141] libmachine: (ha-135993-m03) Ensuring networks are active...
	I0920 17:09:29.859461   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:29.860217   27962 main.go:141] libmachine: (ha-135993-m03) Ensuring network default is active
	I0920 17:09:29.860531   27962 main.go:141] libmachine: (ha-135993-m03) Ensuring network mk-ha-135993 is active
	I0920 17:09:29.860904   27962 main.go:141] libmachine: (ha-135993-m03) Getting domain xml...
	I0920 17:09:29.861590   27962 main.go:141] libmachine: (ha-135993-m03) Creating domain...
	I0920 17:09:31.187018   27962 main.go:141] libmachine: (ha-135993-m03) Waiting to get IP...
	I0920 17:09:31.187715   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:31.188084   27962 main.go:141] libmachine: (ha-135993-m03) DBG | unable to find current IP address of domain ha-135993-m03 in network mk-ha-135993
	I0920 17:09:31.188106   27962 main.go:141] libmachine: (ha-135993-m03) DBG | I0920 17:09:31.188068   28745 retry.go:31] will retry after 213.512063ms: waiting for machine to come up
	I0920 17:09:31.403627   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:31.404039   27962 main.go:141] libmachine: (ha-135993-m03) DBG | unable to find current IP address of domain ha-135993-m03 in network mk-ha-135993
	I0920 17:09:31.404070   27962 main.go:141] libmachine: (ha-135993-m03) DBG | I0920 17:09:31.403991   28745 retry.go:31] will retry after 361.212458ms: waiting for machine to come up
	I0920 17:09:31.766642   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:31.767089   27962 main.go:141] libmachine: (ha-135993-m03) DBG | unable to find current IP address of domain ha-135993-m03 in network mk-ha-135993
	I0920 17:09:31.767116   27962 main.go:141] libmachine: (ha-135993-m03) DBG | I0920 17:09:31.767037   28745 retry.go:31] will retry after 376.833715ms: waiting for machine to come up
	I0920 17:09:32.145427   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:32.145898   27962 main.go:141] libmachine: (ha-135993-m03) DBG | unable to find current IP address of domain ha-135993-m03 in network mk-ha-135993
	I0920 17:09:32.145947   27962 main.go:141] libmachine: (ha-135993-m03) DBG | I0920 17:09:32.145871   28745 retry.go:31] will retry after 557.65015ms: waiting for machine to come up
	I0920 17:09:32.705540   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:32.705975   27962 main.go:141] libmachine: (ha-135993-m03) DBG | unable to find current IP address of domain ha-135993-m03 in network mk-ha-135993
	I0920 17:09:32.706023   27962 main.go:141] libmachine: (ha-135993-m03) DBG | I0920 17:09:32.705956   28745 retry.go:31] will retry after 695.507494ms: waiting for machine to come up
	I0920 17:09:33.402909   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:33.403356   27962 main.go:141] libmachine: (ha-135993-m03) DBG | unable to find current IP address of domain ha-135993-m03 in network mk-ha-135993
	I0920 17:09:33.403389   27962 main.go:141] libmachine: (ha-135993-m03) DBG | I0920 17:09:33.403304   28745 retry.go:31] will retry after 645.712565ms: waiting for machine to come up
	I0920 17:09:34.051477   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:34.052378   27962 main.go:141] libmachine: (ha-135993-m03) DBG | unable to find current IP address of domain ha-135993-m03 in network mk-ha-135993
	I0920 17:09:34.052405   27962 main.go:141] libmachine: (ha-135993-m03) DBG | I0920 17:09:34.052280   28745 retry.go:31] will retry after 770.593421ms: waiting for machine to come up
	I0920 17:09:34.824986   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:34.825490   27962 main.go:141] libmachine: (ha-135993-m03) DBG | unable to find current IP address of domain ha-135993-m03 in network mk-ha-135993
	I0920 17:09:34.825514   27962 main.go:141] libmachine: (ha-135993-m03) DBG | I0920 17:09:34.825451   28745 retry.go:31] will retry after 1.327368797s: waiting for machine to come up
	I0920 17:09:36.154205   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:36.154624   27962 main.go:141] libmachine: (ha-135993-m03) DBG | unable to find current IP address of domain ha-135993-m03 in network mk-ha-135993
	I0920 17:09:36.154646   27962 main.go:141] libmachine: (ha-135993-m03) DBG | I0920 17:09:36.154579   28745 retry.go:31] will retry after 1.581269715s: waiting for machine to come up
	I0920 17:09:37.738322   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:37.738736   27962 main.go:141] libmachine: (ha-135993-m03) DBG | unable to find current IP address of domain ha-135993-m03 in network mk-ha-135993
	I0920 17:09:37.738762   27962 main.go:141] libmachine: (ha-135993-m03) DBG | I0920 17:09:37.738689   28745 retry.go:31] will retry after 1.459267896s: waiting for machine to come up
	I0920 17:09:39.199274   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:39.199678   27962 main.go:141] libmachine: (ha-135993-m03) DBG | unable to find current IP address of domain ha-135993-m03 in network mk-ha-135993
	I0920 17:09:39.199706   27962 main.go:141] libmachine: (ha-135993-m03) DBG | I0920 17:09:39.199627   28745 retry.go:31] will retry after 2.386585249s: waiting for machine to come up
	I0920 17:09:41.588281   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:41.588804   27962 main.go:141] libmachine: (ha-135993-m03) DBG | unable to find current IP address of domain ha-135993-m03 in network mk-ha-135993
	I0920 17:09:41.588834   27962 main.go:141] libmachine: (ha-135993-m03) DBG | I0920 17:09:41.588752   28745 retry.go:31] will retry after 2.639705596s: waiting for machine to come up
	I0920 17:09:44.229971   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:44.230371   27962 main.go:141] libmachine: (ha-135993-m03) DBG | unable to find current IP address of domain ha-135993-m03 in network mk-ha-135993
	I0920 17:09:44.230422   27962 main.go:141] libmachine: (ha-135993-m03) DBG | I0920 17:09:44.230347   28745 retry.go:31] will retry after 3.819742823s: waiting for machine to come up
	I0920 17:09:48.054340   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:48.054705   27962 main.go:141] libmachine: (ha-135993-m03) DBG | unable to find current IP address of domain ha-135993-m03 in network mk-ha-135993
	I0920 17:09:48.054731   27962 main.go:141] libmachine: (ha-135993-m03) DBG | I0920 17:09:48.054671   28745 retry.go:31] will retry after 4.961691445s: waiting for machine to come up
	I0920 17:09:53.018825   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:53.019259   27962 main.go:141] libmachine: (ha-135993-m03) Found IP for machine: 192.168.39.133
	I0920 17:09:53.019281   27962 main.go:141] libmachine: (ha-135993-m03) Reserving static IP address...
	I0920 17:09:53.019295   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has current primary IP address 192.168.39.133 and MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:53.019682   27962 main.go:141] libmachine: (ha-135993-m03) DBG | unable to find host DHCP lease matching {name: "ha-135993-m03", mac: "52:54:00:4a:49:98", ip: "192.168.39.133"} in network mk-ha-135993
	I0920 17:09:53.093855   27962 main.go:141] libmachine: (ha-135993-m03) DBG | Getting to WaitForSSH function...
	I0920 17:09:53.093888   27962 main.go:141] libmachine: (ha-135993-m03) Reserved static IP address: 192.168.39.133
	I0920 17:09:53.093913   27962 main.go:141] libmachine: (ha-135993-m03) Waiting for SSH to be available...
	I0920 17:09:53.096549   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:53.096917   27962 main.go:141] libmachine: (ha-135993-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:49:98", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:09:45 +0000 UTC Type:0 Mac:52:54:00:4a:49:98 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:minikube Clientid:01:52:54:00:4a:49:98}
	I0920 17:09:53.096942   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:53.097072   27962 main.go:141] libmachine: (ha-135993-m03) DBG | Using SSH client type: external
	I0920 17:09:53.097099   27962 main.go:141] libmachine: (ha-135993-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993-m03/id_rsa (-rw-------)
	I0920 17:09:53.097137   27962 main.go:141] libmachine: (ha-135993-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.133 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 17:09:53.097159   27962 main.go:141] libmachine: (ha-135993-m03) DBG | About to run SSH command:
	I0920 17:09:53.097174   27962 main.go:141] libmachine: (ha-135993-m03) DBG | exit 0
	I0920 17:09:53.225462   27962 main.go:141] libmachine: (ha-135993-m03) DBG | SSH cmd err, output: <nil>: 
	I0920 17:09:53.225738   27962 main.go:141] libmachine: (ha-135993-m03) KVM machine creation complete!
	I0920 17:09:53.226079   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetConfigRaw
	I0920 17:09:53.226700   27962 main.go:141] libmachine: (ha-135993-m03) Calling .DriverName
	I0920 17:09:53.226858   27962 main.go:141] libmachine: (ha-135993-m03) Calling .DriverName
	I0920 17:09:53.226985   27962 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0920 17:09:53.226999   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetState
	I0920 17:09:53.228014   27962 main.go:141] libmachine: Detecting operating system of created instance...
	I0920 17:09:53.228026   27962 main.go:141] libmachine: Waiting for SSH to be available...
	I0920 17:09:53.228031   27962 main.go:141] libmachine: Getting to WaitForSSH function...
	I0920 17:09:53.228038   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHHostname
	I0920 17:09:53.230141   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:53.230494   27962 main.go:141] libmachine: (ha-135993-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:49:98", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:09:45 +0000 UTC Type:0 Mac:52:54:00:4a:49:98 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-135993-m03 Clientid:01:52:54:00:4a:49:98}
	I0920 17:09:53.230517   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:53.230669   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHPort
	I0920 17:09:53.230844   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHKeyPath
	I0920 17:09:53.230948   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHKeyPath
	I0920 17:09:53.231082   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHUsername
	I0920 17:09:53.231200   27962 main.go:141] libmachine: Using SSH client type: native
	I0920 17:09:53.231420   27962 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.133 22 <nil> <nil>}
	I0920 17:09:53.231435   27962 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0920 17:09:53.341375   27962 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 17:09:53.341396   27962 main.go:141] libmachine: Detecting the provisioner...
	I0920 17:09:53.341403   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHHostname
	I0920 17:09:53.344112   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:53.344480   27962 main.go:141] libmachine: (ha-135993-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:49:98", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:09:45 +0000 UTC Type:0 Mac:52:54:00:4a:49:98 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-135993-m03 Clientid:01:52:54:00:4a:49:98}
	I0920 17:09:53.344511   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:53.344666   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHPort
	I0920 17:09:53.344839   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHKeyPath
	I0920 17:09:53.344987   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHKeyPath
	I0920 17:09:53.345174   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHUsername
	I0920 17:09:53.345354   27962 main.go:141] libmachine: Using SSH client type: native
	I0920 17:09:53.345510   27962 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.133 22 <nil> <nil>}
	I0920 17:09:53.345521   27962 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0920 17:09:53.458337   27962 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0920 17:09:53.458388   27962 main.go:141] libmachine: found compatible host: buildroot
	I0920 17:09:53.458394   27962 main.go:141] libmachine: Provisioning with buildroot...
	I0920 17:09:53.458407   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetMachineName
	I0920 17:09:53.458649   27962 buildroot.go:166] provisioning hostname "ha-135993-m03"
	I0920 17:09:53.458675   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetMachineName
	I0920 17:09:53.458849   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHHostname
	I0920 17:09:53.461596   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:53.461987   27962 main.go:141] libmachine: (ha-135993-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:49:98", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:09:45 +0000 UTC Type:0 Mac:52:54:00:4a:49:98 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-135993-m03 Clientid:01:52:54:00:4a:49:98}
	I0920 17:09:53.462013   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:53.462204   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHPort
	I0920 17:09:53.462360   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHKeyPath
	I0920 17:09:53.462538   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHKeyPath
	I0920 17:09:53.462693   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHUsername
	I0920 17:09:53.462836   27962 main.go:141] libmachine: Using SSH client type: native
	I0920 17:09:53.463061   27962 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.133 22 <nil> <nil>}
	I0920 17:09:53.463079   27962 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-135993-m03 && echo "ha-135993-m03" | sudo tee /etc/hostname
	I0920 17:09:53.590131   27962 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-135993-m03
	
	I0920 17:09:53.590160   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHHostname
	I0920 17:09:53.592877   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:53.593210   27962 main.go:141] libmachine: (ha-135993-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:49:98", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:09:45 +0000 UTC Type:0 Mac:52:54:00:4a:49:98 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-135993-m03 Clientid:01:52:54:00:4a:49:98}
	I0920 17:09:53.593257   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:53.593412   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHPort
	I0920 17:09:53.593615   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHKeyPath
	I0920 17:09:53.593758   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHKeyPath
	I0920 17:09:53.593944   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHUsername
	I0920 17:09:53.594124   27962 main.go:141] libmachine: Using SSH client type: native
	I0920 17:09:53.594335   27962 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.133 22 <nil> <nil>}
	I0920 17:09:53.594356   27962 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-135993-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-135993-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-135993-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 17:09:53.715013   27962 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 17:09:53.715044   27962 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19672-8777/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-8777/.minikube}
	I0920 17:09:53.715074   27962 buildroot.go:174] setting up certificates
	I0920 17:09:53.715086   27962 provision.go:84] configureAuth start
	I0920 17:09:53.715098   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetMachineName
	I0920 17:09:53.715402   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetIP
	I0920 17:09:53.718102   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:53.718382   27962 main.go:141] libmachine: (ha-135993-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:49:98", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:09:45 +0000 UTC Type:0 Mac:52:54:00:4a:49:98 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-135993-m03 Clientid:01:52:54:00:4a:49:98}
	I0920 17:09:53.718400   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:53.718579   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHHostname
	I0920 17:09:53.720967   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:53.721315   27962 main.go:141] libmachine: (ha-135993-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:49:98", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:09:45 +0000 UTC Type:0 Mac:52:54:00:4a:49:98 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-135993-m03 Clientid:01:52:54:00:4a:49:98}
	I0920 17:09:53.721341   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:53.721476   27962 provision.go:143] copyHostCerts
	I0920 17:09:53.721506   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem
	I0920 17:09:53.721536   27962 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem, removing ...
	I0920 17:09:53.721544   27962 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem
	I0920 17:09:53.721632   27962 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem (1082 bytes)
	I0920 17:09:53.721706   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem
	I0920 17:09:53.721728   27962 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem, removing ...
	I0920 17:09:53.721734   27962 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem
	I0920 17:09:53.721757   27962 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem (1123 bytes)
	I0920 17:09:53.721801   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem
	I0920 17:09:53.721822   27962 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem, removing ...
	I0920 17:09:53.721828   27962 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem
	I0920 17:09:53.721880   27962 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem (1675 bytes)
	I0920 17:09:53.721951   27962 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem org=jenkins.ha-135993-m03 san=[127.0.0.1 192.168.39.133 ha-135993-m03 localhost minikube]
	I0920 17:09:53.848713   27962 provision.go:177] copyRemoteCerts
	I0920 17:09:53.848773   27962 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 17:09:53.848800   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHHostname
	I0920 17:09:53.851795   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:53.852202   27962 main.go:141] libmachine: (ha-135993-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:49:98", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:09:45 +0000 UTC Type:0 Mac:52:54:00:4a:49:98 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-135993-m03 Clientid:01:52:54:00:4a:49:98}
	I0920 17:09:53.852234   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:53.852521   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHPort
	I0920 17:09:53.852708   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHKeyPath
	I0920 17:09:53.852882   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHUsername
	I0920 17:09:53.853058   27962 sshutil.go:53] new ssh client: &{IP:192.168.39.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993-m03/id_rsa Username:docker}
	I0920 17:09:53.939365   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0920 17:09:53.939433   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0920 17:09:53.962495   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0920 17:09:53.962567   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0920 17:09:53.985499   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0920 17:09:53.985574   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0920 17:09:54.008320   27962 provision.go:87] duration metric: took 293.220585ms to configureAuth
	I0920 17:09:54.008349   27962 buildroot.go:189] setting minikube options for container-runtime
	I0920 17:09:54.008604   27962 config.go:182] Loaded profile config "ha-135993": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 17:09:54.008700   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHHostname
	I0920 17:09:54.011605   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:54.011968   27962 main.go:141] libmachine: (ha-135993-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:49:98", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:09:45 +0000 UTC Type:0 Mac:52:54:00:4a:49:98 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-135993-m03 Clientid:01:52:54:00:4a:49:98}
	I0920 17:09:54.012001   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:54.012140   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHPort
	I0920 17:09:54.012318   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHKeyPath
	I0920 17:09:54.012493   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHKeyPath
	I0920 17:09:54.012609   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHUsername
	I0920 17:09:54.012754   27962 main.go:141] libmachine: Using SSH client type: native
	I0920 17:09:54.012956   27962 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.133 22 <nil> <nil>}
	I0920 17:09:54.012972   27962 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 17:09:54.245416   27962 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 17:09:54.245443   27962 main.go:141] libmachine: Checking connection to Docker...
	I0920 17:09:54.245453   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetURL
	I0920 17:09:54.246780   27962 main.go:141] libmachine: (ha-135993-m03) DBG | Using libvirt version 6000000
	I0920 17:09:54.249527   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:54.249947   27962 main.go:141] libmachine: (ha-135993-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:49:98", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:09:45 +0000 UTC Type:0 Mac:52:54:00:4a:49:98 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-135993-m03 Clientid:01:52:54:00:4a:49:98}
	I0920 17:09:54.249971   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:54.250158   27962 main.go:141] libmachine: Docker is up and running!
	I0920 17:09:54.250187   27962 main.go:141] libmachine: Reticulating splines...
	I0920 17:09:54.250195   27962 client.go:171] duration metric: took 25.118268806s to LocalClient.Create
	I0920 17:09:54.250222   27962 start.go:167] duration metric: took 25.118338101s to libmachine.API.Create "ha-135993"
	I0920 17:09:54.250241   27962 start.go:293] postStartSetup for "ha-135993-m03" (driver="kvm2")
	I0920 17:09:54.250252   27962 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 17:09:54.250268   27962 main.go:141] libmachine: (ha-135993-m03) Calling .DriverName
	I0920 17:09:54.250588   27962 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 17:09:54.250617   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHHostname
	I0920 17:09:54.252892   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:54.253325   27962 main.go:141] libmachine: (ha-135993-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:49:98", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:09:45 +0000 UTC Type:0 Mac:52:54:00:4a:49:98 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-135993-m03 Clientid:01:52:54:00:4a:49:98}
	I0920 17:09:54.253360   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:54.253498   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHPort
	I0920 17:09:54.253673   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHKeyPath
	I0920 17:09:54.253825   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHUsername
	I0920 17:09:54.253986   27962 sshutil.go:53] new ssh client: &{IP:192.168.39.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993-m03/id_rsa Username:docker}
	I0920 17:09:54.339595   27962 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 17:09:54.343490   27962 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 17:09:54.343513   27962 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8777/.minikube/addons for local assets ...
	I0920 17:09:54.343594   27962 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8777/.minikube/files for local assets ...
	I0920 17:09:54.343690   27962 filesync.go:149] local asset: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem -> 159732.pem in /etc/ssl/certs
	I0920 17:09:54.343700   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem -> /etc/ssl/certs/159732.pem
	I0920 17:09:54.343811   27962 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 17:09:54.352574   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem --> /etc/ssl/certs/159732.pem (1708 bytes)
	I0920 17:09:54.376021   27962 start.go:296] duration metric: took 125.763298ms for postStartSetup
	I0920 17:09:54.376085   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetConfigRaw
	I0920 17:09:54.376726   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetIP
	I0920 17:09:54.379455   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:54.379860   27962 main.go:141] libmachine: (ha-135993-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:49:98", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:09:45 +0000 UTC Type:0 Mac:52:54:00:4a:49:98 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-135993-m03 Clientid:01:52:54:00:4a:49:98}
	I0920 17:09:54.379889   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:54.380133   27962 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/config.json ...
	I0920 17:09:54.380334   27962 start.go:128] duration metric: took 25.268094288s to createHost
	I0920 17:09:54.380356   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHHostname
	I0920 17:09:54.382551   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:54.382926   27962 main.go:141] libmachine: (ha-135993-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:49:98", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:09:45 +0000 UTC Type:0 Mac:52:54:00:4a:49:98 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-135993-m03 Clientid:01:52:54:00:4a:49:98}
	I0920 17:09:54.382948   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:54.383118   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHPort
	I0920 17:09:54.383308   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHKeyPath
	I0920 17:09:54.383448   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHKeyPath
	I0920 17:09:54.383614   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHUsername
	I0920 17:09:54.383768   27962 main.go:141] libmachine: Using SSH client type: native
	I0920 17:09:54.383925   27962 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.133 22 <nil> <nil>}
	I0920 17:09:54.383934   27962 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 17:09:54.498180   27962 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726852194.467876031
	
	I0920 17:09:54.498204   27962 fix.go:216] guest clock: 1726852194.467876031
	I0920 17:09:54.498211   27962 fix.go:229] Guest: 2024-09-20 17:09:54.467876031 +0000 UTC Remote: 2024-09-20 17:09:54.38034625 +0000 UTC m=+146.191055828 (delta=87.529781ms)
	I0920 17:09:54.498227   27962 fix.go:200] guest clock delta is within tolerance: 87.529781ms
	I0920 17:09:54.498231   27962 start.go:83] releasing machines lock for "ha-135993-m03", held for 25.386097949s
	I0920 17:09:54.498253   27962 main.go:141] libmachine: (ha-135993-m03) Calling .DriverName
	I0920 17:09:54.498534   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetIP
	I0920 17:09:54.501028   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:54.501386   27962 main.go:141] libmachine: (ha-135993-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:49:98", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:09:45 +0000 UTC Type:0 Mac:52:54:00:4a:49:98 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-135993-m03 Clientid:01:52:54:00:4a:49:98}
	I0920 17:09:54.501414   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:54.503574   27962 out.go:177] * Found network options:
	I0920 17:09:54.504800   27962 out.go:177]   - NO_PROXY=192.168.39.60,192.168.39.227
	W0920 17:09:54.505950   27962 proxy.go:119] fail to check proxy env: Error ip not in block
	W0920 17:09:54.505970   27962 proxy.go:119] fail to check proxy env: Error ip not in block
	I0920 17:09:54.505986   27962 main.go:141] libmachine: (ha-135993-m03) Calling .DriverName
	I0920 17:09:54.506533   27962 main.go:141] libmachine: (ha-135993-m03) Calling .DriverName
	I0920 17:09:54.506677   27962 main.go:141] libmachine: (ha-135993-m03) Calling .DriverName
	I0920 17:09:54.506748   27962 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 17:09:54.506777   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHHostname
	W0920 17:09:54.506811   27962 proxy.go:119] fail to check proxy env: Error ip not in block
	W0920 17:09:54.506837   27962 proxy.go:119] fail to check proxy env: Error ip not in block
	I0920 17:09:54.506918   27962 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 17:09:54.506942   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHHostname
	I0920 17:09:54.510430   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:54.510572   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:54.510840   27962 main.go:141] libmachine: (ha-135993-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:49:98", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:09:45 +0000 UTC Type:0 Mac:52:54:00:4a:49:98 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-135993-m03 Clientid:01:52:54:00:4a:49:98}
	I0920 17:09:54.510857   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:54.511009   27962 main.go:141] libmachine: (ha-135993-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:49:98", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:09:45 +0000 UTC Type:0 Mac:52:54:00:4a:49:98 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-135993-m03 Clientid:01:52:54:00:4a:49:98}
	I0920 17:09:54.511022   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHPort
	I0920 17:09:54.511025   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:54.511158   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHPort
	I0920 17:09:54.511238   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHKeyPath
	I0920 17:09:54.511306   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHKeyPath
	I0920 17:09:54.511366   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHUsername
	I0920 17:09:54.511419   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHUsername
	I0920 17:09:54.511477   27962 sshutil.go:53] new ssh client: &{IP:192.168.39.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993-m03/id_rsa Username:docker}
	I0920 17:09:54.511516   27962 sshutil.go:53] new ssh client: &{IP:192.168.39.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993-m03/id_rsa Username:docker}
	I0920 17:09:54.752778   27962 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 17:09:54.758470   27962 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 17:09:54.758545   27962 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 17:09:54.777293   27962 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 17:09:54.777314   27962 start.go:495] detecting cgroup driver to use...
	I0920 17:09:54.777373   27962 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 17:09:54.794867   27962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 17:09:54.812379   27962 docker.go:217] disabling cri-docker service (if available) ...
	I0920 17:09:54.812435   27962 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 17:09:54.829513   27962 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 17:09:54.844058   27962 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 17:09:54.965032   27962 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 17:09:55.105410   27962 docker.go:233] disabling docker service ...
	I0920 17:09:55.105473   27962 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 17:09:55.119024   27962 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 17:09:55.131474   27962 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 17:09:55.280550   27962 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 17:09:55.424589   27962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 17:09:55.438591   27962 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 17:09:55.457023   27962 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 17:09:55.457079   27962 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:09:55.469113   27962 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 17:09:55.469204   27962 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:09:55.480768   27962 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:09:55.491997   27962 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:09:55.503252   27962 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 17:09:55.515007   27962 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:09:55.527072   27962 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:09:55.544868   27962 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:09:55.556070   27962 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 17:09:55.566274   27962 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 17:09:55.566347   27962 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 17:09:55.579815   27962 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 17:09:55.591271   27962 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 17:09:55.721172   27962 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 17:09:55.816671   27962 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 17:09:55.816750   27962 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 17:09:55.821593   27962 start.go:563] Will wait 60s for crictl version
	I0920 17:09:55.821670   27962 ssh_runner.go:195] Run: which crictl
	I0920 17:09:55.825326   27962 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 17:09:55.861139   27962 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 17:09:55.861214   27962 ssh_runner.go:195] Run: crio --version
	I0920 17:09:55.889848   27962 ssh_runner.go:195] Run: crio --version
	I0920 17:09:55.919422   27962 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 17:09:55.920775   27962 out.go:177]   - env NO_PROXY=192.168.39.60
	I0920 17:09:55.922083   27962 out.go:177]   - env NO_PROXY=192.168.39.60,192.168.39.227
	I0920 17:09:55.923747   27962 main.go:141] libmachine: (ha-135993-m03) Calling .GetIP
	I0920 17:09:55.926252   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:55.926556   27962 main.go:141] libmachine: (ha-135993-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:49:98", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:09:45 +0000 UTC Type:0 Mac:52:54:00:4a:49:98 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-135993-m03 Clientid:01:52:54:00:4a:49:98}
	I0920 17:09:55.926586   27962 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:09:55.926743   27962 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0920 17:09:55.930814   27962 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 17:09:55.943504   27962 mustload.go:65] Loading cluster: ha-135993
	I0920 17:09:55.943748   27962 config.go:182] Loaded profile config "ha-135993": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 17:09:55.944067   27962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:09:55.944109   27962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:09:55.959177   27962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34285
	I0920 17:09:55.959707   27962 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:09:55.960208   27962 main.go:141] libmachine: Using API Version  1
	I0920 17:09:55.960231   27962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:09:55.960549   27962 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:09:55.960794   27962 main.go:141] libmachine: (ha-135993) Calling .GetState
	I0920 17:09:55.962489   27962 host.go:66] Checking if "ha-135993" exists ...
	I0920 17:09:55.962798   27962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:09:55.962843   27962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:09:55.977302   27962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45181
	I0920 17:09:55.977710   27962 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:09:55.978227   27962 main.go:141] libmachine: Using API Version  1
	I0920 17:09:55.978253   27962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:09:55.978558   27962 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:09:55.978742   27962 main.go:141] libmachine: (ha-135993) Calling .DriverName
	I0920 17:09:55.978879   27962 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993 for IP: 192.168.39.133
	I0920 17:09:55.978893   27962 certs.go:194] generating shared ca certs ...
	I0920 17:09:55.978913   27962 certs.go:226] acquiring lock for ca certs: {Name:mkc7ef6c737c6bdc3fdd9dcff8f57029c020d8f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:09:55.979064   27962 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key
	I0920 17:09:55.979123   27962 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key
	I0920 17:09:55.979137   27962 certs.go:256] generating profile certs ...
	I0920 17:09:55.979252   27962 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/client.key
	I0920 17:09:55.979287   27962 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.key.9a1e7345
	I0920 17:09:55.979305   27962 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.crt.9a1e7345 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.60 192.168.39.227 192.168.39.133 192.168.39.254]
	I0920 17:09:56.205622   27962 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.crt.9a1e7345 ...
	I0920 17:09:56.205652   27962 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.crt.9a1e7345: {Name:mk741001df891368c2b48ce6ca33636b00474c7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:09:56.205862   27962 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.key.9a1e7345 ...
	I0920 17:09:56.205885   27962 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.key.9a1e7345: {Name:mka8bfccee8c9e3909ae2b3c3cb9e59688362565 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:09:56.206039   27962 certs.go:381] copying /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.crt.9a1e7345 -> /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.crt
	I0920 17:09:56.206211   27962 certs.go:385] copying /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.key.9a1e7345 -> /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.key
	I0920 17:09:56.206388   27962 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/proxy-client.key
	I0920 17:09:56.206407   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0920 17:09:56.206426   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0920 17:09:56.206446   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0920 17:09:56.206464   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0920 17:09:56.206480   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0920 17:09:56.206494   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0920 17:09:56.206511   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0920 17:09:56.225918   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0920 17:09:56.225997   27962 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973.pem (1338 bytes)
	W0920 17:09:56.226041   27962 certs.go:480] ignoring /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973_empty.pem, impossibly tiny 0 bytes
	I0920 17:09:56.226052   27962 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 17:09:56.226073   27962 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem (1082 bytes)
	I0920 17:09:56.226113   27962 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem (1123 bytes)
	I0920 17:09:56.226142   27962 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem (1675 bytes)
	I0920 17:09:56.226194   27962 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem (1708 bytes)
	I0920 17:09:56.226220   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem -> /usr/share/ca-certificates/159732.pem
	I0920 17:09:56.226236   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:09:56.226256   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973.pem -> /usr/share/ca-certificates/15973.pem
	I0920 17:09:56.226300   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHHostname
	I0920 17:09:56.229337   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:09:56.229721   27962 main.go:141] libmachine: (ha-135993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:09", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:07:43 +0000 UTC Type:0 Mac:52:54:00:99:26:09 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-135993 Clientid:01:52:54:00:99:26:09}
	I0920 17:09:56.229749   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined IP address 192.168.39.60 and MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:09:56.229930   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHPort
	I0920 17:09:56.230128   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHKeyPath
	I0920 17:09:56.230302   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHUsername
	I0920 17:09:56.230392   27962 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993/id_rsa Username:docker}
	I0920 17:09:56.306176   27962 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0920 17:09:56.311850   27962 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0920 17:09:56.324295   27962 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0920 17:09:56.330346   27962 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0920 17:09:56.342029   27962 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0920 17:09:56.345907   27962 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0920 17:09:56.356185   27962 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0920 17:09:56.360478   27962 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0920 17:09:56.372648   27962 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0920 17:09:56.377310   27962 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0920 17:09:56.392310   27962 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0920 17:09:56.398873   27962 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0920 17:09:56.416705   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 17:09:56.442036   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 17:09:56.465893   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 17:09:56.491259   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0920 17:09:56.515541   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0920 17:09:56.538762   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0920 17:09:56.561229   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 17:09:56.583847   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 17:09:56.607936   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem --> /usr/share/ca-certificates/159732.pem (1708 bytes)
	I0920 17:09:56.634323   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 17:09:56.662363   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973.pem --> /usr/share/ca-certificates/15973.pem (1338 bytes)
	I0920 17:09:56.687040   27962 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0920 17:09:56.702914   27962 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0920 17:09:56.719096   27962 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0920 17:09:56.735043   27962 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0920 17:09:56.751375   27962 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0920 17:09:56.767907   27962 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0920 17:09:56.785247   27962 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0920 17:09:56.800819   27962 ssh_runner.go:195] Run: openssl version
	I0920 17:09:56.807059   27962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15973.pem && ln -fs /usr/share/ca-certificates/15973.pem /etc/ssl/certs/15973.pem"
	I0920 17:09:56.819325   27962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15973.pem
	I0920 17:09:56.823881   27962 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 17:04 /usr/share/ca-certificates/15973.pem
	I0920 17:09:56.823942   27962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15973.pem
	I0920 17:09:56.829735   27962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15973.pem /etc/ssl/certs/51391683.0"
	I0920 17:09:56.840229   27962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/159732.pem && ln -fs /usr/share/ca-certificates/159732.pem /etc/ssl/certs/159732.pem"
	I0920 17:09:56.850295   27962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/159732.pem
	I0920 17:09:56.854454   27962 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 17:04 /usr/share/ca-certificates/159732.pem
	I0920 17:09:56.854516   27962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/159732.pem
	I0920 17:09:56.859987   27962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/159732.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 17:09:56.870869   27962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 17:09:56.881683   27962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:09:56.886087   27962 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:09:56.886162   27962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:09:56.891826   27962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 17:09:56.902542   27962 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 17:09:56.906493   27962 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0920 17:09:56.906563   27962 kubeadm.go:934] updating node {m03 192.168.39.133 8443 v1.31.1 crio true true} ...
	I0920 17:09:56.906662   27962 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-135993-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.133
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-135993 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 17:09:56.906694   27962 kube-vip.go:115] generating kube-vip config ...
	I0920 17:09:56.906737   27962 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0920 17:09:56.924849   27962 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0920 17:09:56.924928   27962 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0920 17:09:56.924987   27962 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 17:09:56.935083   27962 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0920 17:09:56.935139   27962 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0920 17:09:56.944640   27962 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I0920 17:09:56.944640   27962 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I0920 17:09:56.944675   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0920 17:09:56.944710   27962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 17:09:56.944648   27962 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0920 17:09:56.944785   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0920 17:09:56.944830   27962 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0920 17:09:56.944765   27962 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0920 17:09:56.962033   27962 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0920 17:09:56.962071   27962 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0920 17:09:56.962074   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0920 17:09:56.962167   27962 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0920 17:09:56.962114   27962 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0920 17:09:56.962188   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0920 17:09:56.995038   27962 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0920 17:09:56.995085   27962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I0920 17:09:57.877062   27962 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0920 17:09:57.886499   27962 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0920 17:09:57.902951   27962 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 17:09:57.919648   27962 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0920 17:09:57.936776   27962 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0920 17:09:57.940394   27962 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 17:09:57.952344   27962 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 17:09:58.086995   27962 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 17:09:58.104838   27962 host.go:66] Checking if "ha-135993" exists ...
	I0920 17:09:58.105202   27962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:09:58.105252   27962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:09:58.121702   27962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37739
	I0920 17:09:58.122199   27962 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:09:58.122665   27962 main.go:141] libmachine: Using API Version  1
	I0920 17:09:58.122690   27962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:09:58.123042   27962 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:09:58.123222   27962 main.go:141] libmachine: (ha-135993) Calling .DriverName
	I0920 17:09:58.123436   27962 start.go:317] joinCluster: &{Name:ha-135993 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-135993 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.60 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.133 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 17:09:58.123567   27962 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0920 17:09:58.123585   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHHostname
	I0920 17:09:58.126769   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:09:58.127177   27962 main.go:141] libmachine: (ha-135993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:09", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:07:43 +0000 UTC Type:0 Mac:52:54:00:99:26:09 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-135993 Clientid:01:52:54:00:99:26:09}
	I0920 17:09:58.127198   27962 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined IP address 192.168.39.60 and MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:09:58.127380   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHPort
	I0920 17:09:58.127561   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHKeyPath
	I0920 17:09:58.127676   27962 main.go:141] libmachine: (ha-135993) Calling .GetSSHUsername
	I0920 17:09:58.127807   27962 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993/id_rsa Username:docker}
	I0920 17:09:58.304684   27962 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.133 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 17:09:58.304742   27962 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token xem78t.sac8uh54rhaf5wj8 --discovery-token-ca-cert-hash sha256:736f3fb521855813decb4a7214f2b8beff5f81c3971b60decfa20a8807626a2d --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-135993-m03 --control-plane --apiserver-advertise-address=192.168.39.133 --apiserver-bind-port=8443"
	I0920 17:10:20.782828   27962 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token xem78t.sac8uh54rhaf5wj8 --discovery-token-ca-cert-hash sha256:736f3fb521855813decb4a7214f2b8beff5f81c3971b60decfa20a8807626a2d --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-135993-m03 --control-plane --apiserver-advertise-address=192.168.39.133 --apiserver-bind-port=8443": (22.478064097s)
	I0920 17:10:20.782862   27962 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0920 17:10:21.369579   27962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-135993-m03 minikube.k8s.io/updated_at=2024_09_20T17_10_21_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=0626f22cf0d915d75e291a5bce701f94395056e1 minikube.k8s.io/name=ha-135993 minikube.k8s.io/primary=false
	I0920 17:10:21.545661   27962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-135993-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0920 17:10:21.676455   27962 start.go:319] duration metric: took 23.553017419s to joinCluster
	I0920 17:10:21.676541   27962 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.133 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 17:10:21.676981   27962 config.go:182] Loaded profile config "ha-135993": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 17:10:21.678497   27962 out.go:177] * Verifying Kubernetes components...
	I0920 17:10:21.679903   27962 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 17:10:21.961073   27962 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 17:10:21.996476   27962 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19672-8777/kubeconfig
	I0920 17:10:21.996707   27962 kapi.go:59] client config for ha-135993: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/client.crt", KeyFile:"/home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/client.key", CAFile:"/home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f67ea0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0920 17:10:21.996765   27962 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.60:8443
	I0920 17:10:21.996997   27962 node_ready.go:35] waiting up to 6m0s for node "ha-135993-m03" to be "Ready" ...
	I0920 17:10:21.997072   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:21.997080   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:21.997090   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:21.997095   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:22.001181   27962 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 17:10:22.497463   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:22.497485   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:22.497495   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:22.497507   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:22.502449   27962 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 17:10:22.997389   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:22.997418   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:22.997429   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:22.997438   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:23.001501   27962 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 17:10:23.497533   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:23.497557   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:23.497566   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:23.497570   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:23.500839   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:23.997331   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:23.997361   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:23.997370   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:23.997375   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:24.001172   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:24.001662   27962 node_ready.go:53] node "ha-135993-m03" has status "Ready":"False"
	I0920 17:10:24.497248   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:24.497270   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:24.497279   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:24.497284   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:24.501584   27962 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 17:10:24.997441   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:24.997461   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:24.997474   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:24.997479   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:25.001314   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:25.497255   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:25.497284   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:25.497297   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:25.497302   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:25.500828   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:25.997812   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:25.997877   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:25.997892   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:25.997897   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:26.001955   27962 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 17:10:26.002456   27962 node_ready.go:53] node "ha-135993-m03" has status "Ready":"False"
	I0920 17:10:26.497957   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:26.497985   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:26.498009   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:26.498014   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:26.505329   27962 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0920 17:10:26.997635   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:26.997665   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:26.997677   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:26.997681   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:27.001531   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:27.497548   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:27.497572   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:27.497582   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:27.497587   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:27.501038   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:27.998155   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:27.998184   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:27.998196   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:27.998201   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:28.002255   27962 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 17:10:28.002946   27962 node_ready.go:53] node "ha-135993-m03" has status "Ready":"False"
	I0920 17:10:28.497717   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:28.497741   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:28.497752   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:28.497759   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:28.501375   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:28.997522   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:28.997548   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:28.997556   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:28.997562   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:29.002576   27962 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 17:10:29.498184   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:29.498217   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:29.498230   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:29.498237   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:29.502043   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:29.998000   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:29.998032   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:29.998044   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:29.998050   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:30.001668   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:30.497469   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:30.497508   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:30.497521   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:30.497530   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:30.500913   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:30.501381   27962 node_ready.go:53] node "ha-135993-m03" has status "Ready":"False"
	I0920 17:10:30.997662   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:30.997683   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:30.997692   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:30.997696   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:31.001443   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:31.497374   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:31.497396   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:31.497406   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:31.497411   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:31.500970   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:31.998212   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:31.998237   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:31.998245   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:31.998250   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:32.005715   27962 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0920 17:10:32.497621   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:32.497644   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:32.497652   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:32.497656   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:32.501947   27962 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 17:10:32.502498   27962 node_ready.go:53] node "ha-135993-m03" has status "Ready":"False"
	I0920 17:10:32.998138   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:32.998162   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:32.998170   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:32.998174   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:33.002736   27962 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 17:10:33.497634   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:33.497655   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:33.497663   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:33.497669   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:33.501049   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:33.997307   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:33.997332   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:33.997340   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:33.997343   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:34.001271   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:34.497449   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:34.497471   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:34.497479   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:34.497483   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:34.501394   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:34.997478   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:34.997503   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:34.997512   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:34.997518   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:35.001257   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:35.001994   27962 node_ready.go:53] node "ha-135993-m03" has status "Ready":"False"
	I0920 17:10:35.497192   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:35.497221   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:35.497238   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:35.497244   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:35.501544   27962 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 17:10:35.997358   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:35.997383   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:35.997390   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:35.997394   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:36.000988   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:36.498031   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:36.498054   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:36.498064   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:36.498069   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:36.501887   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:36.997545   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:36.997568   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:36.997576   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:36.997579   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:37.001444   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:37.002042   27962 node_ready.go:53] node "ha-135993-m03" has status "Ready":"False"
	I0920 17:10:37.497312   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:37.497339   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:37.497347   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:37.497352   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:37.500690   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:37.997364   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:37.997392   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:37.997402   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:37.997406   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:38.000903   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:38.498015   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:38.498036   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:38.498046   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:38.498053   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:38.501382   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:38.997276   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:38.997298   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:38.997307   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:38.997311   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:39.000962   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:39.497287   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:39.497313   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:39.497323   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:39.497329   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:39.501180   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:39.501915   27962 node_ready.go:53] node "ha-135993-m03" has status "Ready":"False"
	I0920 17:10:39.997251   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:39.997274   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:39.997285   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:39.997291   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:40.000356   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:40.000916   27962 node_ready.go:49] node "ha-135993-m03" has status "Ready":"True"
	I0920 17:10:40.000937   27962 node_ready.go:38] duration metric: took 18.003923058s for node "ha-135993-m03" to be "Ready" ...
	I0920 17:10:40.000949   27962 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 17:10:40.001029   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods
	I0920 17:10:40.001041   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:40.001051   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:40.001059   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:40.007086   27962 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0920 17:10:40.013456   27962 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-gcvg4" in "kube-system" namespace to be "Ready" ...
	I0920 17:10:40.013531   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-gcvg4
	I0920 17:10:40.013539   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:40.013547   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:40.013551   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:40.016217   27962 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 17:10:40.016928   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993
	I0920 17:10:40.016944   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:40.016951   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:40.016954   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:40.019552   27962 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 17:10:40.020302   27962 pod_ready.go:93] pod "coredns-7c65d6cfc9-gcvg4" in "kube-system" namespace has status "Ready":"True"
	I0920 17:10:40.020321   27962 pod_ready.go:82] duration metric: took 6.8416ms for pod "coredns-7c65d6cfc9-gcvg4" in "kube-system" namespace to be "Ready" ...
	I0920 17:10:40.020329   27962 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-kpbhk" in "kube-system" namespace to be "Ready" ...
	I0920 17:10:40.020387   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-kpbhk
	I0920 17:10:40.020395   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:40.020402   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:40.020405   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:40.022739   27962 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 17:10:40.023876   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993
	I0920 17:10:40.023897   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:40.023907   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:40.023914   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:40.026180   27962 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 17:10:40.026617   27962 pod_ready.go:93] pod "coredns-7c65d6cfc9-kpbhk" in "kube-system" namespace has status "Ready":"True"
	I0920 17:10:40.026633   27962 pod_ready.go:82] duration metric: took 6.291183ms for pod "coredns-7c65d6cfc9-kpbhk" in "kube-system" namespace to be "Ready" ...
	I0920 17:10:40.026644   27962 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-135993" in "kube-system" namespace to be "Ready" ...
	I0920 17:10:40.026708   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/etcd-ha-135993
	I0920 17:10:40.026721   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:40.026729   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:40.026733   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:40.029955   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:40.030688   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993
	I0920 17:10:40.030707   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:40.030717   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:40.030724   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:40.033291   27962 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 17:10:40.033722   27962 pod_ready.go:93] pod "etcd-ha-135993" in "kube-system" namespace has status "Ready":"True"
	I0920 17:10:40.033740   27962 pod_ready.go:82] duration metric: took 7.086877ms for pod "etcd-ha-135993" in "kube-system" namespace to be "Ready" ...
	I0920 17:10:40.033752   27962 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-135993-m02" in "kube-system" namespace to be "Ready" ...
	I0920 17:10:40.033808   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/etcd-ha-135993-m02
	I0920 17:10:40.033816   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:40.033823   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:40.033827   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:40.036180   27962 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 17:10:40.036735   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:10:40.036750   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:40.036757   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:40.036761   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:40.039148   27962 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 17:10:40.039672   27962 pod_ready.go:93] pod "etcd-ha-135993-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 17:10:40.039690   27962 pod_ready.go:82] duration metric: took 5.930508ms for pod "etcd-ha-135993-m02" in "kube-system" namespace to be "Ready" ...
	I0920 17:10:40.039699   27962 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-135993-m03" in "kube-system" namespace to be "Ready" ...
	I0920 17:10:40.198080   27962 request.go:632] Waited for 158.310883ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/etcd-ha-135993-m03
	I0920 17:10:40.198142   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/etcd-ha-135993-m03
	I0920 17:10:40.198147   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:40.198156   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:40.198165   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:40.201559   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:40.397955   27962 request.go:632] Waited for 195.344828ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:40.398036   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:40.398047   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:40.398057   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:40.398064   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:40.401572   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:40.402144   27962 pod_ready.go:93] pod "etcd-ha-135993-m03" in "kube-system" namespace has status "Ready":"True"
	I0920 17:10:40.402168   27962 pod_ready.go:82] duration metric: took 362.461912ms for pod "etcd-ha-135993-m03" in "kube-system" namespace to be "Ready" ...
	I0920 17:10:40.402191   27962 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-135993" in "kube-system" namespace to be "Ready" ...
	I0920 17:10:40.598190   27962 request.go:632] Waited for 195.924651ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-135993
	I0920 17:10:40.598265   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-135993
	I0920 17:10:40.598274   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:40.598282   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:40.598292   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:40.601449   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:40.797361   27962 request.go:632] Waited for 195.295556ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes/ha-135993
	I0920 17:10:40.797452   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993
	I0920 17:10:40.797463   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:40.797474   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:40.797479   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:40.800725   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:40.801428   27962 pod_ready.go:93] pod "kube-apiserver-ha-135993" in "kube-system" namespace has status "Ready":"True"
	I0920 17:10:40.801448   27962 pod_ready.go:82] duration metric: took 399.249989ms for pod "kube-apiserver-ha-135993" in "kube-system" namespace to be "Ready" ...
	I0920 17:10:40.801457   27962 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-135993-m02" in "kube-system" namespace to be "Ready" ...
	I0920 17:10:40.997409   27962 request.go:632] Waited for 195.878449ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-135993-m02
	I0920 17:10:40.997467   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-135993-m02
	I0920 17:10:40.997472   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:40.997479   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:40.997488   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:41.001457   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:41.197787   27962 request.go:632] Waited for 195.349078ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:10:41.197860   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:10:41.197871   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:41.197879   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:41.197882   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:41.201485   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:41.202105   27962 pod_ready.go:93] pod "kube-apiserver-ha-135993-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 17:10:41.202124   27962 pod_ready.go:82] duration metric: took 400.661085ms for pod "kube-apiserver-ha-135993-m02" in "kube-system" namespace to be "Ready" ...
	I0920 17:10:41.202133   27962 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-135993-m03" in "kube-system" namespace to be "Ready" ...
	I0920 17:10:41.398233   27962 request.go:632] Waited for 195.997178ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-135993-m03
	I0920 17:10:41.398303   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-135993-m03
	I0920 17:10:41.398378   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:41.398394   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:41.398400   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:41.402317   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:41.597319   27962 request.go:632] Waited for 194.299169ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:41.597378   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:41.597383   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:41.597411   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:41.597417   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:41.600918   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:41.601672   27962 pod_ready.go:93] pod "kube-apiserver-ha-135993-m03" in "kube-system" namespace has status "Ready":"True"
	I0920 17:10:41.601692   27962 pod_ready.go:82] duration metric: took 399.551518ms for pod "kube-apiserver-ha-135993-m03" in "kube-system" namespace to be "Ready" ...
	I0920 17:10:41.601704   27962 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-135993" in "kube-system" namespace to be "Ready" ...
	I0920 17:10:41.797255   27962 request.go:632] Waited for 195.471307ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-135993
	I0920 17:10:41.797312   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-135993
	I0920 17:10:41.797318   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:41.797325   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:41.797329   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:41.801261   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:41.997269   27962 request.go:632] Waited for 195.294616ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes/ha-135993
	I0920 17:10:41.997363   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993
	I0920 17:10:41.997371   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:41.997382   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:41.997392   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:42.001363   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:42.002111   27962 pod_ready.go:93] pod "kube-controller-manager-ha-135993" in "kube-system" namespace has status "Ready":"True"
	I0920 17:10:42.002135   27962 pod_ready.go:82] duration metric: took 400.422144ms for pod "kube-controller-manager-ha-135993" in "kube-system" namespace to be "Ready" ...
	I0920 17:10:42.002152   27962 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-135993-m02" in "kube-system" namespace to be "Ready" ...
	I0920 17:10:42.198137   27962 request.go:632] Waited for 195.883622ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-135993-m02
	I0920 17:10:42.198204   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-135993-m02
	I0920 17:10:42.198211   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:42.198224   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:42.198233   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:42.201776   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:42.397933   27962 request.go:632] Waited for 195.390844ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:10:42.397990   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:10:42.397996   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:42.398003   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:42.398008   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:42.401639   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:42.402402   27962 pod_ready.go:93] pod "kube-controller-manager-ha-135993-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 17:10:42.402423   27962 pod_ready.go:82] duration metric: took 400.260074ms for pod "kube-controller-manager-ha-135993-m02" in "kube-system" namespace to be "Ready" ...
	I0920 17:10:42.402438   27962 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-135993-m03" in "kube-system" namespace to be "Ready" ...
	I0920 17:10:42.597289   27962 request.go:632] Waited for 194.763978ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-135993-m03
	I0920 17:10:42.597371   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-135993-m03
	I0920 17:10:42.597384   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:42.597393   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:42.597400   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:42.601014   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:42.797863   27962 request.go:632] Waited for 195.944092ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:42.797944   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:42.797955   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:42.797965   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:42.797974   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:42.801609   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:42.802166   27962 pod_ready.go:93] pod "kube-controller-manager-ha-135993-m03" in "kube-system" namespace has status "Ready":"True"
	I0920 17:10:42.802184   27962 pod_ready.go:82] duration metric: took 399.739056ms for pod "kube-controller-manager-ha-135993-m03" in "kube-system" namespace to be "Ready" ...
	I0920 17:10:42.802194   27962 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-45c9m" in "kube-system" namespace to be "Ready" ...
	I0920 17:10:42.997304   27962 request.go:632] Waited for 195.040269ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-proxy-45c9m
	I0920 17:10:42.997408   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-proxy-45c9m
	I0920 17:10:42.997421   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:42.997432   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:42.997437   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:43.001257   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:43.198020   27962 request.go:632] Waited for 196.102413ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:43.198085   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:43.198092   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:43.198100   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:43.198106   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:43.201658   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:43.202252   27962 pod_ready.go:93] pod "kube-proxy-45c9m" in "kube-system" namespace has status "Ready":"True"
	I0920 17:10:43.202273   27962 pod_ready.go:82] duration metric: took 400.072197ms for pod "kube-proxy-45c9m" in "kube-system" namespace to be "Ready" ...
	I0920 17:10:43.202287   27962 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-52r49" in "kube-system" namespace to be "Ready" ...
	I0920 17:10:43.397914   27962 request.go:632] Waited for 195.445037ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-proxy-52r49
	I0920 17:10:43.397992   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-proxy-52r49
	I0920 17:10:43.397998   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:43.398005   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:43.398011   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:43.401788   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:43.597874   27962 request.go:632] Waited for 195.37712ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes/ha-135993
	I0920 17:10:43.597952   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993
	I0920 17:10:43.597964   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:43.597978   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:43.597989   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:43.600840   27962 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 17:10:43.601662   27962 pod_ready.go:93] pod "kube-proxy-52r49" in "kube-system" namespace has status "Ready":"True"
	I0920 17:10:43.601684   27962 pod_ready.go:82] duration metric: took 399.386758ms for pod "kube-proxy-52r49" in "kube-system" namespace to be "Ready" ...
	I0920 17:10:43.601693   27962 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-z6xqt" in "kube-system" namespace to be "Ready" ...
	I0920 17:10:43.797664   27962 request.go:632] Waited for 195.909482ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-proxy-z6xqt
	I0920 17:10:43.797730   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-proxy-z6xqt
	I0920 17:10:43.797738   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:43.797745   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:43.797750   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:43.801166   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:43.998193   27962 request.go:632] Waited for 196.396377ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:10:43.998304   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:10:43.998313   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:43.998325   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:43.998334   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:44.001971   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:44.002756   27962 pod_ready.go:93] pod "kube-proxy-z6xqt" in "kube-system" namespace has status "Ready":"True"
	I0920 17:10:44.002782   27962 pod_ready.go:82] duration metric: took 401.080699ms for pod "kube-proxy-z6xqt" in "kube-system" namespace to be "Ready" ...
	I0920 17:10:44.002795   27962 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-135993" in "kube-system" namespace to be "Ready" ...
	I0920 17:10:44.198129   27962 request.go:632] Waited for 195.259225ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-135993
	I0920 17:10:44.198208   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-135993
	I0920 17:10:44.198217   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:44.198225   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:44.198229   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:44.202058   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:44.398232   27962 request.go:632] Waited for 195.373668ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes/ha-135993
	I0920 17:10:44.398304   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993
	I0920 17:10:44.398313   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:44.398322   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:44.398336   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:44.402177   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:44.402890   27962 pod_ready.go:93] pod "kube-scheduler-ha-135993" in "kube-system" namespace has status "Ready":"True"
	I0920 17:10:44.402910   27962 pod_ready.go:82] duration metric: took 400.107134ms for pod "kube-scheduler-ha-135993" in "kube-system" namespace to be "Ready" ...
	I0920 17:10:44.402920   27962 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-135993-m02" in "kube-system" namespace to be "Ready" ...
	I0920 17:10:44.598018   27962 request.go:632] Waited for 195.007589ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-135993-m02
	I0920 17:10:44.598096   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-135993-m02
	I0920 17:10:44.598103   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:44.598114   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:44.598131   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:44.601458   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:44.797367   27962 request.go:632] Waited for 195.276041ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:10:44.797421   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m02
	I0920 17:10:44.797426   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:44.797434   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:44.797437   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:44.800953   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:44.801547   27962 pod_ready.go:93] pod "kube-scheduler-ha-135993-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 17:10:44.801566   27962 pod_ready.go:82] duration metric: took 398.637509ms for pod "kube-scheduler-ha-135993-m02" in "kube-system" namespace to be "Ready" ...
	I0920 17:10:44.801580   27962 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-135993-m03" in "kube-system" namespace to be "Ready" ...
	I0920 17:10:44.997661   27962 request.go:632] Waited for 195.986647ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-135993-m03
	I0920 17:10:44.997741   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-135993-m03
	I0920 17:10:44.997749   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:44.997760   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:44.997769   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:45.001737   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:45.197777   27962 request.go:632] Waited for 195.358869ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:45.197842   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-135993-m03
	I0920 17:10:45.197848   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:45.197858   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:45.197867   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:45.201296   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:45.201689   27962 pod_ready.go:93] pod "kube-scheduler-ha-135993-m03" in "kube-system" namespace has status "Ready":"True"
	I0920 17:10:45.201707   27962 pod_ready.go:82] duration metric: took 400.119509ms for pod "kube-scheduler-ha-135993-m03" in "kube-system" namespace to be "Ready" ...
	I0920 17:10:45.201719   27962 pod_ready.go:39] duration metric: took 5.200758265s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 17:10:45.201733   27962 api_server.go:52] waiting for apiserver process to appear ...
	I0920 17:10:45.201783   27962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 17:10:45.218374   27962 api_server.go:72] duration metric: took 23.541794087s to wait for apiserver process to appear ...
	I0920 17:10:45.218402   27962 api_server.go:88] waiting for apiserver healthz status ...
	I0920 17:10:45.218421   27962 api_server.go:253] Checking apiserver healthz at https://192.168.39.60:8443/healthz ...
	I0920 17:10:45.222904   27962 api_server.go:279] https://192.168.39.60:8443/healthz returned 200:
	ok
	I0920 17:10:45.222982   27962 round_trippers.go:463] GET https://192.168.39.60:8443/version
	I0920 17:10:45.222994   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:45.223006   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:45.223010   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:45.224049   27962 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0920 17:10:45.224222   27962 api_server.go:141] control plane version: v1.31.1
	I0920 17:10:45.224245   27962 api_server.go:131] duration metric: took 5.83633ms to wait for apiserver health ...
	I0920 17:10:45.224256   27962 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 17:10:45.397714   27962 request.go:632] Waited for 173.358789ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods
	I0920 17:10:45.397793   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods
	I0920 17:10:45.397805   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:45.397818   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:45.397824   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:45.404937   27962 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0920 17:10:45.411424   27962 system_pods.go:59] 24 kube-system pods found
	I0920 17:10:45.411457   27962 system_pods.go:61] "coredns-7c65d6cfc9-gcvg4" [899b2b8c-9009-46c0-816b-781e85eb8b19] Running
	I0920 17:10:45.411462   27962 system_pods.go:61] "coredns-7c65d6cfc9-kpbhk" [0dfd9f1a-148c-4dba-884a-8618b74f82d0] Running
	I0920 17:10:45.411466   27962 system_pods.go:61] "etcd-ha-135993" [d5e44451-ab41-4bdf-8f34-c8202bc33cda] Running
	I0920 17:10:45.411470   27962 system_pods.go:61] "etcd-ha-135993-m02" [415d8f47-3bc4-42fc-ae75-b8217f5a731d] Running
	I0920 17:10:45.411473   27962 system_pods.go:61] "etcd-ha-135993-m03" [e5b78345-b51c-4e26-871e-76e190b209b1] Running
	I0920 17:10:45.411476   27962 system_pods.go:61] "kindnet-5m4r8" [c799ac5a-69f7-45d5-a291-61edfd753404] Running
	I0920 17:10:45.411479   27962 system_pods.go:61] "kindnet-6clt2" [d73a0817-d84f-4269-9de0-1532287a07db] Running
	I0920 17:10:45.411483   27962 system_pods.go:61] "kindnet-hcqf8" [727437d8-e050-4785-8bb7-90f8a496a2cb] Running
	I0920 17:10:45.411485   27962 system_pods.go:61] "kube-apiserver-ha-135993" [0c2adef2-0b98-4752-ba6c-0719b723c93c] Running
	I0920 17:10:45.411489   27962 system_pods.go:61] "kube-apiserver-ha-135993-m02" [e74438ac-347a-4f38-ba96-c7687763c669] Running
	I0920 17:10:45.411492   27962 system_pods.go:61] "kube-apiserver-ha-135993-m03" [9b8aa964-0aee-4301-94be-5c3f4e6f67bb] Running
	I0920 17:10:45.411495   27962 system_pods.go:61] "kube-controller-manager-ha-135993" [ba82afa0-d0a1-4515-bd0f-574f03c2a5a5] Running
	I0920 17:10:45.411498   27962 system_pods.go:61] "kube-controller-manager-ha-135993-m02" [ba4a59ae-5348-43b7-b80d-96d5d63afea2] Running
	I0920 17:10:45.411501   27962 system_pods.go:61] "kube-controller-manager-ha-135993-m03" [d8ae5058-1dd2-4012-90ce-75dc09f57a92] Running
	I0920 17:10:45.411504   27962 system_pods.go:61] "kube-proxy-45c9m" [c04f91a8-5bae-4e79-bfca-66b941217821] Running
	I0920 17:10:45.411507   27962 system_pods.go:61] "kube-proxy-52r49" [8d1124bd-e7cb-4239-a29d-c1d5b8870aff] Running
	I0920 17:10:45.411510   27962 system_pods.go:61] "kube-proxy-z6xqt" [70b8c8ab-77cc-4681-a9b1-3c28fc0f2674] Running
	I0920 17:10:45.411514   27962 system_pods.go:61] "kube-scheduler-ha-135993" [25eaf632-035a-44d9-8ce0-e6c3c0e4e0c4] Running
	I0920 17:10:45.411520   27962 system_pods.go:61] "kube-scheduler-ha-135993-m02" [6daee23a-d627-41e8-81b1-ceb4fa70ec3e] Running
	I0920 17:10:45.411522   27962 system_pods.go:61] "kube-scheduler-ha-135993-m03" [3ae2b99d-898a-448c-8e87-2b1e4b5fbae9] Running
	I0920 17:10:45.411525   27962 system_pods.go:61] "kube-vip-ha-135993" [6aa396e1-76b2-4911-bc93-660c51cef03d] Running
	I0920 17:10:45.411528   27962 system_pods.go:61] "kube-vip-ha-135993-m02" [2e9d1f8a-1ce9-46f9-b58b-f13302832062] Running
	I0920 17:10:45.411531   27962 system_pods.go:61] "kube-vip-ha-135993-m03" [ccf084e2-8b4f-4dde-a290-e644d7a0dde3] Running
	I0920 17:10:45.411536   27962 system_pods.go:61] "storage-provisioner" [57137bee-9a7b-4659-a855-0da82d137cb0] Running
	I0920 17:10:45.411542   27962 system_pods.go:74] duration metric: took 187.277251ms to wait for pod list to return data ...
	I0920 17:10:45.411551   27962 default_sa.go:34] waiting for default service account to be created ...
	I0920 17:10:45.597901   27962 request.go:632] Waited for 186.270484ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/default/serviceaccounts
	I0920 17:10:45.597955   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/default/serviceaccounts
	I0920 17:10:45.597961   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:45.597969   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:45.597974   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:45.601352   27962 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 17:10:45.601480   27962 default_sa.go:45] found service account: "default"
	I0920 17:10:45.601500   27962 default_sa.go:55] duration metric: took 189.941966ms for default service account to be created ...
	I0920 17:10:45.601512   27962 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 17:10:45.797900   27962 request.go:632] Waited for 196.315857ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods
	I0920 17:10:45.797971   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods
	I0920 17:10:45.797976   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:45.797983   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:45.797988   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:45.805414   27962 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0920 17:10:45.812236   27962 system_pods.go:86] 24 kube-system pods found
	I0920 17:10:45.812269   27962 system_pods.go:89] "coredns-7c65d6cfc9-gcvg4" [899b2b8c-9009-46c0-816b-781e85eb8b19] Running
	I0920 17:10:45.812275   27962 system_pods.go:89] "coredns-7c65d6cfc9-kpbhk" [0dfd9f1a-148c-4dba-884a-8618b74f82d0] Running
	I0920 17:10:45.812279   27962 system_pods.go:89] "etcd-ha-135993" [d5e44451-ab41-4bdf-8f34-c8202bc33cda] Running
	I0920 17:10:45.812282   27962 system_pods.go:89] "etcd-ha-135993-m02" [415d8f47-3bc4-42fc-ae75-b8217f5a731d] Running
	I0920 17:10:45.812287   27962 system_pods.go:89] "etcd-ha-135993-m03" [e5b78345-b51c-4e26-871e-76e190b209b1] Running
	I0920 17:10:45.812290   27962 system_pods.go:89] "kindnet-5m4r8" [c799ac5a-69f7-45d5-a291-61edfd753404] Running
	I0920 17:10:45.812294   27962 system_pods.go:89] "kindnet-6clt2" [d73a0817-d84f-4269-9de0-1532287a07db] Running
	I0920 17:10:45.812297   27962 system_pods.go:89] "kindnet-hcqf8" [727437d8-e050-4785-8bb7-90f8a496a2cb] Running
	I0920 17:10:45.812301   27962 system_pods.go:89] "kube-apiserver-ha-135993" [0c2adef2-0b98-4752-ba6c-0719b723c93c] Running
	I0920 17:10:45.812304   27962 system_pods.go:89] "kube-apiserver-ha-135993-m02" [e74438ac-347a-4f38-ba96-c7687763c669] Running
	I0920 17:10:45.812308   27962 system_pods.go:89] "kube-apiserver-ha-135993-m03" [9b8aa964-0aee-4301-94be-5c3f4e6f67bb] Running
	I0920 17:10:45.812311   27962 system_pods.go:89] "kube-controller-manager-ha-135993" [ba82afa0-d0a1-4515-bd0f-574f03c2a5a5] Running
	I0920 17:10:45.812314   27962 system_pods.go:89] "kube-controller-manager-ha-135993-m02" [ba4a59ae-5348-43b7-b80d-96d5d63afea2] Running
	I0920 17:10:45.812319   27962 system_pods.go:89] "kube-controller-manager-ha-135993-m03" [d8ae5058-1dd2-4012-90ce-75dc09f57a92] Running
	I0920 17:10:45.812324   27962 system_pods.go:89] "kube-proxy-45c9m" [c04f91a8-5bae-4e79-bfca-66b941217821] Running
	I0920 17:10:45.812328   27962 system_pods.go:89] "kube-proxy-52r49" [8d1124bd-e7cb-4239-a29d-c1d5b8870aff] Running
	I0920 17:10:45.812333   27962 system_pods.go:89] "kube-proxy-z6xqt" [70b8c8ab-77cc-4681-a9b1-3c28fc0f2674] Running
	I0920 17:10:45.812336   27962 system_pods.go:89] "kube-scheduler-ha-135993" [25eaf632-035a-44d9-8ce0-e6c3c0e4e0c4] Running
	I0920 17:10:45.812340   27962 system_pods.go:89] "kube-scheduler-ha-135993-m02" [6daee23a-d627-41e8-81b1-ceb4fa70ec3e] Running
	I0920 17:10:45.812344   27962 system_pods.go:89] "kube-scheduler-ha-135993-m03" [3ae2b99d-898a-448c-8e87-2b1e4b5fbae9] Running
	I0920 17:10:45.812348   27962 system_pods.go:89] "kube-vip-ha-135993" [6aa396e1-76b2-4911-bc93-660c51cef03d] Running
	I0920 17:10:45.812351   27962 system_pods.go:89] "kube-vip-ha-135993-m02" [2e9d1f8a-1ce9-46f9-b58b-f13302832062] Running
	I0920 17:10:45.812354   27962 system_pods.go:89] "kube-vip-ha-135993-m03" [ccf084e2-8b4f-4dde-a290-e644d7a0dde3] Running
	I0920 17:10:45.812360   27962 system_pods.go:89] "storage-provisioner" [57137bee-9a7b-4659-a855-0da82d137cb0] Running
	I0920 17:10:45.812366   27962 system_pods.go:126] duration metric: took 210.848794ms to wait for k8s-apps to be running ...
	I0920 17:10:45.812375   27962 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 17:10:45.812419   27962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 17:10:45.827985   27962 system_svc.go:56] duration metric: took 15.600828ms WaitForService to wait for kubelet
	I0920 17:10:45.828023   27962 kubeadm.go:582] duration metric: took 24.151442817s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 17:10:45.828047   27962 node_conditions.go:102] verifying NodePressure condition ...
	I0920 17:10:45.998195   27962 request.go:632] Waited for 170.064742ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes
	I0920 17:10:45.998254   27962 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes
	I0920 17:10:45.998260   27962 round_trippers.go:469] Request Headers:
	I0920 17:10:45.998267   27962 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:10:45.998275   27962 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:10:46.002746   27962 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 17:10:46.003936   27962 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 17:10:46.003959   27962 node_conditions.go:123] node cpu capacity is 2
	I0920 17:10:46.003973   27962 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 17:10:46.003983   27962 node_conditions.go:123] node cpu capacity is 2
	I0920 17:10:46.003987   27962 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 17:10:46.003992   27962 node_conditions.go:123] node cpu capacity is 2
	I0920 17:10:46.004000   27962 node_conditions.go:105] duration metric: took 175.947788ms to run NodePressure ...
	I0920 17:10:46.004016   27962 start.go:241] waiting for startup goroutines ...
	I0920 17:10:46.004041   27962 start.go:255] writing updated cluster config ...
	I0920 17:10:46.004403   27962 ssh_runner.go:195] Run: rm -f paused
	I0920 17:10:46.058462   27962 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 17:10:46.060232   27962 out.go:177] * Done! kubectl is now configured to use "ha-135993" cluster and "default" namespace by default
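	The pod_ready loop logged above repeatedly GETs each control-plane pod (coredns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler) and its node until the pod reports the Ready condition, then checks apiserver healthz, the default service account, and the kubelet service. The following is a minimal, illustrative client-go sketch of that same readiness check — it is not the code minikube runs, and it assumes a kubeconfig at the default location (~/.kube/config):

	    package main

	    import (
	    	"context"
	    	"fmt"

	    	corev1 "k8s.io/api/core/v1"
	    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	    	"k8s.io/client-go/kubernetes"
	    	"k8s.io/client-go/tools/clientcmd"
	    )

	    func main() {
	    	// Assumption: kubeconfig in the default home location (~/.kube/config).
	    	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	    	if err != nil {
	    		panic(err)
	    	}
	    	clientset, err := kubernetes.NewForConfig(config)
	    	if err != nil {
	    		panic(err)
	    	}

	    	// List kube-system pods and report whether each has the Ready condition,
	    	// mirroring the per-pod "Ready" checks in the log above.
	    	pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	    	if err != nil {
	    		panic(err)
	    	}
	    	for _, pod := range pods.Items {
	    		ready := false
	    		for _, cond := range pod.Status.Conditions {
	    			if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
	    				ready = true
	    			}
	    		}
	    		fmt.Printf("%s Ready=%v\n", pod.Name, ready)
	    	}
	    }

	The client-side throttling messages ("Waited for ... due to client-side throttling, not priority and fairness") come from client-go's default rate limiter and explain the ~200ms gaps between consecutive requests in the log; they are expected and not an error.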
	
	
	==> CRI-O <==
	Sep 20 17:14:33 ha-135993 crio[661]: time="2024-09-20 17:14:33.513733655Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726852473513710507,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=111c8ed6-90c7-4599-ad73-2f7416909473 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 17:14:33 ha-135993 crio[661]: time="2024-09-20 17:14:33.514501946Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ead476b2-4afe-48be-b7c2-02581790eee8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:14:33 ha-135993 crio[661]: time="2024-09-20 17:14:33.514577402Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ead476b2-4afe-48be-b7c2-02581790eee8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:14:33 ha-135993 crio[661]: time="2024-09-20 17:14:33.514816095Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d2a30264a8299336cfb5a4338bab42f8ef663f285ffa7fcb290de88635658a97,PodSandboxId:afa282bba6347297904c2a6423e39984da1a95a2caf8b23dd6857518cc0bb05d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726852250916004479,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-df429,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ef8ed1eb-a7c6-4c49-9e46-924bad4d9577,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36f3e8a4356ff70d6ac1d50a79bc078c32683201ca9c0787097b28f3417fdc90,PodSandboxId:c1cd70ce60a8324767003f8d8ab2c4e9bbafba1edc0b62b8ad54d09201e8b82c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726852104314815295,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57137bee-9a7b-4659-a855-0da82d137cb0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c668f6376655011c61ac0bac2456f3e751f265e08a925a427f7ecfca3d54d97,PodSandboxId:6e8ccc1edc7282309966f52b57c94684aa08f468f2023c8e5a61044327ad0cdb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726852104355492838,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kpbhk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dfd9f1a-148c-4dba-884a-8618b74f82d0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5054778f39bbb435a191b7cbdabfca614d8b6d64b7bce122294d50415d601787,PodSandboxId:6fda3c09e12fee3d9bfbdf163c989e0242d8464e853282bca12154b468cf1a1d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726852104287014426,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gcvg4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 899b2b8c-90
09-46c0-816b-781e85eb8b19,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8792a3b1249ffb1a157682b07dec9a0edf11b81c44e53ea068e0fc6f4548be22,PodSandboxId:ed014d23a111faa2c65032e35b2ef554b33cd4cb8eddbc89e3b77efabb53a5ce,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17268520
92541411891,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6clt2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d73a0817-d84f-4269-9de0-1532287a07db,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4b462c3efaa10b1790ebacd5fe8845d6eb91643cb6db9b2830f33afe1744d57,PodSandboxId:1971096e9fdaaef4dfb856b23a044da810f3ea4c5d05511c7fc53105be9d664b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726852092275090369,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-52r49,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d1124bd-e7cb-4239-a29d-c1d5b8870aff,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a56cd54bb369a6e7651ec4ac3a33220fcfa19226c4ca75ab535aeddbf2811a9,PodSandboxId:bd7dad5ca0acddfe976ec11fd8f4a2cd641ca2b663a812a1209683ee830b901e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726852084813198134,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a641242a9f304152af48553fabd7a110,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b48cf1f03207bb258f12f5506b5e60fd4bb742d7b2ced222664cc2b996ff15d,PodSandboxId:f3f5771528b9cb4aa1d625ba84f09a7820fe3964ceb281b04f73fe5e13f6f894,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726852081654759731,Labels:map[string]string{io.kubernetes.container.
name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fa5fbf3465d3154576a11474b7b8548,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f5eb92cf36b033e765f0681f8a6634251dfd6dc0db3410efd3ef6e8580a8b2f,PodSandboxId:b0a0c7068266ae4cdca9368f9606704a9cf02c9b4e905e2d6c179562e88cf8ba,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726852081628521436,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bce625d942177d9274bb431f7a7012b2,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db80f5e2505940b81bc20c39e98d873b35eff2e71aac748d8b406e21f5435fca,PodSandboxId:77a9434f5f03e0fce6c021d0cb97ce2bbe8dbb697371c561907dc8656adca49c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726852081504080347,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.
kubernetes.pod.name: kube-scheduler-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 086260df87dea88f53b3b3ca08d61864,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e70d74afe0f7f7c1e0f084aba99e9c488a1f999e94866b12072fdcbc24b0dad7,PodSandboxId:74a0a0888b0f65cfdbc3321666aaf8f377feaae2a4e3c132c5ed40ccd4468689,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726852081538334631,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-135993,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f0e3932935569a5c49edd0da5a87eba,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ead476b2-4afe-48be-b7c2-02581790eee8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:14:33 ha-135993 crio[661]: time="2024-09-20 17:14:33.556478026Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ed8dc67e-5a87-43f3-9fd8-14a59b928d39 name=/runtime.v1.RuntimeService/Version
	Sep 20 17:14:33 ha-135993 crio[661]: time="2024-09-20 17:14:33.556563819Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ed8dc67e-5a87-43f3-9fd8-14a59b928d39 name=/runtime.v1.RuntimeService/Version
	Sep 20 17:14:33 ha-135993 crio[661]: time="2024-09-20 17:14:33.557861943Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9a3a6dbe-4d63-4db8-bcf4-8db901f575ac name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 17:14:33 ha-135993 crio[661]: time="2024-09-20 17:14:33.558372832Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726852473558346446,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9a3a6dbe-4d63-4db8-bcf4-8db901f575ac name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 17:14:33 ha-135993 crio[661]: time="2024-09-20 17:14:33.559328138Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ac810043-cd14-424e-8d7e-679e3178e5d0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:14:33 ha-135993 crio[661]: time="2024-09-20 17:14:33.559397570Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ac810043-cd14-424e-8d7e-679e3178e5d0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:14:33 ha-135993 crio[661]: time="2024-09-20 17:14:33.559639026Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d2a30264a8299336cfb5a4338bab42f8ef663f285ffa7fcb290de88635658a97,PodSandboxId:afa282bba6347297904c2a6423e39984da1a95a2caf8b23dd6857518cc0bb05d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726852250916004479,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-df429,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ef8ed1eb-a7c6-4c49-9e46-924bad4d9577,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36f3e8a4356ff70d6ac1d50a79bc078c32683201ca9c0787097b28f3417fdc90,PodSandboxId:c1cd70ce60a8324767003f8d8ab2c4e9bbafba1edc0b62b8ad54d09201e8b82c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726852104314815295,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57137bee-9a7b-4659-a855-0da82d137cb0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c668f6376655011c61ac0bac2456f3e751f265e08a925a427f7ecfca3d54d97,PodSandboxId:6e8ccc1edc7282309966f52b57c94684aa08f468f2023c8e5a61044327ad0cdb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726852104355492838,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kpbhk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dfd9f1a-148c-4dba-884a-8618b74f82d0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5054778f39bbb435a191b7cbdabfca614d8b6d64b7bce122294d50415d601787,PodSandboxId:6fda3c09e12fee3d9bfbdf163c989e0242d8464e853282bca12154b468cf1a1d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726852104287014426,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gcvg4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 899b2b8c-90
09-46c0-816b-781e85eb8b19,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8792a3b1249ffb1a157682b07dec9a0edf11b81c44e53ea068e0fc6f4548be22,PodSandboxId:ed014d23a111faa2c65032e35b2ef554b33cd4cb8eddbc89e3b77efabb53a5ce,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17268520
92541411891,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6clt2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d73a0817-d84f-4269-9de0-1532287a07db,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4b462c3efaa10b1790ebacd5fe8845d6eb91643cb6db9b2830f33afe1744d57,PodSandboxId:1971096e9fdaaef4dfb856b23a044da810f3ea4c5d05511c7fc53105be9d664b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726852092275090369,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-52r49,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d1124bd-e7cb-4239-a29d-c1d5b8870aff,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a56cd54bb369a6e7651ec4ac3a33220fcfa19226c4ca75ab535aeddbf2811a9,PodSandboxId:bd7dad5ca0acddfe976ec11fd8f4a2cd641ca2b663a812a1209683ee830b901e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726852084813198134,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a641242a9f304152af48553fabd7a110,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b48cf1f03207bb258f12f5506b5e60fd4bb742d7b2ced222664cc2b996ff15d,PodSandboxId:f3f5771528b9cb4aa1d625ba84f09a7820fe3964ceb281b04f73fe5e13f6f894,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726852081654759731,Labels:map[string]string{io.kubernetes.container.
name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fa5fbf3465d3154576a11474b7b8548,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f5eb92cf36b033e765f0681f8a6634251dfd6dc0db3410efd3ef6e8580a8b2f,PodSandboxId:b0a0c7068266ae4cdca9368f9606704a9cf02c9b4e905e2d6c179562e88cf8ba,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726852081628521436,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bce625d942177d9274bb431f7a7012b2,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db80f5e2505940b81bc20c39e98d873b35eff2e71aac748d8b406e21f5435fca,PodSandboxId:77a9434f5f03e0fce6c021d0cb97ce2bbe8dbb697371c561907dc8656adca49c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726852081504080347,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.
kubernetes.pod.name: kube-scheduler-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 086260df87dea88f53b3b3ca08d61864,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e70d74afe0f7f7c1e0f084aba99e9c488a1f999e94866b12072fdcbc24b0dad7,PodSandboxId:74a0a0888b0f65cfdbc3321666aaf8f377feaae2a4e3c132c5ed40ccd4468689,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726852081538334631,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-135993,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f0e3932935569a5c49edd0da5a87eba,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ac810043-cd14-424e-8d7e-679e3178e5d0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:14:33 ha-135993 crio[661]: time="2024-09-20 17:14:33.601334957Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e8ca655f-a903-4276-a637-d4b7718375a2 name=/runtime.v1.RuntimeService/Version
	Sep 20 17:14:33 ha-135993 crio[661]: time="2024-09-20 17:14:33.601430979Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e8ca655f-a903-4276-a637-d4b7718375a2 name=/runtime.v1.RuntimeService/Version
	Sep 20 17:14:33 ha-135993 crio[661]: time="2024-09-20 17:14:33.602877173Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8024814a-7759-4e6a-b445-7cefa23f88e3 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 17:14:33 ha-135993 crio[661]: time="2024-09-20 17:14:33.603390517Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726852473603365945,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8024814a-7759-4e6a-b445-7cefa23f88e3 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 17:14:33 ha-135993 crio[661]: time="2024-09-20 17:14:33.603984545Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e9efafe5-d5c4-47b8-b951-47da18558c64 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:14:33 ha-135993 crio[661]: time="2024-09-20 17:14:33.604048081Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e9efafe5-d5c4-47b8-b951-47da18558c64 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:14:33 ha-135993 crio[661]: time="2024-09-20 17:14:33.604357000Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d2a30264a8299336cfb5a4338bab42f8ef663f285ffa7fcb290de88635658a97,PodSandboxId:afa282bba6347297904c2a6423e39984da1a95a2caf8b23dd6857518cc0bb05d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726852250916004479,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-df429,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ef8ed1eb-a7c6-4c49-9e46-924bad4d9577,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36f3e8a4356ff70d6ac1d50a79bc078c32683201ca9c0787097b28f3417fdc90,PodSandboxId:c1cd70ce60a8324767003f8d8ab2c4e9bbafba1edc0b62b8ad54d09201e8b82c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726852104314815295,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57137bee-9a7b-4659-a855-0da82d137cb0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c668f6376655011c61ac0bac2456f3e751f265e08a925a427f7ecfca3d54d97,PodSandboxId:6e8ccc1edc7282309966f52b57c94684aa08f468f2023c8e5a61044327ad0cdb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726852104355492838,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kpbhk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dfd9f1a-148c-4dba-884a-8618b74f82d0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5054778f39bbb435a191b7cbdabfca614d8b6d64b7bce122294d50415d601787,PodSandboxId:6fda3c09e12fee3d9bfbdf163c989e0242d8464e853282bca12154b468cf1a1d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726852104287014426,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gcvg4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 899b2b8c-90
09-46c0-816b-781e85eb8b19,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8792a3b1249ffb1a157682b07dec9a0edf11b81c44e53ea068e0fc6f4548be22,PodSandboxId:ed014d23a111faa2c65032e35b2ef554b33cd4cb8eddbc89e3b77efabb53a5ce,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17268520
92541411891,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6clt2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d73a0817-d84f-4269-9de0-1532287a07db,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4b462c3efaa10b1790ebacd5fe8845d6eb91643cb6db9b2830f33afe1744d57,PodSandboxId:1971096e9fdaaef4dfb856b23a044da810f3ea4c5d05511c7fc53105be9d664b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726852092275090369,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-52r49,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d1124bd-e7cb-4239-a29d-c1d5b8870aff,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a56cd54bb369a6e7651ec4ac3a33220fcfa19226c4ca75ab535aeddbf2811a9,PodSandboxId:bd7dad5ca0acddfe976ec11fd8f4a2cd641ca2b663a812a1209683ee830b901e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726852084813198134,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a641242a9f304152af48553fabd7a110,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b48cf1f03207bb258f12f5506b5e60fd4bb742d7b2ced222664cc2b996ff15d,PodSandboxId:f3f5771528b9cb4aa1d625ba84f09a7820fe3964ceb281b04f73fe5e13f6f894,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726852081654759731,Labels:map[string]string{io.kubernetes.container.
name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fa5fbf3465d3154576a11474b7b8548,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f5eb92cf36b033e765f0681f8a6634251dfd6dc0db3410efd3ef6e8580a8b2f,PodSandboxId:b0a0c7068266ae4cdca9368f9606704a9cf02c9b4e905e2d6c179562e88cf8ba,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726852081628521436,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bce625d942177d9274bb431f7a7012b2,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db80f5e2505940b81bc20c39e98d873b35eff2e71aac748d8b406e21f5435fca,PodSandboxId:77a9434f5f03e0fce6c021d0cb97ce2bbe8dbb697371c561907dc8656adca49c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726852081504080347,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.
kubernetes.pod.name: kube-scheduler-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 086260df87dea88f53b3b3ca08d61864,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e70d74afe0f7f7c1e0f084aba99e9c488a1f999e94866b12072fdcbc24b0dad7,PodSandboxId:74a0a0888b0f65cfdbc3321666aaf8f377feaae2a4e3c132c5ed40ccd4468689,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726852081538334631,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-135993,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f0e3932935569a5c49edd0da5a87eba,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e9efafe5-d5c4-47b8-b951-47da18558c64 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:14:33 ha-135993 crio[661]: time="2024-09-20 17:14:33.640835384Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5ae3574d-edb9-43f9-9ca2-623fc7eb30c8 name=/runtime.v1.RuntimeService/Version
	Sep 20 17:14:33 ha-135993 crio[661]: time="2024-09-20 17:14:33.640918969Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5ae3574d-edb9-43f9-9ca2-623fc7eb30c8 name=/runtime.v1.RuntimeService/Version
	Sep 20 17:14:33 ha-135993 crio[661]: time="2024-09-20 17:14:33.642184013Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=00552ea0-850b-4802-af29-ec6fe364c0e3 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 17:14:33 ha-135993 crio[661]: time="2024-09-20 17:14:33.642775367Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726852473642749906,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=00552ea0-850b-4802-af29-ec6fe364c0e3 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 17:14:33 ha-135993 crio[661]: time="2024-09-20 17:14:33.643274229Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6fc0ac62-8afb-4e11-bee3-4008ad86a664 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:14:33 ha-135993 crio[661]: time="2024-09-20 17:14:33.643330423Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6fc0ac62-8afb-4e11-bee3-4008ad86a664 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:14:33 ha-135993 crio[661]: time="2024-09-20 17:14:33.643598707Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d2a30264a8299336cfb5a4338bab42f8ef663f285ffa7fcb290de88635658a97,PodSandboxId:afa282bba6347297904c2a6423e39984da1a95a2caf8b23dd6857518cc0bb05d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726852250916004479,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-df429,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ef8ed1eb-a7c6-4c49-9e46-924bad4d9577,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36f3e8a4356ff70d6ac1d50a79bc078c32683201ca9c0787097b28f3417fdc90,PodSandboxId:c1cd70ce60a8324767003f8d8ab2c4e9bbafba1edc0b62b8ad54d09201e8b82c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726852104314815295,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57137bee-9a7b-4659-a855-0da82d137cb0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c668f6376655011c61ac0bac2456f3e751f265e08a925a427f7ecfca3d54d97,PodSandboxId:6e8ccc1edc7282309966f52b57c94684aa08f468f2023c8e5a61044327ad0cdb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726852104355492838,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kpbhk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dfd9f1a-148c-4dba-884a-8618b74f82d0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5054778f39bbb435a191b7cbdabfca614d8b6d64b7bce122294d50415d601787,PodSandboxId:6fda3c09e12fee3d9bfbdf163c989e0242d8464e853282bca12154b468cf1a1d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726852104287014426,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gcvg4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 899b2b8c-90
09-46c0-816b-781e85eb8b19,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8792a3b1249ffb1a157682b07dec9a0edf11b81c44e53ea068e0fc6f4548be22,PodSandboxId:ed014d23a111faa2c65032e35b2ef554b33cd4cb8eddbc89e3b77efabb53a5ce,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17268520
92541411891,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6clt2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d73a0817-d84f-4269-9de0-1532287a07db,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4b462c3efaa10b1790ebacd5fe8845d6eb91643cb6db9b2830f33afe1744d57,PodSandboxId:1971096e9fdaaef4dfb856b23a044da810f3ea4c5d05511c7fc53105be9d664b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726852092275090369,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-52r49,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d1124bd-e7cb-4239-a29d-c1d5b8870aff,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a56cd54bb369a6e7651ec4ac3a33220fcfa19226c4ca75ab535aeddbf2811a9,PodSandboxId:bd7dad5ca0acddfe976ec11fd8f4a2cd641ca2b663a812a1209683ee830b901e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726852084813198134,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a641242a9f304152af48553fabd7a110,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b48cf1f03207bb258f12f5506b5e60fd4bb742d7b2ced222664cc2b996ff15d,PodSandboxId:f3f5771528b9cb4aa1d625ba84f09a7820fe3964ceb281b04f73fe5e13f6f894,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726852081654759731,Labels:map[string]string{io.kubernetes.container.
name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fa5fbf3465d3154576a11474b7b8548,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f5eb92cf36b033e765f0681f8a6634251dfd6dc0db3410efd3ef6e8580a8b2f,PodSandboxId:b0a0c7068266ae4cdca9368f9606704a9cf02c9b4e905e2d6c179562e88cf8ba,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726852081628521436,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bce625d942177d9274bb431f7a7012b2,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db80f5e2505940b81bc20c39e98d873b35eff2e71aac748d8b406e21f5435fca,PodSandboxId:77a9434f5f03e0fce6c021d0cb97ce2bbe8dbb697371c561907dc8656adca49c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726852081504080347,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.
kubernetes.pod.name: kube-scheduler-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 086260df87dea88f53b3b3ca08d61864,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e70d74afe0f7f7c1e0f084aba99e9c488a1f999e94866b12072fdcbc24b0dad7,PodSandboxId:74a0a0888b0f65cfdbc3321666aaf8f377feaae2a4e3c132c5ed40ccd4468689,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726852081538334631,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-135993,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f0e3932935569a5c49edd0da5a87eba,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6fc0ac62-8afb-4e11-bee3-4008ad86a664 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d2a30264a8299       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   afa282bba6347       busybox-7dff88458-df429
	7c668f6376655       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   6e8ccc1edc728       coredns-7c65d6cfc9-kpbhk
	36f3e8a4356ff       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   c1cd70ce60a83       storage-provisioner
	5054778f39bbb       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   6fda3c09e12fe       coredns-7c65d6cfc9-gcvg4
	8792a3b1249ff       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      6 minutes ago       Running             kindnet-cni               0                   ed014d23a111f       kindnet-6clt2
	e4b462c3efaa1       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      6 minutes ago       Running             kube-proxy                0                   1971096e9fdaa       kube-proxy-52r49
	1a56cd54bb369       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago       Running             kube-vip                  0                   bd7dad5ca0acd       kube-vip-ha-135993
	2b48cf1f03207       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      6 minutes ago       Running             kube-controller-manager   0                   f3f5771528b9c       kube-controller-manager-ha-135993
	1f5eb92cf36b0       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      6 minutes ago       Running             kube-apiserver            0                   b0a0c7068266a       kube-apiserver-ha-135993
	e70d74afe0f7f       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   74a0a0888b0f6       etcd-ha-135993
	db80f5e250594       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      6 minutes ago       Running             kube-scheduler            0                   77a9434f5f03e       kube-scheduler-ha-135993
	
	
	==> coredns [5054778f39bbb435a191b7cbdabfca614d8b6d64b7bce122294d50415d601787] <==
	[INFO] 10.244.0.4:37855 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001838356s
	[INFO] 10.244.0.4:49834 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000062858s
	[INFO] 10.244.0.4:37202 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001240214s
	[INFO] 10.244.0.4:56343 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000095387s
	[INFO] 10.244.0.4:41974 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000080526s
	[INFO] 10.244.2.2:50089 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000170402s
	[INFO] 10.244.2.2:41205 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001201877s
	[INFO] 10.244.2.2:49094 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000154615s
	[INFO] 10.244.2.2:54226 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000116561s
	[INFO] 10.244.2.2:56885 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000137064s
	[INFO] 10.244.1.2:43199 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000133082s
	[INFO] 10.244.1.2:54300 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000122573s
	[INFO] 10.244.1.2:57535 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000095892s
	[INFO] 10.244.1.2:45845 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000088385s
	[INFO] 10.244.0.4:53452 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000193594s
	[INFO] 10.244.0.4:46571 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000075164s
	[INFO] 10.244.2.2:44125 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000166147s
	[INFO] 10.244.2.2:59364 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000113432s
	[INFO] 10.244.2.2:54562 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000112311s
	[INFO] 10.244.1.2:60066 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132637s
	[INFO] 10.244.1.2:43717 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00017413s
	[INFO] 10.244.1.2:51684 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000156522s
	[INFO] 10.244.0.4:56213 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000141144s
	[INFO] 10.244.2.2:56175 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000117658s
	[INFO] 10.244.2.2:59810 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000111868s
	
	
	==> coredns [7c668f6376655011c61ac0bac2456f3e751f265e08a925a427f7ecfca3d54d97] <==
	[INFO] 10.244.0.4:48619 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00021775s
	[INFO] 10.244.0.4:46660 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000082726s
	[INFO] 10.244.2.2:38551 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.001366629s
	[INFO] 10.244.2.2:52956 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001396555s
	[INFO] 10.244.1.2:37231 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000279388s
	[INFO] 10.244.1.2:48508 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000280908s
	[INFO] 10.244.1.2:47714 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004766702s
	[INFO] 10.244.1.2:42041 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000169898s
	[INFO] 10.244.1.2:35115 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000212804s
	[INFO] 10.244.1.2:39956 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000247275s
	[INFO] 10.244.0.4:46191 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134745s
	[INFO] 10.244.0.4:49235 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000135262s
	[INFO] 10.244.0.4:33483 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000051965s
	[INFO] 10.244.2.2:40337 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000151683s
	[INFO] 10.244.2.2:54318 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001827239s
	[INFO] 10.244.2.2:58127 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000121998s
	[INFO] 10.244.0.4:54582 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000104228s
	[INFO] 10.244.0.4:57447 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000174115s
	[INFO] 10.244.2.2:39583 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000117382s
	[INFO] 10.244.1.2:55713 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00021321s
	[INFO] 10.244.0.4:57049 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000099997s
	[INFO] 10.244.0.4:39453 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000227319s
	[INFO] 10.244.0.4:46666 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000102501s
	[INFO] 10.244.2.2:49743 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000159057s
	[INFO] 10.244.2.2:55499 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000197724s
	
	
	==> describe nodes <==
	Name:               ha-135993
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-135993
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0626f22cf0d915d75e291a5bce701f94395056e1
	                    minikube.k8s.io/name=ha-135993
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_20T17_08_08_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 17:08:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-135993
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 17:14:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 17:11:12 +0000   Fri, 20 Sep 2024 17:08:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 17:11:12 +0000   Fri, 20 Sep 2024 17:08:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 17:11:12 +0000   Fri, 20 Sep 2024 17:08:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 17:11:12 +0000   Fri, 20 Sep 2024 17:08:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.60
	  Hostname:    ha-135993
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6e83ceee6b834466a3a10733ff3c06b4
	  System UUID:                6e83ceee-6b83-4466-a3a1-0733ff3c06b4
	  Boot ID:                    ddcdaa90-2381-4c26-932e-b18d04f91d88
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-df429              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m47s
	  kube-system                 coredns-7c65d6cfc9-gcvg4             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m21s
	  kube-system                 coredns-7c65d6cfc9-kpbhk             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m21s
	  kube-system                 etcd-ha-135993                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m26s
	  kube-system                 kindnet-6clt2                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m22s
	  kube-system                 kube-apiserver-ha-135993             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m26s
	  kube-system                 kube-controller-manager-ha-135993    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m26s
	  kube-system                 kube-proxy-52r49                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m22s
	  kube-system                 kube-scheduler-ha-135993             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m26s
	  kube-system                 kube-vip-ha-135993                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m28s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m21s  kube-proxy       
	  Normal  Starting                 6m26s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m26s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m26s  kubelet          Node ha-135993 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m26s  kubelet          Node ha-135993 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m26s  kubelet          Node ha-135993 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m22s  node-controller  Node ha-135993 event: Registered Node ha-135993 in Controller
	  Normal  NodeReady                6m10s  kubelet          Node ha-135993 status is now: NodeReady
	  Normal  RegisteredNode           5m22s  node-controller  Node ha-135993 event: Registered Node ha-135993 in Controller
	  Normal  RegisteredNode           4m7s   node-controller  Node ha-135993 event: Registered Node ha-135993 in Controller
	
	
	Name:               ha-135993-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-135993-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0626f22cf0d915d75e291a5bce701f94395056e1
	                    minikube.k8s.io/name=ha-135993
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_20T17_09_05_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 17:09:03 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-135993-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 17:11:56 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 20 Sep 2024 17:11:05 +0000   Fri, 20 Sep 2024 17:12:36 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 20 Sep 2024 17:11:05 +0000   Fri, 20 Sep 2024 17:12:36 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 20 Sep 2024 17:11:05 +0000   Fri, 20 Sep 2024 17:12:36 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 20 Sep 2024 17:11:05 +0000   Fri, 20 Sep 2024 17:12:36 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.227
	  Hostname:    ha-135993-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 50c529298e8f4fbb9207cda8fc4b8abe
	  System UUID:                50c52929-8e8f-4fbb-9207-cda8fc4b8abe
	  Boot ID:                    7739b1d1-ac71-4753-b570-c987dc1deaff
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-cw8r4                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m47s
	  kube-system                 etcd-ha-135993-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m28s
	  kube-system                 kindnet-5m4r8                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m30s
	  kube-system                 kube-apiserver-ha-135993-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m28s
	  kube-system                 kube-controller-manager-ha-135993-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m23s
	  kube-system                 kube-proxy-z6xqt                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m30s
	  kube-system                 kube-scheduler-ha-135993-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m28s
	  kube-system                 kube-vip-ha-135993-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m26s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  5m31s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  CIDRAssignmentFailed     5m30s                  cidrAllocator    Node ha-135993-m02 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientMemory  5m30s (x8 over 5m31s)  kubelet          Node ha-135993-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m30s (x8 over 5m31s)  kubelet          Node ha-135993-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m30s (x7 over 5m31s)  kubelet          Node ha-135993-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m27s                  node-controller  Node ha-135993-m02 event: Registered Node ha-135993-m02 in Controller
	  Normal  RegisteredNode           5m22s                  node-controller  Node ha-135993-m02 event: Registered Node ha-135993-m02 in Controller
	  Normal  RegisteredNode           4m7s                   node-controller  Node ha-135993-m02 event: Registered Node ha-135993-m02 in Controller
	  Normal  NodeNotReady             117s                   node-controller  Node ha-135993-m02 status is now: NodeNotReady
	
	
	Name:               ha-135993-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-135993-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0626f22cf0d915d75e291a5bce701f94395056e1
	                    minikube.k8s.io/name=ha-135993
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_20T17_10_21_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 17:10:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-135993-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 17:14:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 17:11:19 +0000   Fri, 20 Sep 2024 17:10:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 17:11:19 +0000   Fri, 20 Sep 2024 17:10:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 17:11:19 +0000   Fri, 20 Sep 2024 17:10:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 17:11:19 +0000   Fri, 20 Sep 2024 17:10:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.133
	  Hostname:    ha-135993-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a16666848f8545f6bbb9419c97d0a0cd
	  System UUID:                a1666684-8f85-45f6-bbb9-419c97d0a0cd
	  Boot ID:                    fe050582-04ee-4cce-a278-cfc26db3e639
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-ksx56                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m48s
	  kube-system                 etcd-ha-135993-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m14s
	  kube-system                 kindnet-hcqf8                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m16s
	  kube-system                 kube-apiserver-ha-135993-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m15s
	  kube-system                 kube-controller-manager-ha-135993-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m15s
	  kube-system                 kube-proxy-45c9m                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	  kube-system                 kube-scheduler-ha-135993-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m14s
	  kube-system                 kube-vip-ha-135993-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m11s                  kube-proxy       
	  Normal  CIDRAssignmentFailed     4m16s                  cidrAllocator    Node ha-135993-m03 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientMemory  4m16s (x8 over 4m16s)  kubelet          Node ha-135993-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m16s (x8 over 4m16s)  kubelet          Node ha-135993-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m16s (x7 over 4m16s)  kubelet          Node ha-135993-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m16s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m13s                  node-controller  Node ha-135993-m03 event: Registered Node ha-135993-m03 in Controller
	  Normal  RegisteredNode           4m13s                  node-controller  Node ha-135993-m03 event: Registered Node ha-135993-m03 in Controller
	  Normal  RegisteredNode           4m8s                   node-controller  Node ha-135993-m03 event: Registered Node ha-135993-m03 in Controller
	
	
	Name:               ha-135993-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-135993-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0626f22cf0d915d75e291a5bce701f94395056e1
	                    minikube.k8s.io/name=ha-135993
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_20T17_11_21_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 17:11:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-135993-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 17:14:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 17:11:51 +0000   Fri, 20 Sep 2024 17:11:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 17:11:51 +0000   Fri, 20 Sep 2024 17:11:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 17:11:51 +0000   Fri, 20 Sep 2024 17:11:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 17:11:51 +0000   Fri, 20 Sep 2024 17:11:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.101
	  Hostname:    ha-135993-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 16a282b7a18241dba73a5c13e70f4f98
	  System UUID:                16a282b7-a182-41db-a73a-5c13e70f4f98
	  Boot ID:                    57ea2493-1758-4be8-813f-bc554e901359
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.4.0/24
	PodCIDRs:                     10.244.4.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-88sbs       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m13s
	  kube-system                 kube-proxy-2q8mx    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m8s                   kube-proxy       
	  Normal  RegisteredNode           3m13s                  node-controller  Node ha-135993-m04 event: Registered Node ha-135993-m04 in Controller
	  Normal  RegisteredNode           3m13s                  node-controller  Node ha-135993-m04 event: Registered Node ha-135993-m04 in Controller
	  Normal  CIDRAssignmentFailed     3m13s                  cidrAllocator    Node ha-135993-m04 status is now: CIDRAssignmentFailed
	  Normal  CIDRAssignmentFailed     3m13s                  cidrAllocator    Node ha-135993-m04 status is now: CIDRAssignmentFailed
	  Normal  RegisteredNode           3m13s                  node-controller  Node ha-135993-m04 event: Registered Node ha-135993-m04 in Controller
	  Normal  NodeHasSufficientMemory  3m13s (x2 over 3m14s)  kubelet          Node ha-135993-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m13s (x2 over 3m14s)  kubelet          Node ha-135993-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m13s (x2 over 3m14s)  kubelet          Node ha-135993-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m13s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m54s                  kubelet          Node ha-135993-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Sep20 17:07] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051754] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036812] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.151587] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.924820] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.564513] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.722394] systemd-fstab-generator[585]: Ignoring "noauto" option for root device
	[  +0.057997] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064240] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.169257] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.120861] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.270464] systemd-fstab-generator[652]: Ignoring "noauto" option for root device
	[  +4.125709] systemd-fstab-generator[747]: Ignoring "noauto" option for root device
	[Sep20 17:08] systemd-fstab-generator[882]: Ignoring "noauto" option for root device
	[  +0.057676] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.984086] systemd-fstab-generator[1298]: Ignoring "noauto" option for root device
	[  +0.083524] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.134244] kauditd_printk_skb: 33 callbacks suppressed
	[ +11.488548] kauditd_printk_skb: 26 callbacks suppressed
	[Sep20 17:09] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [e70d74afe0f7f7c1e0f084aba99e9c488a1f999e94866b12072fdcbc24b0dad7] <==
	{"level":"warn","ts":"2024-09-20T17:14:33.930439Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"a8909dfe06e7dd47","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T17:14:33.937684Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"a8909dfe06e7dd47","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T17:14:33.951934Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"a8909dfe06e7dd47","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T17:14:33.954824Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"a8909dfe06e7dd47","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T17:14:33.956042Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"a8909dfe06e7dd47","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T17:14:33.960021Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"a8909dfe06e7dd47","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T17:14:33.967537Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"a8909dfe06e7dd47","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T17:14:33.975092Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"a8909dfe06e7dd47","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T17:14:33.982636Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"a8909dfe06e7dd47","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T17:14:33.987417Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"a8909dfe06e7dd47","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T17:14:33.992179Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"a8909dfe06e7dd47","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T17:14:33.999614Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"a8909dfe06e7dd47","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T17:14:34.005288Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"a8909dfe06e7dd47","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T17:14:34.011292Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"a8909dfe06e7dd47","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T17:14:34.018274Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"a8909dfe06e7dd47","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T17:14:34.022415Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"a8909dfe06e7dd47","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T17:14:34.026274Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"a8909dfe06e7dd47","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T17:14:34.030561Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"a8909dfe06e7dd47","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T17:14:34.034444Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"a8909dfe06e7dd47","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T17:14:34.038771Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"a8909dfe06e7dd47","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T17:14:34.046607Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"a8909dfe06e7dd47","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T17:14:34.054161Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"a8909dfe06e7dd47","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T17:14:34.054944Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"a8909dfe06e7dd47","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T17:14:34.055528Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"a8909dfe06e7dd47","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T17:14:34.101471Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"a8909dfe06e7dd47","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 17:14:34 up 7 min,  0 users,  load average: 0.10, 0.25, 0.15
	Linux ha-135993 5.10.207 #1 SMP Fri Sep 20 03:13:51 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [8792a3b1249ffb1a157682b07dec9a0edf11b81c44e53ea068e0fc6f4548be22] <==
	I0920 17:14:03.590843       1 main.go:322] Node ha-135993-m04 has CIDR [10.244.4.0/24] 
	I0920 17:14:13.583195       1 main.go:295] Handling node with IPs: map[192.168.39.60:{}]
	I0920 17:14:13.583335       1 main.go:299] handling current node
	I0920 17:14:13.583420       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0920 17:14:13.583466       1 main.go:322] Node ha-135993-m02 has CIDR [10.244.1.0/24] 
	I0920 17:14:13.583620       1 main.go:295] Handling node with IPs: map[192.168.39.133:{}]
	I0920 17:14:13.583644       1 main.go:322] Node ha-135993-m03 has CIDR [10.244.2.0/24] 
	I0920 17:14:13.583702       1 main.go:295] Handling node with IPs: map[192.168.39.101:{}]
	I0920 17:14:13.583720       1 main.go:322] Node ha-135993-m04 has CIDR [10.244.4.0/24] 
	I0920 17:14:23.591621       1 main.go:295] Handling node with IPs: map[192.168.39.60:{}]
	I0920 17:14:23.591791       1 main.go:299] handling current node
	I0920 17:14:23.591830       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0920 17:14:23.591879       1 main.go:322] Node ha-135993-m02 has CIDR [10.244.1.0/24] 
	I0920 17:14:23.592101       1 main.go:295] Handling node with IPs: map[192.168.39.133:{}]
	I0920 17:14:23.592144       1 main.go:322] Node ha-135993-m03 has CIDR [10.244.2.0/24] 
	I0920 17:14:23.592330       1 main.go:295] Handling node with IPs: map[192.168.39.101:{}]
	I0920 17:14:23.592360       1 main.go:322] Node ha-135993-m04 has CIDR [10.244.4.0/24] 
	I0920 17:14:33.591634       1 main.go:295] Handling node with IPs: map[192.168.39.101:{}]
	I0920 17:14:33.591696       1 main.go:322] Node ha-135993-m04 has CIDR [10.244.4.0/24] 
	I0920 17:14:33.591862       1 main.go:295] Handling node with IPs: map[192.168.39.60:{}]
	I0920 17:14:33.591882       1 main.go:299] handling current node
	I0920 17:14:33.591902       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0920 17:14:33.591907       1 main.go:322] Node ha-135993-m02 has CIDR [10.244.1.0/24] 
	I0920 17:14:33.591963       1 main.go:295] Handling node with IPs: map[192.168.39.133:{}]
	I0920 17:14:33.591978       1 main.go:322] Node ha-135993-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [1f5eb92cf36b033e765f0681f8a6634251dfd6dc0db3410efd3ef6e8580a8b2f] <==
	I0920 17:08:07.820550       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0920 17:08:07.842885       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0920 17:08:07.862886       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0920 17:08:11.804724       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0920 17:08:12.220544       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0920 17:09:03.875074       1 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E0920 17:09:03.875307       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 9.525µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E0920 17:09:03.876629       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E0920 17:09:03.877931       1 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E0920 17:09:03.879420       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="4.477542ms" method="POST" path="/api/v1/namespaces/kube-system/events" result=null
	E0920 17:10:52.052815       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53414: use of closed network connection
	E0920 17:10:52.239817       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53432: use of closed network connection
	E0920 17:10:52.430950       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53454: use of closed network connection
	E0920 17:10:52.630448       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53478: use of closed network connection
	E0920 17:10:52.817389       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53506: use of closed network connection
	E0920 17:10:52.989544       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53526: use of closed network connection
	E0920 17:10:53.190104       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53554: use of closed network connection
	E0920 17:10:53.362503       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53570: use of closed network connection
	E0920 17:10:53.531925       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53576: use of closed network connection
	E0920 17:10:53.828718       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53614: use of closed network connection
	E0920 17:10:53.999814       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53638: use of closed network connection
	E0920 17:10:54.192818       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53650: use of closed network connection
	E0920 17:10:54.370009       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53670: use of closed network connection
	E0920 17:10:54.550881       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53696: use of closed network connection
	E0920 17:10:54.730661       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53720: use of closed network connection
	
	
	==> kube-controller-manager [2b48cf1f03207bb258f12f5506b5e60fd4bb742d7b2ced222664cc2b996ff15d] <==
	E0920 17:11:20.808313       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-vs5cl failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-vs5cl\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E0920 17:11:20.822359       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-vs5cl failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-vs5cl\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0920 17:11:21.218667       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-135993-m04\" does not exist"
	I0920 17:11:21.266531       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-135993-m04" podCIDRs=["10.244.4.0/24"]
	I0920 17:11:21.268323       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-135993-m04"
	I0920 17:11:21.270125       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-135993-m04"
	I0920 17:11:21.352675       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-135993-m04"
	I0920 17:11:21.402439       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-135993-m04"
	I0920 17:11:21.449183       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-135993-m04"
	I0920 17:11:21.529576       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-135993-m04"
	I0920 17:11:21.640943       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-135993-m04"
	I0920 17:11:21.919088       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-135993-m04"
	I0920 17:11:31.476194       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-135993-m04"
	I0920 17:11:40.764702       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-135993-m04"
	I0920 17:11:40.765063       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-135993-m04"
	I0920 17:11:40.780191       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-135993-m04"
	I0920 17:11:41.285623       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-135993-m04"
	I0920 17:11:51.690173       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-135993-m04"
	I0920 17:12:36.378745       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-135993-m04"
	I0920 17:12:36.380639       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-135993-m02"
	I0920 17:12:36.411090       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-135993-m02"
	I0920 17:12:36.576962       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="32.12946ms"
	I0920 17:12:36.577066       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="57.179µs"
	I0920 17:12:36.637966       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-135993-m02"
	I0920 17:12:41.581669       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-135993-m02"
	
	
	==> kube-proxy [e4b462c3efaa10b1790ebacd5fe8845d6eb91643cb6db9b2830f33afe1744d57] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0920 17:08:12.692616       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0920 17:08:12.737645       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.60"]
	E0920 17:08:12.737744       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 17:08:12.838388       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0920 17:08:12.838464       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0920 17:08:12.838491       1 server_linux.go:169] "Using iptables Proxier"
	I0920 17:08:12.844425       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 17:08:12.846303       1 server.go:483] "Version info" version="v1.31.1"
	I0920 17:08:12.846331       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 17:08:12.851490       1 config.go:199] "Starting service config controller"
	I0920 17:08:12.851939       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 17:08:12.853474       1 config.go:105] "Starting endpoint slice config controller"
	I0920 17:08:12.855057       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 17:08:12.854368       1 config.go:328] "Starting node config controller"
	I0920 17:08:12.883844       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 17:08:12.954338       1 shared_informer.go:320] Caches are synced for service config
	I0920 17:08:12.955452       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0920 17:08:12.985151       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [db80f5e2505940b81bc20c39e98d873b35eff2e71aac748d8b406e21f5435fca] <==
	W0920 17:08:05.980455       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0920 17:08:05.980516       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0920 17:08:05.980456       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0920 17:08:07.504058       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0920 17:10:18.405414       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-hcqf8\": pod kindnet-hcqf8 is already assigned to node \"ha-135993-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-hcqf8" node="ha-135993-m03"
	E0920 17:10:18.405548       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-45c9m\": pod kube-proxy-45c9m is already assigned to node \"ha-135993-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-45c9m" node="ha-135993-m03"
	E0920 17:10:18.409425       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-45c9m\": pod kube-proxy-45c9m is already assigned to node \"ha-135993-m03\"" pod="kube-system/kube-proxy-45c9m"
	E0920 17:10:18.411700       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-hcqf8\": pod kindnet-hcqf8 is already assigned to node \"ha-135993-m03\"" pod="kube-system/kindnet-hcqf8"
	I0920 17:10:18.416087       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-hcqf8" node="ha-135993-m03"
	E0920 17:10:46.972562       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-ksx56\": pod busybox-7dff88458-ksx56 is already assigned to node \"ha-135993-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-ksx56" node="ha-135993-m03"
	E0920 17:10:46.972640       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod f499b34f-4e98-4ebc-90b5-90b1b13d26c7(default/busybox-7dff88458-ksx56) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-ksx56"
	E0920 17:10:46.972665       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-ksx56\": pod busybox-7dff88458-ksx56 is already assigned to node \"ha-135993-m03\"" pod="default/busybox-7dff88458-ksx56"
	I0920 17:10:46.972689       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-ksx56" node="ha-135993-m03"
	E0920 17:11:21.276134       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-w6gf8\": pod kube-proxy-w6gf8 is already assigned to node \"ha-135993-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-w6gf8" node="ha-135993-m04"
	E0920 17:11:21.276387       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 344e8822-62e5-4678-9654-381b97c31527(kube-system/kube-proxy-w6gf8) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-w6gf8"
	E0920 17:11:21.277109       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-w6gf8\": pod kube-proxy-w6gf8 is already assigned to node \"ha-135993-m04\"" pod="kube-system/kube-proxy-w6gf8"
	I0920 17:11:21.277247       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-w6gf8" node="ha-135993-m04"
	E0920 17:11:21.344572       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-n6xl6\": pod kindnet-n6xl6 is already assigned to node \"ha-135993-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-n6xl6" node="ha-135993-m04"
	E0920 17:11:21.344755       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-n6xl6\": pod kindnet-n6xl6 is already assigned to node \"ha-135993-m04\"" pod="kube-system/kindnet-n6xl6"
	E0920 17:11:21.388481       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-jfsxq\": pod kube-proxy-jfsxq is already assigned to node \"ha-135993-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-jfsxq" node="ha-135993-m04"
	E0920 17:11:21.388679       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-jfsxq\": pod kube-proxy-jfsxq is already assigned to node \"ha-135993-m04\"" pod="kube-system/kube-proxy-jfsxq"
	E0920 17:11:21.399720       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-svxp4\": pod kindnet-svxp4 is already assigned to node \"ha-135993-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-svxp4" node="ha-135993-m04"
	E0920 17:11:21.401135       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod a758ff76-3e8c-40c1-9742-2fbcddd4aa87(kube-system/kindnet-svxp4) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-svxp4"
	E0920 17:11:21.401322       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-svxp4\": pod kindnet-svxp4 is already assigned to node \"ha-135993-m04\"" pod="kube-system/kindnet-svxp4"
	I0920 17:11:21.401439       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-svxp4" node="ha-135993-m04"
	
	
	==> kubelet <==
	Sep 20 17:13:07 ha-135993 kubelet[1305]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 20 17:13:07 ha-135993 kubelet[1305]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 20 17:13:07 ha-135993 kubelet[1305]: E0920 17:13:07.854081    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726852387853510446,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:13:07 ha-135993 kubelet[1305]: E0920 17:13:07.854113    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726852387853510446,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:13:17 ha-135993 kubelet[1305]: E0920 17:13:17.855865    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726852397855363761,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:13:17 ha-135993 kubelet[1305]: E0920 17:13:17.856405    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726852397855363761,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:13:27 ha-135993 kubelet[1305]: E0920 17:13:27.859417    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726852407858772404,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:13:27 ha-135993 kubelet[1305]: E0920 17:13:27.859469    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726852407858772404,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:13:37 ha-135993 kubelet[1305]: E0920 17:13:37.861128    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726852417860446509,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:13:37 ha-135993 kubelet[1305]: E0920 17:13:37.861168    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726852417860446509,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:13:47 ha-135993 kubelet[1305]: E0920 17:13:47.864331    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726852427863997805,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:13:47 ha-135993 kubelet[1305]: E0920 17:13:47.864372    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726852427863997805,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:13:57 ha-135993 kubelet[1305]: E0920 17:13:57.866952    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726852437866556596,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:13:57 ha-135993 kubelet[1305]: E0920 17:13:57.866977    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726852437866556596,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:14:07 ha-135993 kubelet[1305]: E0920 17:14:07.772947    1305 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 20 17:14:07 ha-135993 kubelet[1305]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 20 17:14:07 ha-135993 kubelet[1305]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 20 17:14:07 ha-135993 kubelet[1305]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 20 17:14:07 ha-135993 kubelet[1305]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 20 17:14:07 ha-135993 kubelet[1305]: E0920 17:14:07.869325    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726852447868022083,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:14:07 ha-135993 kubelet[1305]: E0920 17:14:07.869353    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726852447868022083,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:14:17 ha-135993 kubelet[1305]: E0920 17:14:17.871289    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726852457870862251,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:14:17 ha-135993 kubelet[1305]: E0920 17:14:17.871679    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726852457870862251,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:14:27 ha-135993 kubelet[1305]: E0920 17:14:27.873531    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726852467873028868,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:14:27 ha-135993 kubelet[1305]: E0920 17:14:27.873568    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726852467873028868,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-135993 -n ha-135993
helpers_test.go:261: (dbg) Run:  kubectl --context ha-135993 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (6.58s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (379.85s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-135993 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-135993 -v=7 --alsologtostderr
E0920 17:16:39.931982   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/functional-945494/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-135993 -v=7 --alsologtostderr: exit status 82 (2m1.788045242s)

                                                
                                                
-- stdout --
	* Stopping node "ha-135993-m04"  ...
	* Stopping node "ha-135993-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 17:14:39.254168   33127 out.go:345] Setting OutFile to fd 1 ...
	I0920 17:14:39.254310   33127 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:14:39.254322   33127 out.go:358] Setting ErrFile to fd 2...
	I0920 17:14:39.254330   33127 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:14:39.254536   33127 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-8777/.minikube/bin
	I0920 17:14:39.254766   33127 out.go:352] Setting JSON to false
	I0920 17:14:39.254857   33127 mustload.go:65] Loading cluster: ha-135993
	I0920 17:14:39.255260   33127 config.go:182] Loaded profile config "ha-135993": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 17:14:39.255346   33127 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/config.json ...
	I0920 17:14:39.255526   33127 mustload.go:65] Loading cluster: ha-135993
	I0920 17:14:39.255651   33127 config.go:182] Loaded profile config "ha-135993": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 17:14:39.255686   33127 stop.go:39] StopHost: ha-135993-m04
	I0920 17:14:39.256037   33127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:14:39.256075   33127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:14:39.271566   33127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34555
	I0920 17:14:39.272127   33127 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:14:39.272758   33127 main.go:141] libmachine: Using API Version  1
	I0920 17:14:39.272787   33127 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:14:39.273175   33127 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:14:39.276047   33127 out.go:177] * Stopping node "ha-135993-m04"  ...
	I0920 17:14:39.277507   33127 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0920 17:14:39.277552   33127 main.go:141] libmachine: (ha-135993-m04) Calling .DriverName
	I0920 17:14:39.277873   33127 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0920 17:14:39.277898   33127 main.go:141] libmachine: (ha-135993-m04) Calling .GetSSHHostname
	I0920 17:14:39.281452   33127 main.go:141] libmachine: (ha-135993-m04) DBG | domain ha-135993-m04 has defined MAC address 52:54:00:fc:55:36 in network mk-ha-135993
	I0920 17:14:39.281950   33127 main.go:141] libmachine: (ha-135993-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:55:36", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:11:09 +0000 UTC Type:0 Mac:52:54:00:fc:55:36 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-135993-m04 Clientid:01:52:54:00:fc:55:36}
	I0920 17:14:39.281974   33127 main.go:141] libmachine: (ha-135993-m04) DBG | domain ha-135993-m04 has defined IP address 192.168.39.101 and MAC address 52:54:00:fc:55:36 in network mk-ha-135993
	I0920 17:14:39.282140   33127 main.go:141] libmachine: (ha-135993-m04) Calling .GetSSHPort
	I0920 17:14:39.282368   33127 main.go:141] libmachine: (ha-135993-m04) Calling .GetSSHKeyPath
	I0920 17:14:39.282539   33127 main.go:141] libmachine: (ha-135993-m04) Calling .GetSSHUsername
	I0920 17:14:39.282691   33127 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993-m04/id_rsa Username:docker}
	I0920 17:14:39.370854   33127 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0920 17:14:39.424482   33127 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0920 17:14:39.477772   33127 main.go:141] libmachine: Stopping "ha-135993-m04"...
	I0920 17:14:39.477854   33127 main.go:141] libmachine: (ha-135993-m04) Calling .GetState
	I0920 17:14:39.479608   33127 main.go:141] libmachine: (ha-135993-m04) Calling .Stop
	I0920 17:14:39.482830   33127 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 0/120
	I0920 17:14:40.574121   33127 main.go:141] libmachine: (ha-135993-m04) Calling .GetState
	I0920 17:14:40.575709   33127 main.go:141] libmachine: Machine "ha-135993-m04" was stopped.
	I0920 17:14:40.575737   33127 stop.go:75] duration metric: took 1.298225459s to stop
	I0920 17:14:40.575773   33127 stop.go:39] StopHost: ha-135993-m03
	I0920 17:14:40.576105   33127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:14:40.576148   33127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:14:40.590951   33127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44577
	I0920 17:14:40.591446   33127 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:14:40.591966   33127 main.go:141] libmachine: Using API Version  1
	I0920 17:14:40.591989   33127 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:14:40.592304   33127 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:14:40.594545   33127 out.go:177] * Stopping node "ha-135993-m03"  ...
	I0920 17:14:40.595965   33127 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0920 17:14:40.595987   33127 main.go:141] libmachine: (ha-135993-m03) Calling .DriverName
	I0920 17:14:40.596203   33127 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0920 17:14:40.596227   33127 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHHostname
	I0920 17:14:40.599724   33127 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:14:40.600247   33127 main.go:141] libmachine: (ha-135993-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:49:98", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:09:45 +0000 UTC Type:0 Mac:52:54:00:4a:49:98 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-135993-m03 Clientid:01:52:54:00:4a:49:98}
	I0920 17:14:40.600277   33127 main.go:141] libmachine: (ha-135993-m03) DBG | domain ha-135993-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:4a:49:98 in network mk-ha-135993
	I0920 17:14:40.600445   33127 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHPort
	I0920 17:14:40.600637   33127 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHKeyPath
	I0920 17:14:40.600795   33127 main.go:141] libmachine: (ha-135993-m03) Calling .GetSSHUsername
	I0920 17:14:40.600947   33127 sshutil.go:53] new ssh client: &{IP:192.168.39.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993-m03/id_rsa Username:docker}
	I0920 17:14:40.690302   33127 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0920 17:14:40.744409   33127 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0920 17:14:40.798306   33127 main.go:141] libmachine: Stopping "ha-135993-m03"...
	I0920 17:14:40.798335   33127 main.go:141] libmachine: (ha-135993-m03) Calling .GetState
	I0920 17:14:40.800013   33127 main.go:141] libmachine: (ha-135993-m03) Calling .Stop
	I0920 17:14:40.803627   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 0/120
	I0920 17:14:41.805079   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 1/120
	I0920 17:14:42.806457   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 2/120
	I0920 17:14:43.807974   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 3/120
	I0920 17:14:44.809474   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 4/120
	I0920 17:14:45.811277   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 5/120
	I0920 17:14:46.812970   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 6/120
	I0920 17:14:47.814617   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 7/120
	I0920 17:14:48.816710   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 8/120
	I0920 17:14:49.818194   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 9/120
	I0920 17:14:50.820227   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 10/120
	I0920 17:14:51.821822   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 11/120
	I0920 17:14:52.823288   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 12/120
	I0920 17:14:53.824905   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 13/120
	I0920 17:14:54.826636   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 14/120
	I0920 17:14:55.828706   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 15/120
	I0920 17:14:56.830564   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 16/120
	I0920 17:14:57.831929   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 17/120
	I0920 17:14:58.833453   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 18/120
	I0920 17:14:59.834775   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 19/120
	I0920 17:15:00.836181   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 20/120
	I0920 17:15:01.837783   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 21/120
	I0920 17:15:02.839238   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 22/120
	I0920 17:15:03.840772   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 23/120
	I0920 17:15:04.842506   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 24/120
	I0920 17:15:05.844358   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 25/120
	I0920 17:15:06.845773   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 26/120
	I0920 17:15:07.847159   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 27/120
	I0920 17:15:08.848812   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 28/120
	I0920 17:15:09.850206   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 29/120
	I0920 17:15:10.851728   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 30/120
	I0920 17:15:11.853278   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 31/120
	I0920 17:15:12.854728   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 32/120
	I0920 17:15:13.856121   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 33/120
	I0920 17:15:14.857719   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 34/120
	I0920 17:15:15.860061   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 35/120
	I0920 17:15:16.861314   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 36/120
	I0920 17:15:17.862972   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 37/120
	I0920 17:15:18.864397   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 38/120
	I0920 17:15:19.865733   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 39/120
	I0920 17:15:20.867999   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 40/120
	I0920 17:15:21.869272   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 41/120
	I0920 17:15:22.870591   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 42/120
	I0920 17:15:23.871855   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 43/120
	I0920 17:15:24.873128   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 44/120
	I0920 17:15:25.874938   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 45/120
	I0920 17:15:26.876196   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 46/120
	I0920 17:15:27.877500   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 47/120
	I0920 17:15:28.878868   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 48/120
	I0920 17:15:29.880355   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 49/120
	I0920 17:15:30.882161   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 50/120
	I0920 17:15:31.883451   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 51/120
	I0920 17:15:32.884787   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 52/120
	I0920 17:15:33.885975   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 53/120
	I0920 17:15:34.888612   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 54/120
	I0920 17:15:35.890579   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 55/120
	I0920 17:15:36.891968   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 56/120
	I0920 17:15:37.893334   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 57/120
	I0920 17:15:38.894638   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 58/120
	I0920 17:15:39.896252   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 59/120
	I0920 17:15:40.897860   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 60/120
	I0920 17:15:41.899212   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 61/120
	I0920 17:15:42.900552   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 62/120
	I0920 17:15:43.902805   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 63/120
	I0920 17:15:44.904338   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 64/120
	I0920 17:15:45.906139   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 65/120
	I0920 17:15:46.907588   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 66/120
	I0920 17:15:47.909374   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 67/120
	I0920 17:15:48.910679   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 68/120
	I0920 17:15:49.912433   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 69/120
	I0920 17:15:50.914286   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 70/120
	I0920 17:15:51.915593   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 71/120
	I0920 17:15:52.916931   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 72/120
	I0920 17:15:53.918293   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 73/120
	I0920 17:15:54.919555   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 74/120
	I0920 17:15:55.920693   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 75/120
	I0920 17:15:56.922115   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 76/120
	I0920 17:15:57.923811   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 77/120
	I0920 17:15:58.925215   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 78/120
	I0920 17:15:59.927207   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 79/120
	I0920 17:16:00.928999   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 80/120
	I0920 17:16:01.930539   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 81/120
	I0920 17:16:02.931996   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 82/120
	I0920 17:16:03.933433   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 83/120
	I0920 17:16:04.935028   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 84/120
	I0920 17:16:05.937104   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 85/120
	I0920 17:16:06.938771   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 86/120
	I0920 17:16:07.940397   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 87/120
	I0920 17:16:08.941800   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 88/120
	I0920 17:16:09.943247   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 89/120
	I0920 17:16:10.944961   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 90/120
	I0920 17:16:11.946352   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 91/120
	I0920 17:16:12.947983   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 92/120
	I0920 17:16:13.949405   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 93/120
	I0920 17:16:14.950904   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 94/120
	I0920 17:16:15.952900   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 95/120
	I0920 17:16:16.954392   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 96/120
	I0920 17:16:17.955964   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 97/120
	I0920 17:16:18.958322   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 98/120
	I0920 17:16:19.959754   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 99/120
	I0920 17:16:20.962190   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 100/120
	I0920 17:16:21.963580   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 101/120
	I0920 17:16:22.964886   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 102/120
	I0920 17:16:23.966427   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 103/120
	I0920 17:16:24.967987   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 104/120
	I0920 17:16:25.969856   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 105/120
	I0920 17:16:26.971194   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 106/120
	I0920 17:16:27.972503   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 107/120
	I0920 17:16:28.973867   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 108/120
	I0920 17:16:29.975313   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 109/120
	I0920 17:16:30.977531   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 110/120
	I0920 17:16:31.978942   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 111/120
	I0920 17:16:32.980691   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 112/120
	I0920 17:16:33.983125   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 113/120
	I0920 17:16:34.984609   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 114/120
	I0920 17:16:35.986031   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 115/120
	I0920 17:16:36.987320   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 116/120
	I0920 17:16:37.989081   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 117/120
	I0920 17:16:38.990715   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 118/120
	I0920 17:16:39.992657   33127 main.go:141] libmachine: (ha-135993-m03) Waiting for machine to stop 119/120
	I0920 17:16:40.993273   33127 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0920 17:16:40.993339   33127 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0920 17:16:40.995098   33127 out.go:201] 
	W0920 17:16:40.996435   33127 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0920 17:16:40.996452   33127 out.go:270] * 
	* 
	W0920 17:16:40.998629   33127 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 17:16:40.999938   33127 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-135993 -v=7 --alsologtostderr" : exit status 82
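The stderr above shows why the stop exited with status 82: minikube asked the kvm2 driver to stop ha-135993-m03, then polled the machine state once per second for 120 attempts, and the VM never left the "Running" state, so the command surfaced GUEST_STOP_TIMEOUT. The following is a minimal Go sketch of that stop-then-poll pattern; it is not minikube's implementation, and the stop and state parameters are hypothetical stand-ins for the driver's .Stop and .GetState calls seen in the log.

package main

import (
	"errors"
	"fmt"
	"time"
)

// stopWithTimeout requests a stop, then polls the machine state once per
// second for up to 120 attempts. If the VM is still running afterwards, it
// gives up with the error that surfaces as GUEST_STOP_TIMEOUT (exit status 82).
// stop and state are hypothetical stand-ins for the libmachine driver calls.
func stopWithTimeout(name string, stop func() error, state func() (string, error)) error {
	if err := stop(); err != nil {
		return err
	}
	for i := 0; i < 120; i++ {
		s, err := state()
		if err != nil {
			return err
		}
		if s != "Running" {
			return nil
		}
		fmt.Printf("(%s) Waiting for machine to stop %d/120\n", name, i)
		time.Sleep(time.Second)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	// Happy path: the guest reports a stopped state on the first poll, as
	// ha-135993-m04 did. In the failing run above, state() kept returning
	// "Running" for all 120 attempts, which produced exit status 82.
	err := stopWithTimeout("ha-135993-m04",
		func() error { return nil },
		func() (string, error) { return "Stopped", nil })
	fmt.Println("stop result:", err)
}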
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-135993 --wait=true -v=7 --alsologtostderr
E0920 17:17:07.638614   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/functional-945494/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:17:43.198402   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-135993 --wait=true -v=7 --alsologtostderr: (4m15.387210911s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-135993
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-135993 -n ha-135993
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-135993 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-135993 logs -n 25: (1.889086382s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-135993 cp ha-135993-m03:/home/docker/cp-test.txt                              | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:11 UTC | 20 Sep 24 17:11 UTC |
	|         | ha-135993-m02:/home/docker/cp-test_ha-135993-m03_ha-135993-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-135993 ssh -n                                                                 | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:11 UTC | 20 Sep 24 17:11 UTC |
	|         | ha-135993-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-135993 ssh -n ha-135993-m02 sudo cat                                          | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:11 UTC | 20 Sep 24 17:11 UTC |
	|         | /home/docker/cp-test_ha-135993-m03_ha-135993-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-135993 cp ha-135993-m03:/home/docker/cp-test.txt                              | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:11 UTC | 20 Sep 24 17:11 UTC |
	|         | ha-135993-m04:/home/docker/cp-test_ha-135993-m03_ha-135993-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-135993 ssh -n                                                                 | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:11 UTC | 20 Sep 24 17:11 UTC |
	|         | ha-135993-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-135993 ssh -n ha-135993-m04 sudo cat                                          | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:11 UTC | 20 Sep 24 17:11 UTC |
	|         | /home/docker/cp-test_ha-135993-m03_ha-135993-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-135993 cp testdata/cp-test.txt                                                | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:11 UTC | 20 Sep 24 17:11 UTC |
	|         | ha-135993-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-135993 ssh -n                                                                 | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:11 UTC | 20 Sep 24 17:11 UTC |
	|         | ha-135993-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-135993 cp ha-135993-m04:/home/docker/cp-test.txt                              | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:11 UTC | 20 Sep 24 17:11 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2672215621/001/cp-test_ha-135993-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-135993 ssh -n                                                                 | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:11 UTC | 20 Sep 24 17:11 UTC |
	|         | ha-135993-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-135993 cp ha-135993-m04:/home/docker/cp-test.txt                              | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:11 UTC | 20 Sep 24 17:11 UTC |
	|         | ha-135993:/home/docker/cp-test_ha-135993-m04_ha-135993.txt                       |           |         |         |                     |                     |
	| ssh     | ha-135993 ssh -n                                                                 | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:11 UTC | 20 Sep 24 17:11 UTC |
	|         | ha-135993-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-135993 ssh -n ha-135993 sudo cat                                              | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:11 UTC | 20 Sep 24 17:11 UTC |
	|         | /home/docker/cp-test_ha-135993-m04_ha-135993.txt                                 |           |         |         |                     |                     |
	| cp      | ha-135993 cp ha-135993-m04:/home/docker/cp-test.txt                              | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:11 UTC | 20 Sep 24 17:12 UTC |
	|         | ha-135993-m02:/home/docker/cp-test_ha-135993-m04_ha-135993-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-135993 ssh -n                                                                 | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:12 UTC | 20 Sep 24 17:12 UTC |
	|         | ha-135993-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-135993 ssh -n ha-135993-m02 sudo cat                                          | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:12 UTC | 20 Sep 24 17:12 UTC |
	|         | /home/docker/cp-test_ha-135993-m04_ha-135993-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-135993 cp ha-135993-m04:/home/docker/cp-test.txt                              | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:12 UTC | 20 Sep 24 17:12 UTC |
	|         | ha-135993-m03:/home/docker/cp-test_ha-135993-m04_ha-135993-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-135993 ssh -n                                                                 | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:12 UTC | 20 Sep 24 17:12 UTC |
	|         | ha-135993-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-135993 ssh -n ha-135993-m03 sudo cat                                          | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:12 UTC | 20 Sep 24 17:12 UTC |
	|         | /home/docker/cp-test_ha-135993-m04_ha-135993-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-135993 node stop m02 -v=7                                                     | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:12 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-135993 node start m02 -v=7                                                    | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:14 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-135993 -v=7                                                           | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:14 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-135993 -v=7                                                                | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:14 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-135993 --wait=true -v=7                                                    | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:16 UTC | 20 Sep 24 17:20 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-135993                                                                | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:20 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 17:16:41
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 17:16:41.047220   33598 out.go:345] Setting OutFile to fd 1 ...
	I0920 17:16:41.047342   33598 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:16:41.047351   33598 out.go:358] Setting ErrFile to fd 2...
	I0920 17:16:41.047355   33598 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:16:41.047557   33598 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-8777/.minikube/bin
	I0920 17:16:41.048079   33598 out.go:352] Setting JSON to false
	I0920 17:16:41.048951   33598 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3544,"bootTime":1726849057,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 17:16:41.049070   33598 start.go:139] virtualization: kvm guest
	I0920 17:16:41.051716   33598 out.go:177] * [ha-135993] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 17:16:41.053139   33598 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 17:16:41.053139   33598 notify.go:220] Checking for updates...
	I0920 17:16:41.056139   33598 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 17:16:41.058016   33598 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19672-8777/kubeconfig
	I0920 17:16:41.059232   33598 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-8777/.minikube
	I0920 17:16:41.060502   33598 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 17:16:41.061778   33598 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 17:16:41.063449   33598 config.go:182] Loaded profile config "ha-135993": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 17:16:41.063539   33598 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 17:16:41.063993   33598 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:16:41.064051   33598 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:16:41.080093   33598 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41179
	I0920 17:16:41.080586   33598 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:16:41.081132   33598 main.go:141] libmachine: Using API Version  1
	I0920 17:16:41.081156   33598 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:16:41.081481   33598 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:16:41.081667   33598 main.go:141] libmachine: (ha-135993) Calling .DriverName
	I0920 17:16:41.119444   33598 out.go:177] * Using the kvm2 driver based on existing profile
	I0920 17:16:41.120608   33598 start.go:297] selected driver: kvm2
	I0920 17:16:41.120624   33598 start.go:901] validating driver "kvm2" against &{Name:ha-135993 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.1 ClusterName:ha-135993 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.60 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.133 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.101 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false ef
k:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:d
ocker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 17:16:41.120761   33598 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 17:16:41.121069   33598 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 17:16:41.121141   33598 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19672-8777/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 17:16:41.135858   33598 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 17:16:41.136617   33598 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 17:16:41.136648   33598 cni.go:84] Creating CNI manager for ""
	I0920 17:16:41.136702   33598 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0920 17:16:41.136753   33598 start.go:340] cluster config:
	{Name:ha-135993 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-135993 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.60 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.133 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.101 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:
false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 17:16:41.136869   33598 iso.go:125] acquiring lock: {Name:mkba95ef0488e46f622333e9f317f43def93040b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 17:16:41.138651   33598 out.go:177] * Starting "ha-135993" primary control-plane node in "ha-135993" cluster
	I0920 17:16:41.139875   33598 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 17:16:41.139906   33598 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0920 17:16:41.139914   33598 cache.go:56] Caching tarball of preloaded images
	I0920 17:16:41.139993   33598 preload.go:172] Found /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 17:16:41.140006   33598 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0920 17:16:41.140131   33598 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/config.json ...
	I0920 17:16:41.140348   33598 start.go:360] acquireMachinesLock for ha-135993: {Name:mkfeedb385cf08b5d2aa00913e85815d02a180c2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 17:16:41.140410   33598 start.go:364] duration metric: took 32.044µs to acquireMachinesLock for "ha-135993"
	I0920 17:16:41.140430   33598 start.go:96] Skipping create...Using existing machine configuration
	I0920 17:16:41.140439   33598 fix.go:54] fixHost starting: 
	I0920 17:16:41.140753   33598 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:16:41.140790   33598 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:16:41.155288   33598 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37231
	I0920 17:16:41.155734   33598 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:16:41.156230   33598 main.go:141] libmachine: Using API Version  1
	I0920 17:16:41.156251   33598 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:16:41.156566   33598 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:16:41.156734   33598 main.go:141] libmachine: (ha-135993) Calling .DriverName
	I0920 17:16:41.156907   33598 main.go:141] libmachine: (ha-135993) Calling .GetState
	I0920 17:16:41.158426   33598 fix.go:112] recreateIfNeeded on ha-135993: state=Running err=<nil>
	W0920 17:16:41.158444   33598 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 17:16:41.160612   33598 out.go:177] * Updating the running kvm2 "ha-135993" VM ...
	I0920 17:16:41.161990   33598 machine.go:93] provisionDockerMachine start ...
	I0920 17:16:41.162013   33598 main.go:141] libmachine: (ha-135993) Calling .DriverName
	I0920 17:16:41.162202   33598 main.go:141] libmachine: (ha-135993) Calling .GetSSHHostname
	I0920 17:16:41.164624   33598 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:16:41.165096   33598 main.go:141] libmachine: (ha-135993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:09", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:07:43 +0000 UTC Type:0 Mac:52:54:00:99:26:09 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-135993 Clientid:01:52:54:00:99:26:09}
	I0920 17:16:41.165123   33598 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined IP address 192.168.39.60 and MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:16:41.165218   33598 main.go:141] libmachine: (ha-135993) Calling .GetSSHPort
	I0920 17:16:41.165364   33598 main.go:141] libmachine: (ha-135993) Calling .GetSSHKeyPath
	I0920 17:16:41.165477   33598 main.go:141] libmachine: (ha-135993) Calling .GetSSHKeyPath
	I0920 17:16:41.165681   33598 main.go:141] libmachine: (ha-135993) Calling .GetSSHUsername
	I0920 17:16:41.166001   33598 main.go:141] libmachine: Using SSH client type: native
	I0920 17:16:41.166242   33598 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0920 17:16:41.166257   33598 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 17:16:41.283859   33598 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-135993
	
	I0920 17:16:41.283887   33598 main.go:141] libmachine: (ha-135993) Calling .GetMachineName
	I0920 17:16:41.284102   33598 buildroot.go:166] provisioning hostname "ha-135993"
	I0920 17:16:41.284130   33598 main.go:141] libmachine: (ha-135993) Calling .GetMachineName
	I0920 17:16:41.284300   33598 main.go:141] libmachine: (ha-135993) Calling .GetSSHHostname
	I0920 17:16:41.287033   33598 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:16:41.287500   33598 main.go:141] libmachine: (ha-135993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:09", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:07:43 +0000 UTC Type:0 Mac:52:54:00:99:26:09 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-135993 Clientid:01:52:54:00:99:26:09}
	I0920 17:16:41.287527   33598 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined IP address 192.168.39.60 and MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:16:41.287716   33598 main.go:141] libmachine: (ha-135993) Calling .GetSSHPort
	I0920 17:16:41.287906   33598 main.go:141] libmachine: (ha-135993) Calling .GetSSHKeyPath
	I0920 17:16:41.288047   33598 main.go:141] libmachine: (ha-135993) Calling .GetSSHKeyPath
	I0920 17:16:41.288157   33598 main.go:141] libmachine: (ha-135993) Calling .GetSSHUsername
	I0920 17:16:41.288326   33598 main.go:141] libmachine: Using SSH client type: native
	I0920 17:16:41.288556   33598 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0920 17:16:41.288570   33598 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-135993 && echo "ha-135993" | sudo tee /etc/hostname
	I0920 17:16:41.424441   33598 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-135993
	
	I0920 17:16:41.424468   33598 main.go:141] libmachine: (ha-135993) Calling .GetSSHHostname
	I0920 17:16:41.427224   33598 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:16:41.427623   33598 main.go:141] libmachine: (ha-135993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:09", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:07:43 +0000 UTC Type:0 Mac:52:54:00:99:26:09 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-135993 Clientid:01:52:54:00:99:26:09}
	I0920 17:16:41.427649   33598 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined IP address 192.168.39.60 and MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:16:41.427806   33598 main.go:141] libmachine: (ha-135993) Calling .GetSSHPort
	I0920 17:16:41.427947   33598 main.go:141] libmachine: (ha-135993) Calling .GetSSHKeyPath
	I0920 17:16:41.428104   33598 main.go:141] libmachine: (ha-135993) Calling .GetSSHKeyPath
	I0920 17:16:41.428236   33598 main.go:141] libmachine: (ha-135993) Calling .GetSSHUsername
	I0920 17:16:41.428402   33598 main.go:141] libmachine: Using SSH client type: native
	I0920 17:16:41.428565   33598 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0920 17:16:41.428579   33598 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-135993' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-135993/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-135993' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 17:16:41.542679   33598 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 17:16:41.542708   33598 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19672-8777/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-8777/.minikube}
	I0920 17:16:41.542741   33598 buildroot.go:174] setting up certificates
	I0920 17:16:41.542750   33598 provision.go:84] configureAuth start
	I0920 17:16:41.542759   33598 main.go:141] libmachine: (ha-135993) Calling .GetMachineName
	I0920 17:16:41.543029   33598 main.go:141] libmachine: (ha-135993) Calling .GetIP
	I0920 17:16:41.545600   33598 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:16:41.545988   33598 main.go:141] libmachine: (ha-135993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:09", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:07:43 +0000 UTC Type:0 Mac:52:54:00:99:26:09 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-135993 Clientid:01:52:54:00:99:26:09}
	I0920 17:16:41.546012   33598 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined IP address 192.168.39.60 and MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:16:41.546153   33598 main.go:141] libmachine: (ha-135993) Calling .GetSSHHostname
	I0920 17:16:41.548216   33598 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:16:41.548577   33598 main.go:141] libmachine: (ha-135993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:09", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:07:43 +0000 UTC Type:0 Mac:52:54:00:99:26:09 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-135993 Clientid:01:52:54:00:99:26:09}
	I0920 17:16:41.548601   33598 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined IP address 192.168.39.60 and MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:16:41.548693   33598 provision.go:143] copyHostCerts
	I0920 17:16:41.548730   33598 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem
	I0920 17:16:41.548772   33598 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem, removing ...
	I0920 17:16:41.548786   33598 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem
	I0920 17:16:41.548853   33598 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem (1082 bytes)
	I0920 17:16:41.548938   33598 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem
	I0920 17:16:41.548959   33598 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem, removing ...
	I0920 17:16:41.548965   33598 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem
	I0920 17:16:41.548990   33598 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem (1123 bytes)
	I0920 17:16:41.549032   33598 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem
	I0920 17:16:41.549048   33598 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem, removing ...
	I0920 17:16:41.549053   33598 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem
	I0920 17:16:41.549073   33598 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem (1675 bytes)
	I0920 17:16:41.549128   33598 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem org=jenkins.ha-135993 san=[127.0.0.1 192.168.39.60 ha-135993 localhost minikube]
	I0920 17:16:41.644963   33598 provision.go:177] copyRemoteCerts
	I0920 17:16:41.645025   33598 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 17:16:41.645047   33598 main.go:141] libmachine: (ha-135993) Calling .GetSSHHostname
	I0920 17:16:41.647896   33598 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:16:41.648220   33598 main.go:141] libmachine: (ha-135993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:09", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:07:43 +0000 UTC Type:0 Mac:52:54:00:99:26:09 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-135993 Clientid:01:52:54:00:99:26:09}
	I0920 17:16:41.648254   33598 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined IP address 192.168.39.60 and MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:16:41.648442   33598 main.go:141] libmachine: (ha-135993) Calling .GetSSHPort
	I0920 17:16:41.648620   33598 main.go:141] libmachine: (ha-135993) Calling .GetSSHKeyPath
	I0920 17:16:41.648762   33598 main.go:141] libmachine: (ha-135993) Calling .GetSSHUsername
	I0920 17:16:41.648888   33598 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993/id_rsa Username:docker}
	I0920 17:16:41.732042   33598 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0920 17:16:41.732114   33598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0920 17:16:41.757969   33598 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0920 17:16:41.758058   33598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0920 17:16:41.784024   33598 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0920 17:16:41.784101   33598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 17:16:41.812189   33598 provision.go:87] duration metric: took 269.425627ms to configureAuth
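The copyRemoteCerts step above pushes the libmachine TLS material to the remote paths declared in the auth options at the top of this block (CaCertRemotePath:/etc/docker/ca.pem, ServerCertRemotePath:/etc/docker/server.pem, ServerKeyRemotePath:/etc/docker/server-key.pem); the /etc/docker location is a docker-machine-era convention and is used even though the runtime here is CRI-O. A quick manual check on the guest, not something the test itself runs, would be:

    # confirm the provisioned TLS files landed where the auth options say they should
    ls -l /etc/docker/ca.pem /etc/docker/server.pem /etc/docker/server-key.pem
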
	I0920 17:16:41.812218   33598 buildroot.go:189] setting minikube options for container-runtime
	I0920 17:16:41.812468   33598 config.go:182] Loaded profile config "ha-135993": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 17:16:41.812550   33598 main.go:141] libmachine: (ha-135993) Calling .GetSSHHostname
	I0920 17:16:41.815200   33598 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:16:41.815553   33598 main.go:141] libmachine: (ha-135993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:09", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:07:43 +0000 UTC Type:0 Mac:52:54:00:99:26:09 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-135993 Clientid:01:52:54:00:99:26:09}
	I0920 17:16:41.815590   33598 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined IP address 192.168.39.60 and MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:16:41.815875   33598 main.go:141] libmachine: (ha-135993) Calling .GetSSHPort
	I0920 17:16:41.816045   33598 main.go:141] libmachine: (ha-135993) Calling .GetSSHKeyPath
	I0920 17:16:41.816201   33598 main.go:141] libmachine: (ha-135993) Calling .GetSSHKeyPath
	I0920 17:16:41.816338   33598 main.go:141] libmachine: (ha-135993) Calling .GetSSHUsername
	I0920 17:16:41.816486   33598 main.go:141] libmachine: Using SSH client type: native
	I0920 17:16:41.816657   33598 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0920 17:16:41.816673   33598 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 17:18:12.634246   33598 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 17:18:12.634277   33598 machine.go:96] duration metric: took 1m31.472270807s to provisionDockerMachine
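Note the jump from 17:16:41 to 17:18:12 between issuing the CRIO_MINIKUBE_OPTIONS command and receiving its output: the `sudo systemctl restart crio` at the end of that command took roughly 91 seconds, which is essentially all of the 1m31s reported for provisionDockerMachine. To look at what the command wrote on the node (illustrative only):

    # drop-in written by the command above, plus the restarted service
    cat /etc/sysconfig/crio.minikube
    systemctl status crio --no-pager
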
	I0920 17:18:12.634291   33598 start.go:293] postStartSetup for "ha-135993" (driver="kvm2")
	I0920 17:18:12.634301   33598 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 17:18:12.634314   33598 main.go:141] libmachine: (ha-135993) Calling .DriverName
	I0920 17:18:12.634643   33598 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 17:18:12.634666   33598 main.go:141] libmachine: (ha-135993) Calling .GetSSHHostname
	I0920 17:18:12.638207   33598 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:18:12.638667   33598 main.go:141] libmachine: (ha-135993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:09", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:07:43 +0000 UTC Type:0 Mac:52:54:00:99:26:09 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-135993 Clientid:01:52:54:00:99:26:09}
	I0920 17:18:12.638706   33598 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined IP address 192.168.39.60 and MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:18:12.638855   33598 main.go:141] libmachine: (ha-135993) Calling .GetSSHPort
	I0920 17:18:12.639006   33598 main.go:141] libmachine: (ha-135993) Calling .GetSSHKeyPath
	I0920 17:18:12.639130   33598 main.go:141] libmachine: (ha-135993) Calling .GetSSHUsername
	I0920 17:18:12.639223   33598 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993/id_rsa Username:docker}
	I0920 17:18:12.725278   33598 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 17:18:12.729203   33598 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 17:18:12.729228   33598 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8777/.minikube/addons for local assets ...
	I0920 17:18:12.729293   33598 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8777/.minikube/files for local assets ...
	I0920 17:18:12.729370   33598 filesync.go:149] local asset: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem -> 159732.pem in /etc/ssl/certs
	I0920 17:18:12.729380   33598 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem -> /etc/ssl/certs/159732.pem
	I0920 17:18:12.729467   33598 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 17:18:12.738768   33598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem --> /etc/ssl/certs/159732.pem (1708 bytes)
	I0920 17:18:12.761911   33598 start.go:296] duration metric: took 127.604607ms for postStartSetup
	I0920 17:18:12.761964   33598 main.go:141] libmachine: (ha-135993) Calling .DriverName
	I0920 17:18:12.762270   33598 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0920 17:18:12.762301   33598 main.go:141] libmachine: (ha-135993) Calling .GetSSHHostname
	I0920 17:18:12.765232   33598 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:18:12.765708   33598 main.go:141] libmachine: (ha-135993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:09", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:07:43 +0000 UTC Type:0 Mac:52:54:00:99:26:09 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-135993 Clientid:01:52:54:00:99:26:09}
	I0920 17:18:12.765736   33598 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined IP address 192.168.39.60 and MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:18:12.765898   33598 main.go:141] libmachine: (ha-135993) Calling .GetSSHPort
	I0920 17:18:12.766066   33598 main.go:141] libmachine: (ha-135993) Calling .GetSSHKeyPath
	I0920 17:18:12.766257   33598 main.go:141] libmachine: (ha-135993) Calling .GetSSHUsername
	I0920 17:18:12.766424   33598 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993/id_rsa Username:docker}
	W0920 17:18:12.852696   33598 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0920 17:18:12.852744   33598 fix.go:56] duration metric: took 1m31.712303326s for fixHost
	I0920 17:18:12.852769   33598 main.go:141] libmachine: (ha-135993) Calling .GetSSHHostname
	I0920 17:18:12.855583   33598 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:18:12.856028   33598 main.go:141] libmachine: (ha-135993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:09", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:07:43 +0000 UTC Type:0 Mac:52:54:00:99:26:09 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-135993 Clientid:01:52:54:00:99:26:09}
	I0920 17:18:12.856054   33598 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined IP address 192.168.39.60 and MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:18:12.856220   33598 main.go:141] libmachine: (ha-135993) Calling .GetSSHPort
	I0920 17:18:12.856504   33598 main.go:141] libmachine: (ha-135993) Calling .GetSSHKeyPath
	I0920 17:18:12.856699   33598 main.go:141] libmachine: (ha-135993) Calling .GetSSHKeyPath
	I0920 17:18:12.856818   33598 main.go:141] libmachine: (ha-135993) Calling .GetSSHUsername
	I0920 17:18:12.856948   33598 main.go:141] libmachine: Using SSH client type: native
	I0920 17:18:12.857141   33598 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0920 17:18:12.857155   33598 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 17:18:12.966786   33598 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726852692.934958979
	
	I0920 17:18:12.966811   33598 fix.go:216] guest clock: 1726852692.934958979
	I0920 17:18:12.966822   33598 fix.go:229] Guest: 2024-09-20 17:18:12.934958979 +0000 UTC Remote: 2024-09-20 17:18:12.852754141 +0000 UTC m=+91.842791203 (delta=82.204838ms)
	I0920 17:18:12.966874   33598 fix.go:200] guest clock delta is within tolerance: 82.204838ms
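The delta above is simply the guest clock minus the host-side timestamp taken when the SSH command returned: 1726852692.934958979 - 1726852692.852754141 = 0.082204838 s, the 82.204838ms reported, so the guest clock is left untouched. The guest half of that comparison is the one-liner run over SSH above:

    # epoch seconds with nanosecond precision, as used for the clock-skew check
    date +%s.%N
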
	I0920 17:18:12.966885   33598 start.go:83] releasing machines lock for "ha-135993", held for 1m31.826462761s
	I0920 17:18:12.966919   33598 main.go:141] libmachine: (ha-135993) Calling .DriverName
	I0920 17:18:12.967177   33598 main.go:141] libmachine: (ha-135993) Calling .GetIP
	I0920 17:18:12.969883   33598 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:18:12.970266   33598 main.go:141] libmachine: (ha-135993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:09", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:07:43 +0000 UTC Type:0 Mac:52:54:00:99:26:09 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-135993 Clientid:01:52:54:00:99:26:09}
	I0920 17:18:12.970289   33598 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined IP address 192.168.39.60 and MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:18:12.970449   33598 main.go:141] libmachine: (ha-135993) Calling .DriverName
	I0920 17:18:12.970898   33598 main.go:141] libmachine: (ha-135993) Calling .DriverName
	I0920 17:18:12.971062   33598 main.go:141] libmachine: (ha-135993) Calling .DriverName
	I0920 17:18:12.971183   33598 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 17:18:12.971222   33598 main.go:141] libmachine: (ha-135993) Calling .GetSSHHostname
	I0920 17:18:12.971263   33598 ssh_runner.go:195] Run: cat /version.json
	I0920 17:18:12.971281   33598 main.go:141] libmachine: (ha-135993) Calling .GetSSHHostname
	I0920 17:18:12.973633   33598 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:18:12.973941   33598 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:18:12.973975   33598 main.go:141] libmachine: (ha-135993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:09", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:07:43 +0000 UTC Type:0 Mac:52:54:00:99:26:09 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-135993 Clientid:01:52:54:00:99:26:09}
	I0920 17:18:12.974002   33598 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined IP address 192.168.39.60 and MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:18:12.974215   33598 main.go:141] libmachine: (ha-135993) Calling .GetSSHPort
	I0920 17:18:12.974389   33598 main.go:141] libmachine: (ha-135993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:09", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:07:43 +0000 UTC Type:0 Mac:52:54:00:99:26:09 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-135993 Clientid:01:52:54:00:99:26:09}
	I0920 17:18:12.974416   33598 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined IP address 192.168.39.60 and MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:18:12.974431   33598 main.go:141] libmachine: (ha-135993) Calling .GetSSHKeyPath
	I0920 17:18:12.974596   33598 main.go:141] libmachine: (ha-135993) Calling .GetSSHUsername
	I0920 17:18:12.974598   33598 main.go:141] libmachine: (ha-135993) Calling .GetSSHPort
	I0920 17:18:12.974791   33598 main.go:141] libmachine: (ha-135993) Calling .GetSSHKeyPath
	I0920 17:18:12.974779   33598 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993/id_rsa Username:docker}
	I0920 17:18:12.974948   33598 main.go:141] libmachine: (ha-135993) Calling .GetSSHUsername
	I0920 17:18:12.975075   33598 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993/id_rsa Username:docker}
	I0920 17:18:13.132285   33598 ssh_runner.go:195] Run: systemctl --version
	I0920 17:18:13.147148   33598 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 17:18:13.332224   33598 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 17:18:13.338804   33598 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 17:18:13.338870   33598 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 17:18:13.348683   33598 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0920 17:18:13.348704   33598 start.go:495] detecting cgroup driver to use...
	I0920 17:18:13.348822   33598 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 17:18:13.366373   33598 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 17:18:13.381029   33598 docker.go:217] disabling cri-docker service (if available) ...
	I0920 17:18:13.381094   33598 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 17:18:13.394936   33598 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 17:18:13.408126   33598 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 17:18:13.568611   33598 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 17:18:13.710663   33598 docker.go:233] disabling docker service ...
	I0920 17:18:13.710748   33598 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 17:18:13.729462   33598 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 17:18:13.743141   33598 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 17:18:13.890163   33598 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 17:18:14.033917   33598 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 17:18:14.049385   33598 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 17:18:14.070244   33598 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 17:18:14.070308   33598 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:18:14.080864   33598 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 17:18:14.080925   33598 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:18:14.091364   33598 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:18:14.101584   33598 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:18:14.112134   33598 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 17:18:14.122703   33598 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:18:14.133401   33598 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:18:14.145652   33598 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:18:14.156586   33598 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 17:18:14.166711   33598 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 17:18:14.176768   33598 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 17:18:14.324934   33598 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 17:18:23.604835   33598 ssh_runner.go:235] Completed: sudo systemctl restart crio: (9.279861964s)
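Taken together, the sed edits at 17:18:14 adjust the CRI-O drop-in that the daemon-reload and the 9.3s crio restart above then pick up. A rough way to confirm the result on the node (the expected values below are reconstructed from the commands, not dumped from the actual file):

    # inspect the edited drop-in; it should now contain, among other settings:
    #   pause_image = "registry.k8s.io/pause:3.10"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   default_sysctls = [ "net.ipv4.ip_unprivileged_port_start=0", ]
    cat /etc/crio/crio.conf.d/02-crio.conf
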
	I0920 17:18:23.604865   33598 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 17:18:23.604916   33598 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 17:18:23.610469   33598 start.go:563] Will wait 60s for crictl version
	I0920 17:18:23.610526   33598 ssh_runner.go:195] Run: which crictl
	I0920 17:18:23.614208   33598 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 17:18:23.655015   33598 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 17:18:23.655090   33598 ssh_runner.go:195] Run: crio --version
	I0920 17:18:23.684627   33598 ssh_runner.go:195] Run: crio --version
	I0920 17:18:23.714884   33598 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 17:18:23.716118   33598 main.go:141] libmachine: (ha-135993) Calling .GetIP
	I0920 17:18:23.718915   33598 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:18:23.719340   33598 main.go:141] libmachine: (ha-135993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:09", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:07:43 +0000 UTC Type:0 Mac:52:54:00:99:26:09 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-135993 Clientid:01:52:54:00:99:26:09}
	I0920 17:18:23.719368   33598 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined IP address 192.168.39.60 and MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:18:23.719598   33598 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0920 17:18:23.724114   33598 kubeadm.go:883] updating cluster {Name:ha-135993 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:ha-135993 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.60 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.133 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.101 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fre
shpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mount
IP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 17:18:23.724306   33598 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 17:18:23.724369   33598 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 17:18:23.768064   33598 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 17:18:23.768086   33598 crio.go:433] Images already preloaded, skipping extraction
	I0920 17:18:23.768131   33598 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 17:18:23.803975   33598 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 17:18:23.803995   33598 cache_images.go:84] Images are preloaded, skipping loading
	I0920 17:18:23.804013   33598 kubeadm.go:934] updating node { 192.168.39.60 8443 v1.31.1 crio true true} ...
	I0920 17:18:23.804102   33598 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-135993 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.60
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-135993 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
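The drop-in above is what later gets written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 308-byte scp below); the empty ExecStart= line is the usual systemd idiom for clearing the packaged ExecStart before overriding it with the minikube-specific flags. On the node, the merged unit can be viewed with:

    # show kubelet.service together with its drop-ins
    systemctl cat kubelet
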
	I0920 17:18:23.804163   33598 ssh_runner.go:195] Run: crio config
	I0920 17:18:23.851405   33598 cni.go:84] Creating CNI manager for ""
	I0920 17:18:23.851432   33598 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0920 17:18:23.851446   33598 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 17:18:23.851473   33598 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.60 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-135993 NodeName:ha-135993 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.60"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.60 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 17:18:23.851660   33598 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.60
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-135993"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.60
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.60"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
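That is the full kubeadm configuration minikube renders for this control plane; it is staged on the node as /var/tmp/minikube/kubeadm.yaml.new (the 2150-byte scp below). As a sketch of how such a file could be sanity-checked by hand with the kubeadm binary already present under /var/lib/minikube/binaries (not something this test performs, and preflight checks may still complain on a node that is already running):

    # dry-run parse of the generated config with the bundled kubeadm
    sudo /var/lib/minikube/binaries/v1.31.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run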
	
	I0920 17:18:23.851707   33598 kube-vip.go:115] generating kube-vip config ...
	I0920 17:18:23.851762   33598 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0920 17:18:23.864562   33598 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0920 17:18:23.864687   33598 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
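The kube-vip static pod above advertises the control-plane VIP 192.168.39.254 on eth0 and elects a single holder through the plndr-cp-lock Lease in kube-system, so only one control-plane node answers on the VIP at a time. Two illustrative checks, not part of the test run:

    # which node currently holds the kube-vip leader lease
    kubectl -n kube-system get lease plndr-cp-lock
    # whether this node currently carries the VIP
    ip addr show dev eth0 | grep 192.168.39.254
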
	I0920 17:18:23.864759   33598 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 17:18:23.874841   33598 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 17:18:23.874924   33598 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0920 17:18:23.884136   33598 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0920 17:18:23.900368   33598 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 17:18:23.916507   33598 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0920 17:18:23.933587   33598 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0920 17:18:23.950802   33598 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0920 17:18:23.956753   33598 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 17:18:24.106480   33598 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 17:18:24.121794   33598 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993 for IP: 192.168.39.60
	I0920 17:18:24.121828   33598 certs.go:194] generating shared ca certs ...
	I0920 17:18:24.121864   33598 certs.go:226] acquiring lock for ca certs: {Name:mkc7ef6c737c6bdc3fdd9dcff8f57029c020d8f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:18:24.122057   33598 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key
	I0920 17:18:24.122119   33598 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key
	I0920 17:18:24.122129   33598 certs.go:256] generating profile certs ...
	I0920 17:18:24.122252   33598 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/client.key
	I0920 17:18:24.122291   33598 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.key.472dc397
	I0920 17:18:24.122312   33598 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.crt.472dc397 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.60 192.168.39.227 192.168.39.133 192.168.39.254]
	I0920 17:18:24.216639   33598 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.crt.472dc397 ...
	I0920 17:18:24.216681   33598 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.crt.472dc397: {Name:mkc4c7f841959bc717da1436551b45ad85e47b88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:18:24.216876   33598 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.key.472dc397 ...
	I0920 17:18:24.216895   33598 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.key.472dc397: {Name:mk453e0cdf35f4405f060c4fae21ecb8d229cc40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:18:24.217010   33598 certs.go:381] copying /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.crt.472dc397 -> /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.crt
	I0920 17:18:24.217157   33598 certs.go:385] copying /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.key.472dc397 -> /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.key
	I0920 17:18:24.217287   33598 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/proxy-client.key
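The apiserver serving certificate regenerated just above is signed for all of the control-plane IPs plus the service IP and the HA VIP listed in the san=[...] line. Once it has been copied to /var/lib/minikube/certs/apiserver.crt (below), the SAN list can be confirmed with an ordinary openssl query, for example:

    # illustrative check of the apiserver certificate's IP SANs
    sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text | grep -A2 'Subject Alternative Name'
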
	I0920 17:18:24.217302   33598 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0920 17:18:24.217315   33598 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0920 17:18:24.217328   33598 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0920 17:18:24.217340   33598 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0920 17:18:24.217351   33598 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0920 17:18:24.217362   33598 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0920 17:18:24.217379   33598 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0920 17:18:24.217391   33598 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0920 17:18:24.217432   33598 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973.pem (1338 bytes)
	W0920 17:18:24.217459   33598 certs.go:480] ignoring /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973_empty.pem, impossibly tiny 0 bytes
	I0920 17:18:24.217467   33598 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 17:18:24.217488   33598 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem (1082 bytes)
	I0920 17:18:24.217524   33598 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem (1123 bytes)
	I0920 17:18:24.217550   33598 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem (1675 bytes)
	I0920 17:18:24.217585   33598 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem (1708 bytes)
	I0920 17:18:24.217618   33598 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:18:24.217632   33598 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973.pem -> /usr/share/ca-certificates/15973.pem
	I0920 17:18:24.217645   33598 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem -> /usr/share/ca-certificates/159732.pem
	I0920 17:18:24.218192   33598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 17:18:24.242436   33598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 17:18:24.267664   33598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 17:18:24.292025   33598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0920 17:18:24.315429   33598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0920 17:18:24.338871   33598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 17:18:24.362515   33598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 17:18:24.386060   33598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 17:18:24.409198   33598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 17:18:24.432453   33598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973.pem --> /usr/share/ca-certificates/15973.pem (1338 bytes)
	I0920 17:18:24.455478   33598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem --> /usr/share/ca-certificates/159732.pem (1708 bytes)
	I0920 17:18:24.478297   33598 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 17:18:24.494060   33598 ssh_runner.go:195] Run: openssl version
	I0920 17:18:24.500085   33598 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 17:18:24.510748   33598 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:18:24.515402   33598 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:18:24.515485   33598 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:18:24.521184   33598 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 17:18:24.530583   33598 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15973.pem && ln -fs /usr/share/ca-certificates/15973.pem /etc/ssl/certs/15973.pem"
	I0920 17:18:24.540863   33598 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15973.pem
	I0920 17:18:24.545256   33598 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 17:04 /usr/share/ca-certificates/15973.pem
	I0920 17:18:24.545299   33598 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15973.pem
	I0920 17:18:24.550698   33598 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15973.pem /etc/ssl/certs/51391683.0"
	I0920 17:18:24.560113   33598 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/159732.pem && ln -fs /usr/share/ca-certificates/159732.pem /etc/ssl/certs/159732.pem"
	I0920 17:18:24.571303   33598 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/159732.pem
	I0920 17:18:24.575986   33598 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 17:04 /usr/share/ca-certificates/159732.pem
	I0920 17:18:24.576040   33598 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/159732.pem
	I0920 17:18:24.581818   33598 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/159732.pem /etc/ssl/certs/3ec20f2e.0"
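The ls / openssl / ln -fs sequence above installs each CA using OpenSSL's hashed-name convention: the symlink under /etc/ssl/certs is named after the certificate's subject hash with a .0 suffix, which is why minikubeCA.pem ends up behind /etc/ssl/certs/b5213941.0. For example:

    # the subject hash printed here is the basename used for the /etc/ssl/certs symlink
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    ls -l /etc/ssl/certs/b5213941.0
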
	I0920 17:18:24.591218   33598 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 17:18:24.595766   33598 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 17:18:24.601193   33598 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 17:18:24.606606   33598 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 17:18:24.612237   33598 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 17:18:24.617968   33598 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 17:18:24.623520   33598 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
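The run of -checkend 86400 commands above asks OpenSSL whether each control-plane certificate (apiserver, etcd, front-proxy clients and servers) will expire within the next 86400 seconds, that is, 24 hours; an exit status of 0 means the certificate stays valid past that window. The pattern, for reference:

    # exit 0: will not expire within 24h; exit 1: would expire
    sudo openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400; echo $?
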
	I0920 17:18:24.629094   33598 kubeadm.go:392] StartCluster: {Name:ha-135993 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-135993 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.60 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.133 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.101 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 17:18:24.629249   33598 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 17:18:24.629312   33598 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 17:18:24.667577   33598 cri.go:89] found id: "bf7badc294591b4e6eb17ac796ab10e69706e71983d63ccdd57ff7c4e36ac50e"
	I0920 17:18:24.667604   33598 cri.go:89] found id: "5c869ef0e61597970a9451f4f09bc34c0ec9b1f40422349e7d3f1d33d1dfbbd6"
	I0920 17:18:24.667610   33598 cri.go:89] found id: "b6c5323097a59ef8044970de3449a9f550e30e814c16735624fe9a0eefab94b2"
	I0920 17:18:24.667614   33598 cri.go:89] found id: "1ec44cdb1194b97d48beb73c37e433b069a865ff030c5c837c6893f8be5f2fe3"
	I0920 17:18:24.667618   33598 cri.go:89] found id: "7c668f6376655011c61ac0bac2456f3e751f265e08a925a427f7ecfca3d54d97"
	I0920 17:18:24.667622   33598 cri.go:89] found id: "36f3e8a4356ff70d6ac1d50a79bc078c32683201ca9c0787097b28f3417fdc90"
	I0920 17:18:24.667627   33598 cri.go:89] found id: "5054778f39bbb435a191b7cbdabfca614d8b6d64b7bce122294d50415d601787"
	I0920 17:18:24.667631   33598 cri.go:89] found id: "8792a3b1249ffb1a157682b07dec9a0edf11b81c44e53ea068e0fc6f4548be22"
	I0920 17:18:24.667635   33598 cri.go:89] found id: "e4b462c3efaa10b1790ebacd5fe8845d6eb91643cb6db9b2830f33afe1744d57"
	I0920 17:18:24.667643   33598 cri.go:89] found id: "1a56cd54bb369a6e7651ec4ac3a33220fcfa19226c4ca75ab535aeddbf2811a9"
	I0920 17:18:24.667648   33598 cri.go:89] found id: "2b48cf1f03207bb258f12f5506b5e60fd4bb742d7b2ced222664cc2b996ff15d"
	I0920 17:18:24.667667   33598 cri.go:89] found id: "1f5eb92cf36b033e765f0681f8a6634251dfd6dc0db3410efd3ef6e8580a8b2f"
	I0920 17:18:24.667672   33598 cri.go:89] found id: "e70d74afe0f7f7c1e0f084aba99e9c488a1f999e94866b12072fdcbc24b0dad7"
	I0920 17:18:24.667676   33598 cri.go:89] found id: "db80f5e2505940b81bc20c39e98d873b35eff2e71aac748d8b406e21f5435fca"
	I0920 17:18:24.667683   33598 cri.go:89] found id: ""
	I0920 17:18:24.667730   33598 ssh_runner.go:195] Run: sudo runc list -f json
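StartCluster begins by enumerating the kube-system containers through the CRI, filtering on the io.kubernetes.pod.namespace label; the fourteen container IDs found above are then followed by a runc listing of the low-level runtime state. The same query can be reproduced by hand on the node, and any of the IDs can be inspected further:

    # list kube-system container IDs via CRI-O, as the test does
    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
    # inspect one of them (ID taken from the log above)
    sudo crictl inspect bf7badc294591b4e6eb17ac796ab10e69706e71983d63ccdd57ff7c4e36ac50e
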
	
	
	==> CRI-O <==
	Sep 20 17:20:57 ha-135993 crio[3715]: time="2024-09-20 17:20:57.156079991Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726852857156051682,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=009a5915-ddd4-482d-80f5-ce6923a415f9 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 17:20:57 ha-135993 crio[3715]: time="2024-09-20 17:20:57.156713040Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a7b2c946-b083-4a44-9424-a93aa1c42aec name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:20:57 ha-135993 crio[3715]: time="2024-09-20 17:20:57.156781188Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a7b2c946-b083-4a44-9424-a93aa1c42aec name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:20:57 ha-135993 crio[3715]: time="2024-09-20 17:20:57.157186554Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a6addfcb27c43b162c58e5dc89b34a21b057ffa9e5c6929c13e6720a7d41c017,PodSandboxId:f362cdeaa384c6a9423bc0e3f8a2e87059f200fade4e07de0512ba9104a08560,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726852791788949922,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57137bee-9a7b-4659-a855-0da82d137cb0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79d6dfc37ecea1b70bf42810eb355d3d33df6f44ad8813cb64a541d83b09b12f,PodSandboxId:3426320ea65936be4749a4579ea74dd7b8d3d6d6b1708f63e0490d36f8754cbc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726852749787357892,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bce625d942177d9274bb431f7a7012b2,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ee493ac17f0bde4d555b58e8b816b325426800736be6cff626867aa80e21c96,PodSandboxId:1b66d16f83ebcbb613c67b790b5609e78babfb7bd452c17106f8a33ad0ed7f3a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726852749775751738,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fa5fbf3465d3154576a11474b7b8548,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09b2ab354660c3a46043369de5a5d91ba3bbacf95655fdc9ad4f34c7160d3f2d,PodSandboxId:f362cdeaa384c6a9423bc0e3f8a2e87059f200fade4e07de0512ba9104a08560,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726852743779709268,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57137bee-9a7b-4659-a855-0da82d137cb0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b045c2884667987ee6e8c10e271c2e6226efba112efed8795e08e4a67e0990dc,PodSandboxId:ae90d8ac2b75af492d713bbe29179244cfcaa84f3743ba966f27bbd1ff5508c4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726852740036843854,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-df429,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ef8ed1eb-a7c6-4c49-9e46-924bad4d9577,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ca3f9a710f32d6cb191f799a34e55023e57f7dbebf54b32a353fa72b332ddde,PodSandboxId:a29604104e7ce7457620e53485525a9eb289cd6d4b4d58f90a78e8d40b586cf0,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726852720182665622,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c06f5b4bed0418e8539976286a422aad,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5168567f834602e7f761a3eff5e14aa6b5d1d15b02eb85e90b1e331fc1b73fed,PodSandboxId:eec2e8181267a85c9517e828def8039caef778199c53474aae6cc8adfa8bc435,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726852714096373683,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kpbhk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dfd9f1a-148c-4dba-884a-8618b74f82d0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9
153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff3c00df26f158af79f61dc02b970ad39db9d4f0fcffefa0b719f4598158f62b,PodSandboxId:dee7980a50d73104b0c061a1ffa9347cb87e48985218f83b818efbed52f4a239,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726852709920410907,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gcvg4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 899b2b8c-9009-46c0-816b-781e85eb8b19,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.k
ubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:398409b93b45dca7c586c3035d250b5dc775d03ee8dd75f81e9e7427a559ed00,PodSandboxId:1bd48e5dcd4693a3b4c8340aba118e7ad278f521e69bcfd1541092778c515ae6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726852707455117279,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name:
kube-proxy-52r49,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d1124bd-e7cb-4239-a29d-c1d5b8870aff,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d6797d9f1c442f6743dde97cc9de50c155c5e2b00e1197ba4a088a93bd7c5db,PodSandboxId:2ef4ad9be22fb67138ab03ac3ed8e5cfb7f27022fb85eb427a1902ca45aec292,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726852706753264043,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6clt2,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: d73a0817-d84f-4269-9de0-1532287a07db,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a83a259fa257292715bb5b2d5b323b49763d5553518b528fc78fc079bb9ed45,PodSandboxId:3426320ea65936be4749a4579ea74dd7b8d3d6d6b1708f63e0490d36f8754cbc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726852706597265158,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-135993,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: bce625d942177d9274bb431f7a7012b2,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee0aced30e75021c727bf72254c618cb3ff71687fe9882fab00322be6da050f0,PodSandboxId:9317a52c9caac0dba626b3d134d689c2bf560e3e0ccc4367c435a9b9e631156e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726852706448342207,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f0e3932935569a5c49edd
0da5a87eba,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41c08dd315ead086282b3d4c8d4a5fb35136bd8d2824d8516b73a61281a9eeaf,PodSandboxId:1b66d16f83ebcbb613c67b790b5609e78babfb7bd452c17106f8a33ad0ed7f3a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726852706532089181,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fa5fbf3465d31545
76a11474b7b8548,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6001c238074303dee4a1968943cb28c0e10f360312e0284fc54018ba88a7f96d,PodSandboxId:2c6a8e58a88556b2103f93f3116a1d5ecc55470841ea358bf0c41f6c80d864ed,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726852706446603353,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 086260df87dea88f53b3b3ca08d61864,},Ann
otations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf7badc294591b4e6eb17ac796ab10e69706e71983d63ccdd57ff7c4e36ac50e,PodSandboxId:e228f7e109c3c3870cb95be64330287ff98d08fbf7a86af9f160c71d511053a4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726852693213530849,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kpbhk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dfd9f1a-148c-4dba-884a-8618b74f82d0,},Annotations:map[string]string{io.ku
bernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2a30264a8299336cfb5a4338bab42f8ef663f285ffa7fcb290de88635658a97,PodSandboxId:afa282bba6347297904c2a6423e39984da1a95a2caf8b23dd6857518cc0bb05d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726852250916118141,Labels:map[string]str
ing{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-df429,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ef8ed1eb-a7c6-4c49-9e46-924bad4d9577,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5054778f39bbb435a191b7cbdabfca614d8b6d64b7bce122294d50415d601787,PodSandboxId:6fda3c09e12fee3d9bfbdf163c989e0242d8464e853282bca12154b468cf1a1d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726852104287105818,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gcvg4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 899b2b8c-9009-46c0-816b-781e85eb8b19,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8792a3b1249ffb1a157682b07dec9a0edf11b81c44e53ea068e0fc6f4548be22,PodSandboxId:ed014d23a111faa2c65032e35b2ef554b33cd4cb8eddbc89e3b77efabb53a5ce,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726852092541516094,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6clt2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d73a0817-d84f-4269-9de0-1532287a07db,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4b462c3efaa10b1790ebacd5fe8845d6eb91643cb6db9b2830f33afe1744d57,PodSandboxId:1971096e9fdaaef4dfb856b23a044da810f3ea4c5d05511c7fc53105be9d664b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726852092275153244,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-52r49,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d1124bd-e7cb-4239-a29d-c1d5b8870aff,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db80f5e2505940b81bc20c39e98d873b35eff2e71aac748d8b406e21f5435fca,PodSandboxId:77a9434f5f03e0fce6c021d0cb97ce2bbe8dbb697371c561907dc8656adca49c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726852081504258492,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 086260df87dea88f53b3b3ca08d61864,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e70d74afe0f7f7c1e0f084aba99e9c488a1f999e94866b12072fdcbc24b0dad7,PodSandboxId:74a0a0888b0f65cfdbc3321666aaf8f377feaae2a4e3c132c5ed40ccd4468689,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1726852081538881038,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f0e3932935569a5c49edd0da5a87eba,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a7b2c946-b083-4a44-9424-a93aa1c42aec name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:20:57 ha-135993 crio[3715]: time="2024-09-20 17:20:57.258148316Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0428b20c-566a-4555-ae50-7ae59860cfb8 name=/runtime.v1.RuntimeService/Version
	Sep 20 17:20:57 ha-135993 crio[3715]: time="2024-09-20 17:20:57.258283078Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0428b20c-566a-4555-ae50-7ae59860cfb8 name=/runtime.v1.RuntimeService/Version
	Sep 20 17:20:57 ha-135993 crio[3715]: time="2024-09-20 17:20:57.259488058Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ca4fe501-fffc-45ec-9f3b-c067ce379412 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 17:20:57 ha-135993 crio[3715]: time="2024-09-20 17:20:57.259935608Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726852857259911279,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ca4fe501-fffc-45ec-9f3b-c067ce379412 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 17:20:57 ha-135993 crio[3715]: time="2024-09-20 17:20:57.260559746Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7d86b7cf-93da-43df-8ca1-1f7ad897e30a name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:20:57 ha-135993 crio[3715]: time="2024-09-20 17:20:57.260628074Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7d86b7cf-93da-43df-8ca1-1f7ad897e30a name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:20:57 ha-135993 crio[3715]: time="2024-09-20 17:20:57.261433807Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a6addfcb27c43b162c58e5dc89b34a21b057ffa9e5c6929c13e6720a7d41c017,PodSandboxId:f362cdeaa384c6a9423bc0e3f8a2e87059f200fade4e07de0512ba9104a08560,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726852791788949922,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57137bee-9a7b-4659-a855-0da82d137cb0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79d6dfc37ecea1b70bf42810eb355d3d33df6f44ad8813cb64a541d83b09b12f,PodSandboxId:3426320ea65936be4749a4579ea74dd7b8d3d6d6b1708f63e0490d36f8754cbc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726852749787357892,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bce625d942177d9274bb431f7a7012b2,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ee493ac17f0bde4d555b58e8b816b325426800736be6cff626867aa80e21c96,PodSandboxId:1b66d16f83ebcbb613c67b790b5609e78babfb7bd452c17106f8a33ad0ed7f3a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726852749775751738,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fa5fbf3465d3154576a11474b7b8548,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09b2ab354660c3a46043369de5a5d91ba3bbacf95655fdc9ad4f34c7160d3f2d,PodSandboxId:f362cdeaa384c6a9423bc0e3f8a2e87059f200fade4e07de0512ba9104a08560,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726852743779709268,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57137bee-9a7b-4659-a855-0da82d137cb0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b045c2884667987ee6e8c10e271c2e6226efba112efed8795e08e4a67e0990dc,PodSandboxId:ae90d8ac2b75af492d713bbe29179244cfcaa84f3743ba966f27bbd1ff5508c4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726852740036843854,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-df429,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ef8ed1eb-a7c6-4c49-9e46-924bad4d9577,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ca3f9a710f32d6cb191f799a34e55023e57f7dbebf54b32a353fa72b332ddde,PodSandboxId:a29604104e7ce7457620e53485525a9eb289cd6d4b4d58f90a78e8d40b586cf0,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726852720182665622,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c06f5b4bed0418e8539976286a422aad,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5168567f834602e7f761a3eff5e14aa6b5d1d15b02eb85e90b1e331fc1b73fed,PodSandboxId:eec2e8181267a85c9517e828def8039caef778199c53474aae6cc8adfa8bc435,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726852714096373683,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kpbhk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dfd9f1a-148c-4dba-884a-8618b74f82d0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9
153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff3c00df26f158af79f61dc02b970ad39db9d4f0fcffefa0b719f4598158f62b,PodSandboxId:dee7980a50d73104b0c061a1ffa9347cb87e48985218f83b818efbed52f4a239,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726852709920410907,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gcvg4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 899b2b8c-9009-46c0-816b-781e85eb8b19,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.k
ubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:398409b93b45dca7c586c3035d250b5dc775d03ee8dd75f81e9e7427a559ed00,PodSandboxId:1bd48e5dcd4693a3b4c8340aba118e7ad278f521e69bcfd1541092778c515ae6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726852707455117279,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name:
kube-proxy-52r49,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d1124bd-e7cb-4239-a29d-c1d5b8870aff,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d6797d9f1c442f6743dde97cc9de50c155c5e2b00e1197ba4a088a93bd7c5db,PodSandboxId:2ef4ad9be22fb67138ab03ac3ed8e5cfb7f27022fb85eb427a1902ca45aec292,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726852706753264043,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6clt2,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: d73a0817-d84f-4269-9de0-1532287a07db,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a83a259fa257292715bb5b2d5b323b49763d5553518b528fc78fc079bb9ed45,PodSandboxId:3426320ea65936be4749a4579ea74dd7b8d3d6d6b1708f63e0490d36f8754cbc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726852706597265158,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-135993,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: bce625d942177d9274bb431f7a7012b2,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee0aced30e75021c727bf72254c618cb3ff71687fe9882fab00322be6da050f0,PodSandboxId:9317a52c9caac0dba626b3d134d689c2bf560e3e0ccc4367c435a9b9e631156e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726852706448342207,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f0e3932935569a5c49edd
0da5a87eba,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41c08dd315ead086282b3d4c8d4a5fb35136bd8d2824d8516b73a61281a9eeaf,PodSandboxId:1b66d16f83ebcbb613c67b790b5609e78babfb7bd452c17106f8a33ad0ed7f3a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726852706532089181,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fa5fbf3465d31545
76a11474b7b8548,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6001c238074303dee4a1968943cb28c0e10f360312e0284fc54018ba88a7f96d,PodSandboxId:2c6a8e58a88556b2103f93f3116a1d5ecc55470841ea358bf0c41f6c80d864ed,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726852706446603353,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 086260df87dea88f53b3b3ca08d61864,},Ann
otations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf7badc294591b4e6eb17ac796ab10e69706e71983d63ccdd57ff7c4e36ac50e,PodSandboxId:e228f7e109c3c3870cb95be64330287ff98d08fbf7a86af9f160c71d511053a4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726852693213530849,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kpbhk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dfd9f1a-148c-4dba-884a-8618b74f82d0,},Annotations:map[string]string{io.ku
bernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2a30264a8299336cfb5a4338bab42f8ef663f285ffa7fcb290de88635658a97,PodSandboxId:afa282bba6347297904c2a6423e39984da1a95a2caf8b23dd6857518cc0bb05d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726852250916118141,Labels:map[string]str
ing{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-df429,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ef8ed1eb-a7c6-4c49-9e46-924bad4d9577,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5054778f39bbb435a191b7cbdabfca614d8b6d64b7bce122294d50415d601787,PodSandboxId:6fda3c09e12fee3d9bfbdf163c989e0242d8464e853282bca12154b468cf1a1d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726852104287105818,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gcvg4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 899b2b8c-9009-46c0-816b-781e85eb8b19,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8792a3b1249ffb1a157682b07dec9a0edf11b81c44e53ea068e0fc6f4548be22,PodSandboxId:ed014d23a111faa2c65032e35b2ef554b33cd4cb8eddbc89e3b77efabb53a5ce,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726852092541516094,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6clt2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d73a0817-d84f-4269-9de0-1532287a07db,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4b462c3efaa10b1790ebacd5fe8845d6eb91643cb6db9b2830f33afe1744d57,PodSandboxId:1971096e9fdaaef4dfb856b23a044da810f3ea4c5d05511c7fc53105be9d664b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726852092275153244,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-52r49,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d1124bd-e7cb-4239-a29d-c1d5b8870aff,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db80f5e2505940b81bc20c39e98d873b35eff2e71aac748d8b406e21f5435fca,PodSandboxId:77a9434f5f03e0fce6c021d0cb97ce2bbe8dbb697371c561907dc8656adca49c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726852081504258492,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 086260df87dea88f53b3b3ca08d61864,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e70d74afe0f7f7c1e0f084aba99e9c488a1f999e94866b12072fdcbc24b0dad7,PodSandboxId:74a0a0888b0f65cfdbc3321666aaf8f377feaae2a4e3c132c5ed40ccd4468689,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1726852081538881038,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f0e3932935569a5c49edd0da5a87eba,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7d86b7cf-93da-43df-8ca1-1f7ad897e30a name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:20:57 ha-135993 crio[3715]: time="2024-09-20 17:20:57.308495211Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=872c7ff8-0548-4774-aa51-0fef30dbe0a9 name=/runtime.v1.RuntimeService/Version
	Sep 20 17:20:57 ha-135993 crio[3715]: time="2024-09-20 17:20:57.308583316Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=872c7ff8-0548-4774-aa51-0fef30dbe0a9 name=/runtime.v1.RuntimeService/Version
	Sep 20 17:20:57 ha-135993 crio[3715]: time="2024-09-20 17:20:57.309897534Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f06717f7-4ea1-44a7-9ac7-4beb1057949a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 17:20:57 ha-135993 crio[3715]: time="2024-09-20 17:20:57.310418588Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726852857310393727,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f06717f7-4ea1-44a7-9ac7-4beb1057949a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 17:20:57 ha-135993 crio[3715]: time="2024-09-20 17:20:57.311000183Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c3463872-2d04-4522-9014-6415993a7ff4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:20:57 ha-135993 crio[3715]: time="2024-09-20 17:20:57.311071255Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c3463872-2d04-4522-9014-6415993a7ff4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:20:57 ha-135993 crio[3715]: time="2024-09-20 17:20:57.311558591Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a6addfcb27c43b162c58e5dc89b34a21b057ffa9e5c6929c13e6720a7d41c017,PodSandboxId:f362cdeaa384c6a9423bc0e3f8a2e87059f200fade4e07de0512ba9104a08560,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726852791788949922,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57137bee-9a7b-4659-a855-0da82d137cb0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79d6dfc37ecea1b70bf42810eb355d3d33df6f44ad8813cb64a541d83b09b12f,PodSandboxId:3426320ea65936be4749a4579ea74dd7b8d3d6d6b1708f63e0490d36f8754cbc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726852749787357892,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bce625d942177d9274bb431f7a7012b2,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ee493ac17f0bde4d555b58e8b816b325426800736be6cff626867aa80e21c96,PodSandboxId:1b66d16f83ebcbb613c67b790b5609e78babfb7bd452c17106f8a33ad0ed7f3a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726852749775751738,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fa5fbf3465d3154576a11474b7b8548,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09b2ab354660c3a46043369de5a5d91ba3bbacf95655fdc9ad4f34c7160d3f2d,PodSandboxId:f362cdeaa384c6a9423bc0e3f8a2e87059f200fade4e07de0512ba9104a08560,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726852743779709268,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57137bee-9a7b-4659-a855-0da82d137cb0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b045c2884667987ee6e8c10e271c2e6226efba112efed8795e08e4a67e0990dc,PodSandboxId:ae90d8ac2b75af492d713bbe29179244cfcaa84f3743ba966f27bbd1ff5508c4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726852740036843854,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-df429,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ef8ed1eb-a7c6-4c49-9e46-924bad4d9577,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ca3f9a710f32d6cb191f799a34e55023e57f7dbebf54b32a353fa72b332ddde,PodSandboxId:a29604104e7ce7457620e53485525a9eb289cd6d4b4d58f90a78e8d40b586cf0,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726852720182665622,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c06f5b4bed0418e8539976286a422aad,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5168567f834602e7f761a3eff5e14aa6b5d1d15b02eb85e90b1e331fc1b73fed,PodSandboxId:eec2e8181267a85c9517e828def8039caef778199c53474aae6cc8adfa8bc435,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726852714096373683,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kpbhk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dfd9f1a-148c-4dba-884a-8618b74f82d0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9
153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff3c00df26f158af79f61dc02b970ad39db9d4f0fcffefa0b719f4598158f62b,PodSandboxId:dee7980a50d73104b0c061a1ffa9347cb87e48985218f83b818efbed52f4a239,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726852709920410907,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gcvg4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 899b2b8c-9009-46c0-816b-781e85eb8b19,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.k
ubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:398409b93b45dca7c586c3035d250b5dc775d03ee8dd75f81e9e7427a559ed00,PodSandboxId:1bd48e5dcd4693a3b4c8340aba118e7ad278f521e69bcfd1541092778c515ae6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726852707455117279,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name:
kube-proxy-52r49,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d1124bd-e7cb-4239-a29d-c1d5b8870aff,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d6797d9f1c442f6743dde97cc9de50c155c5e2b00e1197ba4a088a93bd7c5db,PodSandboxId:2ef4ad9be22fb67138ab03ac3ed8e5cfb7f27022fb85eb427a1902ca45aec292,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726852706753264043,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6clt2,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: d73a0817-d84f-4269-9de0-1532287a07db,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a83a259fa257292715bb5b2d5b323b49763d5553518b528fc78fc079bb9ed45,PodSandboxId:3426320ea65936be4749a4579ea74dd7b8d3d6d6b1708f63e0490d36f8754cbc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726852706597265158,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-135993,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: bce625d942177d9274bb431f7a7012b2,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee0aced30e75021c727bf72254c618cb3ff71687fe9882fab00322be6da050f0,PodSandboxId:9317a52c9caac0dba626b3d134d689c2bf560e3e0ccc4367c435a9b9e631156e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726852706448342207,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f0e3932935569a5c49edd
0da5a87eba,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41c08dd315ead086282b3d4c8d4a5fb35136bd8d2824d8516b73a61281a9eeaf,PodSandboxId:1b66d16f83ebcbb613c67b790b5609e78babfb7bd452c17106f8a33ad0ed7f3a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726852706532089181,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fa5fbf3465d31545
76a11474b7b8548,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6001c238074303dee4a1968943cb28c0e10f360312e0284fc54018ba88a7f96d,PodSandboxId:2c6a8e58a88556b2103f93f3116a1d5ecc55470841ea358bf0c41f6c80d864ed,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726852706446603353,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 086260df87dea88f53b3b3ca08d61864,},Ann
otations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf7badc294591b4e6eb17ac796ab10e69706e71983d63ccdd57ff7c4e36ac50e,PodSandboxId:e228f7e109c3c3870cb95be64330287ff98d08fbf7a86af9f160c71d511053a4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726852693213530849,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kpbhk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dfd9f1a-148c-4dba-884a-8618b74f82d0,},Annotations:map[string]string{io.ku
bernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2a30264a8299336cfb5a4338bab42f8ef663f285ffa7fcb290de88635658a97,PodSandboxId:afa282bba6347297904c2a6423e39984da1a95a2caf8b23dd6857518cc0bb05d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726852250916118141,Labels:map[string]str
ing{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-df429,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ef8ed1eb-a7c6-4c49-9e46-924bad4d9577,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5054778f39bbb435a191b7cbdabfca614d8b6d64b7bce122294d50415d601787,PodSandboxId:6fda3c09e12fee3d9bfbdf163c989e0242d8464e853282bca12154b468cf1a1d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726852104287105818,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gcvg4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 899b2b8c-9009-46c0-816b-781e85eb8b19,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8792a3b1249ffb1a157682b07dec9a0edf11b81c44e53ea068e0fc6f4548be22,PodSandboxId:ed014d23a111faa2c65032e35b2ef554b33cd4cb8eddbc89e3b77efabb53a5ce,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726852092541516094,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6clt2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d73a0817-d84f-4269-9de0-1532287a07db,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4b462c3efaa10b1790ebacd5fe8845d6eb91643cb6db9b2830f33afe1744d57,PodSandboxId:1971096e9fdaaef4dfb856b23a044da810f3ea4c5d05511c7fc53105be9d664b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726852092275153244,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-52r49,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d1124bd-e7cb-4239-a29d-c1d5b8870aff,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db80f5e2505940b81bc20c39e98d873b35eff2e71aac748d8b406e21f5435fca,PodSandboxId:77a9434f5f03e0fce6c021d0cb97ce2bbe8dbb697371c561907dc8656adca49c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726852081504258492,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 086260df87dea88f53b3b3ca08d61864,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e70d74afe0f7f7c1e0f084aba99e9c488a1f999e94866b12072fdcbc24b0dad7,PodSandboxId:74a0a0888b0f65cfdbc3321666aaf8f377feaae2a4e3c132c5ed40ccd4468689,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1726852081538881038,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f0e3932935569a5c49edd0da5a87eba,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c3463872-2d04-4522-9014-6415993a7ff4 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	a6addfcb27c43       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       4                   f362cdeaa384c       storage-provisioner
	79d6dfc37ecea       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      About a minute ago   Running             kube-apiserver            3                   3426320ea6593       kube-apiserver-ha-135993
	9ee493ac17f0b       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      About a minute ago   Running             kube-controller-manager   2                   1b66d16f83ebc       kube-controller-manager-ha-135993
	09b2ab354660c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Exited              storage-provisioner       3                   f362cdeaa384c       storage-provisioner
	b045c28846679       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   ae90d8ac2b75a       busybox-7dff88458-df429
	3ca3f9a710f32       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      2 minutes ago        Running             kube-vip                  0                   a29604104e7ce       kube-vip-ha-135993
	5168567f83460       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      2 minutes ago        Running             coredns                   2                   eec2e8181267a       coredns-7c65d6cfc9-kpbhk
	ff3c00df26f15       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      2 minutes ago        Running             coredns                   1                   dee7980a50d73       coredns-7c65d6cfc9-gcvg4
	398409b93b45d       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      2 minutes ago        Running             kube-proxy                1                   1bd48e5dcd469       kube-proxy-52r49
	8d6797d9f1c44       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      2 minutes ago        Running             kindnet-cni               1                   2ef4ad9be22fb       kindnet-6clt2
	0a83a259fa257       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      2 minutes ago        Exited              kube-apiserver            2                   3426320ea6593       kube-apiserver-ha-135993
	41c08dd315ead       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      2 minutes ago        Exited              kube-controller-manager   1                   1b66d16f83ebc       kube-controller-manager-ha-135993
	ee0aced30e750       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      2 minutes ago        Running             etcd                      1                   9317a52c9caac       etcd-ha-135993
	6001c23807430       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      2 minutes ago        Running             kube-scheduler            1                   2c6a8e58a8855       kube-scheduler-ha-135993
	bf7badc294591       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      2 minutes ago        Exited              coredns                   1                   e228f7e109c3c       coredns-7c65d6cfc9-kpbhk
	d2a30264a8299       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   10 minutes ago       Exited              busybox                   0                   afa282bba6347       busybox-7dff88458-df429
	5054778f39bbb       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      12 minutes ago       Exited              coredns                   0                   6fda3c09e12fe       coredns-7c65d6cfc9-gcvg4
	8792a3b1249ff       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      12 minutes ago       Exited              kindnet-cni               0                   ed014d23a111f       kindnet-6clt2
	e4b462c3efaa1       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      12 minutes ago       Exited              kube-proxy                0                   1971096e9fdaa       kube-proxy-52r49
	e70d74afe0f7f       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      12 minutes ago       Exited              etcd                      0                   74a0a0888b0f6       etcd-ha-135993
	db80f5e250594       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      12 minutes ago       Exited              kube-scheduler            0                   77a9434f5f03e       kube-scheduler-ha-135993
	
	
	==> coredns [5054778f39bbb435a191b7cbdabfca614d8b6d64b7bce122294d50415d601787] <==
	[INFO] 10.244.2.2:50089 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000170402s
	[INFO] 10.244.2.2:41205 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001201877s
	[INFO] 10.244.2.2:49094 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000154615s
	[INFO] 10.244.2.2:54226 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000116561s
	[INFO] 10.244.2.2:56885 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000137064s
	[INFO] 10.244.1.2:43199 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000133082s
	[INFO] 10.244.1.2:54300 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000122573s
	[INFO] 10.244.1.2:57535 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000095892s
	[INFO] 10.244.1.2:45845 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000088385s
	[INFO] 10.244.0.4:53452 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000193594s
	[INFO] 10.244.0.4:46571 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000075164s
	[INFO] 10.244.2.2:44125 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000166147s
	[INFO] 10.244.2.2:59364 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000113432s
	[INFO] 10.244.2.2:54562 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000112311s
	[INFO] 10.244.1.2:60066 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132637s
	[INFO] 10.244.1.2:43717 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00017413s
	[INFO] 10.244.1.2:51684 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000156522s
	[INFO] 10.244.0.4:56213 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000141144s
	[INFO] 10.244.2.2:56175 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000117658s
	[INFO] 10.244.2.2:59810 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000111868s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1781&timeout=8m9s&timeoutSeconds=489&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1787&timeout=7m13s&timeoutSeconds=433&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1821&timeout=9m22s&timeoutSeconds=562&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [5168567f834602e7f761a3eff5e14aa6b5d1d15b02eb85e90b1e331fc1b73fed] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:35934->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:35934->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [bf7badc294591b4e6eb17ac796ab10e69706e71983d63ccdd57ff7c4e36ac50e] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:53701 - 37780 "HINFO IN 4226797056785722848.48568784628734717. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.011790862s
	
	
	==> coredns [ff3c00df26f158af79f61dc02b970ad39db9d4f0fcffefa0b719f4598158f62b] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.7:39336->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.7:39308->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.7:39336->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.7:39308->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-135993
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-135993
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0626f22cf0d915d75e291a5bce701f94395056e1
	                    minikube.k8s.io/name=ha-135993
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_20T17_08_08_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 17:08:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-135993
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 17:20:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 17:19:11 +0000   Fri, 20 Sep 2024 17:08:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 17:19:11 +0000   Fri, 20 Sep 2024 17:08:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 17:19:11 +0000   Fri, 20 Sep 2024 17:08:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 17:19:11 +0000   Fri, 20 Sep 2024 17:08:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.60
	  Hostname:    ha-135993
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6e83ceee6b834466a3a10733ff3c06b4
	  System UUID:                6e83ceee-6b83-4466-a3a1-0733ff3c06b4
	  Boot ID:                    ddcdaa90-2381-4c26-932e-b18d04f91d88
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-df429              0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-7c65d6cfc9-gcvg4             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     12m
	  kube-system                 coredns-7c65d6cfc9-kpbhk             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     12m
	  kube-system                 etcd-ha-135993                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-6clt2                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-135993             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-135993    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-52r49                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-135993             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-135993                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 106s                   kube-proxy       
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  12m                    kubelet          Node ha-135993 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     12m                    kubelet          Node ha-135993 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    12m                    kubelet          Node ha-135993 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 12m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           12m                    node-controller  Node ha-135993 event: Registered Node ha-135993 in Controller
	  Normal   NodeReady                12m                    kubelet          Node ha-135993 status is now: NodeReady
	  Normal   RegisteredNode           11m                    node-controller  Node ha-135993 event: Registered Node ha-135993 in Controller
	  Normal   RegisteredNode           10m                    node-controller  Node ha-135993 event: Registered Node ha-135993 in Controller
	  Normal   NodeNotReady             3m2s (x2 over 3m27s)   kubelet          Node ha-135993 status is now: NodeNotReady
	  Warning  ContainerGCFailed        2m50s (x2 over 3m50s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           108s                   node-controller  Node ha-135993 event: Registered Node ha-135993 in Controller
	  Normal   RegisteredNode           102s                   node-controller  Node ha-135993 event: Registered Node ha-135993 in Controller
	  Normal   RegisteredNode           39s                    node-controller  Node ha-135993 event: Registered Node ha-135993 in Controller
	
	
	Name:               ha-135993-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-135993-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0626f22cf0d915d75e291a5bce701f94395056e1
	                    minikube.k8s.io/name=ha-135993
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_20T17_09_05_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 17:09:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-135993-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 17:20:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 17:19:54 +0000   Fri, 20 Sep 2024 17:19:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 17:19:54 +0000   Fri, 20 Sep 2024 17:19:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 17:19:54 +0000   Fri, 20 Sep 2024 17:19:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 17:19:54 +0000   Fri, 20 Sep 2024 17:19:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.227
	  Hostname:    ha-135993-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 50c529298e8f4fbb9207cda8fc4b8abe
	  System UUID:                50c52929-8e8f-4fbb-9207-cda8fc4b8abe
	  Boot ID:                    c2e01ab2-7f61-4e96-86cf-402743e36b78
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-cw8r4                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-135993-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-5m4r8                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-135993-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-135993-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-z6xqt                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-135993-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-135993-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 101s                   kube-proxy       
	  Normal  Starting                 11m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  11m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)      kubelet          Node ha-135993-m02 status is now: NodeHasSufficientMemory
	  Normal  CIDRAssignmentFailed     11m                    cidrAllocator    Node ha-135993-m02 status is now: CIDRAssignmentFailed
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)      kubelet          Node ha-135993-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m (x7 over 11m)      kubelet          Node ha-135993-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           11m                    node-controller  Node ha-135993-m02 event: Registered Node ha-135993-m02 in Controller
	  Normal  RegisteredNode           11m                    node-controller  Node ha-135993-m02 event: Registered Node ha-135993-m02 in Controller
	  Normal  RegisteredNode           10m                    node-controller  Node ha-135993-m02 event: Registered Node ha-135993-m02 in Controller
	  Normal  NodeNotReady             8m21s                  node-controller  Node ha-135993-m02 status is now: NodeNotReady
	  Normal  Starting                 2m12s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m12s (x8 over 2m12s)  kubelet          Node ha-135993-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m12s (x8 over 2m12s)  kubelet          Node ha-135993-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m12s (x7 over 2m12s)  kubelet          Node ha-135993-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m12s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           108s                   node-controller  Node ha-135993-m02 event: Registered Node ha-135993-m02 in Controller
	  Normal  RegisteredNode           102s                   node-controller  Node ha-135993-m02 event: Registered Node ha-135993-m02 in Controller
	  Normal  RegisteredNode           39s                    node-controller  Node ha-135993-m02 event: Registered Node ha-135993-m02 in Controller
	
	
	Name:               ha-135993-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-135993-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0626f22cf0d915d75e291a5bce701f94395056e1
	                    minikube.k8s.io/name=ha-135993
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_20T17_10_21_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 17:10:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-135993-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 17:20:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 17:20:27 +0000   Fri, 20 Sep 2024 17:19:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 17:20:27 +0000   Fri, 20 Sep 2024 17:19:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 17:20:27 +0000   Fri, 20 Sep 2024 17:19:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 17:20:27 +0000   Fri, 20 Sep 2024 17:19:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.133
	  Hostname:    ha-135993-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a16666848f8545f6bbb9419c97d0a0cd
	  System UUID:                a1666684-8f85-45f6-bbb9-419c97d0a0cd
	  Boot ID:                    7879d885-6e92-4f83-ae55-70f8cd4322a1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-ksx56                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-135993-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-hcqf8                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-ha-135993-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-ha-135993-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-45c9m                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-ha-135993-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-vip-ha-135993-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 44s                kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   CIDRAssignmentFailed     10m                cidrAllocator    Node ha-135993-m03 status is now: CIDRAssignmentFailed
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node ha-135993-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node ha-135993-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node ha-135993-m03 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                node-controller  Node ha-135993-m03 event: Registered Node ha-135993-m03 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-135993-m03 event: Registered Node ha-135993-m03 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-135993-m03 event: Registered Node ha-135993-m03 in Controller
	  Normal   RegisteredNode           108s               node-controller  Node ha-135993-m03 event: Registered Node ha-135993-m03 in Controller
	  Normal   RegisteredNode           102s               node-controller  Node ha-135993-m03 event: Registered Node ha-135993-m03 in Controller
	  Normal   NodeNotReady             68s                node-controller  Node ha-135993-m03 status is now: NodeNotReady
	  Normal   Starting                 60s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  60s                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 60s (x3 over 60s)  kubelet          Node ha-135993-m03 has been rebooted, boot id: 7879d885-6e92-4f83-ae55-70f8cd4322a1
	  Normal   NodeHasSufficientMemory  60s (x4 over 60s)  kubelet          Node ha-135993-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    60s (x4 over 60s)  kubelet          Node ha-135993-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     60s (x4 over 60s)  kubelet          Node ha-135993-m03 status is now: NodeHasSufficientPID
	  Normal   NodeNotReady             60s                kubelet          Node ha-135993-m03 status is now: NodeNotReady
	  Normal   NodeReady                60s (x2 over 60s)  kubelet          Node ha-135993-m03 status is now: NodeReady
	  Normal   RegisteredNode           39s                node-controller  Node ha-135993-m03 event: Registered Node ha-135993-m03 in Controller
	
	
	Name:               ha-135993-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-135993-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0626f22cf0d915d75e291a5bce701f94395056e1
	                    minikube.k8s.io/name=ha-135993
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_20T17_11_21_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 17:11:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-135993-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 17:20:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 17:20:49 +0000   Fri, 20 Sep 2024 17:20:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 17:20:49 +0000   Fri, 20 Sep 2024 17:20:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 17:20:49 +0000   Fri, 20 Sep 2024 17:20:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 17:20:49 +0000   Fri, 20 Sep 2024 17:20:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.101
	  Hostname:    ha-135993-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 16a282b7a18241dba73a5c13e70f4f98
	  System UUID:                16a282b7-a182-41db-a73a-5c13e70f4f98
	  Boot ID:                    ab1e946c-f99e-4f2e-818a-87907a330fda
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.4.0/24
	PodCIDRs:                     10.244.4.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-88sbs       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m36s
	  kube-system                 kube-proxy-2q8mx    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4s                     kube-proxy       
	  Normal   Starting                 9m32s                  kube-proxy       
	  Normal   NodeAllocatableEnforced  9m36s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           9m36s                  node-controller  Node ha-135993-m04 event: Registered Node ha-135993-m04 in Controller
	  Normal   CIDRAssignmentFailed     9m36s                  cidrAllocator    Node ha-135993-m04 status is now: CIDRAssignmentFailed
	  Normal   CIDRAssignmentFailed     9m36s                  cidrAllocator    Node ha-135993-m04 status is now: CIDRAssignmentFailed
	  Normal   RegisteredNode           9m36s                  node-controller  Node ha-135993-m04 event: Registered Node ha-135993-m04 in Controller
	  Normal   RegisteredNode           9m36s                  node-controller  Node ha-135993-m04 event: Registered Node ha-135993-m04 in Controller
	  Normal   NodeHasSufficientMemory  9m36s (x2 over 9m37s)  kubelet          Node ha-135993-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m36s (x2 over 9m37s)  kubelet          Node ha-135993-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m36s (x2 over 9m37s)  kubelet          Node ha-135993-m04 status is now: NodeHasSufficientPID
	  Normal   NodeReady                9m17s                  kubelet          Node ha-135993-m04 status is now: NodeReady
	  Normal   RegisteredNode           108s                   node-controller  Node ha-135993-m04 event: Registered Node ha-135993-m04 in Controller
	  Normal   RegisteredNode           102s                   node-controller  Node ha-135993-m04 event: Registered Node ha-135993-m04 in Controller
	  Normal   NodeNotReady             68s                    node-controller  Node ha-135993-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           39s                    node-controller  Node ha-135993-m04 event: Registered Node ha-135993-m04 in Controller
	  Normal   Starting                 8s                     kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  8s                     kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  8s (x2 over 8s)        kubelet          Node ha-135993-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8s (x2 over 8s)        kubelet          Node ha-135993-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8s (x2 over 8s)        kubelet          Node ha-135993-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 8s                     kubelet          Node ha-135993-m04 has been rebooted, boot id: ab1e946c-f99e-4f2e-818a-87907a330fda
	  Normal   NodeReady                8s                     kubelet          Node ha-135993-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.057997] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064240] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.169257] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.120861] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.270464] systemd-fstab-generator[652]: Ignoring "noauto" option for root device
	[  +4.125709] systemd-fstab-generator[747]: Ignoring "noauto" option for root device
	[Sep20 17:08] systemd-fstab-generator[882]: Ignoring "noauto" option for root device
	[  +0.057676] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.984086] systemd-fstab-generator[1298]: Ignoring "noauto" option for root device
	[  +0.083524] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.134244] kauditd_printk_skb: 33 callbacks suppressed
	[ +11.488548] kauditd_printk_skb: 26 callbacks suppressed
	[Sep20 17:09] kauditd_printk_skb: 24 callbacks suppressed
	[Sep20 17:18] systemd-fstab-generator[3639]: Ignoring "noauto" option for root device
	[  +0.144705] systemd-fstab-generator[3651]: Ignoring "noauto" option for root device
	[  +0.185360] systemd-fstab-generator[3665]: Ignoring "noauto" option for root device
	[  +0.147124] systemd-fstab-generator[3677]: Ignoring "noauto" option for root device
	[  +0.283768] systemd-fstab-generator[3705]: Ignoring "noauto" option for root device
	[  +9.776273] systemd-fstab-generator[3823]: Ignoring "noauto" option for root device
	[  +0.092434] kauditd_printk_skb: 110 callbacks suppressed
	[  +5.083165] kauditd_printk_skb: 98 callbacks suppressed
	[ +10.060202] kauditd_printk_skb: 10 callbacks suppressed
	[  +9.075908] kauditd_printk_skb: 2 callbacks suppressed
	[Sep20 17:19] kauditd_printk_skb: 5 callbacks suppressed
	[  +6.821602] kauditd_printk_skb: 5 callbacks suppressed
	
	
	==> etcd [e70d74afe0f7f7c1e0f084aba99e9c488a1f999e94866b12072fdcbc24b0dad7] <==
	2024/09/20 17:16:41 WARNING: [core] [Server #9] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-09-20T17:16:42.078663Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":4137279806849445103,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-09-20T17:16:42.104881Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.60:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-20T17:16:42.105542Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.60:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-20T17:16:42.106751Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"1a622f206f99396a","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-09-20T17:16:42.107012Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"a8909dfe06e7dd47"}
	{"level":"info","ts":"2024-09-20T17:16:42.107146Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"a8909dfe06e7dd47"}
	{"level":"info","ts":"2024-09-20T17:16:42.107277Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"a8909dfe06e7dd47"}
	{"level":"info","ts":"2024-09-20T17:16:42.107520Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"1a622f206f99396a","remote-peer-id":"a8909dfe06e7dd47"}
	{"level":"info","ts":"2024-09-20T17:16:42.107575Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"1a622f206f99396a","remote-peer-id":"a8909dfe06e7dd47"}
	{"level":"info","ts":"2024-09-20T17:16:42.107662Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"1a622f206f99396a","remote-peer-id":"a8909dfe06e7dd47"}
	{"level":"info","ts":"2024-09-20T17:16:42.107715Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"a8909dfe06e7dd47"}
	{"level":"info","ts":"2024-09-20T17:16:42.107723Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"27688c791509f222"}
	{"level":"info","ts":"2024-09-20T17:16:42.107732Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"27688c791509f222"}
	{"level":"info","ts":"2024-09-20T17:16:42.107756Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"27688c791509f222"}
	{"level":"info","ts":"2024-09-20T17:16:42.107846Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"1a622f206f99396a","remote-peer-id":"27688c791509f222"}
	{"level":"info","ts":"2024-09-20T17:16:42.107926Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"1a622f206f99396a","remote-peer-id":"27688c791509f222"}
	{"level":"info","ts":"2024-09-20T17:16:42.107990Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"1a622f206f99396a","remote-peer-id":"27688c791509f222"}
	{"level":"info","ts":"2024-09-20T17:16:42.108013Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"27688c791509f222"}
	{"level":"info","ts":"2024-09-20T17:16:42.111167Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.60:2380"}
	{"level":"warn","ts":"2024-09-20T17:16:42.111192Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"2.018396737s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: server stopped"}
	{"level":"info","ts":"2024-09-20T17:16:42.111370Z","caller":"traceutil/trace.go:171","msg":"trace[1068001185] range","detail":"{range_begin:; range_end:; }","duration":"2.018588589s","start":"2024-09-20T17:16:40.092767Z","end":"2024-09-20T17:16:42.111355Z","steps":["trace[1068001185] 'agreement among raft nodes before linearized reading'  (duration: 2.018394833s)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T17:16:42.111316Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.60:2380"}
	{"level":"info","ts":"2024-09-20T17:16:42.111464Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-135993","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.60:2380"],"advertise-client-urls":["https://192.168.39.60:2379"]}
	{"level":"error","ts":"2024-09-20T17:16:42.111421Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[+]data_corruption ok\n[+]serializable_read ok\n[-]linearizable_read failed: etcdserver: server stopped\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2141\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2519\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2943\nnet/http.(*conn).serve\n\tnet/http/server.go:2014"}
	
	
	==> etcd [ee0aced30e75021c727bf72254c618cb3ff71687fe9882fab00322be6da050f0] <==
	{"level":"warn","ts":"2024-09-20T17:19:54.416554Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.133:2380/version","remote-member-id":"27688c791509f222","error":"Get \"https://192.168.39.133:2380/version\": dial tcp 192.168.39.133:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-20T17:19:54.416620Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"27688c791509f222","error":"Get \"https://192.168.39.133:2380/version\": dial tcp 192.168.39.133:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-20T17:19:57.450375Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"27688c791509f222","rtt":"0s","error":"dial tcp 192.168.39.133:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-20T17:19:57.452943Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"27688c791509f222","rtt":"0s","error":"dial tcp 192.168.39.133:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-20T17:19:58.421004Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.133:2380/version","remote-member-id":"27688c791509f222","error":"Get \"https://192.168.39.133:2380/version\": dial tcp 192.168.39.133:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-20T17:19:58.421073Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"27688c791509f222","error":"Get \"https://192.168.39.133:2380/version\": dial tcp 192.168.39.133:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-20T17:20:02.422907Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.133:2380/version","remote-member-id":"27688c791509f222","error":"Get \"https://192.168.39.133:2380/version\": dial tcp 192.168.39.133:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-20T17:20:02.422986Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"27688c791509f222","error":"Get \"https://192.168.39.133:2380/version\": dial tcp 192.168.39.133:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-20T17:20:02.450556Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"27688c791509f222","rtt":"0s","error":"dial tcp 192.168.39.133:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-20T17:20:02.453857Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"27688c791509f222","rtt":"0s","error":"dial tcp 192.168.39.133:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-20T17:20:06.424970Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.133:2380/version","remote-member-id":"27688c791509f222","error":"Get \"https://192.168.39.133:2380/version\": dial tcp 192.168.39.133:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-20T17:20:06.425089Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"27688c791509f222","error":"Get \"https://192.168.39.133:2380/version\": dial tcp 192.168.39.133:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-20T17:20:07.450676Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"27688c791509f222","rtt":"0s","error":"dial tcp 192.168.39.133:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-20T17:20:07.454945Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"27688c791509f222","rtt":"0s","error":"dial tcp 192.168.39.133:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-20T17:20:10.426602Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.133:2380/version","remote-member-id":"27688c791509f222","error":"Get \"https://192.168.39.133:2380/version\": dial tcp 192.168.39.133:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-20T17:20:10.426672Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"27688c791509f222","error":"Get \"https://192.168.39.133:2380/version\": dial tcp 192.168.39.133:2380: connect: connection refused"}
	{"level":"info","ts":"2024-09-20T17:20:11.578043Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"27688c791509f222"}
	{"level":"info","ts":"2024-09-20T17:20:11.578145Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"1a622f206f99396a","remote-peer-id":"27688c791509f222"}
	{"level":"info","ts":"2024-09-20T17:20:11.579791Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"1a622f206f99396a","remote-peer-id":"27688c791509f222"}
	{"level":"info","ts":"2024-09-20T17:20:11.615945Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"1a622f206f99396a","to":"27688c791509f222","stream-type":"stream Message"}
	{"level":"info","ts":"2024-09-20T17:20:11.616090Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"1a622f206f99396a","remote-peer-id":"27688c791509f222"}
	{"level":"info","ts":"2024-09-20T17:20:11.616269Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"1a622f206f99396a","to":"27688c791509f222","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-09-20T17:20:11.616361Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"1a622f206f99396a","remote-peer-id":"27688c791509f222"}
	{"level":"warn","ts":"2024-09-20T17:20:12.451749Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"27688c791509f222","rtt":"0s","error":"dial tcp 192.168.39.133:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-20T17:20:12.455118Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"27688c791509f222","rtt":"0s","error":"dial tcp 192.168.39.133:2380: connect: connection refused"}
	
	
	==> kernel <==
	 17:20:58 up 13 min,  0 users,  load average: 0.41, 0.55, 0.36
	Linux ha-135993 5.10.207 #1 SMP Fri Sep 20 03:13:51 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [8792a3b1249ffb1a157682b07dec9a0edf11b81c44e53ea068e0fc6f4548be22] <==
	I0920 17:16:13.582883       1 main.go:295] Handling node with IPs: map[192.168.39.60:{}]
	I0920 17:16:13.583002       1 main.go:299] handling current node
	I0920 17:16:13.583044       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0920 17:16:13.583063       1 main.go:322] Node ha-135993-m02 has CIDR [10.244.1.0/24] 
	I0920 17:16:13.583693       1 main.go:295] Handling node with IPs: map[192.168.39.133:{}]
	I0920 17:16:13.583843       1 main.go:322] Node ha-135993-m03 has CIDR [10.244.2.0/24] 
	I0920 17:16:13.583956       1 main.go:295] Handling node with IPs: map[192.168.39.101:{}]
	I0920 17:16:13.583979       1 main.go:322] Node ha-135993-m04 has CIDR [10.244.4.0/24] 
	I0920 17:16:23.587337       1 main.go:295] Handling node with IPs: map[192.168.39.60:{}]
	I0920 17:16:23.587462       1 main.go:299] handling current node
	I0920 17:16:23.587492       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0920 17:16:23.587510       1 main.go:322] Node ha-135993-m02 has CIDR [10.244.1.0/24] 
	I0920 17:16:23.587650       1 main.go:295] Handling node with IPs: map[192.168.39.133:{}]
	I0920 17:16:23.587682       1 main.go:322] Node ha-135993-m03 has CIDR [10.244.2.0/24] 
	I0920 17:16:23.587748       1 main.go:295] Handling node with IPs: map[192.168.39.101:{}]
	I0920 17:16:23.587767       1 main.go:322] Node ha-135993-m04 has CIDR [10.244.4.0/24] 
	I0920 17:16:33.583265       1 main.go:295] Handling node with IPs: map[192.168.39.133:{}]
	I0920 17:16:33.583410       1 main.go:322] Node ha-135993-m03 has CIDR [10.244.2.0/24] 
	I0920 17:16:33.583747       1 main.go:295] Handling node with IPs: map[192.168.39.101:{}]
	I0920 17:16:33.583866       1 main.go:322] Node ha-135993-m04 has CIDR [10.244.4.0/24] 
	I0920 17:16:33.584108       1 main.go:295] Handling node with IPs: map[192.168.39.60:{}]
	I0920 17:16:33.584178       1 main.go:299] handling current node
	I0920 17:16:33.584280       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0920 17:16:33.584316       1 main.go:322] Node ha-135993-m02 has CIDR [10.244.1.0/24] 
	E0920 17:16:34.638841       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=1820&timeout=5m1s&timeoutSeconds=301&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	
	
	==> kindnet [8d6797d9f1c442f6743dde97cc9de50c155c5e2b00e1197ba4a088a93bd7c5db] <==
	I0920 17:20:28.008618       1 main.go:322] Node ha-135993-m04 has CIDR [10.244.4.0/24] 
	I0920 17:20:38.016574       1 main.go:295] Handling node with IPs: map[192.168.39.101:{}]
	I0920 17:20:38.016737       1 main.go:322] Node ha-135993-m04 has CIDR [10.244.4.0/24] 
	I0920 17:20:38.016925       1 main.go:295] Handling node with IPs: map[192.168.39.60:{}]
	I0920 17:20:38.016965       1 main.go:299] handling current node
	I0920 17:20:38.017022       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0920 17:20:38.017064       1 main.go:322] Node ha-135993-m02 has CIDR [10.244.1.0/24] 
	I0920 17:20:38.017152       1 main.go:295] Handling node with IPs: map[192.168.39.133:{}]
	I0920 17:20:38.017177       1 main.go:322] Node ha-135993-m03 has CIDR [10.244.2.0/24] 
	I0920 17:20:48.016512       1 main.go:295] Handling node with IPs: map[192.168.39.101:{}]
	I0920 17:20:48.016618       1 main.go:322] Node ha-135993-m04 has CIDR [10.244.4.0/24] 
	I0920 17:20:48.016902       1 main.go:295] Handling node with IPs: map[192.168.39.60:{}]
	I0920 17:20:48.016915       1 main.go:299] handling current node
	I0920 17:20:48.016926       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0920 17:20:48.016930       1 main.go:322] Node ha-135993-m02 has CIDR [10.244.1.0/24] 
	I0920 17:20:48.016987       1 main.go:295] Handling node with IPs: map[192.168.39.133:{}]
	I0920 17:20:48.016991       1 main.go:322] Node ha-135993-m03 has CIDR [10.244.2.0/24] 
	I0920 17:20:58.009473       1 main.go:295] Handling node with IPs: map[192.168.39.101:{}]
	I0920 17:20:58.009528       1 main.go:322] Node ha-135993-m04 has CIDR [10.244.4.0/24] 
	I0920 17:20:58.009692       1 main.go:295] Handling node with IPs: map[192.168.39.60:{}]
	I0920 17:20:58.009699       1 main.go:299] handling current node
	I0920 17:20:58.009723       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0920 17:20:58.009728       1 main.go:322] Node ha-135993-m02 has CIDR [10.244.1.0/24] 
	I0920 17:20:58.009827       1 main.go:295] Handling node with IPs: map[192.168.39.133:{}]
	I0920 17:20:58.009834       1 main.go:322] Node ha-135993-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [0a83a259fa257292715bb5b2d5b323b49763d5553518b528fc78fc079bb9ed45] <==
	I0920 17:18:26.910368       1 options.go:228] external host was not specified, using 192.168.39.60
	I0920 17:18:26.922578       1 server.go:142] Version: v1.31.1
	I0920 17:18:26.922711       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 17:18:27.933754       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0920 17:18:27.943305       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0920 17:18:27.944115       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0920 17:18:27.944175       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0920 17:18:27.944471       1 instance.go:232] Using reconciler: lease
	W0920 17:18:47.933534       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0920 17:18:47.933541       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0920 17:18:47.945361       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	W0920 17:18:47.946116       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	
	
	==> kube-apiserver [79d6dfc37ecea1b70bf42810eb355d3d33df6f44ad8813cb64a541d83b09b12f] <==
	I0920 17:19:11.955361       1 crd_finalizer.go:269] Starting CRDFinalizer
	I0920 17:19:12.021969       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0920 17:19:12.022070       1 policy_source.go:224] refreshing policies
	I0920 17:19:12.041394       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0920 17:19:12.047493       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0920 17:19:12.047533       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0920 17:19:12.048418       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0920 17:19:12.048681       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0920 17:19:12.049295       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0920 17:19:12.049325       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0920 17:19:12.052180       1 shared_informer.go:320] Caches are synced for configmaps
	I0920 17:19:12.056761       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0920 17:19:12.058751       1 aggregator.go:171] initial CRD sync complete...
	I0920 17:19:12.059284       1 autoregister_controller.go:144] Starting autoregister controller
	I0920 17:19:12.059351       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0920 17:19:12.059376       1 cache.go:39] Caches are synced for autoregister controller
	I0920 17:19:12.069029       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	W0920 17:19:12.083618       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.133 192.168.39.227]
	I0920 17:19:12.085173       1 controller.go:615] quota admission added evaluator for: endpoints
	I0920 17:19:12.099675       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0920 17:19:12.107067       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0920 17:19:12.111667       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0920 17:19:12.954961       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0920 17:19:13.320640       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.133 192.168.39.227 192.168.39.60]
	W0920 17:19:23.327327       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.227 192.168.39.60]
	
	
	==> kube-controller-manager [41c08dd315ead086282b3d4c8d4a5fb35136bd8d2824d8516b73a61281a9eeaf] <==
	I0920 17:18:28.081643       1 serving.go:386] Generated self-signed cert in-memory
	I0920 17:18:28.304063       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I0920 17:18:28.304160       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 17:18:28.305979       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0920 17:18:28.306163       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0920 17:18:28.306740       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0920 17:18:28.306848       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0920 17:18:48.957417       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.60:8443/healthz\": dial tcp 192.168.39.60:8443: connect: connection refused"
	
	
	==> kube-controller-manager [9ee493ac17f0bde4d555b58e8b816b325426800736be6cff626867aa80e21c96] <==
	I0920 17:19:49.166790       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-135993-m04"
	I0920 17:19:49.166965       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-135993-m03"
	I0920 17:19:49.170638       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-135993-m04"
	I0920 17:19:49.193401       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-135993-m04"
	I0920 17:19:49.199834       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-135993-m03"
	I0920 17:19:49.318266       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="13.972272ms"
	I0920 17:19:49.318522       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="54.168µs"
	I0920 17:19:50.515993       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-135993-m03"
	I0920 17:19:54.093792       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-135993-m02"
	I0920 17:19:54.419119       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-135993-m03"
	I0920 17:19:57.567581       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-135993-m03"
	I0920 17:19:57.588281       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-135993-m03"
	I0920 17:19:58.395708       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="78.386µs"
	I0920 17:19:59.400941       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-135993-m03"
	I0920 17:20:00.595712       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-135993-m04"
	I0920 17:20:04.493737       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-135993-m04"
	I0920 17:20:17.709159       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="32.236581ms"
	I0920 17:20:17.709403       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="60.126µs"
	I0920 17:20:18.292799       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-135993-m04"
	I0920 17:20:18.387020       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-135993-m04"
	I0920 17:20:27.832727       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-135993-m03"
	I0920 17:20:49.408315       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-135993-m04"
	I0920 17:20:49.408561       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-135993-m04"
	I0920 17:20:49.431027       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-135993-m04"
	I0920 17:20:49.623837       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-135993-m04"
	
	
	==> kube-proxy [398409b93b45dca7c586c3035d250b5dc775d03ee8dd75f81e9e7427a559ed00] <==
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0920 17:18:30.863684       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-135993\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0920 17:18:33.934951       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-135993\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0920 17:18:37.006676       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-135993\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0920 17:18:43.151468       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-135993\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0920 17:18:52.367863       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-135993\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0920 17:19:10.800108       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-135993\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0920 17:19:10.800288       1 server.go:646] "Can't determine this node's IP, assuming loopback; if this is incorrect, please set the --bind-address flag"
	E0920 17:19:10.800455       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 17:19:10.839742       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0920 17:19:10.839859       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0920 17:19:10.839906       1 server_linux.go:169] "Using iptables Proxier"
	I0920 17:19:10.842371       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 17:19:10.842727       1 server.go:483] "Version info" version="v1.31.1"
	I0920 17:19:10.842768       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 17:19:10.849519       1 config.go:199] "Starting service config controller"
	I0920 17:19:10.849690       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 17:19:10.849811       1 config.go:105] "Starting endpoint slice config controller"
	I0920 17:19:10.849828       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 17:19:10.852154       1 config.go:328] "Starting node config controller"
	I0920 17:19:10.852251       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 17:19:11.549870       1 shared_informer.go:320] Caches are synced for service config
	I0920 17:19:11.550101       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0920 17:19:11.552694       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [e4b462c3efaa10b1790ebacd5fe8845d6eb91643cb6db9b2830f33afe1744d57] <==
	E0920 17:15:32.687994       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1753\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 17:15:35.759924       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1753": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 17:15:35.760051       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1753\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 17:15:35.760194       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1753": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 17:15:35.760274       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1753\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 17:15:35.760448       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-135993&resourceVersion=1752": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 17:15:35.760489       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-135993&resourceVersion=1752\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 17:15:41.903179       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-135993&resourceVersion=1752": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 17:15:41.903457       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-135993&resourceVersion=1752\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 17:15:41.903661       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1753": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 17:15:41.904041       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1753\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 17:15:41.904397       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1753": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 17:15:41.904518       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1753\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 17:15:51.120685       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1753": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 17:15:51.120949       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1753\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 17:15:54.191424       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1753": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 17:15:54.191489       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1753\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 17:15:54.191594       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-135993&resourceVersion=1752": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 17:15:54.191627       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-135993&resourceVersion=1752\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 17:16:09.550977       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1753": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 17:16:09.551110       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1753\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 17:16:12.623454       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1753": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 17:16:12.623523       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1753\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 17:16:15.697679       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-135993&resourceVersion=1752": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 17:16:15.697975       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-135993&resourceVersion=1752\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	
	
	==> kube-scheduler [6001c238074303dee4a1968943cb28c0e10f360312e0284fc54018ba88a7f96d] <==
	W0920 17:19:04.361363       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.60:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.60:8443: connect: connection refused
	E0920 17:19:04.361505       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.60:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.60:8443: connect: connection refused" logger="UnhandledError"
	W0920 17:19:04.542616       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.60:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.60:8443: connect: connection refused
	E0920 17:19:04.542680       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.168.39.60:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.60:8443: connect: connection refused" logger="UnhandledError"
	W0920 17:19:05.333894       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.60:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.60:8443: connect: connection refused
	E0920 17:19:05.333976       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.168.39.60:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.39.60:8443: connect: connection refused" logger="UnhandledError"
	W0920 17:19:05.571440       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.60:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.60:8443: connect: connection refused
	E0920 17:19:05.571516       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.168.39.60:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.60:8443: connect: connection refused" logger="UnhandledError"
	W0920 17:19:06.147577       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.60:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.60:8443: connect: connection refused
	E0920 17:19:06.147638       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.168.39.60:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.39.60:8443: connect: connection refused" logger="UnhandledError"
	W0920 17:19:06.545565       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.60:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.60:8443: connect: connection refused
	E0920 17:19:06.545687       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.39.60:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.39.60:8443: connect: connection refused" logger="UnhandledError"
	W0920 17:19:07.827787       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.60:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.60:8443: connect: connection refused
	E0920 17:19:07.827861       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://192.168.39.60:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.39.60:8443: connect: connection refused" logger="UnhandledError"
	W0920 17:19:08.254547       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.60:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.60:8443: connect: connection refused
	E0920 17:19:08.254603       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.168.39.60:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.39.60:8443: connect: connection refused" logger="UnhandledError"
	W0920 17:19:08.299073       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.60:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.60:8443: connect: connection refused
	E0920 17:19:08.299147       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.39.60:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.39.60:8443: connect: connection refused" logger="UnhandledError"
	W0920 17:19:08.378967       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.60:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.60:8443: connect: connection refused
	E0920 17:19:08.379013       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.39.60:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.60:8443: connect: connection refused" logger="UnhandledError"
	W0920 17:19:08.853713       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.60:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.60:8443: connect: connection refused
	E0920 17:19:08.853776       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.168.39.60:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.39.60:8443: connect: connection refused" logger="UnhandledError"
	W0920 17:19:08.880684       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.60:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.60:8443: connect: connection refused
	E0920 17:19:08.880767       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.168.39.60:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.39.60:8443: connect: connection refused" logger="UnhandledError"
	I0920 17:19:25.963005       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [db80f5e2505940b81bc20c39e98d873b35eff2e71aac748d8b406e21f5435fca] <==
	I0920 17:11:21.277247       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-w6gf8" node="ha-135993-m04"
	E0920 17:11:21.344572       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-n6xl6\": pod kindnet-n6xl6 is already assigned to node \"ha-135993-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-n6xl6" node="ha-135993-m04"
	E0920 17:11:21.344755       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-n6xl6\": pod kindnet-n6xl6 is already assigned to node \"ha-135993-m04\"" pod="kube-system/kindnet-n6xl6"
	E0920 17:11:21.388481       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-jfsxq\": pod kube-proxy-jfsxq is already assigned to node \"ha-135993-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-jfsxq" node="ha-135993-m04"
	E0920 17:11:21.388679       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-jfsxq\": pod kube-proxy-jfsxq is already assigned to node \"ha-135993-m04\"" pod="kube-system/kube-proxy-jfsxq"
	E0920 17:11:21.399720       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-svxp4\": pod kindnet-svxp4 is already assigned to node \"ha-135993-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-svxp4" node="ha-135993-m04"
	E0920 17:11:21.401135       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod a758ff76-3e8c-40c1-9742-2fbcddd4aa87(kube-system/kindnet-svxp4) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-svxp4"
	E0920 17:11:21.401322       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-svxp4\": pod kindnet-svxp4 is already assigned to node \"ha-135993-m04\"" pod="kube-system/kindnet-svxp4"
	I0920 17:11:21.401439       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-svxp4" node="ha-135993-m04"
	E0920 17:16:28.130642       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services)" logger="UnhandledError"
	E0920 17:16:28.294848       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)" logger="UnhandledError"
	E0920 17:16:29.425148       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)" logger="UnhandledError"
	E0920 17:16:29.523859       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods)" logger="UnhandledError"
	E0920 17:16:29.788444       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)" logger="UnhandledError"
	E0920 17:16:32.035022       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes)" logger="UnhandledError"
	E0920 17:16:33.460287       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces)" logger="UnhandledError"
	E0920 17:16:33.692643       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)" logger="UnhandledError"
	E0920 17:16:34.037854       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)" logger="UnhandledError"
	E0920 17:16:35.845491       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)" logger="UnhandledError"
	E0920 17:16:36.209417       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)" logger="UnhandledError"
	E0920 17:16:37.709519       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)" logger="UnhandledError"
	E0920 17:16:38.023062       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)" logger="UnhandledError"
	E0920 17:16:39.752998       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)" logger="UnhandledError"
	E0920 17:16:40.009823       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError"
	E0920 17:16:41.933759       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 20 17:19:47 ha-135993 kubelet[1305]: E0920 17:19:47.939526    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726852787939146615,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:19:47 ha-135993 kubelet[1305]: E0920 17:19:47.939569    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726852787939146615,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:19:51 ha-135993 kubelet[1305]: I0920 17:19:51.761491    1305 scope.go:117] "RemoveContainer" containerID="09b2ab354660c3a46043369de5a5d91ba3bbacf95655fdc9ad4f34c7160d3f2d"
	Sep 20 17:19:54 ha-135993 kubelet[1305]: I0920 17:19:54.760783    1305 kubelet.go:1895] "Trying to delete pod" pod="kube-system/kube-vip-ha-135993" podUID="6aa396e1-76b2-4911-bc93-660c51cef03d"
	Sep 20 17:19:54 ha-135993 kubelet[1305]: I0920 17:19:54.784691    1305 kubelet.go:1900] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-135993"
	Sep 20 17:19:57 ha-135993 kubelet[1305]: I0920 17:19:57.778686    1305 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-135993" podStartSLOduration=3.778621137 podStartE2EDuration="3.778621137s" podCreationTimestamp="2024-09-20 17:19:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-20 17:19:57.778023514 +0000 UTC m=+710.175273821" watchObservedRunningTime="2024-09-20 17:19:57.778621137 +0000 UTC m=+710.175871513"
	Sep 20 17:19:57 ha-135993 kubelet[1305]: E0920 17:19:57.943629    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726852797942656767,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:19:57 ha-135993 kubelet[1305]: E0920 17:19:57.943690    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726852797942656767,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:20:07 ha-135993 kubelet[1305]: E0920 17:20:07.774560    1305 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 20 17:20:07 ha-135993 kubelet[1305]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 20 17:20:07 ha-135993 kubelet[1305]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 20 17:20:07 ha-135993 kubelet[1305]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 20 17:20:07 ha-135993 kubelet[1305]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 20 17:20:07 ha-135993 kubelet[1305]: E0920 17:20:07.954151    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726852807953629330,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:20:07 ha-135993 kubelet[1305]: E0920 17:20:07.954186    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726852807953629330,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:20:17 ha-135993 kubelet[1305]: E0920 17:20:17.955645    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726852817955414217,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:20:17 ha-135993 kubelet[1305]: E0920 17:20:17.955698    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726852817955414217,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:20:27 ha-135993 kubelet[1305]: E0920 17:20:27.959265    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726852827958611984,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:20:27 ha-135993 kubelet[1305]: E0920 17:20:27.959662    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726852827958611984,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:20:37 ha-135993 kubelet[1305]: E0920 17:20:37.961985    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726852837961497350,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:20:37 ha-135993 kubelet[1305]: E0920 17:20:37.962026    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726852837961497350,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:20:47 ha-135993 kubelet[1305]: E0920 17:20:47.964995    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726852847964689511,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:20:47 ha-135993 kubelet[1305]: E0920 17:20:47.965024    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726852847964689511,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:20:57 ha-135993 kubelet[1305]: E0920 17:20:57.967302    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726852857966869501,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:20:57 ha-135993 kubelet[1305]: E0920 17:20:57.967364    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726852857966869501,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0920 17:20:56.827071   35401 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19672-8777/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
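The "bufio.Scanner: token too long" failure in the stderr block above is a Go standard-library limit rather than a missing file: bufio.Scanner caps a single token (line) at 64 KiB by default, so one very long line in lastStart.txt aborts the scan. A minimal, stdlib-only sketch (illustrative only, not minikube's actual logs.go) of reading such a file with an enlarged scanner buffer:

package main

import (
	"bufio"
	"fmt"
	"os"
)

func main() {
	// Path is illustrative; the failing file in the log above is lastStart.txt.
	f, err := os.Open("lastStart.txt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	scanner := bufio.NewScanner(f)
	// The default cap is bufio.MaxScanTokenSize (64 KiB); a longer line makes
	// Scan() stop and Err() return bufio.ErrTooLong ("token too long").
	// Raising the limit lets such lines be read.
	scanner.Buffer(make([]byte, 0, 1024*1024), 16*1024*1024)

	for scanner.Scan() {
		fmt.Println(scanner.Text())
	}
	if err := scanner.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "scan failed:", err)
	}
}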
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-135993 -n ha-135993
helpers_test.go:261: (dbg) Run:  kubectl --context ha-135993 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (379.85s)
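Several errors in the post-mortem above are the same wait-for-apiserver pattern: the controller-manager gives up after "failed to wait for apiserver being healthy", and kube-proxy repeatedly logs "Failed to retrieve node info" while the endpoint answers with connection refused or no route to host. A rough, stdlib-only sketch of that poll-until-healthy loop (an assumption-laden illustration; the real components use client-go with proper TLS credentials):

package main

import (
	"context"
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthy polls url until it returns HTTP 200 or ctx expires.
// Dial errors (connection refused, no route to host) are treated as
// "not ready yet" and retried, mirroring the log lines above.
func waitForHealthy(ctx context.Context, url string) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The test apiserver uses a self-signed certificate, so this sketch
		// skips verification; real components verify against the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	ticker := time.NewTicker(2 * time.Second)
	defer ticker.Stop()
	for {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("apiserver never became healthy: %w", ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	// Address is taken from the logs above (https://192.168.39.60:8443/healthz).
	if err := waitForHealthy(ctx, "https://192.168.39.60:8443/healthz"); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver healthy")
}

The point of the sketch is only the retry behaviour: transient dial errors count as "not ready yet" until the overall timeout expires, which is why the logs show the same Get failing every few seconds until the apiserver comes back around 17:19.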

                                                
                                    
TestMultiControlPlane/serial/StopCluster (141.64s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-135993 stop -v=7 --alsologtostderr
E0920 17:21:39.931888   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/functional-945494/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:22:43.197056   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-135993 stop -v=7 --alsologtostderr: exit status 82 (2m0.469080543s)

                                                
                                                
-- stdout --
	* Stopping node "ha-135993-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 17:21:16.444072   35844 out.go:345] Setting OutFile to fd 1 ...
	I0920 17:21:16.444427   35844 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:21:16.444441   35844 out.go:358] Setting ErrFile to fd 2...
	I0920 17:21:16.444446   35844 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:21:16.444632   35844 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-8777/.minikube/bin
	I0920 17:21:16.444864   35844 out.go:352] Setting JSON to false
	I0920 17:21:16.444939   35844 mustload.go:65] Loading cluster: ha-135993
	I0920 17:21:16.445326   35844 config.go:182] Loaded profile config "ha-135993": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 17:21:16.445419   35844 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/config.json ...
	I0920 17:21:16.445615   35844 mustload.go:65] Loading cluster: ha-135993
	I0920 17:21:16.445746   35844 config.go:182] Loaded profile config "ha-135993": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 17:21:16.445772   35844 stop.go:39] StopHost: ha-135993-m04
	I0920 17:21:16.446190   35844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:21:16.446238   35844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:21:16.461642   35844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41765
	I0920 17:21:16.462233   35844 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:21:16.462890   35844 main.go:141] libmachine: Using API Version  1
	I0920 17:21:16.462923   35844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:21:16.463340   35844 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:21:16.465681   35844 out.go:177] * Stopping node "ha-135993-m04"  ...
	I0920 17:21:16.466972   35844 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0920 17:21:16.467017   35844 main.go:141] libmachine: (ha-135993-m04) Calling .DriverName
	I0920 17:21:16.467289   35844 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0920 17:21:16.467318   35844 main.go:141] libmachine: (ha-135993-m04) Calling .GetSSHHostname
	I0920 17:21:16.471356   35844 main.go:141] libmachine: (ha-135993-m04) DBG | domain ha-135993-m04 has defined MAC address 52:54:00:fc:55:36 in network mk-ha-135993
	I0920 17:21:16.471863   35844 main.go:141] libmachine: (ha-135993-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:55:36", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:20:44 +0000 UTC Type:0 Mac:52:54:00:fc:55:36 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-135993-m04 Clientid:01:52:54:00:fc:55:36}
	I0920 17:21:16.471895   35844 main.go:141] libmachine: (ha-135993-m04) DBG | domain ha-135993-m04 has defined IP address 192.168.39.101 and MAC address 52:54:00:fc:55:36 in network mk-ha-135993
	I0920 17:21:16.472066   35844 main.go:141] libmachine: (ha-135993-m04) Calling .GetSSHPort
	I0920 17:21:16.472279   35844 main.go:141] libmachine: (ha-135993-m04) Calling .GetSSHKeyPath
	I0920 17:21:16.472412   35844 main.go:141] libmachine: (ha-135993-m04) Calling .GetSSHUsername
	I0920 17:21:16.472564   35844 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993-m04/id_rsa Username:docker}
	I0920 17:21:16.552065   35844 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0920 17:21:16.604773   35844 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0920 17:21:16.659168   35844 main.go:141] libmachine: Stopping "ha-135993-m04"...
	I0920 17:21:16.659194   35844 main.go:141] libmachine: (ha-135993-m04) Calling .GetState
	I0920 17:21:16.660856   35844 main.go:141] libmachine: (ha-135993-m04) Calling .Stop
	I0920 17:21:16.664077   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 0/120
	I0920 17:21:17.665338   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 1/120
	I0920 17:21:18.666566   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 2/120
	I0920 17:21:19.667854   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 3/120
	I0920 17:21:20.669871   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 4/120
	I0920 17:21:21.672183   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 5/120
	I0920 17:21:22.673451   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 6/120
	I0920 17:21:23.674694   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 7/120
	I0920 17:21:24.676318   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 8/120
	I0920 17:21:25.678387   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 9/120
	I0920 17:21:26.679778   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 10/120
	I0920 17:21:27.681130   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 11/120
	I0920 17:21:28.682465   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 12/120
	I0920 17:21:29.683725   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 13/120
	I0920 17:21:30.685402   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 14/120
	I0920 17:21:31.687387   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 15/120
	I0920 17:21:32.688868   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 16/120
	I0920 17:21:33.690556   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 17/120
	I0920 17:21:34.692377   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 18/120
	I0920 17:21:35.693980   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 19/120
	I0920 17:21:36.695361   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 20/120
	I0920 17:21:37.697409   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 21/120
	I0920 17:21:38.698748   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 22/120
	I0920 17:21:39.700266   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 23/120
	I0920 17:21:40.701667   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 24/120
	I0920 17:21:41.703504   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 25/120
	I0920 17:21:42.704781   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 26/120
	I0920 17:21:43.706335   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 27/120
	I0920 17:21:44.707769   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 28/120
	I0920 17:21:45.709134   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 29/120
	I0920 17:21:46.711380   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 30/120
	I0920 17:21:47.712773   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 31/120
	I0920 17:21:48.713914   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 32/120
	I0920 17:21:49.715348   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 33/120
	I0920 17:21:50.717096   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 34/120
	I0920 17:21:51.718985   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 35/120
	I0920 17:21:52.720951   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 36/120
	I0920 17:21:53.722254   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 37/120
	I0920 17:21:54.724504   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 38/120
	I0920 17:21:55.725986   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 39/120
	I0920 17:21:56.728558   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 40/120
	I0920 17:21:57.730052   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 41/120
	I0920 17:21:58.731428   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 42/120
	I0920 17:21:59.732769   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 43/120
	I0920 17:22:00.734215   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 44/120
	I0920 17:22:01.736390   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 45/120
	I0920 17:22:02.737693   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 46/120
	I0920 17:22:03.739390   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 47/120
	I0920 17:22:04.740646   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 48/120
	I0920 17:22:05.742030   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 49/120
	I0920 17:22:06.744472   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 50/120
	I0920 17:22:07.746128   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 51/120
	I0920 17:22:08.747761   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 52/120
	I0920 17:22:09.749158   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 53/120
	I0920 17:22:10.750857   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 54/120
	I0920 17:22:11.753333   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 55/120
	I0920 17:22:12.754670   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 56/120
	I0920 17:22:13.756359   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 57/120
	I0920 17:22:14.758088   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 58/120
	I0920 17:22:15.760368   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 59/120
	I0920 17:22:16.762370   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 60/120
	I0920 17:22:17.764556   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 61/120
	I0920 17:22:18.765791   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 62/120
	I0920 17:22:19.767494   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 63/120
	I0920 17:22:20.769057   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 64/120
	I0920 17:22:21.770602   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 65/120
	I0920 17:22:22.772963   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 66/120
	I0920 17:22:23.775188   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 67/120
	I0920 17:22:24.776559   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 68/120
	I0920 17:22:25.777988   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 69/120
	I0920 17:22:26.780289   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 70/120
	I0920 17:22:27.781859   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 71/120
	I0920 17:22:28.783415   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 72/120
	I0920 17:22:29.785064   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 73/120
	I0920 17:22:30.786983   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 74/120
	I0920 17:22:31.789170   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 75/120
	I0920 17:22:32.790605   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 76/120
	I0920 17:22:33.792490   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 77/120
	I0920 17:22:34.795145   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 78/120
	I0920 17:22:35.796351   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 79/120
	I0920 17:22:36.798817   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 80/120
	I0920 17:22:37.800740   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 81/120
	I0920 17:22:38.802133   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 82/120
	I0920 17:22:39.803640   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 83/120
	I0920 17:22:40.805009   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 84/120
	I0920 17:22:41.806912   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 85/120
	I0920 17:22:42.808245   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 86/120
	I0920 17:22:43.809658   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 87/120
	I0920 17:22:44.810918   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 88/120
	I0920 17:22:45.812559   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 89/120
	I0920 17:22:46.814694   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 90/120
	I0920 17:22:47.816171   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 91/120
	I0920 17:22:48.817355   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 92/120
	I0920 17:22:49.818671   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 93/120
	I0920 17:22:50.819918   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 94/120
	I0920 17:22:51.821740   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 95/120
	I0920 17:22:52.823101   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 96/120
	I0920 17:22:53.824707   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 97/120
	I0920 17:22:54.826015   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 98/120
	I0920 17:22:55.828339   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 99/120
	I0920 17:22:56.830405   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 100/120
	I0920 17:22:57.831837   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 101/120
	I0920 17:22:58.833040   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 102/120
	I0920 17:22:59.834542   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 103/120
	I0920 17:23:00.836420   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 104/120
	I0920 17:23:01.838508   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 105/120
	I0920 17:23:02.840302   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 106/120
	I0920 17:23:03.841686   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 107/120
	I0920 17:23:04.842972   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 108/120
	I0920 17:23:05.844353   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 109/120
	I0920 17:23:06.846335   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 110/120
	I0920 17:23:07.848358   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 111/120
	I0920 17:23:08.849677   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 112/120
	I0920 17:23:09.851095   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 113/120
	I0920 17:23:10.852591   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 114/120
	I0920 17:23:11.854773   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 115/120
	I0920 17:23:12.856284   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 116/120
	I0920 17:23:13.857643   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 117/120
	I0920 17:23:14.859067   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 118/120
	I0920 17:23:15.860643   35844 main.go:141] libmachine: (ha-135993-m04) Waiting for machine to stop 119/120
	I0920 17:23:16.861242   35844 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0920 17:23:16.861319   35844 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0920 17:23:16.863149   35844 out.go:201] 
	W0920 17:23:16.864581   35844 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0920 17:23:16.864634   35844 out.go:270] * 
	W0920 17:23:16.867009   35844 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 17:23:16.868362   35844 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-135993 stop -v=7 --alsologtostderr": exit status 82
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-135993 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Done: out/minikube-linux-amd64 -p ha-135993 status -v=7 --alsologtostderr: (18.935873674s)
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-135993 status -v=7 --alsologtostderr": 
ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-linux-amd64 -p ha-135993 status -v=7 --alsologtostderr": 
ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-linux-amd64 -p ha-135993 status -v=7 --alsologtostderr": 
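The failure above follows from the bounded stop-and-wait loop visible in the stderr capture: the stop request is issued, the machine state is polled once per second for 120 attempts, and the command exits with GUEST_STOP_TIMEOUT (status 82) when the guest never leaves "Running". The following is a minimal, hypothetical Go sketch of that pattern, not minikube's actual code; the vm interface, stopWithTimeout, and fakeVM names are invented for illustration.

	// Hypothetical sketch of the bounded stop-and-wait pattern seen in the log above.
	package main

	import (
		"errors"
		"fmt"
		"log"
		"time"
	)

	// vm is a hypothetical stand-in for the machine driver handle.
	type vm interface {
		Stop() error            // request a graceful shutdown
		State() (string, error) // report the current machine state
	}

	// stopWithTimeout asks the VM to stop, then polls its state once per second
	// for up to `attempts` tries; if it is still "Running" afterwards, it gives up
	// with an error like the "unable to stop vm" failure logged above.
	func stopWithTimeout(m vm, attempts int) error {
		if err := m.Stop(); err != nil {
			return fmt.Errorf("stop request failed: %w", err)
		}
		for i := 0; i < attempts; i++ {
			st, err := m.State()
			if err != nil {
				return fmt.Errorf("query state: %w", err)
			}
			if st != "Running" {
				return nil // machine left the Running state
			}
			log.Printf("Waiting for machine to stop %d/%d", i, attempts)
			time.Sleep(time.Second)
		}
		return errors.New(`unable to stop vm, current state "Running"`)
	}

	// fakeVM simulates a guest that ignores the shutdown request.
	type fakeVM struct{}

	func (fakeVM) Stop() error            { return nil }
	func (fakeVM) State() (string, error) { return "Running", nil }

	func main() {
		// Three attempts instead of 120 to keep the demo short.
		if err := stopWithTimeout(fakeVM{}, 3); err != nil {
			log.Printf("stop host returned error: %v", err)
		}
	}
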
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-135993 -n ha-135993
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-135993 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-135993 logs -n 25: (1.604488837s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-135993 ssh -n ha-135993-m02 sudo cat                                          | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:11 UTC | 20 Sep 24 17:11 UTC |
	|         | /home/docker/cp-test_ha-135993-m03_ha-135993-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-135993 cp ha-135993-m03:/home/docker/cp-test.txt                              | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:11 UTC | 20 Sep 24 17:11 UTC |
	|         | ha-135993-m04:/home/docker/cp-test_ha-135993-m03_ha-135993-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-135993 ssh -n                                                                 | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:11 UTC | 20 Sep 24 17:11 UTC |
	|         | ha-135993-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-135993 ssh -n ha-135993-m04 sudo cat                                          | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:11 UTC | 20 Sep 24 17:11 UTC |
	|         | /home/docker/cp-test_ha-135993-m03_ha-135993-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-135993 cp testdata/cp-test.txt                                                | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:11 UTC | 20 Sep 24 17:11 UTC |
	|         | ha-135993-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-135993 ssh -n                                                                 | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:11 UTC | 20 Sep 24 17:11 UTC |
	|         | ha-135993-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-135993 cp ha-135993-m04:/home/docker/cp-test.txt                              | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:11 UTC | 20 Sep 24 17:11 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2672215621/001/cp-test_ha-135993-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-135993 ssh -n                                                                 | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:11 UTC | 20 Sep 24 17:11 UTC |
	|         | ha-135993-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-135993 cp ha-135993-m04:/home/docker/cp-test.txt                              | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:11 UTC | 20 Sep 24 17:11 UTC |
	|         | ha-135993:/home/docker/cp-test_ha-135993-m04_ha-135993.txt                       |           |         |         |                     |                     |
	| ssh     | ha-135993 ssh -n                                                                 | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:11 UTC | 20 Sep 24 17:11 UTC |
	|         | ha-135993-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-135993 ssh -n ha-135993 sudo cat                                              | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:11 UTC | 20 Sep 24 17:11 UTC |
	|         | /home/docker/cp-test_ha-135993-m04_ha-135993.txt                                 |           |         |         |                     |                     |
	| cp      | ha-135993 cp ha-135993-m04:/home/docker/cp-test.txt                              | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:11 UTC | 20 Sep 24 17:12 UTC |
	|         | ha-135993-m02:/home/docker/cp-test_ha-135993-m04_ha-135993-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-135993 ssh -n                                                                 | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:12 UTC | 20 Sep 24 17:12 UTC |
	|         | ha-135993-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-135993 ssh -n ha-135993-m02 sudo cat                                          | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:12 UTC | 20 Sep 24 17:12 UTC |
	|         | /home/docker/cp-test_ha-135993-m04_ha-135993-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-135993 cp ha-135993-m04:/home/docker/cp-test.txt                              | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:12 UTC | 20 Sep 24 17:12 UTC |
	|         | ha-135993-m03:/home/docker/cp-test_ha-135993-m04_ha-135993-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-135993 ssh -n                                                                 | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:12 UTC | 20 Sep 24 17:12 UTC |
	|         | ha-135993-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-135993 ssh -n ha-135993-m03 sudo cat                                          | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:12 UTC | 20 Sep 24 17:12 UTC |
	|         | /home/docker/cp-test_ha-135993-m04_ha-135993-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-135993 node stop m02 -v=7                                                     | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:12 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-135993 node start m02 -v=7                                                    | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:14 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-135993 -v=7                                                           | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:14 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-135993 -v=7                                                                | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:14 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-135993 --wait=true -v=7                                                    | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:16 UTC | 20 Sep 24 17:20 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-135993                                                                | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:20 UTC |                     |
	| node    | ha-135993 node delete m03 -v=7                                                   | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:20 UTC | 20 Sep 24 17:21 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-135993 stop -v=7                                                              | ha-135993 | jenkins | v1.34.0 | 20 Sep 24 17:21 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 17:16:41
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 17:16:41.047220   33598 out.go:345] Setting OutFile to fd 1 ...
	I0920 17:16:41.047342   33598 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:16:41.047351   33598 out.go:358] Setting ErrFile to fd 2...
	I0920 17:16:41.047355   33598 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:16:41.047557   33598 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-8777/.minikube/bin
	I0920 17:16:41.048079   33598 out.go:352] Setting JSON to false
	I0920 17:16:41.048951   33598 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3544,"bootTime":1726849057,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 17:16:41.049070   33598 start.go:139] virtualization: kvm guest
	I0920 17:16:41.051716   33598 out.go:177] * [ha-135993] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 17:16:41.053139   33598 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 17:16:41.053139   33598 notify.go:220] Checking for updates...
	I0920 17:16:41.056139   33598 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 17:16:41.058016   33598 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19672-8777/kubeconfig
	I0920 17:16:41.059232   33598 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-8777/.minikube
	I0920 17:16:41.060502   33598 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 17:16:41.061778   33598 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 17:16:41.063449   33598 config.go:182] Loaded profile config "ha-135993": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 17:16:41.063539   33598 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 17:16:41.063993   33598 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:16:41.064051   33598 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:16:41.080093   33598 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41179
	I0920 17:16:41.080586   33598 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:16:41.081132   33598 main.go:141] libmachine: Using API Version  1
	I0920 17:16:41.081156   33598 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:16:41.081481   33598 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:16:41.081667   33598 main.go:141] libmachine: (ha-135993) Calling .DriverName
	I0920 17:16:41.119444   33598 out.go:177] * Using the kvm2 driver based on existing profile
	I0920 17:16:41.120608   33598 start.go:297] selected driver: kvm2
	I0920 17:16:41.120624   33598 start.go:901] validating driver "kvm2" against &{Name:ha-135993 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.1 ClusterName:ha-135993 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.60 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.133 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.101 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false ef
k:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:d
ocker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 17:16:41.120761   33598 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 17:16:41.121069   33598 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 17:16:41.121141   33598 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19672-8777/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 17:16:41.135858   33598 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 17:16:41.136617   33598 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 17:16:41.136648   33598 cni.go:84] Creating CNI manager for ""
	I0920 17:16:41.136702   33598 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0920 17:16:41.136753   33598 start.go:340] cluster config:
	{Name:ha-135993 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-135993 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.60 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.133 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.101 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:
false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 17:16:41.136869   33598 iso.go:125] acquiring lock: {Name:mkba95ef0488e46f622333e9f317f43def93040b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 17:16:41.138651   33598 out.go:177] * Starting "ha-135993" primary control-plane node in "ha-135993" cluster
	I0920 17:16:41.139875   33598 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 17:16:41.139906   33598 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0920 17:16:41.139914   33598 cache.go:56] Caching tarball of preloaded images
	I0920 17:16:41.139993   33598 preload.go:172] Found /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 17:16:41.140006   33598 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0920 17:16:41.140131   33598 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/config.json ...
	I0920 17:16:41.140348   33598 start.go:360] acquireMachinesLock for ha-135993: {Name:mkfeedb385cf08b5d2aa00913e85815d02a180c2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 17:16:41.140410   33598 start.go:364] duration metric: took 32.044µs to acquireMachinesLock for "ha-135993"
	I0920 17:16:41.140430   33598 start.go:96] Skipping create...Using existing machine configuration
	I0920 17:16:41.140439   33598 fix.go:54] fixHost starting: 
	I0920 17:16:41.140753   33598 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:16:41.140790   33598 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:16:41.155288   33598 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37231
	I0920 17:16:41.155734   33598 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:16:41.156230   33598 main.go:141] libmachine: Using API Version  1
	I0920 17:16:41.156251   33598 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:16:41.156566   33598 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:16:41.156734   33598 main.go:141] libmachine: (ha-135993) Calling .DriverName
	I0920 17:16:41.156907   33598 main.go:141] libmachine: (ha-135993) Calling .GetState
	I0920 17:16:41.158426   33598 fix.go:112] recreateIfNeeded on ha-135993: state=Running err=<nil>
	W0920 17:16:41.158444   33598 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 17:16:41.160612   33598 out.go:177] * Updating the running kvm2 "ha-135993" VM ...
	I0920 17:16:41.161990   33598 machine.go:93] provisionDockerMachine start ...
	I0920 17:16:41.162013   33598 main.go:141] libmachine: (ha-135993) Calling .DriverName
	I0920 17:16:41.162202   33598 main.go:141] libmachine: (ha-135993) Calling .GetSSHHostname
	I0920 17:16:41.164624   33598 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:16:41.165096   33598 main.go:141] libmachine: (ha-135993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:09", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:07:43 +0000 UTC Type:0 Mac:52:54:00:99:26:09 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-135993 Clientid:01:52:54:00:99:26:09}
	I0920 17:16:41.165123   33598 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined IP address 192.168.39.60 and MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:16:41.165218   33598 main.go:141] libmachine: (ha-135993) Calling .GetSSHPort
	I0920 17:16:41.165364   33598 main.go:141] libmachine: (ha-135993) Calling .GetSSHKeyPath
	I0920 17:16:41.165477   33598 main.go:141] libmachine: (ha-135993) Calling .GetSSHKeyPath
	I0920 17:16:41.165681   33598 main.go:141] libmachine: (ha-135993) Calling .GetSSHUsername
	I0920 17:16:41.166001   33598 main.go:141] libmachine: Using SSH client type: native
	I0920 17:16:41.166242   33598 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0920 17:16:41.166257   33598 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 17:16:41.283859   33598 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-135993
	
	I0920 17:16:41.283887   33598 main.go:141] libmachine: (ha-135993) Calling .GetMachineName
	I0920 17:16:41.284102   33598 buildroot.go:166] provisioning hostname "ha-135993"
	I0920 17:16:41.284130   33598 main.go:141] libmachine: (ha-135993) Calling .GetMachineName
	I0920 17:16:41.284300   33598 main.go:141] libmachine: (ha-135993) Calling .GetSSHHostname
	I0920 17:16:41.287033   33598 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:16:41.287500   33598 main.go:141] libmachine: (ha-135993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:09", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:07:43 +0000 UTC Type:0 Mac:52:54:00:99:26:09 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-135993 Clientid:01:52:54:00:99:26:09}
	I0920 17:16:41.287527   33598 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined IP address 192.168.39.60 and MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:16:41.287716   33598 main.go:141] libmachine: (ha-135993) Calling .GetSSHPort
	I0920 17:16:41.287906   33598 main.go:141] libmachine: (ha-135993) Calling .GetSSHKeyPath
	I0920 17:16:41.288047   33598 main.go:141] libmachine: (ha-135993) Calling .GetSSHKeyPath
	I0920 17:16:41.288157   33598 main.go:141] libmachine: (ha-135993) Calling .GetSSHUsername
	I0920 17:16:41.288326   33598 main.go:141] libmachine: Using SSH client type: native
	I0920 17:16:41.288556   33598 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0920 17:16:41.288570   33598 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-135993 && echo "ha-135993" | sudo tee /etc/hostname
	I0920 17:16:41.424441   33598 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-135993
	
	I0920 17:16:41.424468   33598 main.go:141] libmachine: (ha-135993) Calling .GetSSHHostname
	I0920 17:16:41.427224   33598 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:16:41.427623   33598 main.go:141] libmachine: (ha-135993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:09", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:07:43 +0000 UTC Type:0 Mac:52:54:00:99:26:09 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-135993 Clientid:01:52:54:00:99:26:09}
	I0920 17:16:41.427649   33598 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined IP address 192.168.39.60 and MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:16:41.427806   33598 main.go:141] libmachine: (ha-135993) Calling .GetSSHPort
	I0920 17:16:41.427947   33598 main.go:141] libmachine: (ha-135993) Calling .GetSSHKeyPath
	I0920 17:16:41.428104   33598 main.go:141] libmachine: (ha-135993) Calling .GetSSHKeyPath
	I0920 17:16:41.428236   33598 main.go:141] libmachine: (ha-135993) Calling .GetSSHUsername
	I0920 17:16:41.428402   33598 main.go:141] libmachine: Using SSH client type: native
	I0920 17:16:41.428565   33598 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0920 17:16:41.428579   33598 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-135993' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-135993/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-135993' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 17:16:41.542679   33598 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 17:16:41.542708   33598 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19672-8777/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-8777/.minikube}
	I0920 17:16:41.542741   33598 buildroot.go:174] setting up certificates
	I0920 17:16:41.542750   33598 provision.go:84] configureAuth start
	I0920 17:16:41.542759   33598 main.go:141] libmachine: (ha-135993) Calling .GetMachineName
	I0920 17:16:41.543029   33598 main.go:141] libmachine: (ha-135993) Calling .GetIP
	I0920 17:16:41.545600   33598 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:16:41.545988   33598 main.go:141] libmachine: (ha-135993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:09", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:07:43 +0000 UTC Type:0 Mac:52:54:00:99:26:09 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-135993 Clientid:01:52:54:00:99:26:09}
	I0920 17:16:41.546012   33598 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined IP address 192.168.39.60 and MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:16:41.546153   33598 main.go:141] libmachine: (ha-135993) Calling .GetSSHHostname
	I0920 17:16:41.548216   33598 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:16:41.548577   33598 main.go:141] libmachine: (ha-135993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:09", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:07:43 +0000 UTC Type:0 Mac:52:54:00:99:26:09 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-135993 Clientid:01:52:54:00:99:26:09}
	I0920 17:16:41.548601   33598 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined IP address 192.168.39.60 and MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:16:41.548693   33598 provision.go:143] copyHostCerts
	I0920 17:16:41.548730   33598 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem
	I0920 17:16:41.548772   33598 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem, removing ...
	I0920 17:16:41.548786   33598 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem
	I0920 17:16:41.548853   33598 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem (1082 bytes)
	I0920 17:16:41.548938   33598 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem
	I0920 17:16:41.548959   33598 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem, removing ...
	I0920 17:16:41.548965   33598 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem
	I0920 17:16:41.548990   33598 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem (1123 bytes)
	I0920 17:16:41.549032   33598 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem
	I0920 17:16:41.549048   33598 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem, removing ...
	I0920 17:16:41.549053   33598 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem
	I0920 17:16:41.549073   33598 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem (1675 bytes)
	I0920 17:16:41.549128   33598 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem org=jenkins.ha-135993 san=[127.0.0.1 192.168.39.60 ha-135993 localhost minikube]
	I0920 17:16:41.644963   33598 provision.go:177] copyRemoteCerts
	I0920 17:16:41.645025   33598 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 17:16:41.645047   33598 main.go:141] libmachine: (ha-135993) Calling .GetSSHHostname
	I0920 17:16:41.647896   33598 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:16:41.648220   33598 main.go:141] libmachine: (ha-135993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:09", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:07:43 +0000 UTC Type:0 Mac:52:54:00:99:26:09 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-135993 Clientid:01:52:54:00:99:26:09}
	I0920 17:16:41.648254   33598 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined IP address 192.168.39.60 and MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:16:41.648442   33598 main.go:141] libmachine: (ha-135993) Calling .GetSSHPort
	I0920 17:16:41.648620   33598 main.go:141] libmachine: (ha-135993) Calling .GetSSHKeyPath
	I0920 17:16:41.648762   33598 main.go:141] libmachine: (ha-135993) Calling .GetSSHUsername
	I0920 17:16:41.648888   33598 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993/id_rsa Username:docker}
	I0920 17:16:41.732042   33598 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0920 17:16:41.732114   33598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0920 17:16:41.757969   33598 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0920 17:16:41.758058   33598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0920 17:16:41.784024   33598 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0920 17:16:41.784101   33598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 17:16:41.812189   33598 provision.go:87] duration metric: took 269.425627ms to configureAuth
	I0920 17:16:41.812218   33598 buildroot.go:189] setting minikube options for container-runtime
	I0920 17:16:41.812468   33598 config.go:182] Loaded profile config "ha-135993": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 17:16:41.812550   33598 main.go:141] libmachine: (ha-135993) Calling .GetSSHHostname
	I0920 17:16:41.815200   33598 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:16:41.815553   33598 main.go:141] libmachine: (ha-135993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:09", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:07:43 +0000 UTC Type:0 Mac:52:54:00:99:26:09 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-135993 Clientid:01:52:54:00:99:26:09}
	I0920 17:16:41.815590   33598 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined IP address 192.168.39.60 and MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:16:41.815875   33598 main.go:141] libmachine: (ha-135993) Calling .GetSSHPort
	I0920 17:16:41.816045   33598 main.go:141] libmachine: (ha-135993) Calling .GetSSHKeyPath
	I0920 17:16:41.816201   33598 main.go:141] libmachine: (ha-135993) Calling .GetSSHKeyPath
	I0920 17:16:41.816338   33598 main.go:141] libmachine: (ha-135993) Calling .GetSSHUsername
	I0920 17:16:41.816486   33598 main.go:141] libmachine: Using SSH client type: native
	I0920 17:16:41.816657   33598 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0920 17:16:41.816673   33598 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 17:18:12.634246   33598 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 17:18:12.634277   33598 machine.go:96] duration metric: took 1m31.472270807s to provisionDockerMachine
	I0920 17:18:12.634291   33598 start.go:293] postStartSetup for "ha-135993" (driver="kvm2")
	I0920 17:18:12.634301   33598 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 17:18:12.634314   33598 main.go:141] libmachine: (ha-135993) Calling .DriverName
	I0920 17:18:12.634643   33598 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 17:18:12.634666   33598 main.go:141] libmachine: (ha-135993) Calling .GetSSHHostname
	I0920 17:18:12.638207   33598 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:18:12.638667   33598 main.go:141] libmachine: (ha-135993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:09", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:07:43 +0000 UTC Type:0 Mac:52:54:00:99:26:09 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-135993 Clientid:01:52:54:00:99:26:09}
	I0920 17:18:12.638706   33598 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined IP address 192.168.39.60 and MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:18:12.638855   33598 main.go:141] libmachine: (ha-135993) Calling .GetSSHPort
	I0920 17:18:12.639006   33598 main.go:141] libmachine: (ha-135993) Calling .GetSSHKeyPath
	I0920 17:18:12.639130   33598 main.go:141] libmachine: (ha-135993) Calling .GetSSHUsername
	I0920 17:18:12.639223   33598 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993/id_rsa Username:docker}
	I0920 17:18:12.725278   33598 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 17:18:12.729203   33598 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 17:18:12.729228   33598 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8777/.minikube/addons for local assets ...
	I0920 17:18:12.729293   33598 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8777/.minikube/files for local assets ...
	I0920 17:18:12.729370   33598 filesync.go:149] local asset: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem -> 159732.pem in /etc/ssl/certs
	I0920 17:18:12.729380   33598 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem -> /etc/ssl/certs/159732.pem
	I0920 17:18:12.729467   33598 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 17:18:12.738768   33598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem --> /etc/ssl/certs/159732.pem (1708 bytes)
	I0920 17:18:12.761911   33598 start.go:296] duration metric: took 127.604607ms for postStartSetup
	I0920 17:18:12.761964   33598 main.go:141] libmachine: (ha-135993) Calling .DriverName
	I0920 17:18:12.762270   33598 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0920 17:18:12.762301   33598 main.go:141] libmachine: (ha-135993) Calling .GetSSHHostname
	I0920 17:18:12.765232   33598 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:18:12.765708   33598 main.go:141] libmachine: (ha-135993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:09", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:07:43 +0000 UTC Type:0 Mac:52:54:00:99:26:09 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-135993 Clientid:01:52:54:00:99:26:09}
	I0920 17:18:12.765736   33598 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined IP address 192.168.39.60 and MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:18:12.765898   33598 main.go:141] libmachine: (ha-135993) Calling .GetSSHPort
	I0920 17:18:12.766066   33598 main.go:141] libmachine: (ha-135993) Calling .GetSSHKeyPath
	I0920 17:18:12.766257   33598 main.go:141] libmachine: (ha-135993) Calling .GetSSHUsername
	I0920 17:18:12.766424   33598 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993/id_rsa Username:docker}
	W0920 17:18:12.852696   33598 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0920 17:18:12.852744   33598 fix.go:56] duration metric: took 1m31.712303326s for fixHost
	I0920 17:18:12.852769   33598 main.go:141] libmachine: (ha-135993) Calling .GetSSHHostname
	I0920 17:18:12.855583   33598 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:18:12.856028   33598 main.go:141] libmachine: (ha-135993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:09", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:07:43 +0000 UTC Type:0 Mac:52:54:00:99:26:09 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-135993 Clientid:01:52:54:00:99:26:09}
	I0920 17:18:12.856054   33598 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined IP address 192.168.39.60 and MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:18:12.856220   33598 main.go:141] libmachine: (ha-135993) Calling .GetSSHPort
	I0920 17:18:12.856504   33598 main.go:141] libmachine: (ha-135993) Calling .GetSSHKeyPath
	I0920 17:18:12.856699   33598 main.go:141] libmachine: (ha-135993) Calling .GetSSHKeyPath
	I0920 17:18:12.856818   33598 main.go:141] libmachine: (ha-135993) Calling .GetSSHUsername
	I0920 17:18:12.856948   33598 main.go:141] libmachine: Using SSH client type: native
	I0920 17:18:12.857141   33598 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0920 17:18:12.857155   33598 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 17:18:12.966786   33598 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726852692.934958979
	
	I0920 17:18:12.966811   33598 fix.go:216] guest clock: 1726852692.934958979
	I0920 17:18:12.966822   33598 fix.go:229] Guest: 2024-09-20 17:18:12.934958979 +0000 UTC Remote: 2024-09-20 17:18:12.852754141 +0000 UTC m=+91.842791203 (delta=82.204838ms)
	I0920 17:18:12.966874   33598 fix.go:200] guest clock delta is within tolerance: 82.204838ms
	I0920 17:18:12.966885   33598 start.go:83] releasing machines lock for "ha-135993", held for 1m31.826462761s
	I0920 17:18:12.966919   33598 main.go:141] libmachine: (ha-135993) Calling .DriverName
	I0920 17:18:12.967177   33598 main.go:141] libmachine: (ha-135993) Calling .GetIP
	I0920 17:18:12.969883   33598 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:18:12.970266   33598 main.go:141] libmachine: (ha-135993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:09", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:07:43 +0000 UTC Type:0 Mac:52:54:00:99:26:09 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-135993 Clientid:01:52:54:00:99:26:09}
	I0920 17:18:12.970289   33598 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined IP address 192.168.39.60 and MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:18:12.970449   33598 main.go:141] libmachine: (ha-135993) Calling .DriverName
	I0920 17:18:12.970898   33598 main.go:141] libmachine: (ha-135993) Calling .DriverName
	I0920 17:18:12.971062   33598 main.go:141] libmachine: (ha-135993) Calling .DriverName
	I0920 17:18:12.971183   33598 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 17:18:12.971222   33598 main.go:141] libmachine: (ha-135993) Calling .GetSSHHostname
	I0920 17:18:12.971263   33598 ssh_runner.go:195] Run: cat /version.json
	I0920 17:18:12.971281   33598 main.go:141] libmachine: (ha-135993) Calling .GetSSHHostname
	I0920 17:18:12.973633   33598 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:18:12.973941   33598 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:18:12.973975   33598 main.go:141] libmachine: (ha-135993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:09", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:07:43 +0000 UTC Type:0 Mac:52:54:00:99:26:09 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-135993 Clientid:01:52:54:00:99:26:09}
	I0920 17:18:12.974002   33598 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined IP address 192.168.39.60 and MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:18:12.974215   33598 main.go:141] libmachine: (ha-135993) Calling .GetSSHPort
	I0920 17:18:12.974389   33598 main.go:141] libmachine: (ha-135993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:09", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:07:43 +0000 UTC Type:0 Mac:52:54:00:99:26:09 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-135993 Clientid:01:52:54:00:99:26:09}
	I0920 17:18:12.974416   33598 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined IP address 192.168.39.60 and MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:18:12.974431   33598 main.go:141] libmachine: (ha-135993) Calling .GetSSHKeyPath
	I0920 17:18:12.974596   33598 main.go:141] libmachine: (ha-135993) Calling .GetSSHUsername
	I0920 17:18:12.974598   33598 main.go:141] libmachine: (ha-135993) Calling .GetSSHPort
	I0920 17:18:12.974791   33598 main.go:141] libmachine: (ha-135993) Calling .GetSSHKeyPath
	I0920 17:18:12.974779   33598 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993/id_rsa Username:docker}
	I0920 17:18:12.974948   33598 main.go:141] libmachine: (ha-135993) Calling .GetSSHUsername
	I0920 17:18:12.975075   33598 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/ha-135993/id_rsa Username:docker}
	I0920 17:18:13.132285   33598 ssh_runner.go:195] Run: systemctl --version
	I0920 17:18:13.147148   33598 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 17:18:13.332224   33598 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 17:18:13.338804   33598 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 17:18:13.338870   33598 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 17:18:13.348683   33598 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0920 17:18:13.348704   33598 start.go:495] detecting cgroup driver to use...
	I0920 17:18:13.348822   33598 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 17:18:13.366373   33598 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 17:18:13.381029   33598 docker.go:217] disabling cri-docker service (if available) ...
	I0920 17:18:13.381094   33598 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 17:18:13.394936   33598 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 17:18:13.408126   33598 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 17:18:13.568611   33598 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 17:18:13.710663   33598 docker.go:233] disabling docker service ...
	I0920 17:18:13.710748   33598 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 17:18:13.729462   33598 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 17:18:13.743141   33598 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 17:18:13.890163   33598 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 17:18:14.033917   33598 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 17:18:14.049385   33598 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 17:18:14.070244   33598 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 17:18:14.070308   33598 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:18:14.080864   33598 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 17:18:14.080925   33598 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:18:14.091364   33598 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:18:14.101584   33598 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:18:14.112134   33598 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 17:18:14.122703   33598 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:18:14.133401   33598 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:18:14.145652   33598 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:18:14.156586   33598 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 17:18:14.166711   33598 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 17:18:14.176768   33598 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 17:18:14.324934   33598 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 17:18:23.604835   33598 ssh_runner.go:235] Completed: sudo systemctl restart crio: (9.279861964s)
	I0920 17:18:23.604865   33598 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 17:18:23.604916   33598 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
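Editor's note: the "Will wait 60s for socket path" step above amounts to polling for the CRI-O socket after the restart. A minimal Go sketch of that retry loop, assuming the path and timeout shown in the log (the helper name is illustrative, not minikube's actual code, which runs `stat` over SSH):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists or the deadline passes.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		os.Exit(1)
	}
	fmt.Println("crio socket is ready")
}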
	I0920 17:18:23.610469   33598 start.go:563] Will wait 60s for crictl version
	I0920 17:18:23.610526   33598 ssh_runner.go:195] Run: which crictl
	I0920 17:18:23.614208   33598 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 17:18:23.655015   33598 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 17:18:23.655090   33598 ssh_runner.go:195] Run: crio --version
	I0920 17:18:23.684627   33598 ssh_runner.go:195] Run: crio --version
	I0920 17:18:23.714884   33598 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 17:18:23.716118   33598 main.go:141] libmachine: (ha-135993) Calling .GetIP
	I0920 17:18:23.718915   33598 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:18:23.719340   33598 main.go:141] libmachine: (ha-135993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:09", ip: ""} in network mk-ha-135993: {Iface:virbr1 ExpiryTime:2024-09-20 18:07:43 +0000 UTC Type:0 Mac:52:54:00:99:26:09 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-135993 Clientid:01:52:54:00:99:26:09}
	I0920 17:18:23.719368   33598 main.go:141] libmachine: (ha-135993) DBG | domain ha-135993 has defined IP address 192.168.39.60 and MAC address 52:54:00:99:26:09 in network mk-ha-135993
	I0920 17:18:23.719598   33598 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0920 17:18:23.724114   33598 kubeadm.go:883] updating cluster {Name:ha-135993 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:ha-135993 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.60 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.133 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.101 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fre
shpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mount
IP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 17:18:23.724306   33598 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 17:18:23.724369   33598 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 17:18:23.768064   33598 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 17:18:23.768086   33598 crio.go:433] Images already preloaded, skipping extraction
	I0920 17:18:23.768131   33598 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 17:18:23.803975   33598 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 17:18:23.803995   33598 cache_images.go:84] Images are preloaded, skipping loading
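Editor's note: the "all images are preloaded" decision above comes from parsing `sudo crictl images --output json` and comparing it against the images the Kubernetes version needs. A hedged Go sketch of that check (the JSON field names follow crictl's output as I understand it; the required list here is only an example, not the real one):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Shape of `crictl images --output json`, trimmed to the fields we use.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	// Example required set; the real list depends on the Kubernetes version.
	required := []string{
		"registry.k8s.io/kube-apiserver:v1.31.1",
		"registry.k8s.io/pause:3.10",
	}
	for _, want := range required {
		if !have[want] {
			fmt.Println("missing:", want)
		}
	}
}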
	I0920 17:18:23.804013   33598 kubeadm.go:934] updating node { 192.168.39.60 8443 v1.31.1 crio true true} ...
	I0920 17:18:23.804102   33598 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-135993 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.60
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-135993 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 17:18:23.804163   33598 ssh_runner.go:195] Run: crio config
	I0920 17:18:23.851405   33598 cni.go:84] Creating CNI manager for ""
	I0920 17:18:23.851432   33598 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0920 17:18:23.851446   33598 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 17:18:23.851473   33598 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.60 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-135993 NodeName:ha-135993 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.60"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.60 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 17:18:23.851660   33598 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.60
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-135993"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.60
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.60"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
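Editor's note: minikube renders the kubeadm YAML above from the kubeadm options struct via a Go text template. A much-reduced sketch of that approach, using illustrative field and template names rather than minikube's real types:

package main

import (
	"os"
	"text/template"
)

// Illustrative parameter struct; minikube's real one carries many more fields.
type kubeadmParams struct {
	AdvertiseAddress  string
	BindPort          int
	NodeName          string
	PodSubnet         string
	ServiceSubnet     string
	KubernetesVersion string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	p := kubeadmParams{
		AdvertiseAddress:  "192.168.39.60",
		BindPort:          8443,
		NodeName:          "ha-135993",
		PodSubnet:         "10.244.0.0/16",
		ServiceSubnet:     "10.96.0.0/12",
		KubernetesVersion: "v1.31.1",
	}
	// Render to stdout; per the log, the real file is copied to
	// /var/tmp/minikube/kubeadm.yaml.new on the node over SSH.
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}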
	
	I0920 17:18:23.851707   33598 kube-vip.go:115] generating kube-vip config ...
	I0920 17:18:23.851762   33598 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0920 17:18:23.864562   33598 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0920 17:18:23.864687   33598 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
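Editor's note: the static pod above is what brings up the control-plane VIP (192.168.39.254) and load-balances API traffic on port 8443 across the control-plane nodes. A quick, illustrative reachability probe in Go (not part of the test suite; TCP only, no TLS handshake):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	addr := net.JoinHostPort("192.168.39.254", "8443")
	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
	if err != nil {
		fmt.Println("VIP not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("VIP reachable at", addr)
}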
	I0920 17:18:23.864759   33598 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 17:18:23.874841   33598 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 17:18:23.874924   33598 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0920 17:18:23.884136   33598 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0920 17:18:23.900368   33598 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 17:18:23.916507   33598 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0920 17:18:23.933587   33598 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0920 17:18:23.950802   33598 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0920 17:18:23.956753   33598 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 17:18:24.106480   33598 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 17:18:24.121794   33598 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993 for IP: 192.168.39.60
	I0920 17:18:24.121828   33598 certs.go:194] generating shared ca certs ...
	I0920 17:18:24.121864   33598 certs.go:226] acquiring lock for ca certs: {Name:mkc7ef6c737c6bdc3fdd9dcff8f57029c020d8f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:18:24.122057   33598 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key
	I0920 17:18:24.122119   33598 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key
	I0920 17:18:24.122129   33598 certs.go:256] generating profile certs ...
	I0920 17:18:24.122252   33598 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/client.key
	I0920 17:18:24.122291   33598 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.key.472dc397
	I0920 17:18:24.122312   33598 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.crt.472dc397 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.60 192.168.39.227 192.168.39.133 192.168.39.254]
	I0920 17:18:24.216639   33598 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.crt.472dc397 ...
	I0920 17:18:24.216681   33598 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.crt.472dc397: {Name:mkc4c7f841959bc717da1436551b45ad85e47b88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:18:24.216876   33598 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.key.472dc397 ...
	I0920 17:18:24.216895   33598 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.key.472dc397: {Name:mk453e0cdf35f4405f060c4fae21ecb8d229cc40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:18:24.217010   33598 certs.go:381] copying /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.crt.472dc397 -> /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.crt
	I0920 17:18:24.217157   33598 certs.go:385] copying /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.key.472dc397 -> /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.key
	I0920 17:18:24.217287   33598 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/proxy-client.key
	I0920 17:18:24.217302   33598 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0920 17:18:24.217315   33598 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0920 17:18:24.217328   33598 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0920 17:18:24.217340   33598 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0920 17:18:24.217351   33598 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0920 17:18:24.217362   33598 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0920 17:18:24.217379   33598 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0920 17:18:24.217391   33598 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0920 17:18:24.217432   33598 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973.pem (1338 bytes)
	W0920 17:18:24.217459   33598 certs.go:480] ignoring /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973_empty.pem, impossibly tiny 0 bytes
	I0920 17:18:24.217467   33598 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 17:18:24.217488   33598 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem (1082 bytes)
	I0920 17:18:24.217524   33598 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem (1123 bytes)
	I0920 17:18:24.217550   33598 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem (1675 bytes)
	I0920 17:18:24.217585   33598 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem (1708 bytes)
	I0920 17:18:24.217618   33598 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:18:24.217632   33598 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973.pem -> /usr/share/ca-certificates/15973.pem
	I0920 17:18:24.217645   33598 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem -> /usr/share/ca-certificates/159732.pem
	I0920 17:18:24.218192   33598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 17:18:24.242436   33598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 17:18:24.267664   33598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 17:18:24.292025   33598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0920 17:18:24.315429   33598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0920 17:18:24.338871   33598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 17:18:24.362515   33598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 17:18:24.386060   33598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/ha-135993/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 17:18:24.409198   33598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 17:18:24.432453   33598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973.pem --> /usr/share/ca-certificates/15973.pem (1338 bytes)
	I0920 17:18:24.455478   33598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem --> /usr/share/ca-certificates/159732.pem (1708 bytes)
	I0920 17:18:24.478297   33598 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 17:18:24.494060   33598 ssh_runner.go:195] Run: openssl version
	I0920 17:18:24.500085   33598 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 17:18:24.510748   33598 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:18:24.515402   33598 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:18:24.515485   33598 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:18:24.521184   33598 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 17:18:24.530583   33598 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15973.pem && ln -fs /usr/share/ca-certificates/15973.pem /etc/ssl/certs/15973.pem"
	I0920 17:18:24.540863   33598 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15973.pem
	I0920 17:18:24.545256   33598 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 17:04 /usr/share/ca-certificates/15973.pem
	I0920 17:18:24.545299   33598 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15973.pem
	I0920 17:18:24.550698   33598 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15973.pem /etc/ssl/certs/51391683.0"
	I0920 17:18:24.560113   33598 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/159732.pem && ln -fs /usr/share/ca-certificates/159732.pem /etc/ssl/certs/159732.pem"
	I0920 17:18:24.571303   33598 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/159732.pem
	I0920 17:18:24.575986   33598 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 17:04 /usr/share/ca-certificates/159732.pem
	I0920 17:18:24.576040   33598 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/159732.pem
	I0920 17:18:24.581818   33598 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/159732.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 17:18:24.591218   33598 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 17:18:24.595766   33598 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 17:18:24.601193   33598 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 17:18:24.606606   33598 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 17:18:24.612237   33598 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 17:18:24.617968   33598 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 17:18:24.623520   33598 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
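Editor's note: the `openssl x509 -noout -checkend 86400` runs above verify that each certificate remains valid for at least 24 hours. The same check expressed in Go, as a sketch (the path is taken from the log; only one certificate is shown):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// checkend reports an error if the certificate at path expires within d,
// mirroring `openssl x509 -checkend`.
func checkend(path string, d time.Duration) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return fmt.Errorf("%s: no PEM data", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return err
	}
	if time.Now().Add(d).After(cert.NotAfter) {
		return fmt.Errorf("%s expires at %s (within %s)", path, cert.NotAfter, d)
	}
	return nil
}

func main() {
	// 86400 seconds = 24h, the same window the log checks.
	if err := checkend("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour); err != nil {
		fmt.Println(err)
		os.Exit(1)
	}
	fmt.Println("certificate valid for at least 24h")
}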
	I0920 17:18:24.629094   33598 kubeadm.go:392] StartCluster: {Name:ha-135993 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-135993 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.60 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.133 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.101 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 17:18:24.629249   33598 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 17:18:24.629312   33598 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 17:18:24.667577   33598 cri.go:89] found id: "bf7badc294591b4e6eb17ac796ab10e69706e71983d63ccdd57ff7c4e36ac50e"
	I0920 17:18:24.667604   33598 cri.go:89] found id: "5c869ef0e61597970a9451f4f09bc34c0ec9b1f40422349e7d3f1d33d1dfbbd6"
	I0920 17:18:24.667610   33598 cri.go:89] found id: "b6c5323097a59ef8044970de3449a9f550e30e814c16735624fe9a0eefab94b2"
	I0920 17:18:24.667614   33598 cri.go:89] found id: "1ec44cdb1194b97d48beb73c37e433b069a865ff030c5c837c6893f8be5f2fe3"
	I0920 17:18:24.667618   33598 cri.go:89] found id: "7c668f6376655011c61ac0bac2456f3e751f265e08a925a427f7ecfca3d54d97"
	I0920 17:18:24.667622   33598 cri.go:89] found id: "36f3e8a4356ff70d6ac1d50a79bc078c32683201ca9c0787097b28f3417fdc90"
	I0920 17:18:24.667627   33598 cri.go:89] found id: "5054778f39bbb435a191b7cbdabfca614d8b6d64b7bce122294d50415d601787"
	I0920 17:18:24.667631   33598 cri.go:89] found id: "8792a3b1249ffb1a157682b07dec9a0edf11b81c44e53ea068e0fc6f4548be22"
	I0920 17:18:24.667635   33598 cri.go:89] found id: "e4b462c3efaa10b1790ebacd5fe8845d6eb91643cb6db9b2830f33afe1744d57"
	I0920 17:18:24.667643   33598 cri.go:89] found id: "1a56cd54bb369a6e7651ec4ac3a33220fcfa19226c4ca75ab535aeddbf2811a9"
	I0920 17:18:24.667648   33598 cri.go:89] found id: "2b48cf1f03207bb258f12f5506b5e60fd4bb742d7b2ced222664cc2b996ff15d"
	I0920 17:18:24.667667   33598 cri.go:89] found id: "1f5eb92cf36b033e765f0681f8a6634251dfd6dc0db3410efd3ef6e8580a8b2f"
	I0920 17:18:24.667672   33598 cri.go:89] found id: "e70d74afe0f7f7c1e0f084aba99e9c488a1f999e94866b12072fdcbc24b0dad7"
	I0920 17:18:24.667676   33598 cri.go:89] found id: "db80f5e2505940b81bc20c39e98d873b35eff2e71aac748d8b406e21f5435fca"
	I0920 17:18:24.667683   33598 cri.go:89] found id: ""
	I0920 17:18:24.667730   33598 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Sep 20 17:23:36 ha-135993 crio[3715]: time="2024-09-20 17:23:36.441381055Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0b207fe6-18b5-44a5-831b-ea6e78a4febf name=/runtime.v1.RuntimeService/Version
	Sep 20 17:23:36 ha-135993 crio[3715]: time="2024-09-20 17:23:36.442402979Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=eea6a661-ce9c-47d6-89c3-84e0d2447f71 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 17:23:36 ha-135993 crio[3715]: time="2024-09-20 17:23:36.442835443Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726853016442812250,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=eea6a661-ce9c-47d6-89c3-84e0d2447f71 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 17:23:36 ha-135993 crio[3715]: time="2024-09-20 17:23:36.443451853Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3be30997-16ed-400c-b061-8e234c7ac4fe name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:23:36 ha-135993 crio[3715]: time="2024-09-20 17:23:36.443526591Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3be30997-16ed-400c-b061-8e234c7ac4fe name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:23:36 ha-135993 crio[3715]: time="2024-09-20 17:23:36.443963223Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a6addfcb27c43b162c58e5dc89b34a21b057ffa9e5c6929c13e6720a7d41c017,PodSandboxId:f362cdeaa384c6a9423bc0e3f8a2e87059f200fade4e07de0512ba9104a08560,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726852791788949922,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57137bee-9a7b-4659-a855-0da82d137cb0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79d6dfc37ecea1b70bf42810eb355d3d33df6f44ad8813cb64a541d83b09b12f,PodSandboxId:3426320ea65936be4749a4579ea74dd7b8d3d6d6b1708f63e0490d36f8754cbc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726852749787357892,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bce625d942177d9274bb431f7a7012b2,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ee493ac17f0bde4d555b58e8b816b325426800736be6cff626867aa80e21c96,PodSandboxId:1b66d16f83ebcbb613c67b790b5609e78babfb7bd452c17106f8a33ad0ed7f3a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726852749775751738,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fa5fbf3465d3154576a11474b7b8548,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09b2ab354660c3a46043369de5a5d91ba3bbacf95655fdc9ad4f34c7160d3f2d,PodSandboxId:f362cdeaa384c6a9423bc0e3f8a2e87059f200fade4e07de0512ba9104a08560,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726852743779709268,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57137bee-9a7b-4659-a855-0da82d137cb0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b045c2884667987ee6e8c10e271c2e6226efba112efed8795e08e4a67e0990dc,PodSandboxId:ae90d8ac2b75af492d713bbe29179244cfcaa84f3743ba966f27bbd1ff5508c4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726852740036843854,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-df429,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ef8ed1eb-a7c6-4c49-9e46-924bad4d9577,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ca3f9a710f32d6cb191f799a34e55023e57f7dbebf54b32a353fa72b332ddde,PodSandboxId:a29604104e7ce7457620e53485525a9eb289cd6d4b4d58f90a78e8d40b586cf0,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726852720182665622,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c06f5b4bed0418e8539976286a422aad,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5168567f834602e7f761a3eff5e14aa6b5d1d15b02eb85e90b1e331fc1b73fed,PodSandboxId:eec2e8181267a85c9517e828def8039caef778199c53474aae6cc8adfa8bc435,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726852714096373683,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kpbhk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dfd9f1a-148c-4dba-884a-8618b74f82d0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9
153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff3c00df26f158af79f61dc02b970ad39db9d4f0fcffefa0b719f4598158f62b,PodSandboxId:dee7980a50d73104b0c061a1ffa9347cb87e48985218f83b818efbed52f4a239,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726852709920410907,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gcvg4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 899b2b8c-9009-46c0-816b-781e85eb8b19,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.k
ubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:398409b93b45dca7c586c3035d250b5dc775d03ee8dd75f81e9e7427a559ed00,PodSandboxId:1bd48e5dcd4693a3b4c8340aba118e7ad278f521e69bcfd1541092778c515ae6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726852707455117279,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name:
kube-proxy-52r49,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d1124bd-e7cb-4239-a29d-c1d5b8870aff,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d6797d9f1c442f6743dde97cc9de50c155c5e2b00e1197ba4a088a93bd7c5db,PodSandboxId:2ef4ad9be22fb67138ab03ac3ed8e5cfb7f27022fb85eb427a1902ca45aec292,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726852706753264043,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6clt2,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: d73a0817-d84f-4269-9de0-1532287a07db,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a83a259fa257292715bb5b2d5b323b49763d5553518b528fc78fc079bb9ed45,PodSandboxId:3426320ea65936be4749a4579ea74dd7b8d3d6d6b1708f63e0490d36f8754cbc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726852706597265158,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-135993,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: bce625d942177d9274bb431f7a7012b2,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee0aced30e75021c727bf72254c618cb3ff71687fe9882fab00322be6da050f0,PodSandboxId:9317a52c9caac0dba626b3d134d689c2bf560e3e0ccc4367c435a9b9e631156e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726852706448342207,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f0e3932935569a5c49edd
0da5a87eba,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41c08dd315ead086282b3d4c8d4a5fb35136bd8d2824d8516b73a61281a9eeaf,PodSandboxId:1b66d16f83ebcbb613c67b790b5609e78babfb7bd452c17106f8a33ad0ed7f3a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726852706532089181,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fa5fbf3465d31545
76a11474b7b8548,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6001c238074303dee4a1968943cb28c0e10f360312e0284fc54018ba88a7f96d,PodSandboxId:2c6a8e58a88556b2103f93f3116a1d5ecc55470841ea358bf0c41f6c80d864ed,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726852706446603353,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 086260df87dea88f53b3b3ca08d61864,},Ann
otations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf7badc294591b4e6eb17ac796ab10e69706e71983d63ccdd57ff7c4e36ac50e,PodSandboxId:e228f7e109c3c3870cb95be64330287ff98d08fbf7a86af9f160c71d511053a4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726852693213530849,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kpbhk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dfd9f1a-148c-4dba-884a-8618b74f82d0,},Annotations:map[string]string{io.ku
bernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2a30264a8299336cfb5a4338bab42f8ef663f285ffa7fcb290de88635658a97,PodSandboxId:afa282bba6347297904c2a6423e39984da1a95a2caf8b23dd6857518cc0bb05d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726852250916118141,Labels:map[string]str
ing{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-df429,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ef8ed1eb-a7c6-4c49-9e46-924bad4d9577,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5054778f39bbb435a191b7cbdabfca614d8b6d64b7bce122294d50415d601787,PodSandboxId:6fda3c09e12fee3d9bfbdf163c989e0242d8464e853282bca12154b468cf1a1d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726852104287105818,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gcvg4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 899b2b8c-9009-46c0-816b-781e85eb8b19,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8792a3b1249ffb1a157682b07dec9a0edf11b81c44e53ea068e0fc6f4548be22,PodSandboxId:ed014d23a111faa2c65032e35b2ef554b33cd4cb8eddbc89e3b77efabb53a5ce,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726852092541516094,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6clt2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d73a0817-d84f-4269-9de0-1532287a07db,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4b462c3efaa10b1790ebacd5fe8845d6eb91643cb6db9b2830f33afe1744d57,PodSandboxId:1971096e9fdaaef4dfb856b23a044da810f3ea4c5d05511c7fc53105be9d664b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726852092275153244,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-52r49,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d1124bd-e7cb-4239-a29d-c1d5b8870aff,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db80f5e2505940b81bc20c39e98d873b35eff2e71aac748d8b406e21f5435fca,PodSandboxId:77a9434f5f03e0fce6c021d0cb97ce2bbe8dbb697371c561907dc8656adca49c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726852081504258492,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 086260df87dea88f53b3b3ca08d61864,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e70d74afe0f7f7c1e0f084aba99e9c488a1f999e94866b12072fdcbc24b0dad7,PodSandboxId:74a0a0888b0f65cfdbc3321666aaf8f377feaae2a4e3c132c5ed40ccd4468689,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1726852081538881038,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f0e3932935569a5c49edd0da5a87eba,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3be30997-16ed-400c-b061-8e234c7ac4fe name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:23:36 ha-135993 crio[3715]: time="2024-09-20 17:23:36.487450450Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=24e201c3-d92a-4dac-819a-ec5ad28b4d7e name=/runtime.v1.RuntimeService/Version
	Sep 20 17:23:36 ha-135993 crio[3715]: time="2024-09-20 17:23:36.487526754Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=24e201c3-d92a-4dac-819a-ec5ad28b4d7e name=/runtime.v1.RuntimeService/Version
	Sep 20 17:23:36 ha-135993 crio[3715]: time="2024-09-20 17:23:36.489048507Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a21cc3dc-b4a4-4dd4-ab92-ab373779d11a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 17:23:36 ha-135993 crio[3715]: time="2024-09-20 17:23:36.489696290Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726853016489664294,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a21cc3dc-b4a4-4dd4-ab92-ab373779d11a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 17:23:36 ha-135993 crio[3715]: time="2024-09-20 17:23:36.490394302Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d38fc061-81d4-4c63-8e2d-591e1d4242c1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:23:36 ha-135993 crio[3715]: time="2024-09-20 17:23:36.490452323Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d38fc061-81d4-4c63-8e2d-591e1d4242c1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:23:36 ha-135993 crio[3715]: time="2024-09-20 17:23:36.490911707Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a6addfcb27c43b162c58e5dc89b34a21b057ffa9e5c6929c13e6720a7d41c017,PodSandboxId:f362cdeaa384c6a9423bc0e3f8a2e87059f200fade4e07de0512ba9104a08560,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726852791788949922,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57137bee-9a7b-4659-a855-0da82d137cb0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79d6dfc37ecea1b70bf42810eb355d3d33df6f44ad8813cb64a541d83b09b12f,PodSandboxId:3426320ea65936be4749a4579ea74dd7b8d3d6d6b1708f63e0490d36f8754cbc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726852749787357892,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bce625d942177d9274bb431f7a7012b2,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ee493ac17f0bde4d555b58e8b816b325426800736be6cff626867aa80e21c96,PodSandboxId:1b66d16f83ebcbb613c67b790b5609e78babfb7bd452c17106f8a33ad0ed7f3a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726852749775751738,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fa5fbf3465d3154576a11474b7b8548,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09b2ab354660c3a46043369de5a5d91ba3bbacf95655fdc9ad4f34c7160d3f2d,PodSandboxId:f362cdeaa384c6a9423bc0e3f8a2e87059f200fade4e07de0512ba9104a08560,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726852743779709268,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57137bee-9a7b-4659-a855-0da82d137cb0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b045c2884667987ee6e8c10e271c2e6226efba112efed8795e08e4a67e0990dc,PodSandboxId:ae90d8ac2b75af492d713bbe29179244cfcaa84f3743ba966f27bbd1ff5508c4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726852740036843854,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-df429,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ef8ed1eb-a7c6-4c49-9e46-924bad4d9577,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ca3f9a710f32d6cb191f799a34e55023e57f7dbebf54b32a353fa72b332ddde,PodSandboxId:a29604104e7ce7457620e53485525a9eb289cd6d4b4d58f90a78e8d40b586cf0,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726852720182665622,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c06f5b4bed0418e8539976286a422aad,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5168567f834602e7f761a3eff5e14aa6b5d1d15b02eb85e90b1e331fc1b73fed,PodSandboxId:eec2e8181267a85c9517e828def8039caef778199c53474aae6cc8adfa8bc435,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726852714096373683,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kpbhk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dfd9f1a-148c-4dba-884a-8618b74f82d0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9
153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff3c00df26f158af79f61dc02b970ad39db9d4f0fcffefa0b719f4598158f62b,PodSandboxId:dee7980a50d73104b0c061a1ffa9347cb87e48985218f83b818efbed52f4a239,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726852709920410907,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gcvg4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 899b2b8c-9009-46c0-816b-781e85eb8b19,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.k
ubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:398409b93b45dca7c586c3035d250b5dc775d03ee8dd75f81e9e7427a559ed00,PodSandboxId:1bd48e5dcd4693a3b4c8340aba118e7ad278f521e69bcfd1541092778c515ae6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726852707455117279,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name:
kube-proxy-52r49,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d1124bd-e7cb-4239-a29d-c1d5b8870aff,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d6797d9f1c442f6743dde97cc9de50c155c5e2b00e1197ba4a088a93bd7c5db,PodSandboxId:2ef4ad9be22fb67138ab03ac3ed8e5cfb7f27022fb85eb427a1902ca45aec292,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726852706753264043,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6clt2,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: d73a0817-d84f-4269-9de0-1532287a07db,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a83a259fa257292715bb5b2d5b323b49763d5553518b528fc78fc079bb9ed45,PodSandboxId:3426320ea65936be4749a4579ea74dd7b8d3d6d6b1708f63e0490d36f8754cbc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726852706597265158,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-135993,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: bce625d942177d9274bb431f7a7012b2,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee0aced30e75021c727bf72254c618cb3ff71687fe9882fab00322be6da050f0,PodSandboxId:9317a52c9caac0dba626b3d134d689c2bf560e3e0ccc4367c435a9b9e631156e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726852706448342207,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f0e3932935569a5c49edd
0da5a87eba,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41c08dd315ead086282b3d4c8d4a5fb35136bd8d2824d8516b73a61281a9eeaf,PodSandboxId:1b66d16f83ebcbb613c67b790b5609e78babfb7bd452c17106f8a33ad0ed7f3a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726852706532089181,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fa5fbf3465d31545
76a11474b7b8548,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6001c238074303dee4a1968943cb28c0e10f360312e0284fc54018ba88a7f96d,PodSandboxId:2c6a8e58a88556b2103f93f3116a1d5ecc55470841ea358bf0c41f6c80d864ed,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726852706446603353,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 086260df87dea88f53b3b3ca08d61864,},Ann
otations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf7badc294591b4e6eb17ac796ab10e69706e71983d63ccdd57ff7c4e36ac50e,PodSandboxId:e228f7e109c3c3870cb95be64330287ff98d08fbf7a86af9f160c71d511053a4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726852693213530849,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kpbhk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dfd9f1a-148c-4dba-884a-8618b74f82d0,},Annotations:map[string]string{io.ku
bernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2a30264a8299336cfb5a4338bab42f8ef663f285ffa7fcb290de88635658a97,PodSandboxId:afa282bba6347297904c2a6423e39984da1a95a2caf8b23dd6857518cc0bb05d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726852250916118141,Labels:map[string]str
ing{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-df429,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ef8ed1eb-a7c6-4c49-9e46-924bad4d9577,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5054778f39bbb435a191b7cbdabfca614d8b6d64b7bce122294d50415d601787,PodSandboxId:6fda3c09e12fee3d9bfbdf163c989e0242d8464e853282bca12154b468cf1a1d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726852104287105818,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gcvg4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 899b2b8c-9009-46c0-816b-781e85eb8b19,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8792a3b1249ffb1a157682b07dec9a0edf11b81c44e53ea068e0fc6f4548be22,PodSandboxId:ed014d23a111faa2c65032e35b2ef554b33cd4cb8eddbc89e3b77efabb53a5ce,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726852092541516094,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6clt2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d73a0817-d84f-4269-9de0-1532287a07db,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4b462c3efaa10b1790ebacd5fe8845d6eb91643cb6db9b2830f33afe1744d57,PodSandboxId:1971096e9fdaaef4dfb856b23a044da810f3ea4c5d05511c7fc53105be9d664b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726852092275153244,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-52r49,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d1124bd-e7cb-4239-a29d-c1d5b8870aff,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db80f5e2505940b81bc20c39e98d873b35eff2e71aac748d8b406e21f5435fca,PodSandboxId:77a9434f5f03e0fce6c021d0cb97ce2bbe8dbb697371c561907dc8656adca49c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726852081504258492,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 086260df87dea88f53b3b3ca08d61864,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e70d74afe0f7f7c1e0f084aba99e9c488a1f999e94866b12072fdcbc24b0dad7,PodSandboxId:74a0a0888b0f65cfdbc3321666aaf8f377feaae2a4e3c132c5ed40ccd4468689,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1726852081538881038,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f0e3932935569a5c49edd0da5a87eba,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d38fc061-81d4-4c63-8e2d-591e1d4242c1 name=/runtime.v1.RuntimeService/ListContainers
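	The ListContainers response that ends here is the full, unfiltered container list ("No filters were applied" above); it mixes CONTAINER_RUNNING entries recreated after the node restart (restartCount 1-4) with the CONTAINER_EXITED attempt-0 containers from before the stop. A minimal sketch of the same query with a state filter, under the same socket-path and cri-api assumptions as the earlier sketch:

	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()

		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// Unlike the nil-filter request in the log, this asks CRI-O for exited containers only.
		resp, err := runtimeapi.NewRuntimeServiceClient(conn).ListContainers(ctx,
			&runtimeapi.ListContainersRequest{
				Filter: &runtimeapi.ContainerFilter{
					State: &runtimeapi.ContainerStateValue{
						State: runtimeapi.ContainerState_CONTAINER_EXITED,
					},
				},
			})
		if err != nil {
			log.Fatal(err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%s %-25s attempt=%d pod=%s created=%s\n",
				c.Id[:12], c.Metadata.Name, c.Metadata.Attempt,
				c.Labels["io.kubernetes.pod.name"],
				time.Unix(0, c.CreatedAt).Format(time.RFC3339))
		}
	}

	From the command line, crictl ps -a --state exited sends the equivalent filtered request.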
	Sep 20 17:23:36 ha-135993 crio[3715]: time="2024-09-20 17:23:36.516253965Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=da6a7329-e4c1-4cae-922c-8faba2fc123b name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 20 17:23:36 ha-135993 crio[3715]: time="2024-09-20 17:23:36.516966726Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:ae90d8ac2b75af492d713bbe29179244cfcaa84f3743ba966f27bbd1ff5508c4,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-df429,Uid:ef8ed1eb-a7c6-4c49-9e46-924bad4d9577,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726852739910193667,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-df429,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ef8ed1eb-a7c6-4c49-9e46-924bad4d9577,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-20T17:10:46.990722060Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a29604104e7ce7457620e53485525a9eb289cd6d4b4d58f90a78e8d40b586cf0,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-135993,Uid:c06f5b4bed0418e8539976286a422aad,Namespace:kube-system,Attempt:0,},State:SANDBOX_RE
ADY,CreatedAt:1726852720084439227,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c06f5b4bed0418e8539976286a422aad,},Annotations:map[string]string{kubernetes.io/config.hash: c06f5b4bed0418e8539976286a422aad,kubernetes.io/config.seen: 2024-09-20T17:18:23.920581094Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:dee7980a50d73104b0c061a1ffa9347cb87e48985218f83b818efbed52f4a239,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-gcvg4,Uid:899b2b8c-9009-46c0-816b-781e85eb8b19,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726852709800475581,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-gcvg4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 899b2b8c-9009-46c0-816b-781e85eb8b19,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09
-20T17:08:23.762967331Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:eec2e8181267a85c9517e828def8039caef778199c53474aae6cc8adfa8bc435,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-kpbhk,Uid:0dfd9f1a-148c-4dba-884a-8618b74f82d0,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1726852706257904275,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-kpbhk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dfd9f1a-148c-4dba-884a-8618b74f82d0,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-20T17:08:23.774001043Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3426320ea65936be4749a4579ea74dd7b8d3d6d6b1708f63e0490d36f8754cbc,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-135993,Uid:bce625d942177d9274bb431f7a7012b2,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726852706176389099,Labels:map[string]strin
g{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bce625d942177d9274bb431f7a7012b2,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.60:8443,kubernetes.io/config.hash: bce625d942177d9274bb431f7a7012b2,kubernetes.io/config.seen: 2024-09-20T17:08:07.708093351Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1bd48e5dcd4693a3b4c8340aba118e7ad278f521e69bcfd1541092778c515ae6,Metadata:&PodSandboxMetadata{Name:kube-proxy-52r49,Uid:8d1124bd-e7cb-4239-a29d-c1d5b8870aff,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726852706157957243,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-52r49,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d1124bd-e7cb-4239-a29d-c1d5b8870aff,k8s-app: kube-
proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-20T17:08:11.842947608Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2ef4ad9be22fb67138ab03ac3ed8e5cfb7f27022fb85eb427a1902ca45aec292,Metadata:&PodSandboxMetadata{Name:kindnet-6clt2,Uid:d73a0817-d84f-4269-9de0-1532287a07db,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726852706153250497,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-6clt2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d73a0817-d84f-4269-9de0-1532287a07db,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-20T17:08:11.847631289Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2c6a8e58a88556b2103f93f3116a1d5ecc55470841ea358bf0c41f6c80d864ed,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-135993,Uid:086260
df87dea88f53b3b3ca08d61864,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726852706139378313,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 086260df87dea88f53b3b3ca08d61864,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 086260df87dea88f53b3b3ca08d61864,kubernetes.io/config.seen: 2024-09-20T17:08:07.708095977Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1b66d16f83ebcbb613c67b790b5609e78babfb7bd452c17106f8a33ad0ed7f3a,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-135993,Uid:1fa5fbf3465d3154576a11474b7b8548,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726852706138870013,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-135993,io.kubernetes.pod.namespace: kube-sys
tem,io.kubernetes.pod.uid: 1fa5fbf3465d3154576a11474b7b8548,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 1fa5fbf3465d3154576a11474b7b8548,kubernetes.io/config.seen: 2024-09-20T17:08:07.708094953Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9317a52c9caac0dba626b3d134d689c2bf560e3e0ccc4367c435a9b9e631156e,Metadata:&PodSandboxMetadata{Name:etcd-ha-135993,Uid:4f0e3932935569a5c49edd0da5a87eba,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726852706115596037,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f0e3932935569a5c49edd0da5a87eba,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.60:2379,kubernetes.io/config.hash: 4f0e3932935569a5c49edd0da5a87eba,kubernetes.io/config.seen: 2024-09-20T17:08:07.708064881Z,kubernetes.io/config.source: fil
e,},RuntimeHandler:,},&PodSandbox{Id:f362cdeaa384c6a9423bc0e3f8a2e87059f200fade4e07de0512ba9104a08560,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:57137bee-9a7b-4659-a855-0da82d137cb0,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726852706076073836,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57137bee-9a7b-4659-a855-0da82d137cb0,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imageP
ullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-20T17:08:23.778285024Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e228f7e109c3c3870cb95be64330287ff98d08fbf7a86af9f160c71d511053a4,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-kpbhk,Uid:0dfd9f1a-148c-4dba-884a-8618b74f82d0,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1726852693057510740,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-kpbhk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dfd9f1a-148c-4dba-884a-8618b74f82d0,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-20T17:08:23.774001043Z,kubernetes.io
/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:afa282bba6347297904c2a6423e39984da1a95a2caf8b23dd6857518cc0bb05d,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-df429,Uid:ef8ed1eb-a7c6-4c49-9e46-924bad4d9577,Namespace:default,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726852247309191767,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-df429,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ef8ed1eb-a7c6-4c49-9e46-924bad4d9577,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-20T17:10:46.990722060Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6fda3c09e12fee3d9bfbdf163c989e0242d8464e853282bca12154b468cf1a1d,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-gcvg4,Uid:899b2b8c-9009-46c0-816b-781e85eb8b19,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726852104073849199,Labels:map[string]string{io.kubernetes.container.name: POD,io.kube
rnetes.pod.name: coredns-7c65d6cfc9-gcvg4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 899b2b8c-9009-46c0-816b-781e85eb8b19,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-20T17:08:23.762967331Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ed014d23a111faa2c65032e35b2ef554b33cd4cb8eddbc89e3b77efabb53a5ce,Metadata:&PodSandboxMetadata{Name:kindnet-6clt2,Uid:d73a0817-d84f-4269-9de0-1532287a07db,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726852092183788077,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-6clt2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d73a0817-d84f-4269-9de0-1532287a07db,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-20T17:08:11.847631289Z,kubernetes.io/config.source: api,},Runtim
eHandler:,},&PodSandbox{Id:1971096e9fdaaef4dfb856b23a044da810f3ea4c5d05511c7fc53105be9d664b,Metadata:&PodSandboxMetadata{Name:kube-proxy-52r49,Uid:8d1124bd-e7cb-4239-a29d-c1d5b8870aff,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726852092149866432,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-52r49,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d1124bd-e7cb-4239-a29d-c1d5b8870aff,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-20T17:08:11.842947608Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:77a9434f5f03e0fce6c021d0cb97ce2bbe8dbb697371c561907dc8656adca49c,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-135993,Uid:086260df87dea88f53b3b3ca08d61864,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726852081375931751,Labels:map[string]string{component: kube-scheduler,io.kubernet
es.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 086260df87dea88f53b3b3ca08d61864,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 086260df87dea88f53b3b3ca08d61864,kubernetes.io/config.seen: 2024-09-20T17:08:00.892479642Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:74a0a0888b0f65cfdbc3321666aaf8f377feaae2a4e3c132c5ed40ccd4468689,Metadata:&PodSandboxMetadata{Name:etcd-ha-135993,Uid:4f0e3932935569a5c49edd0da5a87eba,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726852081348033851,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f0e3932935569a5c49edd0da5a87eba,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.60:2379,kubernetes.io/config.hash: 4f0e3932935
569a5c49edd0da5a87eba,kubernetes.io/config.seen: 2024-09-20T17:08:00.892473478Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=da6a7329-e4c1-4cae-922c-8faba2fc123b name=/runtime.v1.RuntimeService/ListPodSandbox
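	The ListPodSandbox response above pairs each pod with its sandbox state: SANDBOX_READY for the sandboxes recreated after the restart (Attempt 1 or 2) and SANDBOX_NOTREADY for the originals left over from before the stop. A minimal sketch of that call, with the same assumptions as the sketches above:

	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()

		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// Same nil-filter request as "&ListPodSandboxRequest{Filter:nil,}" in the log.
		resp, err := runtimeapi.NewRuntimeServiceClient(conn).ListPodSandbox(ctx,
			&runtimeapi.ListPodSandboxRequest{})
		if err != nil {
			log.Fatal(err)
		}
		for _, s := range resp.Items {
			fmt.Printf("%s %s/%s attempt=%d state=%v\n",
				s.Id[:12], s.Metadata.Namespace, s.Metadata.Name,
				s.Metadata.Attempt, s.State)
		}
	}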
	Sep 20 17:23:36 ha-135993 crio[3715]: time="2024-09-20 17:23:36.517840201Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ff032588-9c06-4fa5-b2b0-fc9551f51dcb name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:23:36 ha-135993 crio[3715]: time="2024-09-20 17:23:36.517902195Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ff032588-9c06-4fa5-b2b0-fc9551f51dcb name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:23:36 ha-135993 crio[3715]: time="2024-09-20 17:23:36.518424650Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a6addfcb27c43b162c58e5dc89b34a21b057ffa9e5c6929c13e6720a7d41c017,PodSandboxId:f362cdeaa384c6a9423bc0e3f8a2e87059f200fade4e07de0512ba9104a08560,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726852791788949922,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57137bee-9a7b-4659-a855-0da82d137cb0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79d6dfc37ecea1b70bf42810eb355d3d33df6f44ad8813cb64a541d83b09b12f,PodSandboxId:3426320ea65936be4749a4579ea74dd7b8d3d6d6b1708f63e0490d36f8754cbc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726852749787357892,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bce625d942177d9274bb431f7a7012b2,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ee493ac17f0bde4d555b58e8b816b325426800736be6cff626867aa80e21c96,PodSandboxId:1b66d16f83ebcbb613c67b790b5609e78babfb7bd452c17106f8a33ad0ed7f3a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726852749775751738,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fa5fbf3465d3154576a11474b7b8548,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09b2ab354660c3a46043369de5a5d91ba3bbacf95655fdc9ad4f34c7160d3f2d,PodSandboxId:f362cdeaa384c6a9423bc0e3f8a2e87059f200fade4e07de0512ba9104a08560,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726852743779709268,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57137bee-9a7b-4659-a855-0da82d137cb0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b045c2884667987ee6e8c10e271c2e6226efba112efed8795e08e4a67e0990dc,PodSandboxId:ae90d8ac2b75af492d713bbe29179244cfcaa84f3743ba966f27bbd1ff5508c4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726852740036843854,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-df429,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ef8ed1eb-a7c6-4c49-9e46-924bad4d9577,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ca3f9a710f32d6cb191f799a34e55023e57f7dbebf54b32a353fa72b332ddde,PodSandboxId:a29604104e7ce7457620e53485525a9eb289cd6d4b4d58f90a78e8d40b586cf0,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726852720182665622,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c06f5b4bed0418e8539976286a422aad,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5168567f834602e7f761a3eff5e14aa6b5d1d15b02eb85e90b1e331fc1b73fed,PodSandboxId:eec2e8181267a85c9517e828def8039caef778199c53474aae6cc8adfa8bc435,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726852714096373683,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kpbhk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dfd9f1a-148c-4dba-884a-8618b74f82d0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9
153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff3c00df26f158af79f61dc02b970ad39db9d4f0fcffefa0b719f4598158f62b,PodSandboxId:dee7980a50d73104b0c061a1ffa9347cb87e48985218f83b818efbed52f4a239,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726852709920410907,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gcvg4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 899b2b8c-9009-46c0-816b-781e85eb8b19,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.k
ubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:398409b93b45dca7c586c3035d250b5dc775d03ee8dd75f81e9e7427a559ed00,PodSandboxId:1bd48e5dcd4693a3b4c8340aba118e7ad278f521e69bcfd1541092778c515ae6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726852707455117279,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name:
kube-proxy-52r49,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d1124bd-e7cb-4239-a29d-c1d5b8870aff,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d6797d9f1c442f6743dde97cc9de50c155c5e2b00e1197ba4a088a93bd7c5db,PodSandboxId:2ef4ad9be22fb67138ab03ac3ed8e5cfb7f27022fb85eb427a1902ca45aec292,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726852706753264043,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6clt2,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: d73a0817-d84f-4269-9de0-1532287a07db,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a83a259fa257292715bb5b2d5b323b49763d5553518b528fc78fc079bb9ed45,PodSandboxId:3426320ea65936be4749a4579ea74dd7b8d3d6d6b1708f63e0490d36f8754cbc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726852706597265158,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-135993,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: bce625d942177d9274bb431f7a7012b2,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee0aced30e75021c727bf72254c618cb3ff71687fe9882fab00322be6da050f0,PodSandboxId:9317a52c9caac0dba626b3d134d689c2bf560e3e0ccc4367c435a9b9e631156e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726852706448342207,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f0e3932935569a5c49edd
0da5a87eba,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41c08dd315ead086282b3d4c8d4a5fb35136bd8d2824d8516b73a61281a9eeaf,PodSandboxId:1b66d16f83ebcbb613c67b790b5609e78babfb7bd452c17106f8a33ad0ed7f3a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726852706532089181,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fa5fbf3465d31545
76a11474b7b8548,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6001c238074303dee4a1968943cb28c0e10f360312e0284fc54018ba88a7f96d,PodSandboxId:2c6a8e58a88556b2103f93f3116a1d5ecc55470841ea358bf0c41f6c80d864ed,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726852706446603353,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 086260df87dea88f53b3b3ca08d61864,},Ann
otations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf7badc294591b4e6eb17ac796ab10e69706e71983d63ccdd57ff7c4e36ac50e,PodSandboxId:e228f7e109c3c3870cb95be64330287ff98d08fbf7a86af9f160c71d511053a4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726852693213530849,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kpbhk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dfd9f1a-148c-4dba-884a-8618b74f82d0,},Annotations:map[string]string{io.ku
bernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2a30264a8299336cfb5a4338bab42f8ef663f285ffa7fcb290de88635658a97,PodSandboxId:afa282bba6347297904c2a6423e39984da1a95a2caf8b23dd6857518cc0bb05d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726852250916118141,Labels:map[string]str
ing{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-df429,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ef8ed1eb-a7c6-4c49-9e46-924bad4d9577,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5054778f39bbb435a191b7cbdabfca614d8b6d64b7bce122294d50415d601787,PodSandboxId:6fda3c09e12fee3d9bfbdf163c989e0242d8464e853282bca12154b468cf1a1d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726852104287105818,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gcvg4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 899b2b8c-9009-46c0-816b-781e85eb8b19,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8792a3b1249ffb1a157682b07dec9a0edf11b81c44e53ea068e0fc6f4548be22,PodSandboxId:ed014d23a111faa2c65032e35b2ef554b33cd4cb8eddbc89e3b77efabb53a5ce,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726852092541516094,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6clt2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d73a0817-d84f-4269-9de0-1532287a07db,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4b462c3efaa10b1790ebacd5fe8845d6eb91643cb6db9b2830f33afe1744d57,PodSandboxId:1971096e9fdaaef4dfb856b23a044da810f3ea4c5d05511c7fc53105be9d664b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726852092275153244,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-52r49,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d1124bd-e7cb-4239-a29d-c1d5b8870aff,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db80f5e2505940b81bc20c39e98d873b35eff2e71aac748d8b406e21f5435fca,PodSandboxId:77a9434f5f03e0fce6c021d0cb97ce2bbe8dbb697371c561907dc8656adca49c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726852081504258492,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 086260df87dea88f53b3b3ca08d61864,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e70d74afe0f7f7c1e0f084aba99e9c488a1f999e94866b12072fdcbc24b0dad7,PodSandboxId:74a0a0888b0f65cfdbc3321666aaf8f377feaae2a4e3c132c5ed40ccd4468689,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1726852081538881038,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f0e3932935569a5c49edd0da5a87eba,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ff032588-9c06-4fa5-b2b0-fc9551f51dcb name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:23:36 ha-135993 crio[3715]: time="2024-09-20 17:23:36.535070778Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3545d18d-80b3-4189-a698-d6af0fd9093a name=/runtime.v1.RuntimeService/Version
	Sep 20 17:23:36 ha-135993 crio[3715]: time="2024-09-20 17:23:36.535424283Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3545d18d-80b3-4189-a698-d6af0fd9093a name=/runtime.v1.RuntimeService/Version
	Sep 20 17:23:36 ha-135993 crio[3715]: time="2024-09-20 17:23:36.536916846Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fa90fef1-c845-46b0-93b8-2c35326a2b97 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 17:23:36 ha-135993 crio[3715]: time="2024-09-20 17:23:36.537738198Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726853016537707198,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fa90fef1-c845-46b0-93b8-2c35326a2b97 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 17:23:36 ha-135993 crio[3715]: time="2024-09-20 17:23:36.538655356Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a351a35e-9777-4f0c-9e6b-02471c27b5bb name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:23:36 ha-135993 crio[3715]: time="2024-09-20 17:23:36.538719070Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a351a35e-9777-4f0c-9e6b-02471c27b5bb name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:23:36 ha-135993 crio[3715]: time="2024-09-20 17:23:36.539171806Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a6addfcb27c43b162c58e5dc89b34a21b057ffa9e5c6929c13e6720a7d41c017,PodSandboxId:f362cdeaa384c6a9423bc0e3f8a2e87059f200fade4e07de0512ba9104a08560,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726852791788949922,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57137bee-9a7b-4659-a855-0da82d137cb0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79d6dfc37ecea1b70bf42810eb355d3d33df6f44ad8813cb64a541d83b09b12f,PodSandboxId:3426320ea65936be4749a4579ea74dd7b8d3d6d6b1708f63e0490d36f8754cbc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726852749787357892,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bce625d942177d9274bb431f7a7012b2,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ee493ac17f0bde4d555b58e8b816b325426800736be6cff626867aa80e21c96,PodSandboxId:1b66d16f83ebcbb613c67b790b5609e78babfb7bd452c17106f8a33ad0ed7f3a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726852749775751738,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fa5fbf3465d3154576a11474b7b8548,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09b2ab354660c3a46043369de5a5d91ba3bbacf95655fdc9ad4f34c7160d3f2d,PodSandboxId:f362cdeaa384c6a9423bc0e3f8a2e87059f200fade4e07de0512ba9104a08560,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726852743779709268,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57137bee-9a7b-4659-a855-0da82d137cb0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b045c2884667987ee6e8c10e271c2e6226efba112efed8795e08e4a67e0990dc,PodSandboxId:ae90d8ac2b75af492d713bbe29179244cfcaa84f3743ba966f27bbd1ff5508c4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726852740036843854,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-df429,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ef8ed1eb-a7c6-4c49-9e46-924bad4d9577,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ca3f9a710f32d6cb191f799a34e55023e57f7dbebf54b32a353fa72b332ddde,PodSandboxId:a29604104e7ce7457620e53485525a9eb289cd6d4b4d58f90a78e8d40b586cf0,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726852720182665622,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c06f5b4bed0418e8539976286a422aad,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5168567f834602e7f761a3eff5e14aa6b5d1d15b02eb85e90b1e331fc1b73fed,PodSandboxId:eec2e8181267a85c9517e828def8039caef778199c53474aae6cc8adfa8bc435,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726852714096373683,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kpbhk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dfd9f1a-148c-4dba-884a-8618b74f82d0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9
153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff3c00df26f158af79f61dc02b970ad39db9d4f0fcffefa0b719f4598158f62b,PodSandboxId:dee7980a50d73104b0c061a1ffa9347cb87e48985218f83b818efbed52f4a239,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726852709920410907,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gcvg4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 899b2b8c-9009-46c0-816b-781e85eb8b19,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.k
ubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:398409b93b45dca7c586c3035d250b5dc775d03ee8dd75f81e9e7427a559ed00,PodSandboxId:1bd48e5dcd4693a3b4c8340aba118e7ad278f521e69bcfd1541092778c515ae6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726852707455117279,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name:
kube-proxy-52r49,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d1124bd-e7cb-4239-a29d-c1d5b8870aff,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d6797d9f1c442f6743dde97cc9de50c155c5e2b00e1197ba4a088a93bd7c5db,PodSandboxId:2ef4ad9be22fb67138ab03ac3ed8e5cfb7f27022fb85eb427a1902ca45aec292,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726852706753264043,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6clt2,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: d73a0817-d84f-4269-9de0-1532287a07db,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a83a259fa257292715bb5b2d5b323b49763d5553518b528fc78fc079bb9ed45,PodSandboxId:3426320ea65936be4749a4579ea74dd7b8d3d6d6b1708f63e0490d36f8754cbc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726852706597265158,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-135993,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: bce625d942177d9274bb431f7a7012b2,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee0aced30e75021c727bf72254c618cb3ff71687fe9882fab00322be6da050f0,PodSandboxId:9317a52c9caac0dba626b3d134d689c2bf560e3e0ccc4367c435a9b9e631156e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726852706448342207,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f0e3932935569a5c49edd
0da5a87eba,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41c08dd315ead086282b3d4c8d4a5fb35136bd8d2824d8516b73a61281a9eeaf,PodSandboxId:1b66d16f83ebcbb613c67b790b5609e78babfb7bd452c17106f8a33ad0ed7f3a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726852706532089181,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fa5fbf3465d31545
76a11474b7b8548,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6001c238074303dee4a1968943cb28c0e10f360312e0284fc54018ba88a7f96d,PodSandboxId:2c6a8e58a88556b2103f93f3116a1d5ecc55470841ea358bf0c41f6c80d864ed,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726852706446603353,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 086260df87dea88f53b3b3ca08d61864,},Ann
otations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf7badc294591b4e6eb17ac796ab10e69706e71983d63ccdd57ff7c4e36ac50e,PodSandboxId:e228f7e109c3c3870cb95be64330287ff98d08fbf7a86af9f160c71d511053a4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726852693213530849,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kpbhk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dfd9f1a-148c-4dba-884a-8618b74f82d0,},Annotations:map[string]string{io.ku
bernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2a30264a8299336cfb5a4338bab42f8ef663f285ffa7fcb290de88635658a97,PodSandboxId:afa282bba6347297904c2a6423e39984da1a95a2caf8b23dd6857518cc0bb05d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726852250916118141,Labels:map[string]str
ing{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-df429,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ef8ed1eb-a7c6-4c49-9e46-924bad4d9577,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5054778f39bbb435a191b7cbdabfca614d8b6d64b7bce122294d50415d601787,PodSandboxId:6fda3c09e12fee3d9bfbdf163c989e0242d8464e853282bca12154b468cf1a1d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726852104287105818,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gcvg4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 899b2b8c-9009-46c0-816b-781e85eb8b19,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8792a3b1249ffb1a157682b07dec9a0edf11b81c44e53ea068e0fc6f4548be22,PodSandboxId:ed014d23a111faa2c65032e35b2ef554b33cd4cb8eddbc89e3b77efabb53a5ce,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726852092541516094,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6clt2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d73a0817-d84f-4269-9de0-1532287a07db,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4b462c3efaa10b1790ebacd5fe8845d6eb91643cb6db9b2830f33afe1744d57,PodSandboxId:1971096e9fdaaef4dfb856b23a044da810f3ea4c5d05511c7fc53105be9d664b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726852092275153244,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-52r49,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d1124bd-e7cb-4239-a29d-c1d5b8870aff,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db80f5e2505940b81bc20c39e98d873b35eff2e71aac748d8b406e21f5435fca,PodSandboxId:77a9434f5f03e0fce6c021d0cb97ce2bbe8dbb697371c561907dc8656adca49c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726852081504258492,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 086260df87dea88f53b3b3ca08d61864,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e70d74afe0f7f7c1e0f084aba99e9c488a1f999e94866b12072fdcbc24b0dad7,PodSandboxId:74a0a0888b0f65cfdbc3321666aaf8f377feaae2a4e3c132c5ed40ccd4468689,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1726852081538881038,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-135993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f0e3932935569a5c49edd0da5a87eba,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a351a35e-9777-4f0c-9e6b-02471c27b5bb name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a6addfcb27c43       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       4                   f362cdeaa384c       storage-provisioner
	79d6dfc37ecea       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      4 minutes ago       Running             kube-apiserver            3                   3426320ea6593       kube-apiserver-ha-135993
	9ee493ac17f0b       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      4 minutes ago       Running             kube-controller-manager   2                   1b66d16f83ebc       kube-controller-manager-ha-135993
	09b2ab354660c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Exited              storage-provisioner       3                   f362cdeaa384c       storage-provisioner
	b045c28846679       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      4 minutes ago       Running             busybox                   1                   ae90d8ac2b75a       busybox-7dff88458-df429
	3ca3f9a710f32       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      4 minutes ago       Running             kube-vip                  0                   a29604104e7ce       kube-vip-ha-135993
	5168567f83460       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   2                   eec2e8181267a       coredns-7c65d6cfc9-kpbhk
	ff3c00df26f15       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   1                   dee7980a50d73       coredns-7c65d6cfc9-gcvg4
	398409b93b45d       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      5 minutes ago       Running             kube-proxy                1                   1bd48e5dcd469       kube-proxy-52r49
	8d6797d9f1c44       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      5 minutes ago       Running             kindnet-cni               1                   2ef4ad9be22fb       kindnet-6clt2
	0a83a259fa257       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      5 minutes ago       Exited              kube-apiserver            2                   3426320ea6593       kube-apiserver-ha-135993
	41c08dd315ead       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      5 minutes ago       Exited              kube-controller-manager   1                   1b66d16f83ebc       kube-controller-manager-ha-135993
	ee0aced30e750       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      5 minutes ago       Running             etcd                      1                   9317a52c9caac       etcd-ha-135993
	6001c23807430       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      5 minutes ago       Running             kube-scheduler            1                   2c6a8e58a8855       kube-scheduler-ha-135993
	bf7badc294591       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Exited              coredns                   1                   e228f7e109c3c       coredns-7c65d6cfc9-kpbhk
	d2a30264a8299       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   12 minutes ago      Exited              busybox                   0                   afa282bba6347       busybox-7dff88458-df429
	5054778f39bbb       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      15 minutes ago      Exited              coredns                   0                   6fda3c09e12fe       coredns-7c65d6cfc9-gcvg4
	8792a3b1249ff       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      15 minutes ago      Exited              kindnet-cni               0                   ed014d23a111f       kindnet-6clt2
	e4b462c3efaa1       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      15 minutes ago      Exited              kube-proxy                0                   1971096e9fdaa       kube-proxy-52r49
	e70d74afe0f7f       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      15 minutes ago      Exited              etcd                      0                   74a0a0888b0f6       etcd-ha-135993
	db80f5e250594       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      15 minutes ago      Exited              kube-scheduler            0                   77a9434f5f03e       kube-scheduler-ha-135993
	
	
	==> coredns [5054778f39bbb435a191b7cbdabfca614d8b6d64b7bce122294d50415d601787] <==
	[INFO] 10.244.2.2:50089 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000170402s
	[INFO] 10.244.2.2:41205 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001201877s
	[INFO] 10.244.2.2:49094 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000154615s
	[INFO] 10.244.2.2:54226 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000116561s
	[INFO] 10.244.2.2:56885 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000137064s
	[INFO] 10.244.1.2:43199 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000133082s
	[INFO] 10.244.1.2:54300 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000122573s
	[INFO] 10.244.1.2:57535 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000095892s
	[INFO] 10.244.1.2:45845 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000088385s
	[INFO] 10.244.0.4:53452 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000193594s
	[INFO] 10.244.0.4:46571 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000075164s
	[INFO] 10.244.2.2:44125 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000166147s
	[INFO] 10.244.2.2:59364 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000113432s
	[INFO] 10.244.2.2:54562 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000112311s
	[INFO] 10.244.1.2:60066 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132637s
	[INFO] 10.244.1.2:43717 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00017413s
	[INFO] 10.244.1.2:51684 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000156522s
	[INFO] 10.244.0.4:56213 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000141144s
	[INFO] 10.244.2.2:56175 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000117658s
	[INFO] 10.244.2.2:59810 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000111868s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1781&timeout=8m9s&timeoutSeconds=489&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1787&timeout=7m13s&timeoutSeconds=433&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1821&timeout=9m22s&timeoutSeconds=562&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [5168567f834602e7f761a3eff5e14aa6b5d1d15b02eb85e90b1e331fc1b73fed] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:35934->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:35934->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [bf7badc294591b4e6eb17ac796ab10e69706e71983d63ccdd57ff7c4e36ac50e] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:53701 - 37780 "HINFO IN 4226797056785722848.48568784628734717. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.011790862s
	
	
	==> coredns [ff3c00df26f158af79f61dc02b970ad39db9d4f0fcffefa0b719f4598158f62b] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.7:39336->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.7:39308->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.7:39336->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.7:39308->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-135993
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-135993
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0626f22cf0d915d75e291a5bce701f94395056e1
	                    minikube.k8s.io/name=ha-135993
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_20T17_08_08_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 17:08:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-135993
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 17:23:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 17:19:11 +0000   Fri, 20 Sep 2024 17:08:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 17:19:11 +0000   Fri, 20 Sep 2024 17:08:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 17:19:11 +0000   Fri, 20 Sep 2024 17:08:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 17:19:11 +0000   Fri, 20 Sep 2024 17:08:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.60
	  Hostname:    ha-135993
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6e83ceee6b834466a3a10733ff3c06b4
	  System UUID:                6e83ceee-6b83-4466-a3a1-0733ff3c06b4
	  Boot ID:                    ddcdaa90-2381-4c26-932e-b18d04f91d88
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-df429              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-7c65d6cfc9-gcvg4             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 coredns-7c65d6cfc9-kpbhk             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-ha-135993                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-6clt2                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-135993             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-135993    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-52r49                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-135993             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-135993                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m42s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4m26s                  kube-proxy       
	  Normal   Starting                 15m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  15m                    kubelet          Node ha-135993 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     15m                    kubelet          Node ha-135993 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    15m                    kubelet          Node ha-135993 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 15m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  15m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           15m                    node-controller  Node ha-135993 event: Registered Node ha-135993 in Controller
	  Normal   NodeReady                15m                    kubelet          Node ha-135993 status is now: NodeReady
	  Normal   RegisteredNode           14m                    node-controller  Node ha-135993 event: Registered Node ha-135993 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-135993 event: Registered Node ha-135993 in Controller
	  Normal   NodeNotReady             5m41s (x2 over 6m6s)   kubelet          Node ha-135993 status is now: NodeNotReady
	  Warning  ContainerGCFailed        5m29s (x2 over 6m29s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           4m27s                  node-controller  Node ha-135993 event: Registered Node ha-135993 in Controller
	  Normal   RegisteredNode           4m21s                  node-controller  Node ha-135993 event: Registered Node ha-135993 in Controller
	  Normal   RegisteredNode           3m18s                  node-controller  Node ha-135993 event: Registered Node ha-135993 in Controller
	
	
	Name:               ha-135993-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-135993-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0626f22cf0d915d75e291a5bce701f94395056e1
	                    minikube.k8s.io/name=ha-135993
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_20T17_09_05_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 17:09:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-135993-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 17:23:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 17:19:54 +0000   Fri, 20 Sep 2024 17:19:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 17:19:54 +0000   Fri, 20 Sep 2024 17:19:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 17:19:54 +0000   Fri, 20 Sep 2024 17:19:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 17:19:54 +0000   Fri, 20 Sep 2024 17:19:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.227
	  Hostname:    ha-135993-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 50c529298e8f4fbb9207cda8fc4b8abe
	  System UUID:                50c52929-8e8f-4fbb-9207-cda8fc4b8abe
	  Boot ID:                    c2e01ab2-7f61-4e96-86cf-402743e36b78
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-cw8r4                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 etcd-ha-135993-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-5m4r8                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-apiserver-ha-135993-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-135993-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-z6xqt                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-135993-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-135993-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m20s                  kube-proxy       
	  Normal  Starting                 14m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)      kubelet          Node ha-135993-m02 status is now: NodeHasSufficientMemory
	  Normal  CIDRAssignmentFailed     14m                    cidrAllocator    Node ha-135993-m02 status is now: CIDRAssignmentFailed
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)      kubelet          Node ha-135993-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)      kubelet          Node ha-135993-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14m                    node-controller  Node ha-135993-m02 event: Registered Node ha-135993-m02 in Controller
	  Normal  RegisteredNode           14m                    node-controller  Node ha-135993-m02 event: Registered Node ha-135993-m02 in Controller
	  Normal  RegisteredNode           13m                    node-controller  Node ha-135993-m02 event: Registered Node ha-135993-m02 in Controller
	  Normal  NodeNotReady             11m                    node-controller  Node ha-135993-m02 status is now: NodeNotReady
	  Normal  Starting                 4m52s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m52s (x8 over 4m52s)  kubelet          Node ha-135993-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m52s (x8 over 4m52s)  kubelet          Node ha-135993-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m52s (x7 over 4m52s)  kubelet          Node ha-135993-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m52s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m28s                  node-controller  Node ha-135993-m02 event: Registered Node ha-135993-m02 in Controller
	  Normal  RegisteredNode           4m22s                  node-controller  Node ha-135993-m02 event: Registered Node ha-135993-m02 in Controller
	  Normal  RegisteredNode           3m19s                  node-controller  Node ha-135993-m02 event: Registered Node ha-135993-m02 in Controller
	
	
	Name:               ha-135993-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-135993-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0626f22cf0d915d75e291a5bce701f94395056e1
	                    minikube.k8s.io/name=ha-135993
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_20T17_11_21_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 17:11:21 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-135993-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 17:21:09 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 20 Sep 2024 17:20:49 +0000   Fri, 20 Sep 2024 17:21:49 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 20 Sep 2024 17:20:49 +0000   Fri, 20 Sep 2024 17:21:49 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 20 Sep 2024 17:20:49 +0000   Fri, 20 Sep 2024 17:21:49 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 20 Sep 2024 17:20:49 +0000   Fri, 20 Sep 2024 17:21:49 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.101
	  Hostname:    ha-135993-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 16a282b7a18241dba73a5c13e70f4f98
	  System UUID:                16a282b7-a182-41db-a73a-5c13e70f4f98
	  Boot ID:                    ab1e946c-f99e-4f2e-818a-87907a330fda
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.4.0/24
	PodCIDRs:                     10.244.4.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-t9d2z    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s
	  kube-system                 kindnet-88sbs              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-2q8mx           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m43s                  kube-proxy       
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           12m                    node-controller  Node ha-135993-m04 event: Registered Node ha-135993-m04 in Controller
	  Normal   CIDRAssignmentFailed     12m                    cidrAllocator    Node ha-135993-m04 status is now: CIDRAssignmentFailed
	  Normal   CIDRAssignmentFailed     12m                    cidrAllocator    Node ha-135993-m04 status is now: CIDRAssignmentFailed
	  Normal   RegisteredNode           12m                    node-controller  Node ha-135993-m04 event: Registered Node ha-135993-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-135993-m04 event: Registered Node ha-135993-m04 in Controller
	  Normal   NodeHasSufficientMemory  12m (x2 over 12m)      kubelet          Node ha-135993-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x2 over 12m)      kubelet          Node ha-135993-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x2 over 12m)      kubelet          Node ha-135993-m04 status is now: NodeHasSufficientPID
	  Normal   NodeReady                11m                    kubelet          Node ha-135993-m04 status is now: NodeReady
	  Normal   RegisteredNode           4m28s                  node-controller  Node ha-135993-m04 event: Registered Node ha-135993-m04 in Controller
	  Normal   RegisteredNode           4m22s                  node-controller  Node ha-135993-m04 event: Registered Node ha-135993-m04 in Controller
	  Normal   RegisteredNode           3m19s                  node-controller  Node ha-135993-m04 event: Registered Node ha-135993-m04 in Controller
	  Normal   Starting                 2m48s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  2m48s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  2m48s (x2 over 2m48s)  kubelet          Node ha-135993-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m48s (x2 over 2m48s)  kubelet          Node ha-135993-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m48s (x2 over 2m48s)  kubelet          Node ha-135993-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 2m48s                  kubelet          Node ha-135993-m04 has been rebooted, boot id: ab1e946c-f99e-4f2e-818a-87907a330fda
	  Normal   NodeReady                2m48s                  kubelet          Node ha-135993-m04 status is now: NodeReady
	  Normal   NodeNotReady             108s (x2 over 3m48s)   node-controller  Node ha-135993-m04 status is now: NodeNotReady
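The node descriptions above capture the cluster state at collection time: ha-135993 and ha-135993-m02 are Ready, while ha-135993-m04 carries node.kubernetes.io/unreachable taints because its kubelet stopped posting status. A minimal way to re-capture the same view, assuming the kubeconfig context matches the minikube profile name (ha-135993 here):

    # Full per-node view: conditions, taints, allocated resources, recent events
    kubectl --context ha-135993 describe nodes

    # Compact readiness check plus the taints seen on ha-135993-m04
    kubectl --context ha-135993 get nodes -o wide
    kubectl --context ha-135993 get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints}{"\n"}{end}'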
	
	
	==> dmesg <==
	[  +0.057997] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064240] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.169257] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.120861] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.270464] systemd-fstab-generator[652]: Ignoring "noauto" option for root device
	[  +4.125709] systemd-fstab-generator[747]: Ignoring "noauto" option for root device
	[Sep20 17:08] systemd-fstab-generator[882]: Ignoring "noauto" option for root device
	[  +0.057676] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.984086] systemd-fstab-generator[1298]: Ignoring "noauto" option for root device
	[  +0.083524] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.134244] kauditd_printk_skb: 33 callbacks suppressed
	[ +11.488548] kauditd_printk_skb: 26 callbacks suppressed
	[Sep20 17:09] kauditd_printk_skb: 24 callbacks suppressed
	[Sep20 17:18] systemd-fstab-generator[3639]: Ignoring "noauto" option for root device
	[  +0.144705] systemd-fstab-generator[3651]: Ignoring "noauto" option for root device
	[  +0.185360] systemd-fstab-generator[3665]: Ignoring "noauto" option for root device
	[  +0.147124] systemd-fstab-generator[3677]: Ignoring "noauto" option for root device
	[  +0.283768] systemd-fstab-generator[3705]: Ignoring "noauto" option for root device
	[  +9.776273] systemd-fstab-generator[3823]: Ignoring "noauto" option for root device
	[  +0.092434] kauditd_printk_skb: 110 callbacks suppressed
	[  +5.083165] kauditd_printk_skb: 98 callbacks suppressed
	[ +10.060202] kauditd_printk_skb: 10 callbacks suppressed
	[  +9.075908] kauditd_printk_skb: 2 callbacks suppressed
	[Sep20 17:19] kauditd_printk_skb: 5 callbacks suppressed
	[  +6.821602] kauditd_printk_skb: 5 callbacks suppressed
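The dmesg excerpt above is the tail of the kernel ring buffer on the primary VM; the systemd-fstab-generator and kauditd lines are routine boot-time noise rather than failures. If the machine is still up, the same output can be pulled again with standard commands (profile name assumed):

    minikube -p ha-135993 ssh "sudo dmesg | tail -n 40"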
	
	
	==> etcd [e70d74afe0f7f7c1e0f084aba99e9c488a1f999e94866b12072fdcbc24b0dad7] <==
	2024/09/20 17:16:41 WARNING: [core] [Server #9] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-09-20T17:16:42.078663Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":4137279806849445103,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-09-20T17:16:42.104881Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.60:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-20T17:16:42.105542Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.60:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-20T17:16:42.106751Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"1a622f206f99396a","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-09-20T17:16:42.107012Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"a8909dfe06e7dd47"}
	{"level":"info","ts":"2024-09-20T17:16:42.107146Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"a8909dfe06e7dd47"}
	{"level":"info","ts":"2024-09-20T17:16:42.107277Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"a8909dfe06e7dd47"}
	{"level":"info","ts":"2024-09-20T17:16:42.107520Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"1a622f206f99396a","remote-peer-id":"a8909dfe06e7dd47"}
	{"level":"info","ts":"2024-09-20T17:16:42.107575Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"1a622f206f99396a","remote-peer-id":"a8909dfe06e7dd47"}
	{"level":"info","ts":"2024-09-20T17:16:42.107662Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"1a622f206f99396a","remote-peer-id":"a8909dfe06e7dd47"}
	{"level":"info","ts":"2024-09-20T17:16:42.107715Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"a8909dfe06e7dd47"}
	{"level":"info","ts":"2024-09-20T17:16:42.107723Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"27688c791509f222"}
	{"level":"info","ts":"2024-09-20T17:16:42.107732Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"27688c791509f222"}
	{"level":"info","ts":"2024-09-20T17:16:42.107756Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"27688c791509f222"}
	{"level":"info","ts":"2024-09-20T17:16:42.107846Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"1a622f206f99396a","remote-peer-id":"27688c791509f222"}
	{"level":"info","ts":"2024-09-20T17:16:42.107926Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"1a622f206f99396a","remote-peer-id":"27688c791509f222"}
	{"level":"info","ts":"2024-09-20T17:16:42.107990Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"1a622f206f99396a","remote-peer-id":"27688c791509f222"}
	{"level":"info","ts":"2024-09-20T17:16:42.108013Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"27688c791509f222"}
	{"level":"info","ts":"2024-09-20T17:16:42.111167Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.60:2380"}
	{"level":"warn","ts":"2024-09-20T17:16:42.111192Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"2.018396737s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: server stopped"}
	{"level":"info","ts":"2024-09-20T17:16:42.111370Z","caller":"traceutil/trace.go:171","msg":"trace[1068001185] range","detail":"{range_begin:; range_end:; }","duration":"2.018588589s","start":"2024-09-20T17:16:40.092767Z","end":"2024-09-20T17:16:42.111355Z","steps":["trace[1068001185] 'agreement among raft nodes before linearized reading'  (duration: 2.018394833s)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T17:16:42.111316Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.60:2380"}
	{"level":"info","ts":"2024-09-20T17:16:42.111464Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-135993","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.60:2380"],"advertise-client-urls":["https://192.168.39.60:2379"]}
	{"level":"error","ts":"2024-09-20T17:16:42.111421Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[+]data_corruption ok\n[+]serializable_read ok\n[-]linearizable_read failed: etcdserver: server stopped\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2141\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2519\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2943\nnet/http.(*conn).serve\n\tnet/http/server.go:2014"}
	
	
	==> etcd [ee0aced30e75021c727bf72254c618cb3ff71687fe9882fab00322be6da050f0] <==
	{"level":"info","ts":"2024-09-20T17:20:11.615945Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"1a622f206f99396a","to":"27688c791509f222","stream-type":"stream Message"}
	{"level":"info","ts":"2024-09-20T17:20:11.616090Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"1a622f206f99396a","remote-peer-id":"27688c791509f222"}
	{"level":"info","ts":"2024-09-20T17:20:11.616269Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"1a622f206f99396a","to":"27688c791509f222","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-09-20T17:20:11.616361Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"1a622f206f99396a","remote-peer-id":"27688c791509f222"}
	{"level":"warn","ts":"2024-09-20T17:20:12.451749Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"27688c791509f222","rtt":"0s","error":"dial tcp 192.168.39.133:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-20T17:20:12.455118Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"27688c791509f222","rtt":"0s","error":"dial tcp 192.168.39.133:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-20T17:21:02.974409Z","caller":"embed/config_logging.go:170","msg":"rejected connection on client endpoint","remote-addr":"192.168.39.133:38712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2024-09-20T17:21:02.985484Z","caller":"embed/config_logging.go:170","msg":"rejected connection on client endpoint","remote-addr":"192.168.39.133:38726","server-name":"","error":"EOF"}
	{"level":"info","ts":"2024-09-20T17:21:03.017430Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1a622f206f99396a switched to configuration voters=(1901133809061542250 12146381909381340487)"}
	{"level":"info","ts":"2024-09-20T17:21:03.020276Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"94dd135126e1e7b0","local-member-id":"1a622f206f99396a","removed-remote-peer-id":"27688c791509f222","removed-remote-peer-urls":["https://192.168.39.133:2380"]}
	{"level":"info","ts":"2024-09-20T17:21:03.020418Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"27688c791509f222"}
	{"level":"warn","ts":"2024-09-20T17:21:03.020903Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"27688c791509f222"}
	{"level":"info","ts":"2024-09-20T17:21:03.020995Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"27688c791509f222"}
	{"level":"warn","ts":"2024-09-20T17:21:03.021346Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"27688c791509f222"}
	{"level":"info","ts":"2024-09-20T17:21:03.021451Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"27688c791509f222"}
	{"level":"info","ts":"2024-09-20T17:21:03.021758Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"1a622f206f99396a","remote-peer-id":"27688c791509f222"}
	{"level":"warn","ts":"2024-09-20T17:21:03.022013Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"1a622f206f99396a","remote-peer-id":"27688c791509f222","error":"context canceled"}
	{"level":"warn","ts":"2024-09-20T17:21:03.022124Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"27688c791509f222","error":"failed to read 27688c791509f222 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-09-20T17:21:03.022277Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"1a622f206f99396a","remote-peer-id":"27688c791509f222"}
	{"level":"warn","ts":"2024-09-20T17:21:03.022684Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"1a622f206f99396a","remote-peer-id":"27688c791509f222","error":"http: read on closed response body"}
	{"level":"info","ts":"2024-09-20T17:21:03.022757Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"1a622f206f99396a","remote-peer-id":"27688c791509f222"}
	{"level":"info","ts":"2024-09-20T17:21:03.022799Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"27688c791509f222"}
	{"level":"info","ts":"2024-09-20T17:21:03.022850Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"1a622f206f99396a","removed-remote-peer-id":"27688c791509f222"}
	{"level":"warn","ts":"2024-09-20T17:21:03.036034Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"1a622f206f99396a","remote-peer-id-stream-handler":"1a622f206f99396a","remote-peer-id-from":"27688c791509f222"}
	{"level":"warn","ts":"2024-09-20T17:21:03.036625Z","caller":"embed/config_logging.go:170","msg":"rejected connection on peer endpoint","remote-addr":"192.168.39.133:48712","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 17:23:37 up 16 min,  0 users,  load average: 0.43, 0.44, 0.34
	Linux ha-135993 5.10.207 #1 SMP Fri Sep 20 03:13:51 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [8792a3b1249ffb1a157682b07dec9a0edf11b81c44e53ea068e0fc6f4548be22] <==
	I0920 17:16:13.582883       1 main.go:295] Handling node with IPs: map[192.168.39.60:{}]
	I0920 17:16:13.583002       1 main.go:299] handling current node
	I0920 17:16:13.583044       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0920 17:16:13.583063       1 main.go:322] Node ha-135993-m02 has CIDR [10.244.1.0/24] 
	I0920 17:16:13.583693       1 main.go:295] Handling node with IPs: map[192.168.39.133:{}]
	I0920 17:16:13.583843       1 main.go:322] Node ha-135993-m03 has CIDR [10.244.2.0/24] 
	I0920 17:16:13.583956       1 main.go:295] Handling node with IPs: map[192.168.39.101:{}]
	I0920 17:16:13.583979       1 main.go:322] Node ha-135993-m04 has CIDR [10.244.4.0/24] 
	I0920 17:16:23.587337       1 main.go:295] Handling node with IPs: map[192.168.39.60:{}]
	I0920 17:16:23.587462       1 main.go:299] handling current node
	I0920 17:16:23.587492       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0920 17:16:23.587510       1 main.go:322] Node ha-135993-m02 has CIDR [10.244.1.0/24] 
	I0920 17:16:23.587650       1 main.go:295] Handling node with IPs: map[192.168.39.133:{}]
	I0920 17:16:23.587682       1 main.go:322] Node ha-135993-m03 has CIDR [10.244.2.0/24] 
	I0920 17:16:23.587748       1 main.go:295] Handling node with IPs: map[192.168.39.101:{}]
	I0920 17:16:23.587767       1 main.go:322] Node ha-135993-m04 has CIDR [10.244.4.0/24] 
	I0920 17:16:33.583265       1 main.go:295] Handling node with IPs: map[192.168.39.133:{}]
	I0920 17:16:33.583410       1 main.go:322] Node ha-135993-m03 has CIDR [10.244.2.0/24] 
	I0920 17:16:33.583747       1 main.go:295] Handling node with IPs: map[192.168.39.101:{}]
	I0920 17:16:33.583866       1 main.go:322] Node ha-135993-m04 has CIDR [10.244.4.0/24] 
	I0920 17:16:33.584108       1 main.go:295] Handling node with IPs: map[192.168.39.60:{}]
	I0920 17:16:33.584178       1 main.go:299] handling current node
	I0920 17:16:33.584280       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0920 17:16:33.584316       1 main.go:322] Node ha-135993-m02 has CIDR [10.244.1.0/24] 
	E0920 17:16:34.638841       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=1820&timeout=5m1s&timeoutSeconds=301&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
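This kindnet instance tracked all four nodes until 17:16:34, when its node watch failed because the in-cluster API service IP (10.96.0.1:443) became unreachable, which lines up with the etcd shutdown at 17:16:42 in the log above. Assuming anonymous access to the health endpoints is still enabled (the Kubernetes default), reachability of that service IP can be re-checked from inside the node:

    minikube -p ha-135993 ssh "curl -sk https://10.96.0.1/healthz"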
	
	
	==> kindnet [8d6797d9f1c442f6743dde97cc9de50c155c5e2b00e1197ba4a088a93bd7c5db] <==
	I0920 17:22:48.017701       1 main.go:322] Node ha-135993-m04 has CIDR [10.244.4.0/24] 
	I0920 17:22:58.007681       1 main.go:295] Handling node with IPs: map[192.168.39.60:{}]
	I0920 17:22:58.007753       1 main.go:299] handling current node
	I0920 17:22:58.007788       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0920 17:22:58.007796       1 main.go:322] Node ha-135993-m02 has CIDR [10.244.1.0/24] 
	I0920 17:22:58.007989       1 main.go:295] Handling node with IPs: map[192.168.39.101:{}]
	I0920 17:22:58.008018       1 main.go:322] Node ha-135993-m04 has CIDR [10.244.4.0/24] 
	I0920 17:23:08.016618       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0920 17:23:08.016679       1 main.go:322] Node ha-135993-m02 has CIDR [10.244.1.0/24] 
	I0920 17:23:08.016945       1 main.go:295] Handling node with IPs: map[192.168.39.101:{}]
	I0920 17:23:08.016979       1 main.go:322] Node ha-135993-m04 has CIDR [10.244.4.0/24] 
	I0920 17:23:08.017042       1 main.go:295] Handling node with IPs: map[192.168.39.60:{}]
	I0920 17:23:08.017048       1 main.go:299] handling current node
	I0920 17:23:18.016542       1 main.go:295] Handling node with IPs: map[192.168.39.101:{}]
	I0920 17:23:18.016619       1 main.go:322] Node ha-135993-m04 has CIDR [10.244.4.0/24] 
	I0920 17:23:18.016778       1 main.go:295] Handling node with IPs: map[192.168.39.60:{}]
	I0920 17:23:18.016786       1 main.go:299] handling current node
	I0920 17:23:18.016804       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0920 17:23:18.016808       1 main.go:322] Node ha-135993-m02 has CIDR [10.244.1.0/24] 
	I0920 17:23:28.008338       1 main.go:295] Handling node with IPs: map[192.168.39.60:{}]
	I0920 17:23:28.008378       1 main.go:299] handling current node
	I0920 17:23:28.008399       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0920 17:23:28.008404       1 main.go:322] Node ha-135993-m02 has CIDR [10.244.1.0/24] 
	I0920 17:23:28.008546       1 main.go:295] Handling node with IPs: map[192.168.39.101:{}]
	I0920 17:23:28.008564       1 main.go:322] Node ha-135993-m04 has CIDR [10.244.4.0/24] 
	
	
	==> kube-apiserver [0a83a259fa257292715bb5b2d5b323b49763d5553518b528fc78fc079bb9ed45] <==
	I0920 17:18:26.910368       1 options.go:228] external host was not specified, using 192.168.39.60
	I0920 17:18:26.922578       1 server.go:142] Version: v1.31.1
	I0920 17:18:26.922711       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 17:18:27.933754       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0920 17:18:27.943305       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0920 17:18:27.944115       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0920 17:18:27.944175       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0920 17:18:27.944471       1 instance.go:232] Using reconciler: lease
	W0920 17:18:47.933534       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0920 17:18:47.933541       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0920 17:18:47.945361       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	W0920 17:18:47.946116       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
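This apiserver instance (started 17:18:26) never finished booting: it could not complete the TLS handshake with etcd on 127.0.0.1:2379 within the startup deadline and exited fatally with "Error creating leases". The replacement instance in the next block comes up at 17:19:12. Once an apiserver is serving again, its aggregated health (including the etcd checks) can be inspected directly:

    kubectl --context ha-135993 get --raw '/readyz?verbose'
    kubectl --context ha-135993 get --raw '/livez?verbose'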
	
	
	==> kube-apiserver [79d6dfc37ecea1b70bf42810eb355d3d33df6f44ad8813cb64a541d83b09b12f] <==
	I0920 17:19:12.021969       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0920 17:19:12.022070       1 policy_source.go:224] refreshing policies
	I0920 17:19:12.041394       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0920 17:19:12.047493       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0920 17:19:12.047533       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0920 17:19:12.048418       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0920 17:19:12.048681       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0920 17:19:12.049295       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0920 17:19:12.049325       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0920 17:19:12.052180       1 shared_informer.go:320] Caches are synced for configmaps
	I0920 17:19:12.056761       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0920 17:19:12.058751       1 aggregator.go:171] initial CRD sync complete...
	I0920 17:19:12.059284       1 autoregister_controller.go:144] Starting autoregister controller
	I0920 17:19:12.059351       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0920 17:19:12.059376       1 cache.go:39] Caches are synced for autoregister controller
	I0920 17:19:12.069029       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	W0920 17:19:12.083618       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.133 192.168.39.227]
	I0920 17:19:12.085173       1 controller.go:615] quota admission added evaluator for: endpoints
	I0920 17:19:12.099675       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0920 17:19:12.107067       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0920 17:19:12.111667       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0920 17:19:12.954961       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0920 17:19:13.320640       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.133 192.168.39.227 192.168.39.60]
	W0920 17:19:23.327327       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.227 192.168.39.60]
	W0920 17:21:13.328950       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.227 192.168.39.60]
	
	
	==> kube-controller-manager [41c08dd315ead086282b3d4c8d4a5fb35136bd8d2824d8516b73a61281a9eeaf] <==
	I0920 17:18:28.081643       1 serving.go:386] Generated self-signed cert in-memory
	I0920 17:18:28.304063       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I0920 17:18:28.304160       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 17:18:28.305979       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0920 17:18:28.306163       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0920 17:18:28.306740       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0920 17:18:28.306848       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0920 17:18:48.957417       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.60:8443/healthz\": dial tcp 192.168.39.60:8443: connect: connection refused"
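This controller-manager (started 17:18:28) gave up after the local apiserver on 192.168.39.60:8443 kept refusing connections during its health wait; the instance in the next block took over once the apiserver recovered. Which control-plane static pods are actually running on each node can be listed via their labels, assuming the usual kubeadm tier=control-plane label on the static pods:

    kubectl --context ha-135993 -n kube-system get pods -o wide -l tier=control-plane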
	
	
	==> kube-controller-manager [9ee493ac17f0bde4d555b58e8b816b325426800736be6cff626867aa80e21c96] <==
	I0920 17:21:49.648692       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-135993-m04"
	I0920 17:21:49.668925       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-135993-m04"
	I0920 17:21:49.754745       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="39.04345ms"
	I0920 17:21:49.755069       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="80.993µs"
	I0920 17:21:50.619195       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-135993-m04"
	I0920 17:21:54.781883       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-135993-m04"
	E0920 17:21:55.487023       1 gc_controller.go:151] "Failed to get node" err="node \"ha-135993-m03\" not found" logger="pod-garbage-collector-controller" node="ha-135993-m03"
	E0920 17:21:55.487074       1 gc_controller.go:151] "Failed to get node" err="node \"ha-135993-m03\" not found" logger="pod-garbage-collector-controller" node="ha-135993-m03"
	E0920 17:21:55.487082       1 gc_controller.go:151] "Failed to get node" err="node \"ha-135993-m03\" not found" logger="pod-garbage-collector-controller" node="ha-135993-m03"
	E0920 17:21:55.487090       1 gc_controller.go:151] "Failed to get node" err="node \"ha-135993-m03\" not found" logger="pod-garbage-collector-controller" node="ha-135993-m03"
	E0920 17:21:55.487095       1 gc_controller.go:151] "Failed to get node" err="node \"ha-135993-m03\" not found" logger="pod-garbage-collector-controller" node="ha-135993-m03"
	I0920 17:21:55.499858       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-135993-m03"
	I0920 17:21:55.541648       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-135993-m03"
	I0920 17:21:55.542018       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-135993-m03"
	I0920 17:21:55.575908       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-135993-m03"
	I0920 17:21:55.575947       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-45c9m"
	I0920 17:21:55.612181       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-45c9m"
	I0920 17:21:55.612265       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-135993-m03"
	I0920 17:21:55.640469       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-135993-m03"
	I0920 17:21:55.640651       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-hcqf8"
	I0920 17:21:55.681973       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-hcqf8"
	I0920 17:21:55.682267       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-135993-m03"
	I0920 17:21:55.721494       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-135993-m03"
	I0920 17:21:55.721612       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-135993-m03"
	I0920 17:21:55.763879       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-135993-m03"
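The active controller-manager above is doing post-removal cleanup: the node-ipam and replicaset controllers resync for ha-135993-m04, and the pod garbage collector force-deletes the pods that were still bound to the deleted node ha-135993-m03. Pods bound to a particular node can be listed with a field selector, for example:

    kubectl --context ha-135993 get pods -A -o wide --field-selector spec.nodeName=ha-135993-m03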
	
	
	==> kube-proxy [398409b93b45dca7c586c3035d250b5dc775d03ee8dd75f81e9e7427a559ed00] <==
	E0920 17:18:30.863684       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-135993\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0920 17:18:33.934951       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-135993\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0920 17:18:37.006676       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-135993\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0920 17:18:43.151468       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-135993\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0920 17:18:52.367863       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-135993\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0920 17:19:10.800108       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-135993\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0920 17:19:10.800288       1 server.go:646] "Can't determine this node's IP, assuming loopback; if this is incorrect, please set the --bind-address flag"
	E0920 17:19:10.800455       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 17:19:10.839742       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0920 17:19:10.839859       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0920 17:19:10.839906       1 server_linux.go:169] "Using iptables Proxier"
	I0920 17:19:10.842371       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 17:19:10.842727       1 server.go:483] "Version info" version="v1.31.1"
	I0920 17:19:10.842768       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 17:19:10.849519       1 config.go:199] "Starting service config controller"
	I0920 17:19:10.849690       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 17:19:10.849811       1 config.go:105] "Starting endpoint slice config controller"
	I0920 17:19:10.849828       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 17:19:10.852154       1 config.go:328] "Starting node config controller"
	I0920 17:19:10.852251       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 17:19:11.549870       1 shared_informer.go:320] Caches are synced for service config
	I0920 17:19:11.550101       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0920 17:19:11.552694       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [e4b462c3efaa10b1790ebacd5fe8845d6eb91643cb6db9b2830f33afe1744d57] <==
	E0920 17:15:32.687994       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1753\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 17:15:35.759924       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1753": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 17:15:35.760051       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1753\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 17:15:35.760194       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1753": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 17:15:35.760274       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1753\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 17:15:35.760448       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-135993&resourceVersion=1752": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 17:15:35.760489       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-135993&resourceVersion=1752\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 17:15:41.903179       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-135993&resourceVersion=1752": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 17:15:41.903457       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-135993&resourceVersion=1752\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 17:15:41.903661       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1753": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 17:15:41.904041       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1753\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 17:15:41.904397       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1753": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 17:15:41.904518       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1753\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 17:15:51.120685       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1753": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 17:15:51.120949       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1753\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 17:15:54.191424       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1753": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 17:15:54.191489       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1753\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 17:15:54.191594       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-135993&resourceVersion=1752": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 17:15:54.191627       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-135993&resourceVersion=1752\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 17:16:09.550977       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1753": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 17:16:09.551110       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1753\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 17:16:12.623454       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1753": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 17:16:12.623523       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1753\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 17:16:15.697679       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-135993&resourceVersion=1752": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 17:16:15.697975       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-135993&resourceVersion=1752\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	
	
	==> kube-scheduler [6001c238074303dee4a1968943cb28c0e10f360312e0284fc54018ba88a7f96d] <==
	W0920 17:19:04.542616       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.60:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.60:8443: connect: connection refused
	E0920 17:19:04.542680       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.168.39.60:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.60:8443: connect: connection refused" logger="UnhandledError"
	W0920 17:19:05.333894       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.60:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.60:8443: connect: connection refused
	E0920 17:19:05.333976       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.168.39.60:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.39.60:8443: connect: connection refused" logger="UnhandledError"
	W0920 17:19:05.571440       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.60:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.60:8443: connect: connection refused
	E0920 17:19:05.571516       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.168.39.60:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.60:8443: connect: connection refused" logger="UnhandledError"
	W0920 17:19:06.147577       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.60:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.60:8443: connect: connection refused
	E0920 17:19:06.147638       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.168.39.60:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.39.60:8443: connect: connection refused" logger="UnhandledError"
	W0920 17:19:06.545565       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.60:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.60:8443: connect: connection refused
	E0920 17:19:06.545687       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.39.60:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.39.60:8443: connect: connection refused" logger="UnhandledError"
	W0920 17:19:07.827787       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.60:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.60:8443: connect: connection refused
	E0920 17:19:07.827861       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://192.168.39.60:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.39.60:8443: connect: connection refused" logger="UnhandledError"
	W0920 17:19:08.254547       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.60:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.60:8443: connect: connection refused
	E0920 17:19:08.254603       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.168.39.60:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.39.60:8443: connect: connection refused" logger="UnhandledError"
	W0920 17:19:08.299073       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.60:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.60:8443: connect: connection refused
	E0920 17:19:08.299147       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.39.60:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.39.60:8443: connect: connection refused" logger="UnhandledError"
	W0920 17:19:08.378967       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.60:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.60:8443: connect: connection refused
	E0920 17:19:08.379013       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.39.60:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.60:8443: connect: connection refused" logger="UnhandledError"
	W0920 17:19:08.853713       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.60:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.60:8443: connect: connection refused
	E0920 17:19:08.853776       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.168.39.60:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.39.60:8443: connect: connection refused" logger="UnhandledError"
	W0920 17:19:08.880684       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.60:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.60:8443: connect: connection refused
	E0920 17:19:08.880767       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.168.39.60:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.39.60:8443: connect: connection refused" logger="UnhandledError"
	I0920 17:19:25.963005       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0920 17:20:59.718979       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-t9d2z\": pod busybox-7dff88458-t9d2z is already assigned to node \"ha-135993-m04\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-t9d2z" node="ha-135993-m04"
	E0920 17:20:59.719298       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-t9d2z\": pod busybox-7dff88458-t9d2z is already assigned to node \"ha-135993-m04\"" pod="default/busybox-7dff88458-t9d2z"
	
	
	==> kube-scheduler [db80f5e2505940b81bc20c39e98d873b35eff2e71aac748d8b406e21f5435fca] <==
	I0920 17:11:21.277247       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-w6gf8" node="ha-135993-m04"
	E0920 17:11:21.344572       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-n6xl6\": pod kindnet-n6xl6 is already assigned to node \"ha-135993-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-n6xl6" node="ha-135993-m04"
	E0920 17:11:21.344755       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-n6xl6\": pod kindnet-n6xl6 is already assigned to node \"ha-135993-m04\"" pod="kube-system/kindnet-n6xl6"
	E0920 17:11:21.388481       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-jfsxq\": pod kube-proxy-jfsxq is already assigned to node \"ha-135993-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-jfsxq" node="ha-135993-m04"
	E0920 17:11:21.388679       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-jfsxq\": pod kube-proxy-jfsxq is already assigned to node \"ha-135993-m04\"" pod="kube-system/kube-proxy-jfsxq"
	E0920 17:11:21.399720       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-svxp4\": pod kindnet-svxp4 is already assigned to node \"ha-135993-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-svxp4" node="ha-135993-m04"
	E0920 17:11:21.401135       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod a758ff76-3e8c-40c1-9742-2fbcddd4aa87(kube-system/kindnet-svxp4) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-svxp4"
	E0920 17:11:21.401322       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-svxp4\": pod kindnet-svxp4 is already assigned to node \"ha-135993-m04\"" pod="kube-system/kindnet-svxp4"
	I0920 17:11:21.401439       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-svxp4" node="ha-135993-m04"
	E0920 17:16:28.130642       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services)" logger="UnhandledError"
	E0920 17:16:28.294848       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)" logger="UnhandledError"
	E0920 17:16:29.425148       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)" logger="UnhandledError"
	E0920 17:16:29.523859       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods)" logger="UnhandledError"
	E0920 17:16:29.788444       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)" logger="UnhandledError"
	E0920 17:16:32.035022       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes)" logger="UnhandledError"
	E0920 17:16:33.460287       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces)" logger="UnhandledError"
	E0920 17:16:33.692643       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)" logger="UnhandledError"
	E0920 17:16:34.037854       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)" logger="UnhandledError"
	E0920 17:16:35.845491       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)" logger="UnhandledError"
	E0920 17:16:36.209417       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)" logger="UnhandledError"
	E0920 17:16:37.709519       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)" logger="UnhandledError"
	E0920 17:16:38.023062       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)" logger="UnhandledError"
	E0920 17:16:39.752998       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)" logger="UnhandledError"
	E0920 17:16:40.009823       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError"
	E0920 17:16:41.933759       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 20 17:22:07 ha-135993 kubelet[1305]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 20 17:22:07 ha-135993 kubelet[1305]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 20 17:22:07 ha-135993 kubelet[1305]: E0920 17:22:07.985796    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726852927985474175,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:22:07 ha-135993 kubelet[1305]: E0920 17:22:07.985829    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726852927985474175,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:22:17 ha-135993 kubelet[1305]: E0920 17:22:17.987654    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726852937987308047,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:22:17 ha-135993 kubelet[1305]: E0920 17:22:17.987740    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726852937987308047,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:22:27 ha-135993 kubelet[1305]: E0920 17:22:27.995664    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726852947990989907,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:22:27 ha-135993 kubelet[1305]: E0920 17:22:27.996150    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726852947990989907,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:22:37 ha-135993 kubelet[1305]: E0920 17:22:37.998652    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726852957998099142,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:22:37 ha-135993 kubelet[1305]: E0920 17:22:37.998693    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726852957998099142,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:22:48 ha-135993 kubelet[1305]: E0920 17:22:48.000050    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726852967999704617,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:22:48 ha-135993 kubelet[1305]: E0920 17:22:48.000087    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726852967999704617,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:22:58 ha-135993 kubelet[1305]: E0920 17:22:58.002318    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726852978001559762,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:22:58 ha-135993 kubelet[1305]: E0920 17:22:58.002799    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726852978001559762,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:23:07 ha-135993 kubelet[1305]: E0920 17:23:07.775863    1305 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 20 17:23:07 ha-135993 kubelet[1305]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 20 17:23:07 ha-135993 kubelet[1305]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 20 17:23:07 ha-135993 kubelet[1305]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 20 17:23:07 ha-135993 kubelet[1305]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 20 17:23:08 ha-135993 kubelet[1305]: E0920 17:23:08.004381    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726852988004067338,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:23:08 ha-135993 kubelet[1305]: E0920 17:23:08.004404    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726852988004067338,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:23:18 ha-135993 kubelet[1305]: E0920 17:23:18.007033    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726852998006097207,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:23:18 ha-135993 kubelet[1305]: E0920 17:23:18.007080    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726852998006097207,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:23:28 ha-135993 kubelet[1305]: E0920 17:23:28.010777    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726853008009354039,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:23:28 ha-135993 kubelet[1305]: E0920 17:23:28.010824    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726853008009354039,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0920 17:23:36.131106   36439 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19672-8777/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
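
Note on the "bufio.Scanner: token too long" error in the stderr above: that message comes from Go's bufio.Scanner, whose default per-line limit is 64 KiB, hit here while minikube reads the (very long) lines of lastStart.txt. Below is a minimal, illustrative Go sketch of the usual workaround, raising the scanner buffer before scanning; the file name is only an example and this is not the harness's own code.

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		// Illustrative path; the report refers to .minikube/logs/lastStart.txt,
		// which can contain single lines longer than bufio's 64 KiB default.
		f, err := os.Open("lastStart.txt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()

		scanner := bufio.NewScanner(f)
		// Raise the maximum token size to 1 MiB so a long log line no longer
		// fails with "token too long" (the limit chosen here is arbitrary).
		scanner.Buffer(make([]byte, 0, 64*1024), 1024*1024)
		for scanner.Scan() {
			fmt.Println(scanner.Text())
		}
		if err := scanner.Err(); err != nil {
			fmt.Fprintln(os.Stderr, "scan error:", err)
		}
	}
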
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-135993 -n ha-135993
helpers_test.go:261: (dbg) Run:  kubectl --context ha-135993 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (141.64s)
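
The post-mortem logs above show two distinct connectivity failures: kube-proxy getting "no route to host" against the HA virtual IP 192.168.39.254:8443, and kube-scheduler getting "connection refused" against the node apiserver 192.168.39.60:8443. The following is a small diagnostic sketch, not part of the test suite, that reproduces the distinction from the host; the addresses are copied from the logs and the timeout is arbitrary.

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Endpoints taken from the reflector errors in the post-mortem logs.
		// "no route to host" means the address is not routable from this host;
		// "connection refused" means the host answered but nothing listens on 8443.
		endpoints := []string{
			"192.168.39.254:8443", // HA virtual IP from the kube-proxy errors
			"192.168.39.60:8443",  // control-plane node from the kube-scheduler errors
		}
		for _, ep := range endpoints {
			conn, err := net.DialTimeout("tcp", ep, 3*time.Second)
			if err != nil {
				fmt.Printf("%s: %v\n", ep, err)
				continue
			}
			conn.Close()
			fmt.Printf("%s: reachable\n", ep)
		}
	}
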

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (333.25s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-592246
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-592246
E0920 17:40:46.268114   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-592246: exit status 82 (2m1.844592606s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-592246-m03"  ...
	* Stopping node "multinode-592246-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-592246" : exit status 82
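
Exit status 82 is minikube's GUEST_STOP_TIMEOUT: the KVM guest stayed in state "Running" for the whole stop window. As a rough illustration only (not the harness's logic), a caller could re-run the same stop command and poll the host state with the status format used elsewhere in this report; the binary path and profile name are copied from the log, the retry budget is arbitrary, and the "Stopped" string is assumed to match what "status --format={{.Host}}" prints for a stopped VM.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// stopWithRetry re-runs "minikube stop" until "minikube status" reports the
	// host as Stopped, or the attempts are exhausted. Values are illustrative.
	func stopWithRetry(bin, profile string, attempts int) error {
		for i := 0; i < attempts; i++ {
			// Ignore the error here: exit status 82 (GUEST_STOP_TIMEOUT) is
			// exactly the case this loop retries.
			_ = exec.Command(bin, "stop", "-p", profile).Run()

			// Output() returns captured stdout even when the command exits
			// non-zero, so the host state can still be inspected.
			out, _ := exec.Command(bin, "status", "--format={{.Host}}", "-p", profile).Output()
			if strings.TrimSpace(string(out)) == "Stopped" {
				return nil
			}
			time.Sleep(10 * time.Second)
		}
		return fmt.Errorf("%s still not stopped after %d attempts", profile, attempts)
	}

	func main() {
		if err := stopWithRetry("out/minikube-linux-amd64", "multinode-592246", 3); err != nil {
			fmt.Println(err)
		}
	}
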
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-592246 --wait=true -v=8 --alsologtostderr
E0920 17:41:39.932475   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/functional-945494/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:42:43.197235   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-592246 --wait=true -v=8 --alsologtostderr: (3m28.992860056s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-592246
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-592246 -n multinode-592246
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-592246 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-592246 logs -n 25: (1.608814326s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-592246 ssh -n                                                                 | multinode-592246 | jenkins | v1.34.0 | 20 Sep 24 17:38 UTC | 20 Sep 24 17:38 UTC |
	|         | multinode-592246-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-592246 cp multinode-592246-m02:/home/docker/cp-test.txt                       | multinode-592246 | jenkins | v1.34.0 | 20 Sep 24 17:38 UTC | 20 Sep 24 17:38 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3830266903/001/cp-test_multinode-592246-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-592246 ssh -n                                                                 | multinode-592246 | jenkins | v1.34.0 | 20 Sep 24 17:38 UTC | 20 Sep 24 17:38 UTC |
	|         | multinode-592246-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-592246 cp multinode-592246-m02:/home/docker/cp-test.txt                       | multinode-592246 | jenkins | v1.34.0 | 20 Sep 24 17:38 UTC | 20 Sep 24 17:38 UTC |
	|         | multinode-592246:/home/docker/cp-test_multinode-592246-m02_multinode-592246.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-592246 ssh -n                                                                 | multinode-592246 | jenkins | v1.34.0 | 20 Sep 24 17:38 UTC | 20 Sep 24 17:38 UTC |
	|         | multinode-592246-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-592246 ssh -n multinode-592246 sudo cat                                       | multinode-592246 | jenkins | v1.34.0 | 20 Sep 24 17:38 UTC | 20 Sep 24 17:38 UTC |
	|         | /home/docker/cp-test_multinode-592246-m02_multinode-592246.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-592246 cp multinode-592246-m02:/home/docker/cp-test.txt                       | multinode-592246 | jenkins | v1.34.0 | 20 Sep 24 17:38 UTC | 20 Sep 24 17:38 UTC |
	|         | multinode-592246-m03:/home/docker/cp-test_multinode-592246-m02_multinode-592246-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-592246 ssh -n                                                                 | multinode-592246 | jenkins | v1.34.0 | 20 Sep 24 17:38 UTC | 20 Sep 24 17:38 UTC |
	|         | multinode-592246-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-592246 ssh -n multinode-592246-m03 sudo cat                                   | multinode-592246 | jenkins | v1.34.0 | 20 Sep 24 17:38 UTC | 20 Sep 24 17:38 UTC |
	|         | /home/docker/cp-test_multinode-592246-m02_multinode-592246-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-592246 cp testdata/cp-test.txt                                                | multinode-592246 | jenkins | v1.34.0 | 20 Sep 24 17:38 UTC | 20 Sep 24 17:38 UTC |
	|         | multinode-592246-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-592246 ssh -n                                                                 | multinode-592246 | jenkins | v1.34.0 | 20 Sep 24 17:38 UTC | 20 Sep 24 17:38 UTC |
	|         | multinode-592246-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-592246 cp multinode-592246-m03:/home/docker/cp-test.txt                       | multinode-592246 | jenkins | v1.34.0 | 20 Sep 24 17:38 UTC | 20 Sep 24 17:38 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3830266903/001/cp-test_multinode-592246-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-592246 ssh -n                                                                 | multinode-592246 | jenkins | v1.34.0 | 20 Sep 24 17:38 UTC | 20 Sep 24 17:38 UTC |
	|         | multinode-592246-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-592246 cp multinode-592246-m03:/home/docker/cp-test.txt                       | multinode-592246 | jenkins | v1.34.0 | 20 Sep 24 17:38 UTC | 20 Sep 24 17:38 UTC |
	|         | multinode-592246:/home/docker/cp-test_multinode-592246-m03_multinode-592246.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-592246 ssh -n                                                                 | multinode-592246 | jenkins | v1.34.0 | 20 Sep 24 17:38 UTC | 20 Sep 24 17:38 UTC |
	|         | multinode-592246-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-592246 ssh -n multinode-592246 sudo cat                                       | multinode-592246 | jenkins | v1.34.0 | 20 Sep 24 17:38 UTC | 20 Sep 24 17:38 UTC |
	|         | /home/docker/cp-test_multinode-592246-m03_multinode-592246.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-592246 cp multinode-592246-m03:/home/docker/cp-test.txt                       | multinode-592246 | jenkins | v1.34.0 | 20 Sep 24 17:38 UTC | 20 Sep 24 17:38 UTC |
	|         | multinode-592246-m02:/home/docker/cp-test_multinode-592246-m03_multinode-592246-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-592246 ssh -n                                                                 | multinode-592246 | jenkins | v1.34.0 | 20 Sep 24 17:38 UTC | 20 Sep 24 17:38 UTC |
	|         | multinode-592246-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-592246 ssh -n multinode-592246-m02 sudo cat                                   | multinode-592246 | jenkins | v1.34.0 | 20 Sep 24 17:38 UTC | 20 Sep 24 17:38 UTC |
	|         | /home/docker/cp-test_multinode-592246-m03_multinode-592246-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-592246 node stop m03                                                          | multinode-592246 | jenkins | v1.34.0 | 20 Sep 24 17:38 UTC | 20 Sep 24 17:38 UTC |
	| node    | multinode-592246 node start                                                             | multinode-592246 | jenkins | v1.34.0 | 20 Sep 24 17:38 UTC | 20 Sep 24 17:39 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-592246                                                                | multinode-592246 | jenkins | v1.34.0 | 20 Sep 24 17:39 UTC |                     |
	| stop    | -p multinode-592246                                                                     | multinode-592246 | jenkins | v1.34.0 | 20 Sep 24 17:39 UTC |                     |
	| start   | -p multinode-592246                                                                     | multinode-592246 | jenkins | v1.34.0 | 20 Sep 24 17:41 UTC | 20 Sep 24 17:44 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-592246                                                                | multinode-592246 | jenkins | v1.34.0 | 20 Sep 24 17:44 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 17:41:04
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 17:41:04.775899   46025 out.go:345] Setting OutFile to fd 1 ...
	I0920 17:41:04.776019   46025 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:41:04.776027   46025 out.go:358] Setting ErrFile to fd 2...
	I0920 17:41:04.776032   46025 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:41:04.776228   46025 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-8777/.minikube/bin
	I0920 17:41:04.776882   46025 out.go:352] Setting JSON to false
	I0920 17:41:04.777816   46025 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5008,"bootTime":1726849057,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 17:41:04.777928   46025 start.go:139] virtualization: kvm guest
	I0920 17:41:04.780468   46025 out.go:177] * [multinode-592246] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 17:41:04.782007   46025 notify.go:220] Checking for updates...
	I0920 17:41:04.782061   46025 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 17:41:04.783438   46025 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 17:41:04.784720   46025 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19672-8777/kubeconfig
	I0920 17:41:04.785982   46025 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-8777/.minikube
	I0920 17:41:04.787298   46025 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 17:41:04.788639   46025 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 17:41:04.790380   46025 config.go:182] Loaded profile config "multinode-592246": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 17:41:04.790500   46025 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 17:41:04.790995   46025 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:41:04.791067   46025 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:41:04.807205   46025 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41099
	I0920 17:41:04.807705   46025 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:41:04.808348   46025 main.go:141] libmachine: Using API Version  1
	I0920 17:41:04.808365   46025 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:41:04.808727   46025 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:41:04.808917   46025 main.go:141] libmachine: (multinode-592246) Calling .DriverName
	I0920 17:41:04.846738   46025 out.go:177] * Using the kvm2 driver based on existing profile
	I0920 17:41:04.848040   46025 start.go:297] selected driver: kvm2
	I0920 17:41:04.848060   46025 start.go:901] validating driver "kvm2" against &{Name:multinode-592246 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.31.1 ClusterName:multinode-592246 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.115 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.170 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.38 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false insp
ektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptim
izations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 17:41:04.848208   46025 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 17:41:04.848562   46025 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 17:41:04.848668   46025 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19672-8777/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 17:41:04.864156   46025 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 17:41:04.864910   46025 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 17:41:04.864944   46025 cni.go:84] Creating CNI manager for ""
	I0920 17:41:04.865010   46025 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0920 17:41:04.865062   46025 start.go:340] cluster config:
	{Name:multinode-592246 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-592246 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.115 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.170 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.38 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow
:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetCli
entPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 17:41:04.865191   46025 iso.go:125] acquiring lock: {Name:mkba95ef0488e46f622333e9f317f43def93040b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 17:41:04.867437   46025 out.go:177] * Starting "multinode-592246" primary control-plane node in "multinode-592246" cluster
	I0920 17:41:04.869049   46025 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 17:41:04.869120   46025 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0920 17:41:04.869134   46025 cache.go:56] Caching tarball of preloaded images
	I0920 17:41:04.869256   46025 preload.go:172] Found /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 17:41:04.869269   46025 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0920 17:41:04.869395   46025 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/multinode-592246/config.json ...
	I0920 17:41:04.869632   46025 start.go:360] acquireMachinesLock for multinode-592246: {Name:mkfeedb385cf08b5d2aa00913e85815d02a180c2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 17:41:04.869687   46025 start.go:364] duration metric: took 33.418µs to acquireMachinesLock for "multinode-592246"
	I0920 17:41:04.869704   46025 start.go:96] Skipping create...Using existing machine configuration
	I0920 17:41:04.869710   46025 fix.go:54] fixHost starting: 
	I0920 17:41:04.870023   46025 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:41:04.870073   46025 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:41:04.884965   46025 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35873
	I0920 17:41:04.885479   46025 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:41:04.886129   46025 main.go:141] libmachine: Using API Version  1
	I0920 17:41:04.886155   46025 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:41:04.886499   46025 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:41:04.886714   46025 main.go:141] libmachine: (multinode-592246) Calling .DriverName
	I0920 17:41:04.886838   46025 main.go:141] libmachine: (multinode-592246) Calling .GetState
	I0920 17:41:04.888728   46025 fix.go:112] recreateIfNeeded on multinode-592246: state=Running err=<nil>
	W0920 17:41:04.888771   46025 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 17:41:04.891125   46025 out.go:177] * Updating the running kvm2 "multinode-592246" VM ...
	I0920 17:41:04.892701   46025 machine.go:93] provisionDockerMachine start ...
	I0920 17:41:04.892728   46025 main.go:141] libmachine: (multinode-592246) Calling .DriverName
	I0920 17:41:04.892976   46025 main.go:141] libmachine: (multinode-592246) Calling .GetSSHHostname
	I0920 17:41:04.896148   46025 main.go:141] libmachine: (multinode-592246) DBG | domain multinode-592246 has defined MAC address 52:54:00:15:f5:5b in network mk-multinode-592246
	I0920 17:41:04.896626   46025 main.go:141] libmachine: (multinode-592246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:f5:5b", ip: ""} in network mk-multinode-592246: {Iface:virbr1 ExpiryTime:2024-09-20 18:35:34 +0000 UTC Type:0 Mac:52:54:00:15:f5:5b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:multinode-592246 Clientid:01:52:54:00:15:f5:5b}
	I0920 17:41:04.896654   46025 main.go:141] libmachine: (multinode-592246) DBG | domain multinode-592246 has defined IP address 192.168.39.115 and MAC address 52:54:00:15:f5:5b in network mk-multinode-592246
	I0920 17:41:04.896816   46025 main.go:141] libmachine: (multinode-592246) Calling .GetSSHPort
	I0920 17:41:04.896998   46025 main.go:141] libmachine: (multinode-592246) Calling .GetSSHKeyPath
	I0920 17:41:04.897142   46025 main.go:141] libmachine: (multinode-592246) Calling .GetSSHKeyPath
	I0920 17:41:04.897236   46025 main.go:141] libmachine: (multinode-592246) Calling .GetSSHUsername
	I0920 17:41:04.897405   46025 main.go:141] libmachine: Using SSH client type: native
	I0920 17:41:04.897638   46025 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.115 22 <nil> <nil>}
	I0920 17:41:04.897650   46025 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 17:41:05.015262   46025 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-592246
	
	I0920 17:41:05.015293   46025 main.go:141] libmachine: (multinode-592246) Calling .GetMachineName
	I0920 17:41:05.015623   46025 buildroot.go:166] provisioning hostname "multinode-592246"
	I0920 17:41:05.015647   46025 main.go:141] libmachine: (multinode-592246) Calling .GetMachineName
	I0920 17:41:05.015836   46025 main.go:141] libmachine: (multinode-592246) Calling .GetSSHHostname
	I0920 17:41:05.018677   46025 main.go:141] libmachine: (multinode-592246) DBG | domain multinode-592246 has defined MAC address 52:54:00:15:f5:5b in network mk-multinode-592246
	I0920 17:41:05.019049   46025 main.go:141] libmachine: (multinode-592246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:f5:5b", ip: ""} in network mk-multinode-592246: {Iface:virbr1 ExpiryTime:2024-09-20 18:35:34 +0000 UTC Type:0 Mac:52:54:00:15:f5:5b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:multinode-592246 Clientid:01:52:54:00:15:f5:5b}
	I0920 17:41:05.019078   46025 main.go:141] libmachine: (multinode-592246) DBG | domain multinode-592246 has defined IP address 192.168.39.115 and MAC address 52:54:00:15:f5:5b in network mk-multinode-592246
	I0920 17:41:05.019264   46025 main.go:141] libmachine: (multinode-592246) Calling .GetSSHPort
	I0920 17:41:05.019510   46025 main.go:141] libmachine: (multinode-592246) Calling .GetSSHKeyPath
	I0920 17:41:05.019663   46025 main.go:141] libmachine: (multinode-592246) Calling .GetSSHKeyPath
	I0920 17:41:05.019809   46025 main.go:141] libmachine: (multinode-592246) Calling .GetSSHUsername
	I0920 17:41:05.019961   46025 main.go:141] libmachine: Using SSH client type: native
	I0920 17:41:05.020139   46025 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.115 22 <nil> <nil>}
	I0920 17:41:05.020150   46025 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-592246 && echo "multinode-592246" | sudo tee /etc/hostname
	I0920 17:41:05.149959   46025 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-592246
	
	I0920 17:41:05.149993   46025 main.go:141] libmachine: (multinode-592246) Calling .GetSSHHostname
	I0920 17:41:05.153075   46025 main.go:141] libmachine: (multinode-592246) DBG | domain multinode-592246 has defined MAC address 52:54:00:15:f5:5b in network mk-multinode-592246
	I0920 17:41:05.153497   46025 main.go:141] libmachine: (multinode-592246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:f5:5b", ip: ""} in network mk-multinode-592246: {Iface:virbr1 ExpiryTime:2024-09-20 18:35:34 +0000 UTC Type:0 Mac:52:54:00:15:f5:5b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:multinode-592246 Clientid:01:52:54:00:15:f5:5b}
	I0920 17:41:05.153519   46025 main.go:141] libmachine: (multinode-592246) DBG | domain multinode-592246 has defined IP address 192.168.39.115 and MAC address 52:54:00:15:f5:5b in network mk-multinode-592246
	I0920 17:41:05.153821   46025 main.go:141] libmachine: (multinode-592246) Calling .GetSSHPort
	I0920 17:41:05.154059   46025 main.go:141] libmachine: (multinode-592246) Calling .GetSSHKeyPath
	I0920 17:41:05.154273   46025 main.go:141] libmachine: (multinode-592246) Calling .GetSSHKeyPath
	I0920 17:41:05.154478   46025 main.go:141] libmachine: (multinode-592246) Calling .GetSSHUsername
	I0920 17:41:05.154690   46025 main.go:141] libmachine: Using SSH client type: native
	I0920 17:41:05.154910   46025 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.115 22 <nil> <nil>}
	I0920 17:41:05.154934   46025 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-592246' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-592246/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-592246' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 17:41:05.270956   46025 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 17:41:05.270994   46025 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19672-8777/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-8777/.minikube}
	I0920 17:41:05.271023   46025 buildroot.go:174] setting up certificates
	I0920 17:41:05.271033   46025 provision.go:84] configureAuth start
	I0920 17:41:05.271045   46025 main.go:141] libmachine: (multinode-592246) Calling .GetMachineName
	I0920 17:41:05.271343   46025 main.go:141] libmachine: (multinode-592246) Calling .GetIP
	I0920 17:41:05.274198   46025 main.go:141] libmachine: (multinode-592246) DBG | domain multinode-592246 has defined MAC address 52:54:00:15:f5:5b in network mk-multinode-592246
	I0920 17:41:05.274611   46025 main.go:141] libmachine: (multinode-592246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:f5:5b", ip: ""} in network mk-multinode-592246: {Iface:virbr1 ExpiryTime:2024-09-20 18:35:34 +0000 UTC Type:0 Mac:52:54:00:15:f5:5b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:multinode-592246 Clientid:01:52:54:00:15:f5:5b}
	I0920 17:41:05.274638   46025 main.go:141] libmachine: (multinode-592246) DBG | domain multinode-592246 has defined IP address 192.168.39.115 and MAC address 52:54:00:15:f5:5b in network mk-multinode-592246
	I0920 17:41:05.274774   46025 main.go:141] libmachine: (multinode-592246) Calling .GetSSHHostname
	I0920 17:41:05.277464   46025 main.go:141] libmachine: (multinode-592246) DBG | domain multinode-592246 has defined MAC address 52:54:00:15:f5:5b in network mk-multinode-592246
	I0920 17:41:05.277880   46025 main.go:141] libmachine: (multinode-592246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:f5:5b", ip: ""} in network mk-multinode-592246: {Iface:virbr1 ExpiryTime:2024-09-20 18:35:34 +0000 UTC Type:0 Mac:52:54:00:15:f5:5b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:multinode-592246 Clientid:01:52:54:00:15:f5:5b}
	I0920 17:41:05.277911   46025 main.go:141] libmachine: (multinode-592246) DBG | domain multinode-592246 has defined IP address 192.168.39.115 and MAC address 52:54:00:15:f5:5b in network mk-multinode-592246
	I0920 17:41:05.278039   46025 provision.go:143] copyHostCerts
	I0920 17:41:05.278067   46025 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem
	I0920 17:41:05.278100   46025 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem, removing ...
	I0920 17:41:05.278117   46025 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem
	I0920 17:41:05.278187   46025 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem (1082 bytes)
	I0920 17:41:05.278263   46025 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem
	I0920 17:41:05.278294   46025 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem, removing ...
	I0920 17:41:05.278309   46025 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem
	I0920 17:41:05.278338   46025 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem (1123 bytes)
	I0920 17:41:05.278379   46025 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem
	I0920 17:41:05.278396   46025 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem, removing ...
	I0920 17:41:05.278402   46025 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem
	I0920 17:41:05.278423   46025 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem (1675 bytes)
	I0920 17:41:05.278468   46025 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem org=jenkins.multinode-592246 san=[127.0.0.1 192.168.39.115 localhost minikube multinode-592246]
	I0920 17:41:05.396738   46025 provision.go:177] copyRemoteCerts
	I0920 17:41:05.396806   46025 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 17:41:05.396830   46025 main.go:141] libmachine: (multinode-592246) Calling .GetSSHHostname
	I0920 17:41:05.399987   46025 main.go:141] libmachine: (multinode-592246) DBG | domain multinode-592246 has defined MAC address 52:54:00:15:f5:5b in network mk-multinode-592246
	I0920 17:41:05.400358   46025 main.go:141] libmachine: (multinode-592246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:f5:5b", ip: ""} in network mk-multinode-592246: {Iface:virbr1 ExpiryTime:2024-09-20 18:35:34 +0000 UTC Type:0 Mac:52:54:00:15:f5:5b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:multinode-592246 Clientid:01:52:54:00:15:f5:5b}
	I0920 17:41:05.400388   46025 main.go:141] libmachine: (multinode-592246) DBG | domain multinode-592246 has defined IP address 192.168.39.115 and MAC address 52:54:00:15:f5:5b in network mk-multinode-592246
	I0920 17:41:05.400627   46025 main.go:141] libmachine: (multinode-592246) Calling .GetSSHPort
	I0920 17:41:05.400878   46025 main.go:141] libmachine: (multinode-592246) Calling .GetSSHKeyPath
	I0920 17:41:05.401147   46025 main.go:141] libmachine: (multinode-592246) Calling .GetSSHUsername
	I0920 17:41:05.401327   46025 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/multinode-592246/id_rsa Username:docker}
	I0920 17:41:05.489745   46025 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0920 17:41:05.489866   46025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0920 17:41:05.516428   46025 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0920 17:41:05.516514   46025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 17:41:05.541581   46025 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0920 17:41:05.541670   46025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0920 17:41:05.570087   46025 provision.go:87] duration metric: took 299.037398ms to configureAuth
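[Editor's note: the configureAuth step above generates a server certificate with SANs [127.0.0.1 192.168.39.115 localhost minikube multinode-592246] and copies it to /etc/docker/server.pem on the node. A minimal way to double-check those SANs after the fact, assuming openssl is available on the Jenkins host (illustrative only, not part of the captured run):

  openssl x509 -in /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem -noout -text | grep -A1 'Subject Alternative Name']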
	I0920 17:41:05.570125   46025 buildroot.go:189] setting minikube options for container-runtime
	I0920 17:41:05.570367   46025 config.go:182] Loaded profile config "multinode-592246": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 17:41:05.570453   46025 main.go:141] libmachine: (multinode-592246) Calling .GetSSHHostname
	I0920 17:41:05.573603   46025 main.go:141] libmachine: (multinode-592246) DBG | domain multinode-592246 has defined MAC address 52:54:00:15:f5:5b in network mk-multinode-592246
	I0920 17:41:05.573960   46025 main.go:141] libmachine: (multinode-592246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:f5:5b", ip: ""} in network mk-multinode-592246: {Iface:virbr1 ExpiryTime:2024-09-20 18:35:34 +0000 UTC Type:0 Mac:52:54:00:15:f5:5b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:multinode-592246 Clientid:01:52:54:00:15:f5:5b}
	I0920 17:41:05.573987   46025 main.go:141] libmachine: (multinode-592246) DBG | domain multinode-592246 has defined IP address 192.168.39.115 and MAC address 52:54:00:15:f5:5b in network mk-multinode-592246
	I0920 17:41:05.574190   46025 main.go:141] libmachine: (multinode-592246) Calling .GetSSHPort
	I0920 17:41:05.574406   46025 main.go:141] libmachine: (multinode-592246) Calling .GetSSHKeyPath
	I0920 17:41:05.574600   46025 main.go:141] libmachine: (multinode-592246) Calling .GetSSHKeyPath
	I0920 17:41:05.574763   46025 main.go:141] libmachine: (multinode-592246) Calling .GetSSHUsername
	I0920 17:41:05.574914   46025 main.go:141] libmachine: Using SSH client type: native
	I0920 17:41:05.575106   46025 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.115 22 <nil> <nil>}
	I0920 17:41:05.575126   46025 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 17:42:36.423601   46025 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 17:42:36.423630   46025 machine.go:96] duration metric: took 1m31.53091135s to provisionDockerMachine
	I0920 17:42:36.423644   46025 start.go:293] postStartSetup for "multinode-592246" (driver="kvm2")
	I0920 17:42:36.423654   46025 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 17:42:36.423673   46025 main.go:141] libmachine: (multinode-592246) Calling .DriverName
	I0920 17:42:36.424043   46025 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 17:42:36.424077   46025 main.go:141] libmachine: (multinode-592246) Calling .GetSSHHostname
	I0920 17:42:36.428263   46025 main.go:141] libmachine: (multinode-592246) DBG | domain multinode-592246 has defined MAC address 52:54:00:15:f5:5b in network mk-multinode-592246
	I0920 17:42:36.428759   46025 main.go:141] libmachine: (multinode-592246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:f5:5b", ip: ""} in network mk-multinode-592246: {Iface:virbr1 ExpiryTime:2024-09-20 18:35:34 +0000 UTC Type:0 Mac:52:54:00:15:f5:5b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:multinode-592246 Clientid:01:52:54:00:15:f5:5b}
	I0920 17:42:36.428795   46025 main.go:141] libmachine: (multinode-592246) DBG | domain multinode-592246 has defined IP address 192.168.39.115 and MAC address 52:54:00:15:f5:5b in network mk-multinode-592246
	I0920 17:42:36.429046   46025 main.go:141] libmachine: (multinode-592246) Calling .GetSSHPort
	I0920 17:42:36.429262   46025 main.go:141] libmachine: (multinode-592246) Calling .GetSSHKeyPath
	I0920 17:42:36.429443   46025 main.go:141] libmachine: (multinode-592246) Calling .GetSSHUsername
	I0920 17:42:36.429599   46025 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/multinode-592246/id_rsa Username:docker}
	I0920 17:42:36.518170   46025 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 17:42:36.522871   46025 command_runner.go:130] > NAME=Buildroot
	I0920 17:42:36.522898   46025 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0920 17:42:36.522903   46025 command_runner.go:130] > ID=buildroot
	I0920 17:42:36.522907   46025 command_runner.go:130] > VERSION_ID=2023.02.9
	I0920 17:42:36.522913   46025 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0920 17:42:36.523062   46025 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 17:42:36.523085   46025 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8777/.minikube/addons for local assets ...
	I0920 17:42:36.523148   46025 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8777/.minikube/files for local assets ...
	I0920 17:42:36.523225   46025 filesync.go:149] local asset: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem -> 159732.pem in /etc/ssl/certs
	I0920 17:42:36.523247   46025 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem -> /etc/ssl/certs/159732.pem
	I0920 17:42:36.523351   46025 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 17:42:36.533404   46025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem --> /etc/ssl/certs/159732.pem (1708 bytes)
	I0920 17:42:36.560789   46025 start.go:296] duration metric: took 137.13163ms for postStartSetup
	I0920 17:42:36.560829   46025 fix.go:56] duration metric: took 1m31.691118587s for fixHost
	I0920 17:42:36.560849   46025 main.go:141] libmachine: (multinode-592246) Calling .GetSSHHostname
	I0920 17:42:36.563832   46025 main.go:141] libmachine: (multinode-592246) DBG | domain multinode-592246 has defined MAC address 52:54:00:15:f5:5b in network mk-multinode-592246
	I0920 17:42:36.564299   46025 main.go:141] libmachine: (multinode-592246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:f5:5b", ip: ""} in network mk-multinode-592246: {Iface:virbr1 ExpiryTime:2024-09-20 18:35:34 +0000 UTC Type:0 Mac:52:54:00:15:f5:5b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:multinode-592246 Clientid:01:52:54:00:15:f5:5b}
	I0920 17:42:36.564336   46025 main.go:141] libmachine: (multinode-592246) DBG | domain multinode-592246 has defined IP address 192.168.39.115 and MAC address 52:54:00:15:f5:5b in network mk-multinode-592246
	I0920 17:42:36.564592   46025 main.go:141] libmachine: (multinode-592246) Calling .GetSSHPort
	I0920 17:42:36.564853   46025 main.go:141] libmachine: (multinode-592246) Calling .GetSSHKeyPath
	I0920 17:42:36.565128   46025 main.go:141] libmachine: (multinode-592246) Calling .GetSSHKeyPath
	I0920 17:42:36.565268   46025 main.go:141] libmachine: (multinode-592246) Calling .GetSSHUsername
	I0920 17:42:36.565446   46025 main.go:141] libmachine: Using SSH client type: native
	I0920 17:42:36.565610   46025 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.115 22 <nil> <nil>}
	I0920 17:42:36.565619   46025 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 17:42:36.678751   46025 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726854156.653403265
	
	I0920 17:42:36.678772   46025 fix.go:216] guest clock: 1726854156.653403265
	I0920 17:42:36.678778   46025 fix.go:229] Guest: 2024-09-20 17:42:36.653403265 +0000 UTC Remote: 2024-09-20 17:42:36.560833204 +0000 UTC m=+91.824072456 (delta=92.570061ms)
	I0920 17:42:36.678798   46025 fix.go:200] guest clock delta is within tolerance: 92.570061ms
	I0920 17:42:36.678804   46025 start.go:83] releasing machines lock for "multinode-592246", held for 1m31.809106382s
	I0920 17:42:36.678826   46025 main.go:141] libmachine: (multinode-592246) Calling .DriverName
	I0920 17:42:36.679086   46025 main.go:141] libmachine: (multinode-592246) Calling .GetIP
	I0920 17:42:36.681941   46025 main.go:141] libmachine: (multinode-592246) DBG | domain multinode-592246 has defined MAC address 52:54:00:15:f5:5b in network mk-multinode-592246
	I0920 17:42:36.682299   46025 main.go:141] libmachine: (multinode-592246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:f5:5b", ip: ""} in network mk-multinode-592246: {Iface:virbr1 ExpiryTime:2024-09-20 18:35:34 +0000 UTC Type:0 Mac:52:54:00:15:f5:5b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:multinode-592246 Clientid:01:52:54:00:15:f5:5b}
	I0920 17:42:36.682327   46025 main.go:141] libmachine: (multinode-592246) DBG | domain multinode-592246 has defined IP address 192.168.39.115 and MAC address 52:54:00:15:f5:5b in network mk-multinode-592246
	I0920 17:42:36.682493   46025 main.go:141] libmachine: (multinode-592246) Calling .DriverName
	I0920 17:42:36.683054   46025 main.go:141] libmachine: (multinode-592246) Calling .DriverName
	I0920 17:42:36.683269   46025 main.go:141] libmachine: (multinode-592246) Calling .DriverName
	I0920 17:42:36.683389   46025 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 17:42:36.683434   46025 main.go:141] libmachine: (multinode-592246) Calling .GetSSHHostname
	I0920 17:42:36.683511   46025 ssh_runner.go:195] Run: cat /version.json
	I0920 17:42:36.683537   46025 main.go:141] libmachine: (multinode-592246) Calling .GetSSHHostname
	I0920 17:42:36.686217   46025 main.go:141] libmachine: (multinode-592246) DBG | domain multinode-592246 has defined MAC address 52:54:00:15:f5:5b in network mk-multinode-592246
	I0920 17:42:36.686685   46025 main.go:141] libmachine: (multinode-592246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:f5:5b", ip: ""} in network mk-multinode-592246: {Iface:virbr1 ExpiryTime:2024-09-20 18:35:34 +0000 UTC Type:0 Mac:52:54:00:15:f5:5b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:multinode-592246 Clientid:01:52:54:00:15:f5:5b}
	I0920 17:42:36.686718   46025 main.go:141] libmachine: (multinode-592246) DBG | domain multinode-592246 has defined IP address 192.168.39.115 and MAC address 52:54:00:15:f5:5b in network mk-multinode-592246
	I0920 17:42:36.686746   46025 main.go:141] libmachine: (multinode-592246) DBG | domain multinode-592246 has defined MAC address 52:54:00:15:f5:5b in network mk-multinode-592246
	I0920 17:42:36.686838   46025 main.go:141] libmachine: (multinode-592246) Calling .GetSSHPort
	I0920 17:42:36.687029   46025 main.go:141] libmachine: (multinode-592246) Calling .GetSSHKeyPath
	I0920 17:42:36.687186   46025 main.go:141] libmachine: (multinode-592246) Calling .GetSSHUsername
	I0920 17:42:36.687307   46025 main.go:141] libmachine: (multinode-592246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:f5:5b", ip: ""} in network mk-multinode-592246: {Iface:virbr1 ExpiryTime:2024-09-20 18:35:34 +0000 UTC Type:0 Mac:52:54:00:15:f5:5b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:multinode-592246 Clientid:01:52:54:00:15:f5:5b}
	I0920 17:42:36.687329   46025 main.go:141] libmachine: (multinode-592246) DBG | domain multinode-592246 has defined IP address 192.168.39.115 and MAC address 52:54:00:15:f5:5b in network mk-multinode-592246
	I0920 17:42:36.687330   46025 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/multinode-592246/id_rsa Username:docker}
	I0920 17:42:36.687513   46025 main.go:141] libmachine: (multinode-592246) Calling .GetSSHPort
	I0920 17:42:36.687684   46025 main.go:141] libmachine: (multinode-592246) Calling .GetSSHKeyPath
	I0920 17:42:36.687866   46025 main.go:141] libmachine: (multinode-592246) Calling .GetSSHUsername
	I0920 17:42:36.688089   46025 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/multinode-592246/id_rsa Username:docker}
	I0920 17:42:36.802985   46025 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0920 17:42:36.803058   46025 command_runner.go:130] > {"iso_version": "v1.34.0-1726784654-19672", "kicbase_version": "v0.0.45-1726589491-19662", "minikube_version": "v1.34.0", "commit": "342ed9b49b7fd0c6b2cb4410be5c5d5251f51ed8"}
	I0920 17:42:36.803204   46025 ssh_runner.go:195] Run: systemctl --version
	I0920 17:42:36.809568   46025 command_runner.go:130] > systemd 252 (252)
	I0920 17:42:36.809604   46025 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0920 17:42:36.809892   46025 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 17:42:36.974820   46025 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0920 17:42:36.980821   46025 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0920 17:42:36.980880   46025 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 17:42:36.980930   46025 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 17:42:36.990884   46025 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0920 17:42:36.990916   46025 start.go:495] detecting cgroup driver to use...
	I0920 17:42:36.990998   46025 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 17:42:37.008567   46025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 17:42:37.023571   46025 docker.go:217] disabling cri-docker service (if available) ...
	I0920 17:42:37.023647   46025 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 17:42:37.038376   46025 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 17:42:37.053240   46025 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 17:42:37.213261   46025 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 17:42:37.359615   46025 docker.go:233] disabling docker service ...
	I0920 17:42:37.359683   46025 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 17:42:37.378108   46025 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 17:42:37.393108   46025 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 17:42:37.532993   46025 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 17:42:37.688371   46025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 17:42:37.704066   46025 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 17:42:37.723665   46025 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0920 17:42:37.723713   46025 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 17:42:37.723766   46025 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:42:37.735327   46025 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 17:42:37.735392   46025 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:42:37.746568   46025 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:42:37.758278   46025 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:42:37.769584   46025 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 17:42:37.781000   46025 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:42:37.792660   46025 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:42:37.803812   46025 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:42:37.815396   46025 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 17:42:37.826186   46025 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0920 17:42:37.826272   46025 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 17:42:37.836458   46025 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 17:42:37.977954   46025 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 17:42:41.751954   46025 ssh_runner.go:235] Completed: sudo systemctl restart crio: (3.773960184s)
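[Editor's note: the sed commands logged above rewrite /etc/crio/crio.conf.d/02-crio.conf before CRI-O is restarted. The file itself is not captured in this log; assuming the edits applied cleanly to a stock minikube guest image, the touched keys would end up roughly as in this sketch (real CRI-O TOML keys, values taken from the commands above, surrounding sections omitted):

  # /etc/crio/crio.conf.d/02-crio.conf -- illustrative reconstruction, not captured output
  [crio.image]
  pause_image = "registry.k8s.io/pause:3.10"

  [crio.runtime]
  cgroup_manager = "cgroupfs"
  conmon_cgroup = "pod"
  default_sysctls = [
    "net.ipv4.ip_unprivileged_port_start=0",
  ]

The same socket path was written to /etc/crictl.yaml earlier (runtime-endpoint: unix:///var/run/crio/crio.sock), which is how the crictl commands later in the log reach the restarted runtime.]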
	I0920 17:42:41.751985   46025 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 17:42:41.752046   46025 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 17:42:41.756921   46025 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0920 17:42:41.756948   46025 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0920 17:42:41.756954   46025 command_runner.go:130] > Device: 0,22	Inode: 1344        Links: 1
	I0920 17:42:41.756960   46025 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0920 17:42:41.756967   46025 command_runner.go:130] > Access: 2024-09-20 17:42:41.673882032 +0000
	I0920 17:42:41.756977   46025 command_runner.go:130] > Modify: 2024-09-20 17:42:41.603880334 +0000
	I0920 17:42:41.756985   46025 command_runner.go:130] > Change: 2024-09-20 17:42:41.603880334 +0000
	I0920 17:42:41.756992   46025 command_runner.go:130] >  Birth: -
	I0920 17:42:41.757029   46025 start.go:563] Will wait 60s for crictl version
	I0920 17:42:41.757084   46025 ssh_runner.go:195] Run: which crictl
	I0920 17:42:41.761129   46025 command_runner.go:130] > /usr/bin/crictl
	I0920 17:42:41.761187   46025 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 17:42:41.801244   46025 command_runner.go:130] > Version:  0.1.0
	I0920 17:42:41.801267   46025 command_runner.go:130] > RuntimeName:  cri-o
	I0920 17:42:41.801319   46025 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0920 17:42:41.801337   46025 command_runner.go:130] > RuntimeApiVersion:  v1
	I0920 17:42:41.802867   46025 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 17:42:41.802932   46025 ssh_runner.go:195] Run: crio --version
	I0920 17:42:41.834111   46025 command_runner.go:130] > crio version 1.29.1
	I0920 17:42:41.834138   46025 command_runner.go:130] > Version:        1.29.1
	I0920 17:42:41.834156   46025 command_runner.go:130] > GitCommit:      unknown
	I0920 17:42:41.834162   46025 command_runner.go:130] > GitCommitDate:  unknown
	I0920 17:42:41.834169   46025 command_runner.go:130] > GitTreeState:   clean
	I0920 17:42:41.834178   46025 command_runner.go:130] > BuildDate:      2024-09-20T03:55:27Z
	I0920 17:42:41.834183   46025 command_runner.go:130] > GoVersion:      go1.21.6
	I0920 17:42:41.834189   46025 command_runner.go:130] > Compiler:       gc
	I0920 17:42:41.834196   46025 command_runner.go:130] > Platform:       linux/amd64
	I0920 17:42:41.834202   46025 command_runner.go:130] > Linkmode:       dynamic
	I0920 17:42:41.834209   46025 command_runner.go:130] > BuildTags:      
	I0920 17:42:41.834218   46025 command_runner.go:130] >   containers_image_ostree_stub
	I0920 17:42:41.834228   46025 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0920 17:42:41.834232   46025 command_runner.go:130] >   btrfs_noversion
	I0920 17:42:41.834244   46025 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0920 17:42:41.834250   46025 command_runner.go:130] >   libdm_no_deferred_remove
	I0920 17:42:41.834260   46025 command_runner.go:130] >   seccomp
	I0920 17:42:41.834267   46025 command_runner.go:130] > LDFlags:          unknown
	I0920 17:42:41.834274   46025 command_runner.go:130] > SeccompEnabled:   true
	I0920 17:42:41.834284   46025 command_runner.go:130] > AppArmorEnabled:  false
	I0920 17:42:41.834384   46025 ssh_runner.go:195] Run: crio --version
	I0920 17:42:41.864280   46025 command_runner.go:130] > crio version 1.29.1
	I0920 17:42:41.864315   46025 command_runner.go:130] > Version:        1.29.1
	I0920 17:42:41.864324   46025 command_runner.go:130] > GitCommit:      unknown
	I0920 17:42:41.864331   46025 command_runner.go:130] > GitCommitDate:  unknown
	I0920 17:42:41.864338   46025 command_runner.go:130] > GitTreeState:   clean
	I0920 17:42:41.864347   46025 command_runner.go:130] > BuildDate:      2024-09-20T03:55:27Z
	I0920 17:42:41.864352   46025 command_runner.go:130] > GoVersion:      go1.21.6
	I0920 17:42:41.864356   46025 command_runner.go:130] > Compiler:       gc
	I0920 17:42:41.864361   46025 command_runner.go:130] > Platform:       linux/amd64
	I0920 17:42:41.864366   46025 command_runner.go:130] > Linkmode:       dynamic
	I0920 17:42:41.864370   46025 command_runner.go:130] > BuildTags:      
	I0920 17:42:41.864390   46025 command_runner.go:130] >   containers_image_ostree_stub
	I0920 17:42:41.864398   46025 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0920 17:42:41.864404   46025 command_runner.go:130] >   btrfs_noversion
	I0920 17:42:41.864411   46025 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0920 17:42:41.864421   46025 command_runner.go:130] >   libdm_no_deferred_remove
	I0920 17:42:41.864428   46025 command_runner.go:130] >   seccomp
	I0920 17:42:41.864438   46025 command_runner.go:130] > LDFlags:          unknown
	I0920 17:42:41.864444   46025 command_runner.go:130] > SeccompEnabled:   true
	I0920 17:42:41.864451   46025 command_runner.go:130] > AppArmorEnabled:  false
	I0920 17:42:41.867683   46025 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 17:42:41.869190   46025 main.go:141] libmachine: (multinode-592246) Calling .GetIP
	I0920 17:42:41.871805   46025 main.go:141] libmachine: (multinode-592246) DBG | domain multinode-592246 has defined MAC address 52:54:00:15:f5:5b in network mk-multinode-592246
	I0920 17:42:41.872202   46025 main.go:141] libmachine: (multinode-592246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:f5:5b", ip: ""} in network mk-multinode-592246: {Iface:virbr1 ExpiryTime:2024-09-20 18:35:34 +0000 UTC Type:0 Mac:52:54:00:15:f5:5b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:multinode-592246 Clientid:01:52:54:00:15:f5:5b}
	I0920 17:42:41.872229   46025 main.go:141] libmachine: (multinode-592246) DBG | domain multinode-592246 has defined IP address 192.168.39.115 and MAC address 52:54:00:15:f5:5b in network mk-multinode-592246
	I0920 17:42:41.872479   46025 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0920 17:42:41.876805   46025 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0920 17:42:41.876930   46025 kubeadm.go:883] updating cluster {Name:multinode-592246 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-592246 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.115 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.170 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.38 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 17:42:41.878502   46025 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 17:42:41.878563   46025 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 17:42:41.931070   46025 command_runner.go:130] > {
	I0920 17:42:41.931098   46025 command_runner.go:130] >   "images": [
	I0920 17:42:41.931106   46025 command_runner.go:130] >     {
	I0920 17:42:41.931117   46025 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0920 17:42:41.931127   46025 command_runner.go:130] >       "repoTags": [
	I0920 17:42:41.931135   46025 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0920 17:42:41.931141   46025 command_runner.go:130] >       ],
	I0920 17:42:41.931148   46025 command_runner.go:130] >       "repoDigests": [
	I0920 17:42:41.931166   46025 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0920 17:42:41.931182   46025 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0920 17:42:41.931189   46025 command_runner.go:130] >       ],
	I0920 17:42:41.931198   46025 command_runner.go:130] >       "size": "87190579",
	I0920 17:42:41.931204   46025 command_runner.go:130] >       "uid": null,
	I0920 17:42:41.931211   46025 command_runner.go:130] >       "username": "",
	I0920 17:42:41.931221   46025 command_runner.go:130] >       "spec": null,
	I0920 17:42:41.931229   46025 command_runner.go:130] >       "pinned": false
	I0920 17:42:41.931235   46025 command_runner.go:130] >     },
	I0920 17:42:41.931241   46025 command_runner.go:130] >     {
	I0920 17:42:41.931253   46025 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0920 17:42:41.931260   46025 command_runner.go:130] >       "repoTags": [
	I0920 17:42:41.931270   46025 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0920 17:42:41.931278   46025 command_runner.go:130] >       ],
	I0920 17:42:41.931285   46025 command_runner.go:130] >       "repoDigests": [
	I0920 17:42:41.931297   46025 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0920 17:42:41.931311   46025 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0920 17:42:41.931319   46025 command_runner.go:130] >       ],
	I0920 17:42:41.931339   46025 command_runner.go:130] >       "size": "1363676",
	I0920 17:42:41.931348   46025 command_runner.go:130] >       "uid": null,
	I0920 17:42:41.931358   46025 command_runner.go:130] >       "username": "",
	I0920 17:42:41.931367   46025 command_runner.go:130] >       "spec": null,
	I0920 17:42:41.931375   46025 command_runner.go:130] >       "pinned": false
	I0920 17:42:41.931404   46025 command_runner.go:130] >     },
	I0920 17:42:41.931411   46025 command_runner.go:130] >     {
	I0920 17:42:41.931420   46025 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0920 17:42:41.931427   46025 command_runner.go:130] >       "repoTags": [
	I0920 17:42:41.931434   46025 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0920 17:42:41.931438   46025 command_runner.go:130] >       ],
	I0920 17:42:41.931442   46025 command_runner.go:130] >       "repoDigests": [
	I0920 17:42:41.931451   46025 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0920 17:42:41.931459   46025 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0920 17:42:41.931463   46025 command_runner.go:130] >       ],
	I0920 17:42:41.931467   46025 command_runner.go:130] >       "size": "31470524",
	I0920 17:42:41.931471   46025 command_runner.go:130] >       "uid": null,
	I0920 17:42:41.931476   46025 command_runner.go:130] >       "username": "",
	I0920 17:42:41.931479   46025 command_runner.go:130] >       "spec": null,
	I0920 17:42:41.931483   46025 command_runner.go:130] >       "pinned": false
	I0920 17:42:41.931487   46025 command_runner.go:130] >     },
	I0920 17:42:41.931490   46025 command_runner.go:130] >     {
	I0920 17:42:41.931496   46025 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0920 17:42:41.931503   46025 command_runner.go:130] >       "repoTags": [
	I0920 17:42:41.931508   46025 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0920 17:42:41.931512   46025 command_runner.go:130] >       ],
	I0920 17:42:41.931516   46025 command_runner.go:130] >       "repoDigests": [
	I0920 17:42:41.931523   46025 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0920 17:42:41.931535   46025 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0920 17:42:41.931539   46025 command_runner.go:130] >       ],
	I0920 17:42:41.931543   46025 command_runner.go:130] >       "size": "63273227",
	I0920 17:42:41.931547   46025 command_runner.go:130] >       "uid": null,
	I0920 17:42:41.931552   46025 command_runner.go:130] >       "username": "nonroot",
	I0920 17:42:41.931558   46025 command_runner.go:130] >       "spec": null,
	I0920 17:42:41.931562   46025 command_runner.go:130] >       "pinned": false
	I0920 17:42:41.931565   46025 command_runner.go:130] >     },
	I0920 17:42:41.931568   46025 command_runner.go:130] >     {
	I0920 17:42:41.931576   46025 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0920 17:42:41.931582   46025 command_runner.go:130] >       "repoTags": [
	I0920 17:42:41.931586   46025 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0920 17:42:41.931592   46025 command_runner.go:130] >       ],
	I0920 17:42:41.931596   46025 command_runner.go:130] >       "repoDigests": [
	I0920 17:42:41.931603   46025 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0920 17:42:41.931610   46025 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0920 17:42:41.931614   46025 command_runner.go:130] >       ],
	I0920 17:42:41.931620   46025 command_runner.go:130] >       "size": "149009664",
	I0920 17:42:41.931624   46025 command_runner.go:130] >       "uid": {
	I0920 17:42:41.931627   46025 command_runner.go:130] >         "value": "0"
	I0920 17:42:41.931633   46025 command_runner.go:130] >       },
	I0920 17:42:41.931637   46025 command_runner.go:130] >       "username": "",
	I0920 17:42:41.931640   46025 command_runner.go:130] >       "spec": null,
	I0920 17:42:41.931644   46025 command_runner.go:130] >       "pinned": false
	I0920 17:42:41.931648   46025 command_runner.go:130] >     },
	I0920 17:42:41.931651   46025 command_runner.go:130] >     {
	I0920 17:42:41.931657   46025 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0920 17:42:41.931663   46025 command_runner.go:130] >       "repoTags": [
	I0920 17:42:41.931667   46025 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0920 17:42:41.931670   46025 command_runner.go:130] >       ],
	I0920 17:42:41.931674   46025 command_runner.go:130] >       "repoDigests": [
	I0920 17:42:41.931682   46025 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0920 17:42:41.931691   46025 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0920 17:42:41.931697   46025 command_runner.go:130] >       ],
	I0920 17:42:41.931701   46025 command_runner.go:130] >       "size": "95237600",
	I0920 17:42:41.931706   46025 command_runner.go:130] >       "uid": {
	I0920 17:42:41.931710   46025 command_runner.go:130] >         "value": "0"
	I0920 17:42:41.931714   46025 command_runner.go:130] >       },
	I0920 17:42:41.931717   46025 command_runner.go:130] >       "username": "",
	I0920 17:42:41.931722   46025 command_runner.go:130] >       "spec": null,
	I0920 17:42:41.931726   46025 command_runner.go:130] >       "pinned": false
	I0920 17:42:41.931731   46025 command_runner.go:130] >     },
	I0920 17:42:41.931734   46025 command_runner.go:130] >     {
	I0920 17:42:41.931740   46025 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0920 17:42:41.931746   46025 command_runner.go:130] >       "repoTags": [
	I0920 17:42:41.931751   46025 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0920 17:42:41.931756   46025 command_runner.go:130] >       ],
	I0920 17:42:41.931760   46025 command_runner.go:130] >       "repoDigests": [
	I0920 17:42:41.931767   46025 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0920 17:42:41.931778   46025 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0920 17:42:41.931781   46025 command_runner.go:130] >       ],
	I0920 17:42:41.931785   46025 command_runner.go:130] >       "size": "89437508",
	I0920 17:42:41.931788   46025 command_runner.go:130] >       "uid": {
	I0920 17:42:41.931791   46025 command_runner.go:130] >         "value": "0"
	I0920 17:42:41.931795   46025 command_runner.go:130] >       },
	I0920 17:42:41.931798   46025 command_runner.go:130] >       "username": "",
	I0920 17:42:41.931802   46025 command_runner.go:130] >       "spec": null,
	I0920 17:42:41.931806   46025 command_runner.go:130] >       "pinned": false
	I0920 17:42:41.931810   46025 command_runner.go:130] >     },
	I0920 17:42:41.931813   46025 command_runner.go:130] >     {
	I0920 17:42:41.931818   46025 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0920 17:42:41.931823   46025 command_runner.go:130] >       "repoTags": [
	I0920 17:42:41.931827   46025 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0920 17:42:41.931832   46025 command_runner.go:130] >       ],
	I0920 17:42:41.931837   46025 command_runner.go:130] >       "repoDigests": [
	I0920 17:42:41.931874   46025 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0920 17:42:41.931887   46025 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0920 17:42:41.931890   46025 command_runner.go:130] >       ],
	I0920 17:42:41.931895   46025 command_runner.go:130] >       "size": "92733849",
	I0920 17:42:41.931898   46025 command_runner.go:130] >       "uid": null,
	I0920 17:42:41.931906   46025 command_runner.go:130] >       "username": "",
	I0920 17:42:41.931912   46025 command_runner.go:130] >       "spec": null,
	I0920 17:42:41.931917   46025 command_runner.go:130] >       "pinned": false
	I0920 17:42:41.931920   46025 command_runner.go:130] >     },
	I0920 17:42:41.931924   46025 command_runner.go:130] >     {
	I0920 17:42:41.931929   46025 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0920 17:42:41.931933   46025 command_runner.go:130] >       "repoTags": [
	I0920 17:42:41.931938   46025 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0920 17:42:41.931941   46025 command_runner.go:130] >       ],
	I0920 17:42:41.931945   46025 command_runner.go:130] >       "repoDigests": [
	I0920 17:42:41.931955   46025 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0920 17:42:41.931962   46025 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0920 17:42:41.931966   46025 command_runner.go:130] >       ],
	I0920 17:42:41.931971   46025 command_runner.go:130] >       "size": "68420934",
	I0920 17:42:41.931974   46025 command_runner.go:130] >       "uid": {
	I0920 17:42:41.931978   46025 command_runner.go:130] >         "value": "0"
	I0920 17:42:41.931983   46025 command_runner.go:130] >       },
	I0920 17:42:41.931988   46025 command_runner.go:130] >       "username": "",
	I0920 17:42:41.931994   46025 command_runner.go:130] >       "spec": null,
	I0920 17:42:41.931998   46025 command_runner.go:130] >       "pinned": false
	I0920 17:42:41.932001   46025 command_runner.go:130] >     },
	I0920 17:42:41.932004   46025 command_runner.go:130] >     {
	I0920 17:42:41.932010   46025 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0920 17:42:41.932016   46025 command_runner.go:130] >       "repoTags": [
	I0920 17:42:41.932020   46025 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0920 17:42:41.932023   46025 command_runner.go:130] >       ],
	I0920 17:42:41.932027   46025 command_runner.go:130] >       "repoDigests": [
	I0920 17:42:41.932036   46025 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0920 17:42:41.932043   46025 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0920 17:42:41.932049   46025 command_runner.go:130] >       ],
	I0920 17:42:41.932053   46025 command_runner.go:130] >       "size": "742080",
	I0920 17:42:41.932056   46025 command_runner.go:130] >       "uid": {
	I0920 17:42:41.932060   46025 command_runner.go:130] >         "value": "65535"
	I0920 17:42:41.932065   46025 command_runner.go:130] >       },
	I0920 17:42:41.932069   46025 command_runner.go:130] >       "username": "",
	I0920 17:42:41.932075   46025 command_runner.go:130] >       "spec": null,
	I0920 17:42:41.932078   46025 command_runner.go:130] >       "pinned": true
	I0920 17:42:41.932081   46025 command_runner.go:130] >     }
	I0920 17:42:41.932084   46025 command_runner.go:130] >   ]
	I0920 17:42:41.932089   46025 command_runner.go:130] > }
	I0920 17:42:41.932664   46025 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 17:42:41.932686   46025 crio.go:433] Images already preloaded, skipping extraction
	I0920 17:42:41.932744   46025 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 17:42:41.973332   46025 command_runner.go:130] > {
	I0920 17:42:41.973369   46025 command_runner.go:130] >   "images": [
	I0920 17:42:41.973380   46025 command_runner.go:130] >     {
	I0920 17:42:41.973393   46025 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0920 17:42:41.973404   46025 command_runner.go:130] >       "repoTags": [
	I0920 17:42:41.973446   46025 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0920 17:42:41.973465   46025 command_runner.go:130] >       ],
	I0920 17:42:41.973472   46025 command_runner.go:130] >       "repoDigests": [
	I0920 17:42:41.973490   46025 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0920 17:42:41.973502   46025 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0920 17:42:41.973508   46025 command_runner.go:130] >       ],
	I0920 17:42:41.973514   46025 command_runner.go:130] >       "size": "87190579",
	I0920 17:42:41.973521   46025 command_runner.go:130] >       "uid": null,
	I0920 17:42:41.973525   46025 command_runner.go:130] >       "username": "",
	I0920 17:42:41.973539   46025 command_runner.go:130] >       "spec": null,
	I0920 17:42:41.973547   46025 command_runner.go:130] >       "pinned": false
	I0920 17:42:41.973551   46025 command_runner.go:130] >     },
	I0920 17:42:41.973555   46025 command_runner.go:130] >     {
	I0920 17:42:41.973563   46025 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0920 17:42:41.973570   46025 command_runner.go:130] >       "repoTags": [
	I0920 17:42:41.973577   46025 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0920 17:42:41.973581   46025 command_runner.go:130] >       ],
	I0920 17:42:41.973588   46025 command_runner.go:130] >       "repoDigests": [
	I0920 17:42:41.973596   46025 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0920 17:42:41.973607   46025 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0920 17:42:41.973617   46025 command_runner.go:130] >       ],
	I0920 17:42:41.973626   46025 command_runner.go:130] >       "size": "1363676",
	I0920 17:42:41.973633   46025 command_runner.go:130] >       "uid": null,
	I0920 17:42:41.973646   46025 command_runner.go:130] >       "username": "",
	I0920 17:42:41.973656   46025 command_runner.go:130] >       "spec": null,
	I0920 17:42:41.973661   46025 command_runner.go:130] >       "pinned": false
	I0920 17:42:41.973668   46025 command_runner.go:130] >     },
	I0920 17:42:41.973672   46025 command_runner.go:130] >     {
	I0920 17:42:41.973682   46025 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0920 17:42:41.973690   46025 command_runner.go:130] >       "repoTags": [
	I0920 17:42:41.973696   46025 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0920 17:42:41.973702   46025 command_runner.go:130] >       ],
	I0920 17:42:41.973707   46025 command_runner.go:130] >       "repoDigests": [
	I0920 17:42:41.973722   46025 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0920 17:42:41.973738   46025 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0920 17:42:41.973747   46025 command_runner.go:130] >       ],
	I0920 17:42:41.973758   46025 command_runner.go:130] >       "size": "31470524",
	I0920 17:42:41.973766   46025 command_runner.go:130] >       "uid": null,
	I0920 17:42:41.973771   46025 command_runner.go:130] >       "username": "",
	I0920 17:42:41.973778   46025 command_runner.go:130] >       "spec": null,
	I0920 17:42:41.973783   46025 command_runner.go:130] >       "pinned": false
	I0920 17:42:41.973789   46025 command_runner.go:130] >     },
	I0920 17:42:41.973793   46025 command_runner.go:130] >     {
	I0920 17:42:41.973802   46025 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0920 17:42:41.973808   46025 command_runner.go:130] >       "repoTags": [
	I0920 17:42:41.973814   46025 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0920 17:42:41.973819   46025 command_runner.go:130] >       ],
	I0920 17:42:41.973824   46025 command_runner.go:130] >       "repoDigests": [
	I0920 17:42:41.973847   46025 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0920 17:42:41.973869   46025 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0920 17:42:41.973879   46025 command_runner.go:130] >       ],
	I0920 17:42:41.973887   46025 command_runner.go:130] >       "size": "63273227",
	I0920 17:42:41.973891   46025 command_runner.go:130] >       "uid": null,
	I0920 17:42:41.973897   46025 command_runner.go:130] >       "username": "nonroot",
	I0920 17:42:41.973906   46025 command_runner.go:130] >       "spec": null,
	I0920 17:42:41.973913   46025 command_runner.go:130] >       "pinned": false
	I0920 17:42:41.973917   46025 command_runner.go:130] >     },
	I0920 17:42:41.973922   46025 command_runner.go:130] >     {
	I0920 17:42:41.973928   46025 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0920 17:42:41.973938   46025 command_runner.go:130] >       "repoTags": [
	I0920 17:42:41.973946   46025 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0920 17:42:41.973954   46025 command_runner.go:130] >       ],
	I0920 17:42:41.973969   46025 command_runner.go:130] >       "repoDigests": [
	I0920 17:42:41.973979   46025 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0920 17:42:41.973988   46025 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0920 17:42:41.973995   46025 command_runner.go:130] >       ],
	I0920 17:42:41.974000   46025 command_runner.go:130] >       "size": "149009664",
	I0920 17:42:41.974006   46025 command_runner.go:130] >       "uid": {
	I0920 17:42:41.974011   46025 command_runner.go:130] >         "value": "0"
	I0920 17:42:41.974017   46025 command_runner.go:130] >       },
	I0920 17:42:41.974022   46025 command_runner.go:130] >       "username": "",
	I0920 17:42:41.974029   46025 command_runner.go:130] >       "spec": null,
	I0920 17:42:41.974033   46025 command_runner.go:130] >       "pinned": false
	I0920 17:42:41.974039   46025 command_runner.go:130] >     },
	I0920 17:42:41.974044   46025 command_runner.go:130] >     {
	I0920 17:42:41.974053   46025 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0920 17:42:41.974059   46025 command_runner.go:130] >       "repoTags": [
	I0920 17:42:41.974065   46025 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0920 17:42:41.974071   46025 command_runner.go:130] >       ],
	I0920 17:42:41.974076   46025 command_runner.go:130] >       "repoDigests": [
	I0920 17:42:41.974086   46025 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0920 17:42:41.974096   46025 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0920 17:42:41.974105   46025 command_runner.go:130] >       ],
	I0920 17:42:41.974116   46025 command_runner.go:130] >       "size": "95237600",
	I0920 17:42:41.974125   46025 command_runner.go:130] >       "uid": {
	I0920 17:42:41.974135   46025 command_runner.go:130] >         "value": "0"
	I0920 17:42:41.974141   46025 command_runner.go:130] >       },
	I0920 17:42:41.974150   46025 command_runner.go:130] >       "username": "",
	I0920 17:42:41.974160   46025 command_runner.go:130] >       "spec": null,
	I0920 17:42:41.974171   46025 command_runner.go:130] >       "pinned": false
	I0920 17:42:41.974185   46025 command_runner.go:130] >     },
	I0920 17:42:41.974197   46025 command_runner.go:130] >     {
	I0920 17:42:41.974210   46025 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0920 17:42:41.974230   46025 command_runner.go:130] >       "repoTags": [
	I0920 17:42:41.974255   46025 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0920 17:42:41.974266   46025 command_runner.go:130] >       ],
	I0920 17:42:41.974273   46025 command_runner.go:130] >       "repoDigests": [
	I0920 17:42:41.974285   46025 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0920 17:42:41.974297   46025 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0920 17:42:41.974310   46025 command_runner.go:130] >       ],
	I0920 17:42:41.974318   46025 command_runner.go:130] >       "size": "89437508",
	I0920 17:42:41.974325   46025 command_runner.go:130] >       "uid": {
	I0920 17:42:41.974332   46025 command_runner.go:130] >         "value": "0"
	I0920 17:42:41.974340   46025 command_runner.go:130] >       },
	I0920 17:42:41.974352   46025 command_runner.go:130] >       "username": "",
	I0920 17:42:41.974360   46025 command_runner.go:130] >       "spec": null,
	I0920 17:42:41.974371   46025 command_runner.go:130] >       "pinned": false
	I0920 17:42:41.974378   46025 command_runner.go:130] >     },
	I0920 17:42:41.974388   46025 command_runner.go:130] >     {
	I0920 17:42:41.974403   46025 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0920 17:42:41.974414   46025 command_runner.go:130] >       "repoTags": [
	I0920 17:42:41.974423   46025 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0920 17:42:41.974434   46025 command_runner.go:130] >       ],
	I0920 17:42:41.974444   46025 command_runner.go:130] >       "repoDigests": [
	I0920 17:42:41.974470   46025 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0920 17:42:41.974485   46025 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0920 17:42:41.974496   46025 command_runner.go:130] >       ],
	I0920 17:42:41.974509   46025 command_runner.go:130] >       "size": "92733849",
	I0920 17:42:41.974519   46025 command_runner.go:130] >       "uid": null,
	I0920 17:42:41.974527   46025 command_runner.go:130] >       "username": "",
	I0920 17:42:41.974535   46025 command_runner.go:130] >       "spec": null,
	I0920 17:42:41.974542   46025 command_runner.go:130] >       "pinned": false
	I0920 17:42:41.974548   46025 command_runner.go:130] >     },
	I0920 17:42:41.974552   46025 command_runner.go:130] >     {
	I0920 17:42:41.974558   46025 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0920 17:42:41.974562   46025 command_runner.go:130] >       "repoTags": [
	I0920 17:42:41.974567   46025 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0920 17:42:41.974572   46025 command_runner.go:130] >       ],
	I0920 17:42:41.974575   46025 command_runner.go:130] >       "repoDigests": [
	I0920 17:42:41.974583   46025 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0920 17:42:41.974590   46025 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0920 17:42:41.974597   46025 command_runner.go:130] >       ],
	I0920 17:42:41.974601   46025 command_runner.go:130] >       "size": "68420934",
	I0920 17:42:41.974605   46025 command_runner.go:130] >       "uid": {
	I0920 17:42:41.974609   46025 command_runner.go:130] >         "value": "0"
	I0920 17:42:41.974613   46025 command_runner.go:130] >       },
	I0920 17:42:41.974617   46025 command_runner.go:130] >       "username": "",
	I0920 17:42:41.974623   46025 command_runner.go:130] >       "spec": null,
	I0920 17:42:41.974627   46025 command_runner.go:130] >       "pinned": false
	I0920 17:42:41.974634   46025 command_runner.go:130] >     },
	I0920 17:42:41.974638   46025 command_runner.go:130] >     {
	I0920 17:42:41.974645   46025 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0920 17:42:41.974651   46025 command_runner.go:130] >       "repoTags": [
	I0920 17:42:41.974656   46025 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0920 17:42:41.974662   46025 command_runner.go:130] >       ],
	I0920 17:42:41.974667   46025 command_runner.go:130] >       "repoDigests": [
	I0920 17:42:41.974677   46025 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0920 17:42:41.974700   46025 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0920 17:42:41.974707   46025 command_runner.go:130] >       ],
	I0920 17:42:41.974712   46025 command_runner.go:130] >       "size": "742080",
	I0920 17:42:41.974719   46025 command_runner.go:130] >       "uid": {
	I0920 17:42:41.974724   46025 command_runner.go:130] >         "value": "65535"
	I0920 17:42:41.974729   46025 command_runner.go:130] >       },
	I0920 17:42:41.974734   46025 command_runner.go:130] >       "username": "",
	I0920 17:42:41.974740   46025 command_runner.go:130] >       "spec": null,
	I0920 17:42:41.974745   46025 command_runner.go:130] >       "pinned": true
	I0920 17:42:41.974751   46025 command_runner.go:130] >     }
	I0920 17:42:41.974754   46025 command_runner.go:130] >   ]
	I0920 17:42:41.974760   46025 command_runner.go:130] > }
	I0920 17:42:41.974908   46025 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 17:42:41.974921   46025 cache_images.go:84] Images are preloaded, skipping loading
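	The two `sudo crictl images --output json` listings above are what the preload check consumes before the log concludes "all images are preloaded for cri-o runtime." A minimal sketch of that kind of check, assuming only the JSON shape visible in the log (an "images" array whose entries carry "repoTags") and an illustrative required-tag list rather than minikube's actual manifest:

	// Hypothetical sketch: verify a preloaded image set by parsing
	// `sudo crictl images --output json`, mirroring the JSON shape shown in
	// the log above. The required list is illustrative, not minikube's real one.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type imageList struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	func main() {
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			panic(err)
		}
		var list imageList
		if err := json.Unmarshal(out, &list); err != nil {
			panic(err)
		}
		have := map[string]bool{}
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				have[tag] = true
			}
		}
		// Illustrative subset of the tags visible in the log output above.
		required := []string{
			"registry.k8s.io/kube-apiserver:v1.31.1",
			"registry.k8s.io/etcd:3.5.15-0",
			"registry.k8s.io/coredns/coredns:v1.11.3",
			"registry.k8s.io/pause:3.10",
		}
		for _, tag := range required {
			if !have[tag] {
				fmt.Println("missing:", tag)
				return
			}
		}
		fmt.Println("all images are preloaded for cri-o runtime.")
	}

	Run on the node, a check like this would reach the same conclusion the log records whenever every required tag is present.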
	I0920 17:42:41.974931   46025 kubeadm.go:934] updating node { 192.168.39.115 8443 v1.31.1 crio true true} ...
	I0920 17:42:41.975033   46025 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-592246 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.115
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-592246 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
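	The kubelet stanza logged above amounts to a systemd drop-in that adds Wants=crio.service and overrides ExecStart with node-specific flags (hostname override, node IP, Kubernetes version). A hypothetical sketch of rendering that drop-in from the values shown in the log; the function name and plain string template are illustrative, not minikube's actual template code:

	// Hypothetical sketch: render the kubelet unit drop-in logged above from
	// node parameters. Values in main() are taken from the log entry itself.
	package main

	import "fmt"

	func renderKubeletUnit(version, hostname, nodeIP string) string {
		return fmt.Sprintf(`[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/%s/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=%s --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=%s

	[Install]
	`, version, hostname, nodeIP)
	}

	func main() {
		fmt.Print(renderKubeletUnit("v1.31.1", "multinode-592246", "192.168.39.115"))
	}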
	I0920 17:42:41.975101   46025 ssh_runner.go:195] Run: crio config
	I0920 17:42:42.020635   46025 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0920 17:42:42.020670   46025 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0920 17:42:42.020677   46025 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0920 17:42:42.020680   46025 command_runner.go:130] > #
	I0920 17:42:42.020688   46025 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0920 17:42:42.020694   46025 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0920 17:42:42.020701   46025 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0920 17:42:42.020709   46025 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0920 17:42:42.020716   46025 command_runner.go:130] > # reload'.
	I0920 17:42:42.020725   46025 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0920 17:42:42.020736   46025 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0920 17:42:42.020748   46025 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0920 17:42:42.020760   46025 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0920 17:42:42.020765   46025 command_runner.go:130] > [crio]
	I0920 17:42:42.020777   46025 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0920 17:42:42.020788   46025 command_runner.go:130] > # containers images, in this directory.
	I0920 17:42:42.020802   46025 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0920 17:42:42.020825   46025 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0920 17:42:42.020835   46025 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0920 17:42:42.020847   46025 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0920 17:42:42.020858   46025 command_runner.go:130] > # imagestore = ""
	I0920 17:42:42.020869   46025 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0920 17:42:42.020881   46025 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0920 17:42:42.020893   46025 command_runner.go:130] > storage_driver = "overlay"
	I0920 17:42:42.020904   46025 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0920 17:42:42.020914   46025 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0920 17:42:42.020918   46025 command_runner.go:130] > storage_option = [
	I0920 17:42:42.020923   46025 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0920 17:42:42.020927   46025 command_runner.go:130] > ]
	I0920 17:42:42.020933   46025 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0920 17:42:42.020939   46025 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0920 17:42:42.020943   46025 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0920 17:42:42.020948   46025 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0920 17:42:42.020966   46025 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0920 17:42:42.020976   46025 command_runner.go:130] > # always happen on a node reboot
	I0920 17:42:42.020984   46025 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0920 17:42:42.021001   46025 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0920 17:42:42.021013   46025 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0920 17:42:42.021023   46025 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0920 17:42:42.021033   46025 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0920 17:42:42.021046   46025 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0920 17:42:42.021058   46025 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0920 17:42:42.021253   46025 command_runner.go:130] > # internal_wipe = true
	I0920 17:42:42.021280   46025 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0920 17:42:42.021290   46025 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0920 17:42:42.021511   46025 command_runner.go:130] > # internal_repair = false
	I0920 17:42:42.021521   46025 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0920 17:42:42.021528   46025 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0920 17:42:42.021533   46025 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0920 17:42:42.021745   46025 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0920 17:42:42.021765   46025 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0920 17:42:42.021771   46025 command_runner.go:130] > [crio.api]
	I0920 17:42:42.021780   46025 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0920 17:42:42.021997   46025 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0920 17:42:42.022029   46025 command_runner.go:130] > # IP address on which the stream server will listen.
	I0920 17:42:42.022212   46025 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0920 17:42:42.022228   46025 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0920 17:42:42.022237   46025 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0920 17:42:42.022473   46025 command_runner.go:130] > # stream_port = "0"
	I0920 17:42:42.022484   46025 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0920 17:42:42.022693   46025 command_runner.go:130] > # stream_enable_tls = false
	I0920 17:42:42.022703   46025 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0920 17:42:42.022887   46025 command_runner.go:130] > # stream_idle_timeout = ""
	I0920 17:42:42.022898   46025 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0920 17:42:42.022905   46025 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0920 17:42:42.022909   46025 command_runner.go:130] > # minutes.
	I0920 17:42:42.023048   46025 command_runner.go:130] > # stream_tls_cert = ""
	I0920 17:42:42.023061   46025 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0920 17:42:42.023068   46025 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0920 17:42:42.023254   46025 command_runner.go:130] > # stream_tls_key = ""
	I0920 17:42:42.023271   46025 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0920 17:42:42.023282   46025 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0920 17:42:42.023306   46025 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0920 17:42:42.023489   46025 command_runner.go:130] > # stream_tls_ca = ""
	I0920 17:42:42.023502   46025 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0920 17:42:42.023613   46025 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0920 17:42:42.023630   46025 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0920 17:42:42.023715   46025 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0920 17:42:42.023729   46025 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0920 17:42:42.023738   46025 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0920 17:42:42.023747   46025 command_runner.go:130] > [crio.runtime]
	I0920 17:42:42.023757   46025 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0920 17:42:42.023768   46025 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0920 17:42:42.023799   46025 command_runner.go:130] > # "nofile=1024:2048"
	I0920 17:42:42.023816   46025 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0920 17:42:42.023826   46025 command_runner.go:130] > # default_ulimits = [
	I0920 17:42:42.023932   46025 command_runner.go:130] > # ]
	I0920 17:42:42.023950   46025 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0920 17:42:42.024155   46025 command_runner.go:130] > # no_pivot = false
	I0920 17:42:42.024169   46025 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0920 17:42:42.024179   46025 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0920 17:42:42.024370   46025 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0920 17:42:42.024397   46025 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0920 17:42:42.024405   46025 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0920 17:42:42.024416   46025 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0920 17:42:42.024660   46025 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0920 17:42:42.024676   46025 command_runner.go:130] > # Cgroup setting for conmon
	I0920 17:42:42.024687   46025 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0920 17:42:42.024694   46025 command_runner.go:130] > conmon_cgroup = "pod"
	I0920 17:42:42.024704   46025 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0920 17:42:42.024712   46025 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0920 17:42:42.024722   46025 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0920 17:42:42.024732   46025 command_runner.go:130] > conmon_env = [
	I0920 17:42:42.024741   46025 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0920 17:42:42.024750   46025 command_runner.go:130] > ]
	I0920 17:42:42.024758   46025 command_runner.go:130] > # Additional environment variables to set for all the
	I0920 17:42:42.024767   46025 command_runner.go:130] > # containers. These are overridden if set in the
	I0920 17:42:42.024778   46025 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0920 17:42:42.024788   46025 command_runner.go:130] > # default_env = [
	I0920 17:42:42.024794   46025 command_runner.go:130] > # ]
	I0920 17:42:42.024805   46025 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0920 17:42:42.024817   46025 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I0920 17:42:42.024838   46025 command_runner.go:130] > # selinux = false
	I0920 17:42:42.024850   46025 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0920 17:42:42.024863   46025 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0920 17:42:42.024877   46025 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0920 17:42:42.024888   46025 command_runner.go:130] > # seccomp_profile = ""
	I0920 17:42:42.024898   46025 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0920 17:42:42.024911   46025 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0920 17:42:42.024924   46025 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0920 17:42:42.024935   46025 command_runner.go:130] > # which might increase security.
	I0920 17:42:42.024945   46025 command_runner.go:130] > # This option is currently deprecated,
	I0920 17:42:42.024957   46025 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0920 17:42:42.024967   46025 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0920 17:42:42.024977   46025 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0920 17:42:42.024989   46025 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0920 17:42:42.024997   46025 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0920 17:42:42.025010   46025 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0920 17:42:42.025020   46025 command_runner.go:130] > # This option supports live configuration reload.
	I0920 17:42:42.025034   46025 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0920 17:42:42.025046   46025 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0920 17:42:42.025055   46025 command_runner.go:130] > # the cgroup blockio controller.
	I0920 17:42:42.025066   46025 command_runner.go:130] > # blockio_config_file = ""
	I0920 17:42:42.025079   46025 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0920 17:42:42.025086   46025 command_runner.go:130] > # blockio parameters.
	I0920 17:42:42.025096   46025 command_runner.go:130] > # blockio_reload = false
	I0920 17:42:42.025106   46025 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0920 17:42:42.025115   46025 command_runner.go:130] > # irqbalance daemon.
	I0920 17:42:42.025126   46025 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0920 17:42:42.025137   46025 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0920 17:42:42.025149   46025 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0920 17:42:42.025162   46025 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0920 17:42:42.025170   46025 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0920 17:42:42.025182   46025 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0920 17:42:42.025190   46025 command_runner.go:130] > # This option supports live configuration reload.
	I0920 17:42:42.025209   46025 command_runner.go:130] > # rdt_config_file = ""
	I0920 17:42:42.025221   46025 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0920 17:42:42.025231   46025 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0920 17:42:42.025273   46025 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0920 17:42:42.025285   46025 command_runner.go:130] > # separate_pull_cgroup = ""
	I0920 17:42:42.025295   46025 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0920 17:42:42.025305   46025 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0920 17:42:42.025314   46025 command_runner.go:130] > # will be added.
	I0920 17:42:42.025321   46025 command_runner.go:130] > # default_capabilities = [
	I0920 17:42:42.025329   46025 command_runner.go:130] > # 	"CHOWN",
	I0920 17:42:42.025336   46025 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0920 17:42:42.025345   46025 command_runner.go:130] > # 	"FSETID",
	I0920 17:42:42.025350   46025 command_runner.go:130] > # 	"FOWNER",
	I0920 17:42:42.025357   46025 command_runner.go:130] > # 	"SETGID",
	I0920 17:42:42.025365   46025 command_runner.go:130] > # 	"SETUID",
	I0920 17:42:42.025370   46025 command_runner.go:130] > # 	"SETPCAP",
	I0920 17:42:42.025377   46025 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0920 17:42:42.025383   46025 command_runner.go:130] > # 	"KILL",
	I0920 17:42:42.025390   46025 command_runner.go:130] > # ]
	I0920 17:42:42.025402   46025 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0920 17:42:42.025415   46025 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0920 17:42:42.025426   46025 command_runner.go:130] > # add_inheritable_capabilities = false
	I0920 17:42:42.025437   46025 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0920 17:42:42.025448   46025 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0920 17:42:42.025455   46025 command_runner.go:130] > default_sysctls = [
	I0920 17:42:42.025469   46025 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0920 17:42:42.025477   46025 command_runner.go:130] > ]
	I0920 17:42:42.025485   46025 command_runner.go:130] > # List of devices on the host that a
	I0920 17:42:42.025494   46025 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0920 17:42:42.025503   46025 command_runner.go:130] > # allowed_devices = [
	I0920 17:42:42.025517   46025 command_runner.go:130] > # 	"/dev/fuse",
	I0920 17:42:42.025525   46025 command_runner.go:130] > # ]
	I0920 17:42:42.025533   46025 command_runner.go:130] > # List of additional devices, specified as
	I0920 17:42:42.025555   46025 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0920 17:42:42.025567   46025 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0920 17:42:42.025579   46025 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0920 17:42:42.025589   46025 command_runner.go:130] > # additional_devices = [
	I0920 17:42:42.025595   46025 command_runner.go:130] > # ]
	I0920 17:42:42.025605   46025 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0920 17:42:42.025614   46025 command_runner.go:130] > # cdi_spec_dirs = [
	I0920 17:42:42.025619   46025 command_runner.go:130] > # 	"/etc/cdi",
	I0920 17:42:42.025627   46025 command_runner.go:130] > # 	"/var/run/cdi",
	I0920 17:42:42.025632   46025 command_runner.go:130] > # ]
	I0920 17:42:42.025645   46025 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0920 17:42:42.025654   46025 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0920 17:42:42.025664   46025 command_runner.go:130] > # Defaults to false.
	I0920 17:42:42.025673   46025 command_runner.go:130] > # device_ownership_from_security_context = false
	I0920 17:42:42.025686   46025 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0920 17:42:42.025700   46025 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0920 17:42:42.025709   46025 command_runner.go:130] > # hooks_dir = [
	I0920 17:42:42.025716   46025 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0920 17:42:42.025724   46025 command_runner.go:130] > # ]
	I0920 17:42:42.025734   46025 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0920 17:42:42.025747   46025 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0920 17:42:42.025757   46025 command_runner.go:130] > # its default mounts from the following two files:
	I0920 17:42:42.025765   46025 command_runner.go:130] > #
	I0920 17:42:42.025775   46025 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0920 17:42:42.025787   46025 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0920 17:42:42.025799   46025 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0920 17:42:42.025807   46025 command_runner.go:130] > #
	I0920 17:42:42.025817   46025 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0920 17:42:42.025830   46025 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0920 17:42:42.025861   46025 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0920 17:42:42.025869   46025 command_runner.go:130] > #      only add mounts it finds in this file.
	I0920 17:42:42.025877   46025 command_runner.go:130] > #
	I0920 17:42:42.025884   46025 command_runner.go:130] > # default_mounts_file = ""
	I0920 17:42:42.025903   46025 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0920 17:42:42.025921   46025 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0920 17:42:42.025930   46025 command_runner.go:130] > pids_limit = 1024
	I0920 17:42:42.025938   46025 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0920 17:42:42.025949   46025 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0920 17:42:42.025959   46025 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0920 17:42:42.025974   46025 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0920 17:42:42.025983   46025 command_runner.go:130] > # log_size_max = -1
	I0920 17:42:42.025993   46025 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0920 17:42:42.026004   46025 command_runner.go:130] > # log_to_journald = false
	I0920 17:42:42.026014   46025 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0920 17:42:42.026027   46025 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0920 17:42:42.026038   46025 command_runner.go:130] > # Path to directory for container attach sockets.
	I0920 17:42:42.026045   46025 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0920 17:42:42.026057   46025 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0920 17:42:42.026064   46025 command_runner.go:130] > # bind_mount_prefix = ""
	I0920 17:42:42.026071   46025 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0920 17:42:42.026080   46025 command_runner.go:130] > # read_only = false
	I0920 17:42:42.026090   46025 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0920 17:42:42.026102   46025 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0920 17:42:42.026112   46025 command_runner.go:130] > # live configuration reload.
	I0920 17:42:42.026118   46025 command_runner.go:130] > # log_level = "info"
	I0920 17:42:42.026130   46025 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0920 17:42:42.026137   46025 command_runner.go:130] > # This option supports live configuration reload.
	I0920 17:42:42.026147   46025 command_runner.go:130] > # log_filter = ""
	I0920 17:42:42.026157   46025 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0920 17:42:42.026170   46025 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0920 17:42:42.026178   46025 command_runner.go:130] > # separated by comma.
	I0920 17:42:42.026188   46025 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0920 17:42:42.026198   46025 command_runner.go:130] > # uid_mappings = ""
	I0920 17:42:42.026208   46025 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0920 17:42:42.026220   46025 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0920 17:42:42.026230   46025 command_runner.go:130] > # separated by comma.
	I0920 17:42:42.026246   46025 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0920 17:42:42.026256   46025 command_runner.go:130] > # gid_mappings = ""
	I0920 17:42:42.026265   46025 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0920 17:42:42.026278   46025 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0920 17:42:42.026290   46025 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0920 17:42:42.026307   46025 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0920 17:42:42.026316   46025 command_runner.go:130] > # minimum_mappable_uid = -1
	I0920 17:42:42.026326   46025 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0920 17:42:42.026337   46025 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0920 17:42:42.026349   46025 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0920 17:42:42.026363   46025 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0920 17:42:42.026372   46025 command_runner.go:130] > # minimum_mappable_gid = -1
	I0920 17:42:42.026381   46025 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0920 17:42:42.026393   46025 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0920 17:42:42.026405   46025 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0920 17:42:42.026414   46025 command_runner.go:130] > # ctr_stop_timeout = 30
	I0920 17:42:42.026423   46025 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0920 17:42:42.026436   46025 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0920 17:42:42.026447   46025 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0920 17:42:42.026459   46025 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0920 17:42:42.026466   46025 command_runner.go:130] > drop_infra_ctr = false
	I0920 17:42:42.026479   46025 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0920 17:42:42.026492   46025 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0920 17:42:42.026503   46025 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0920 17:42:42.026518   46025 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0920 17:42:42.026531   46025 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0920 17:42:42.026541   46025 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0920 17:42:42.026552   46025 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0920 17:42:42.026563   46025 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0920 17:42:42.026573   46025 command_runner.go:130] > # shared_cpuset = ""
	I0920 17:42:42.026582   46025 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0920 17:42:42.026593   46025 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0920 17:42:42.026603   46025 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0920 17:42:42.026617   46025 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0920 17:42:42.026626   46025 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0920 17:42:42.026634   46025 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0920 17:42:42.026646   46025 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0920 17:42:42.026655   46025 command_runner.go:130] > # enable_criu_support = false
	I0920 17:42:42.026662   46025 command_runner.go:130] > # Enable/disable the generation of the container,
	I0920 17:42:42.026679   46025 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0920 17:42:42.026689   46025 command_runner.go:130] > # enable_pod_events = false
	I0920 17:42:42.026699   46025 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0920 17:42:42.026711   46025 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0920 17:42:42.026723   46025 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0920 17:42:42.026732   46025 command_runner.go:130] > # default_runtime = "runc"
	I0920 17:42:42.026740   46025 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0920 17:42:42.026754   46025 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0920 17:42:42.026773   46025 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0920 17:42:42.026784   46025 command_runner.go:130] > # creation as a file is not desired either.
	I0920 17:42:42.026798   46025 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0920 17:42:42.026808   46025 command_runner.go:130] > # the hostname is being managed dynamically.
	I0920 17:42:42.026815   46025 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0920 17:42:42.026823   46025 command_runner.go:130] > # ]
	I0920 17:42:42.026833   46025 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0920 17:42:42.026845   46025 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0920 17:42:42.026857   46025 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0920 17:42:42.026865   46025 command_runner.go:130] > # Each entry in the table should follow the format:
	I0920 17:42:42.026873   46025 command_runner.go:130] > #
	I0920 17:42:42.026884   46025 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0920 17:42:42.026891   46025 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0920 17:42:42.026918   46025 command_runner.go:130] > # runtime_type = "oci"
	I0920 17:42:42.026929   46025 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0920 17:42:42.026938   46025 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0920 17:42:42.026948   46025 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0920 17:42:42.026960   46025 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0920 17:42:42.026967   46025 command_runner.go:130] > # monitor_env = []
	I0920 17:42:42.026979   46025 command_runner.go:130] > # privileged_without_host_devices = false
	I0920 17:42:42.026986   46025 command_runner.go:130] > # allowed_annotations = []
	I0920 17:42:42.026997   46025 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0920 17:42:42.027003   46025 command_runner.go:130] > # Where:
	I0920 17:42:42.027013   46025 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0920 17:42:42.027025   46025 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0920 17:42:42.027037   46025 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0920 17:42:42.027050   46025 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0920 17:42:42.027059   46025 command_runner.go:130] > #   in $PATH.
	I0920 17:42:42.027070   46025 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0920 17:42:42.027082   46025 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0920 17:42:42.027100   46025 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0920 17:42:42.027109   46025 command_runner.go:130] > #   state.
	I0920 17:42:42.027120   46025 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0920 17:42:42.027133   46025 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0920 17:42:42.027145   46025 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0920 17:42:42.027156   46025 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0920 17:42:42.027168   46025 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0920 17:42:42.027181   46025 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0920 17:42:42.027189   46025 command_runner.go:130] > #   The currently recognized values are:
	I0920 17:42:42.027202   46025 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0920 17:42:42.027217   46025 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0920 17:42:42.027230   46025 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0920 17:42:42.027246   46025 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0920 17:42:42.027261   46025 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0920 17:42:42.027275   46025 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0920 17:42:42.027289   46025 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0920 17:42:42.027302   46025 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0920 17:42:42.027314   46025 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0920 17:42:42.027325   46025 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0920 17:42:42.027335   46025 command_runner.go:130] > #   deprecated option "conmon".
	I0920 17:42:42.027345   46025 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0920 17:42:42.027356   46025 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0920 17:42:42.027367   46025 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0920 17:42:42.027378   46025 command_runner.go:130] > #   should be moved to the container's cgroup
	I0920 17:42:42.027390   46025 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0920 17:42:42.027401   46025 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0920 17:42:42.027416   46025 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0920 17:42:42.027427   46025 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0920 17:42:42.027435   46025 command_runner.go:130] > #
	I0920 17:42:42.027443   46025 command_runner.go:130] > # Using the seccomp notifier feature:
	I0920 17:42:42.027451   46025 command_runner.go:130] > #
	I0920 17:42:42.027461   46025 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0920 17:42:42.027473   46025 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0920 17:42:42.027481   46025 command_runner.go:130] > #
	I0920 17:42:42.027496   46025 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0920 17:42:42.027515   46025 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0920 17:42:42.027524   46025 command_runner.go:130] > #
	I0920 17:42:42.027535   46025 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0920 17:42:42.027543   46025 command_runner.go:130] > # feature.
	I0920 17:42:42.027549   46025 command_runner.go:130] > #
	I0920 17:42:42.027562   46025 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0920 17:42:42.027576   46025 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0920 17:42:42.027588   46025 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0920 17:42:42.027602   46025 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0920 17:42:42.027614   46025 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0920 17:42:42.027618   46025 command_runner.go:130] > #
	I0920 17:42:42.027631   46025 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0920 17:42:42.027643   46025 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0920 17:42:42.027652   46025 command_runner.go:130] > #
	I0920 17:42:42.027680   46025 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0920 17:42:42.027696   46025 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0920 17:42:42.027701   46025 command_runner.go:130] > #
	I0920 17:42:42.027711   46025 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0920 17:42:42.027722   46025 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0920 17:42:42.027727   46025 command_runner.go:130] > # limitation.
	I0920 17:42:42.027736   46025 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0920 17:42:42.027745   46025 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0920 17:42:42.027752   46025 command_runner.go:130] > runtime_type = "oci"
	I0920 17:42:42.027760   46025 command_runner.go:130] > runtime_root = "/run/runc"
	I0920 17:42:42.027769   46025 command_runner.go:130] > runtime_config_path = ""
	I0920 17:42:42.027781   46025 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0920 17:42:42.027791   46025 command_runner.go:130] > monitor_cgroup = "pod"
	I0920 17:42:42.027797   46025 command_runner.go:130] > monitor_exec_cgroup = ""
	I0920 17:42:42.027806   46025 command_runner.go:130] > monitor_env = [
	I0920 17:42:42.027815   46025 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0920 17:42:42.027822   46025 command_runner.go:130] > ]
	I0920 17:42:42.027828   46025 command_runner.go:130] > privileged_without_host_devices = false
	I0920 17:42:42.027840   46025 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0920 17:42:42.027851   46025 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0920 17:42:42.027863   46025 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0920 17:42:42.027880   46025 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I0920 17:42:42.027895   46025 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0920 17:42:42.027906   46025 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0920 17:42:42.027930   46025 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0920 17:42:42.027945   46025 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0920 17:42:42.027957   46025 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0920 17:42:42.027971   46025 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0920 17:42:42.027980   46025 command_runner.go:130] > # Example:
	I0920 17:42:42.027988   46025 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0920 17:42:42.027999   46025 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0920 17:42:42.028008   46025 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0920 17:42:42.028016   46025 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0920 17:42:42.028025   46025 command_runner.go:130] > # cpuset = 0
	I0920 17:42:42.028031   46025 command_runner.go:130] > # cpushares = "0-1"
	I0920 17:42:42.028040   46025 command_runner.go:130] > # Where:
	I0920 17:42:42.028048   46025 command_runner.go:130] > # The workload name is workload-type.
	I0920 17:42:42.028063   46025 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0920 17:42:42.028078   46025 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0920 17:42:42.028092   46025 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0920 17:42:42.028108   46025 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0920 17:42:42.028119   46025 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
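Concretely, the opt-in described above is just two annotations on the pod. A small sketch, assuming the example [crio.runtime.workloads.workload-type] table from the comments (the container name and cpushares value are hypothetical):

// Sketch: pod annotations for the example workload table above.
// The activation annotation opts the pod in (value ignored); the prefixed
// per-container annotation overrides one resource for one container.
package main

import "fmt"

func main() {
	annotations := map[string]string{
		"io.crio/workload":             "",                     // activation_annotation (key only)
		"io.crio.workload-type/my-ctr": `{"cpushares": "512"}`, // per-container override (hypothetical)
	}
	fmt.Println(annotations)
}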
	I0920 17:42:42.028131   46025 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0920 17:42:42.028144   46025 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0920 17:42:42.028155   46025 command_runner.go:130] > # Default value is set to true
	I0920 17:42:42.028163   46025 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0920 17:42:42.028174   46025 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0920 17:42:42.028182   46025 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0920 17:42:42.028193   46025 command_runner.go:130] > # Default value is set to 'false'
	I0920 17:42:42.028201   46025 command_runner.go:130] > # disable_hostport_mapping = false
	I0920 17:42:42.028215   46025 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0920 17:42:42.028223   46025 command_runner.go:130] > #
	I0920 17:42:42.028233   46025 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0920 17:42:42.028246   46025 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0920 17:42:42.028259   46025 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0920 17:42:42.028268   46025 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0920 17:42:42.028277   46025 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0920 17:42:42.028283   46025 command_runner.go:130] > [crio.image]
	I0920 17:42:42.028291   46025 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0920 17:42:42.028298   46025 command_runner.go:130] > # default_transport = "docker://"
	I0920 17:42:42.028312   46025 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0920 17:42:42.028322   46025 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0920 17:42:42.028329   46025 command_runner.go:130] > # global_auth_file = ""
	I0920 17:42:42.028336   46025 command_runner.go:130] > # The image used to instantiate infra containers.
	I0920 17:42:42.028343   46025 command_runner.go:130] > # This option supports live configuration reload.
	I0920 17:42:42.028349   46025 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0920 17:42:42.028357   46025 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0920 17:42:42.028365   46025 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0920 17:42:42.028373   46025 command_runner.go:130] > # This option supports live configuration reload.
	I0920 17:42:42.028379   46025 command_runner.go:130] > # pause_image_auth_file = ""
	I0920 17:42:42.028387   46025 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0920 17:42:42.028396   46025 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0920 17:42:42.028407   46025 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0920 17:42:42.028416   46025 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0920 17:42:42.028423   46025 command_runner.go:130] > # pause_command = "/pause"
	I0920 17:42:42.028432   46025 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0920 17:42:42.028441   46025 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0920 17:42:42.028450   46025 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0920 17:42:42.028459   46025 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0920 17:42:42.028467   46025 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0920 17:42:42.028476   46025 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0920 17:42:42.028482   46025 command_runner.go:130] > # pinned_images = [
	I0920 17:42:42.028486   46025 command_runner.go:130] > # ]
	I0920 17:42:42.028494   46025 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0920 17:42:42.028504   46025 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0920 17:42:42.028522   46025 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0920 17:42:42.028534   46025 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0920 17:42:42.028546   46025 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0920 17:42:42.028556   46025 command_runner.go:130] > # signature_policy = ""
	I0920 17:42:42.028565   46025 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0920 17:42:42.028579   46025 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0920 17:42:42.028592   46025 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0920 17:42:42.028602   46025 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0920 17:42:42.028614   46025 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0920 17:42:42.028624   46025 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0920 17:42:42.028636   46025 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0920 17:42:42.028655   46025 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0920 17:42:42.028664   46025 command_runner.go:130] > # changing them here.
	I0920 17:42:42.028670   46025 command_runner.go:130] > # insecure_registries = [
	I0920 17:42:42.028677   46025 command_runner.go:130] > # ]
	I0920 17:42:42.028686   46025 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0920 17:42:42.028696   46025 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0920 17:42:42.028706   46025 command_runner.go:130] > # image_volumes = "mkdir"
	I0920 17:42:42.028714   46025 command_runner.go:130] > # Temporary directory to use for storing big files
	I0920 17:42:42.028723   46025 command_runner.go:130] > # big_files_temporary_dir = ""
	I0920 17:42:42.028734   46025 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0920 17:42:42.028742   46025 command_runner.go:130] > # CNI plugins.
	I0920 17:42:42.028747   46025 command_runner.go:130] > [crio.network]
	I0920 17:42:42.028756   46025 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0920 17:42:42.028771   46025 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I0920 17:42:42.028780   46025 command_runner.go:130] > # cni_default_network = ""
	I0920 17:42:42.028789   46025 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0920 17:42:42.028798   46025 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0920 17:42:42.028805   46025 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0920 17:42:42.028813   46025 command_runner.go:130] > # plugin_dirs = [
	I0920 17:42:42.028820   46025 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0920 17:42:42.028827   46025 command_runner.go:130] > # ]
	I0920 17:42:42.028836   46025 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0920 17:42:42.028844   46025 command_runner.go:130] > [crio.metrics]
	I0920 17:42:42.028852   46025 command_runner.go:130] > # Globally enable or disable metrics support.
	I0920 17:42:42.028860   46025 command_runner.go:130] > enable_metrics = true
	I0920 17:42:42.028868   46025 command_runner.go:130] > # Specify enabled metrics collectors.
	I0920 17:42:42.028877   46025 command_runner.go:130] > # Per default all metrics are enabled.
	I0920 17:42:42.028887   46025 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0920 17:42:42.028898   46025 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0920 17:42:42.028907   46025 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0920 17:42:42.028915   46025 command_runner.go:130] > # metrics_collectors = [
	I0920 17:42:42.028924   46025 command_runner.go:130] > # 	"operations",
	I0920 17:42:42.028931   46025 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0920 17:42:42.028941   46025 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0920 17:42:42.028948   46025 command_runner.go:130] > # 	"operations_errors",
	I0920 17:42:42.028957   46025 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0920 17:42:42.028964   46025 command_runner.go:130] > # 	"image_pulls_by_name",
	I0920 17:42:42.028973   46025 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0920 17:42:42.028980   46025 command_runner.go:130] > # 	"image_pulls_failures",
	I0920 17:42:42.028989   46025 command_runner.go:130] > # 	"image_pulls_successes",
	I0920 17:42:42.028996   46025 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0920 17:42:42.029005   46025 command_runner.go:130] > # 	"image_layer_reuse",
	I0920 17:42:42.029018   46025 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0920 17:42:42.029031   46025 command_runner.go:130] > # 	"containers_oom_total",
	I0920 17:42:42.029039   46025 command_runner.go:130] > # 	"containers_oom",
	I0920 17:42:42.029045   46025 command_runner.go:130] > # 	"processes_defunct",
	I0920 17:42:42.029054   46025 command_runner.go:130] > # 	"operations_total",
	I0920 17:42:42.029061   46025 command_runner.go:130] > # 	"operations_latency_seconds",
	I0920 17:42:42.029071   46025 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0920 17:42:42.029077   46025 command_runner.go:130] > # 	"operations_errors_total",
	I0920 17:42:42.029088   46025 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0920 17:42:42.029108   46025 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0920 17:42:42.029118   46025 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0920 17:42:42.029129   46025 command_runner.go:130] > # 	"image_pulls_success_total",
	I0920 17:42:42.029138   46025 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0920 17:42:42.029145   46025 command_runner.go:130] > # 	"containers_oom_count_total",
	I0920 17:42:42.029155   46025 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0920 17:42:42.029163   46025 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0920 17:42:42.029170   46025 command_runner.go:130] > # ]
	I0920 17:42:42.029177   46025 command_runner.go:130] > # The port on which the metrics server will listen.
	I0920 17:42:42.029186   46025 command_runner.go:130] > # metrics_port = 9090
	I0920 17:42:42.029193   46025 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0920 17:42:42.029202   46025 command_runner.go:130] > # metrics_socket = ""
	I0920 17:42:42.029211   46025 command_runner.go:130] > # The certificate for the secure metrics server.
	I0920 17:42:42.029223   46025 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0920 17:42:42.029236   46025 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0920 17:42:42.029245   46025 command_runner.go:130] > # certificate on any modification event.
	I0920 17:42:42.029251   46025 command_runner.go:130] > # metrics_cert = ""
	I0920 17:42:42.029260   46025 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0920 17:42:42.029269   46025 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0920 17:42:42.029278   46025 command_runner.go:130] > # metrics_key = ""
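With enable_metrics = true and the default metrics_port of 9090, the exporter can be scraped directly from the node. A rough sketch; the localhost address and the /metrics path are assumptions, not shown in this log:

// Sketch: fetch CRI-O's Prometheus metrics from the node.
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	resp, err := http.Get("http://127.0.0.1:9090/metrics") // port from metrics_port above
	if err != nil {
		fmt.Println("scrape failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// collectors such as "operations_total" are exposed with the crio_ prefix
	fmt.Printf("%d bytes of metrics, e.g. crio_operations_total\n", len(body))
}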
	I0920 17:42:42.029289   46025 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0920 17:42:42.029299   46025 command_runner.go:130] > [crio.tracing]
	I0920 17:42:42.029311   46025 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0920 17:42:42.029321   46025 command_runner.go:130] > # enable_tracing = false
	I0920 17:42:42.029340   46025 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0920 17:42:42.029350   46025 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0920 17:42:42.029362   46025 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0920 17:42:42.029373   46025 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0920 17:42:42.029381   46025 command_runner.go:130] > # CRI-O NRI configuration.
	I0920 17:42:42.029389   46025 command_runner.go:130] > [crio.nri]
	I0920 17:42:42.029399   46025 command_runner.go:130] > # Globally enable or disable NRI.
	I0920 17:42:42.029407   46025 command_runner.go:130] > # enable_nri = false
	I0920 17:42:42.029414   46025 command_runner.go:130] > # NRI socket to listen on.
	I0920 17:42:42.029423   46025 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0920 17:42:42.029429   46025 command_runner.go:130] > # NRI plugin directory to use.
	I0920 17:42:42.029438   46025 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0920 17:42:42.029452   46025 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0920 17:42:42.029463   46025 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0920 17:42:42.029475   46025 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0920 17:42:42.029485   46025 command_runner.go:130] > # nri_disable_connections = false
	I0920 17:42:42.029492   46025 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0920 17:42:42.029501   46025 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0920 17:42:42.029515   46025 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0920 17:42:42.029524   46025 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0920 17:42:42.029537   46025 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0920 17:42:42.029545   46025 command_runner.go:130] > [crio.stats]
	I0920 17:42:42.029555   46025 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0920 17:42:42.029566   46025 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0920 17:42:42.029575   46025 command_runner.go:130] > # stats_collection_period = 0
	I0920 17:42:42.029883   46025 command_runner.go:130] ! time="2024-09-20 17:42:41.985103444Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0920 17:42:42.029910   46025 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0920 17:42:42.029987   46025 cni.go:84] Creating CNI manager for ""
	I0920 17:42:42.029999   46025 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0920 17:42:42.030051   46025 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 17:42:42.030086   46025 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.115 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-592246 NodeName:multinode-592246 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.115"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.115 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 17:42:42.030261   46025 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.115
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-592246"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.115
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.115"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
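	One sanity check implied by the generated config above: the podSubnet (10.244.0.0/16) and serviceSubnet (10.96.0.0/12) must not overlap. A small sketch of that check using only the standard library:

// Sketch: verify the podSubnet and serviceSubnet from the kubeadm config above
// are disjoint (values copied from the generated config).
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	pod := netip.MustParsePrefix("10.244.0.0/16")
	svc := netip.MustParsePrefix("10.96.0.0/12")
	if pod.Overlaps(svc) {
		fmt.Println("pod and service CIDRs overlap")
		return
	}
	fmt.Println("pod and service CIDRs are disjoint")
}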
	
	I0920 17:42:42.030338   46025 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 17:42:42.041470   46025 command_runner.go:130] > kubeadm
	I0920 17:42:42.041497   46025 command_runner.go:130] > kubectl
	I0920 17:42:42.041501   46025 command_runner.go:130] > kubelet
	I0920 17:42:42.041522   46025 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 17:42:42.041576   46025 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 17:42:42.051550   46025 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0920 17:42:42.069658   46025 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 17:42:42.087059   46025 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0920 17:42:42.104520   46025 ssh_runner.go:195] Run: grep 192.168.39.115	control-plane.minikube.internal$ /etc/hosts
	I0920 17:42:42.108429   46025 command_runner.go:130] > 192.168.39.115	control-plane.minikube.internal
	I0920 17:42:42.108518   46025 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 17:42:42.257045   46025 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 17:42:42.272581   46025 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/multinode-592246 for IP: 192.168.39.115
	I0920 17:42:42.272607   46025 certs.go:194] generating shared ca certs ...
	I0920 17:42:42.272623   46025 certs.go:226] acquiring lock for ca certs: {Name:mkc7ef6c737c6bdc3fdd9dcff8f57029c020d8f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:42:42.272775   46025 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key
	I0920 17:42:42.272815   46025 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key
	I0920 17:42:42.272824   46025 certs.go:256] generating profile certs ...
	I0920 17:42:42.272898   46025 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/multinode-592246/client.key
	I0920 17:42:42.272955   46025 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/multinode-592246/apiserver.key.bdc96fd7
	I0920 17:42:42.272989   46025 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/multinode-592246/proxy-client.key
	I0920 17:42:42.272999   46025 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0920 17:42:42.273018   46025 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0920 17:42:42.273033   46025 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0920 17:42:42.273047   46025 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0920 17:42:42.273061   46025 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/multinode-592246/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0920 17:42:42.273071   46025 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/multinode-592246/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0920 17:42:42.273081   46025 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/multinode-592246/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0920 17:42:42.273090   46025 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/multinode-592246/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0920 17:42:42.273140   46025 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973.pem (1338 bytes)
	W0920 17:42:42.273163   46025 certs.go:480] ignoring /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973_empty.pem, impossibly tiny 0 bytes
	I0920 17:42:42.273171   46025 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 17:42:42.273247   46025 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem (1082 bytes)
	I0920 17:42:42.273283   46025 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem (1123 bytes)
	I0920 17:42:42.273309   46025 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem (1675 bytes)
	I0920 17:42:42.273349   46025 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem (1708 bytes)
	I0920 17:42:42.273377   46025 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973.pem -> /usr/share/ca-certificates/15973.pem
	I0920 17:42:42.273392   46025 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem -> /usr/share/ca-certificates/159732.pem
	I0920 17:42:42.273405   46025 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:42:42.274045   46025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 17:42:42.300027   46025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 17:42:42.325247   46025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 17:42:42.350349   46025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0920 17:42:42.376989   46025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/multinode-592246/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0920 17:42:42.402493   46025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/multinode-592246/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 17:42:42.427495   46025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/multinode-592246/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 17:42:42.452423   46025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/multinode-592246/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0920 17:42:42.478154   46025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973.pem --> /usr/share/ca-certificates/15973.pem (1338 bytes)
	I0920 17:42:42.503412   46025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem --> /usr/share/ca-certificates/159732.pem (1708 bytes)
	I0920 17:42:42.527851   46025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 17:42:42.553486   46025 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 17:42:42.584354   46025 ssh_runner.go:195] Run: openssl version
	I0920 17:42:42.603197   46025 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0920 17:42:42.603423   46025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/159732.pem && ln -fs /usr/share/ca-certificates/159732.pem /etc/ssl/certs/159732.pem"
	I0920 17:42:42.649844   46025 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/159732.pem
	I0920 17:42:42.661820   46025 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 20 17:04 /usr/share/ca-certificates/159732.pem
	I0920 17:42:42.661881   46025 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 17:04 /usr/share/ca-certificates/159732.pem
	I0920 17:42:42.661991   46025 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/159732.pem
	I0920 17:42:42.671437   46025 command_runner.go:130] > 3ec20f2e
	I0920 17:42:42.671852   46025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/159732.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 17:42:42.687660   46025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 17:42:42.701020   46025 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:42:42.706033   46025 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 20 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:42:42.706243   46025 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:42:42.706313   46025 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:42:42.715744   46025 command_runner.go:130] > b5213941
	I0920 17:42:42.716007   46025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 17:42:42.729389   46025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15973.pem && ln -fs /usr/share/ca-certificates/15973.pem /etc/ssl/certs/15973.pem"
	I0920 17:42:42.744957   46025 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15973.pem
	I0920 17:42:42.750133   46025 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 20 17:04 /usr/share/ca-certificates/15973.pem
	I0920 17:42:42.750404   46025 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 17:04 /usr/share/ca-certificates/15973.pem
	I0920 17:42:42.750455   46025 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15973.pem
	I0920 17:42:42.756871   46025 command_runner.go:130] > 51391683
	I0920 17:42:42.757156   46025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15973.pem /etc/ssl/certs/51391683.0"
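Each ls/openssl/ln sequence above follows the same pattern: compute the certificate's subject hash and create the /etc/ssl/certs/<hash>.0 symlink that OpenSSL uses for CA lookup. A rough Go equivalent (the path is hypothetical; it shells out to openssl just as the log does):

// Sketch: mirror the "openssl x509 -hash" + "ln -fs" steps from the log.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func linkBySubjectHash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))           // e.g. "b5213941"
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash) // ".0" suffix as in the log
	_ = os.Remove(link)                              // emulate ln -f
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}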
	I0920 17:42:42.773414   46025 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 17:42:42.778668   46025 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 17:42:42.778698   46025 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0920 17:42:42.778708   46025 command_runner.go:130] > Device: 253,1	Inode: 7337000     Links: 1
	I0920 17:42:42.778718   46025 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0920 17:42:42.778730   46025 command_runner.go:130] > Access: 2024-09-20 17:35:53.092700305 +0000
	I0920 17:42:42.778737   46025 command_runner.go:130] > Modify: 2024-09-20 17:35:53.092700305 +0000
	I0920 17:42:42.778747   46025 command_runner.go:130] > Change: 2024-09-20 17:35:53.092700305 +0000
	I0920 17:42:42.778753   46025 command_runner.go:130] >  Birth: 2024-09-20 17:35:53.092700305 +0000
	I0920 17:42:42.779015   46025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 17:42:42.786116   46025 command_runner.go:130] > Certificate will not expire
	I0920 17:42:42.786459   46025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 17:42:42.793991   46025 command_runner.go:130] > Certificate will not expire
	I0920 17:42:42.794089   46025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 17:42:42.802589   46025 command_runner.go:130] > Certificate will not expire
	I0920 17:42:42.802874   46025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 17:42:42.810923   46025 command_runner.go:130] > Certificate will not expire
	I0920 17:42:42.811068   46025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 17:42:42.824274   46025 command_runner.go:130] > Certificate will not expire
	I0920 17:42:42.824367   46025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0920 17:42:42.834371   46025 command_runner.go:130] > Certificate will not expire
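The repeated "openssl x509 -noout -checkend 86400" calls above simply ask whether a certificate expires within the next 24 hours. The same check can be sketched with crypto/x509 instead of shelling out (the path is one of the files checked above):

// Sketch: replicate "openssl x509 -noout -checkend 86400" with crypto/x509.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Println(err)
		return
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Println("no PEM data found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println(err)
		return
	}
	// -checkend 86400: does the cert expire within the next 86400 seconds?
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("Certificate will expire")
	} else {
		fmt.Println("Certificate will not expire")
	}
}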
	I0920 17:42:42.834899   46025 kubeadm.go:392] StartCluster: {Name:multinode-592246 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
1 ClusterName:multinode-592246 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.115 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.170 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.38 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:f
alse istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 17:42:42.835027   46025 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 17:42:42.835093   46025 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 17:42:42.913410   46025 command_runner.go:130] > 06e751c0e9afd49236f121119e05987ce6dec537fd429e6ca9bfafba0bb41a64
	I0920 17:42:42.913433   46025 command_runner.go:130] > aebea8aa76badc1c9b60fc60756c59dd82a7f8fbbc1e86ced5dc5516bf961e35
	I0920 17:42:42.913439   46025 command_runner.go:130] > c45d2aa0f63816d71c71ef16db9b7b26dc627f72d6ef7258b93c23005c445f9d
	I0920 17:42:42.913449   46025 command_runner.go:130] > 33cbb2c4dce5862b3f2c4ce603b1d94a12c4e89fe1a398e64490f33c2a491720
	I0920 17:42:42.913457   46025 command_runner.go:130] > 43d580bf9876b78f6a151a49a9fc863e01628a121c31c8106030c76a6949e273
	I0920 17:42:42.913474   46025 command_runner.go:130] > 18612a28ae5022d41607cfde2996aff0acffaa060ef96c055b522b167dc900d3
	I0920 17:42:42.913483   46025 command_runner.go:130] > 9a4606e22266020766dc8dbc189c0f65cdd349f031c39b0a7d106cf0781cbee8
	I0920 17:42:42.913651   46025 command_runner.go:130] > 33ec6262554fcd5394f13c012ea27e8a9bf53faf30d5cb85237b4a3496811c0e
	I0920 17:42:42.913866   46025 command_runner.go:130] > ca5e246374ae005eb084399c79df21d8dc78d6218c643826ddfec1d4d1ff26d8
	I0920 17:42:42.916558   46025 cri.go:89] found id: "06e751c0e9afd49236f121119e05987ce6dec537fd429e6ca9bfafba0bb41a64"
	I0920 17:42:42.916585   46025 cri.go:89] found id: "aebea8aa76badc1c9b60fc60756c59dd82a7f8fbbc1e86ced5dc5516bf961e35"
	I0920 17:42:42.916592   46025 cri.go:89] found id: "c45d2aa0f63816d71c71ef16db9b7b26dc627f72d6ef7258b93c23005c445f9d"
	I0920 17:42:42.916598   46025 cri.go:89] found id: "33cbb2c4dce5862b3f2c4ce603b1d94a12c4e89fe1a398e64490f33c2a491720"
	I0920 17:42:42.916604   46025 cri.go:89] found id: "43d580bf9876b78f6a151a49a9fc863e01628a121c31c8106030c76a6949e273"
	I0920 17:42:42.916610   46025 cri.go:89] found id: "18612a28ae5022d41607cfde2996aff0acffaa060ef96c055b522b167dc900d3"
	I0920 17:42:42.916615   46025 cri.go:89] found id: "9a4606e22266020766dc8dbc189c0f65cdd349f031c39b0a7d106cf0781cbee8"
	I0920 17:42:42.916621   46025 cri.go:89] found id: "33ec6262554fcd5394f13c012ea27e8a9bf53faf30d5cb85237b4a3496811c0e"
	I0920 17:42:42.916627   46025 cri.go:89] found id: "ca5e246374ae005eb084399c79df21d8dc78d6218c643826ddfec1d4d1ff26d8"
	I0920 17:42:42.916637   46025 cri.go:89] found id: ""
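The container IDs listed above come from a single crictl invocation filtered on the kube-system namespace label. A minimal sketch of the same call, meant to run on the node with crictl installed:

// Sketch: list kube-system container IDs the same way the log does.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		fmt.Println(err)
		return
	}
	for _, id := range strings.Fields(string(out)) {
		fmt.Println("found id:", id)
	}
}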
	I0920 17:42:42.916701   46025 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Sep 20 17:44:34 multinode-592246 crio[2694]: time="2024-09-20 17:44:34.490305580Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8b4e30ed-ab33-4f10-b14a-ad203a6312f7 name=/runtime.v1.RuntimeService/Version
	Sep 20 17:44:34 multinode-592246 crio[2694]: time="2024-09-20 17:44:34.495299731Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=debb2177-f858-4c9d-8a24-5ed5d9388310 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 17:44:34 multinode-592246 crio[2694]: time="2024-09-20 17:44:34.495871157Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726854274495843249,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=debb2177-f858-4c9d-8a24-5ed5d9388310 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 17:44:34 multinode-592246 crio[2694]: time="2024-09-20 17:44:34.496569900Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f92cfcc5-8b09-4a8f-86d4-90dfae8f6c90 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:44:34 multinode-592246 crio[2694]: time="2024-09-20 17:44:34.496682281Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f92cfcc5-8b09-4a8f-86d4-90dfae8f6c90 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:44:34 multinode-592246 crio[2694]: time="2024-09-20 17:44:34.497059718Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b19b5d96c04f0998e9fb45f7ad94c150df0f2ab59724158e66ebb26e7f499c8e,PodSandboxId:5a993de08f9bef4380dbc45284a76dc665caa9a5a496c66fcc7b9f0ae83b9b8c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726854202702510290,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-wpfrr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 05422264-8765-46ab-bdf4-b78921ada4a5,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cbddb2e724ca8bd2c53e9dd5af73952dfe43e5d4ac79bd580e041a6d7aede69,PodSandboxId:ede81570dbdce75634a199a255cb7dd0b3c5c88896631a79b66dc0beece1baae,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726854176395029369,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zfr9g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f2f8be4-982b-4da8-a0fa-321348cd1a9b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29feefea72c1ac5a313ba5ab4a409ea549c57c7b56bd1658bb702180ce6bc03e,PodSandboxId:24d3fe772b46d9ad291951c5d621c9cc41a42a1f1fb74190199b21694e18206e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726854169394764230,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d66d7bc
d-aa8f-480e-8fd3-fb291ed97a09,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ccbb8ac9634336f88fdd6b655a976f2ff3bf355d24c77128abb0afa2d10e57b,PodSandboxId:7262b866f175e3e9cf9fe9ec65c89d8dec9f99d4da80612b4337c9162f7bf3cc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726854169214990857,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cknvs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd34a408-0e2d-4f85-819f-d99de8948804,},A
nnotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79bea98b7a32af8003c2b53e82f3bcedd25d46534ba4c58b1562a1d4dd62899f,PodSandboxId:6b0a562ee52e5ac9a7f5f030e301e935e51cbda8efd24205ccccc430f531ffdb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726854169317714960,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-sggtt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 116fdcff-31e2-4138-907e-17265f19795a,},Annotations:map[string]string{io.ku
bernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a49b273fff83658fbf7e813b47a71246aa5dc45e49d5cc2db55d579f621cd95,PodSandboxId:842ed051ffcd16391da8d46e90365e0a91454cc5a12a35c6368692e68b72d517,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726854169164374591,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-592246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d64fadc298f2cbad9993623dd59110d,},Annotations:map[string
]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d35c011af5ed95ef938176fd7f24330921876bc721a2c8d4a93fd8a65d020d0,PodSandboxId:63174a48a0af89838bfd988930e0e65dfece3fa5159362c3b81a7ab54b71b226,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726854169142256115,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-592246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ea9e608bd76e5a2abeb0f2985e4ffd4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3
fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa7a4ac935e0470da3205f280946a5075ad00d4d398edf2ecbd0d6b72f3fa634,PodSandboxId:9fc32f59cbb6f67f9be565b8132e9a611918e5dcaea3f931e987cbb0e3a02bab,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726854169108363071,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-592246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b329d23e42855b3fc45631c897e94259,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:203ca48efd7a7e366da2e2947cb259f02447634e8f2a44df78dcefd8909c9a0e,PodSandboxId:be318076a6d8ffd3459a6204eccf5cc88b4fe5fa0bfafd4ef5eb8a25b8f9a893,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726854169019232745,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-592246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab3f648581b2c5aff8dec8b8093fa25a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06e751c0e9afd49236f121119e05987ce6dec537fd429e6ca9bfafba0bb41a64,PodSandboxId:ede81570dbdce75634a199a255cb7dd0b3c5c88896631a79b66dc0beece1baae,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726854162778718962,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zfr9g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f2f8be4-982b-4da8-a0fa-321348cd1a9b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b77d864bd8491e5247f3ce9904092adc651e1ff2977b3112136c18674b7a794,PodSandboxId:fec0ed249777dfa09c803c80bea9d91d294a5f73d90a5915f377d90589c47027,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726853836843972851,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-wpfrr,i
o.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 05422264-8765-46ab-bdf4-b78921ada4a5,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c45d2aa0f63816d71c71ef16db9b7b26dc627f72d6ef7258b93c23005c445f9d,PodSandboxId:6b207f97a82ec1ea60f569be1a050c8bceeab08e59757c223205f217fd460602,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726853779304248510,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: d66d7bcd-aa8f-480e-8fd3-fb291ed97a09,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33cbb2c4dce5862b3f2c4ce603b1d94a12c4e89fe1a398e64490f33c2a491720,PodSandboxId:040dddb9b8699e642d434322e6c9311ab50710ae6e1dcfb49cd5573b21e39150,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726853767220444391,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-sggtt,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 116fdcff-31e2-4138-907e-17265f19795a,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43d580bf9876b78f6a151a49a9fc863e01628a121c31c8106030c76a6949e273,PodSandboxId:2ae710703257fa4fbfa8124e9f96c4da3483703c856c19977e12bbd696a7cff6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726853767060741978,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cknvs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd34a408-0e2d-4f85-819f
-d99de8948804,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18612a28ae5022d41607cfde2996aff0acffaa060ef96c055b522b167dc900d3,PodSandboxId:6dda31a6542e3654d10490db5310878e1401b0733a541a4cd64da9a521c5496a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726853756103764943,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-592246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b329d23e42855b3fc45631c897e94259,}
,Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a4606e22266020766dc8dbc189c0f65cdd349f031c39b0a7d106cf0781cbee8,PodSandboxId:26746e4572518a57357496479b39372cf85666fb52fa78be30f90c4df9eba034,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726853756093875934,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-592246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ea9e608bd76e5a2abeb0f2985e4ffd4,},Annotations:map[string]string{io.kubernetes.
container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33ec6262554fcd5394f13c012ea27e8a9bf53faf30d5cb85237b4a3496811c0e,PodSandboxId:f2e6ce6e57f368303f81a132a6701cff72d48204930ad307391eb837744c1a41,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726853756031895520,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-592246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab3f648581b2c5aff8dec8b8093fa25a,},Annotations:map[string]string{io.kubernetes.container.hash:
7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca5e246374ae005eb084399c79df21d8dc78d6218c643826ddfec1d4d1ff26d8,PodSandboxId:d1137e733ede785ed20953767b9d14df484c80311c56d28fa78c81bed27466a0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726853755986219359,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-592246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d64fadc298f2cbad9993623dd59110d,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f92cfcc5-8b09-4a8f-86d4-90dfae8f6c90 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:44:34 multinode-592246 crio[2694]: time="2024-09-20 17:44:34.506509656Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=5ef57c85-e980-4ae9-bce4-8c5ff7ab5922 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 20 17:44:34 multinode-592246 crio[2694]: time="2024-09-20 17:44:34.507315361Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:5a993de08f9bef4380dbc45284a76dc665caa9a5a496c66fcc7b9f0ae83b9b8c,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-wpfrr,Uid:05422264-8765-46ab-bdf4-b78921ada4a5,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726854202552892715,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-wpfrr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 05422264-8765-46ab-bdf4-b78921ada4a5,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-20T17:42:56.078719686Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:63174a48a0af89838bfd988930e0e65dfece3fa5159362c3b81a7ab54b71b226,Metadata:&PodSandboxMetadata{Name:etcd-multinode-592246,Uid:9ea9e608bd76e5a2abeb0f2985e4ffd4,Namespace:kube-system,Attempt:1,},State:
SANDBOX_READY,CreatedAt:1726854168803747216,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-multinode-592246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ea9e608bd76e5a2abeb0f2985e4ffd4,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.115:2379,kubernetes.io/config.hash: 9ea9e608bd76e5a2abeb0f2985e4ffd4,kubernetes.io/config.seen: 2024-09-20T17:36:01.356896530Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:24d3fe772b46d9ad291951c5d621c9cc41a42a1f1fb74190199b21694e18206e,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:d66d7bcd-aa8f-480e-8fd3-fb291ed97a09,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726854168792769218,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: d66d7bcd-aa8f-480e-8fd3-fb291ed97a09,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-20T17:36:18.856477671Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7262b866f175e3e9cf9fe9ec65c89d8dec9f99d4da80612b4337c9162f7bf3cc,Metad
ata:&PodSandboxMetadata{Name:kube-proxy-cknvs,Uid:bd34a408-0e2d-4f85-819f-d99de8948804,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726854168792068801,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-cknvs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd34a408-0e2d-4f85-819f-d99de8948804,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-20T17:36:06.404743190Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:842ed051ffcd16391da8d46e90365e0a91454cc5a12a35c6368692e68b72d517,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-multinode-592246,Uid:3d64fadc298f2cbad9993623dd59110d,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726854168789928851,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multino
de-592246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d64fadc298f2cbad9993623dd59110d,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 3d64fadc298f2cbad9993623dd59110d,kubernetes.io/config.seen: 2024-09-20T17:36:01.356902001Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:be318076a6d8ffd3459a6204eccf5cc88b4fe5fa0bfafd4ef5eb8a25b8f9a893,Metadata:&PodSandboxMetadata{Name:kube-apiserver-multinode-592246,Uid:ab3f648581b2c5aff8dec8b8093fa25a,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726854168780565496,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-multinode-592246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab3f648581b2c5aff8dec8b8093fa25a,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.115:8443,kubernetes.io/config.hash: ab3f648581b2c5aff8dec8b8
093fa25a,kubernetes.io/config.seen: 2024-09-20T17:36:01.356900757Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9fc32f59cbb6f67f9be565b8132e9a611918e5dcaea3f931e987cbb0e3a02bab,Metadata:&PodSandboxMetadata{Name:kube-scheduler-multinode-592246,Uid:b329d23e42855b3fc45631c897e94259,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726854168771810370,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-592246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b329d23e42855b3fc45631c897e94259,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b329d23e42855b3fc45631c897e94259,kubernetes.io/config.seen: 2024-09-20T17:36:01.356903128Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:6b0a562ee52e5ac9a7f5f030e301e935e51cbda8efd24205ccccc430f531ffdb,Metadata:&PodSandboxMetadata{Name:kindnet-sggtt,Uid:116fdcff-31e2-4138-907e-17265f19795a,Names
pace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726854168767610644,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-sggtt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 116fdcff-31e2-4138-907e-17265f19795a,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-20T17:36:06.404625734Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ede81570dbdce75634a199a255cb7dd0b3c5c88896631a79b66dc0beece1baae,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-zfr9g,Uid:5f2f8be4-982b-4da8-a0fa-321348cd1a9b,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726854162610850033,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-zfr9g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f2f8be4-982b-4da8-a0fa-321348cd1a9b,k8s-app: kube-dns,pod-temp
late-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-20T17:36:18.867163222Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fec0ed249777dfa09c803c80bea9d91d294a5f73d90a5915f377d90589c47027,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-wpfrr,Uid:05422264-8765-46ab-bdf4-b78921ada4a5,Namespace:default,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726853833447056637,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-wpfrr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 05422264-8765-46ab-bdf4-b78921ada4a5,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-20T17:37:13.129646422Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6b207f97a82ec1ea60f569be1a050c8bceeab08e59757c223205f217fd460602,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:d66d7bcd-aa8f-480e-8fd3-fb291ed97a09,Namespace:kube-system,Attempt:0,}
,State:SANDBOX_NOTREADY,CreatedAt:1726853779167080754,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d66d7bcd-aa8f-480e-8fd3-fb291ed97a09,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path
\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-20T17:36:18.856477671Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2ae710703257fa4fbfa8124e9f96c4da3483703c856c19977e12bbd696a7cff6,Metadata:&PodSandboxMetadata{Name:kube-proxy-cknvs,Uid:bd34a408-0e2d-4f85-819f-d99de8948804,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726853766749487182,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-cknvs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd34a408-0e2d-4f85-819f-d99de8948804,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-20T17:36:06.404743190Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:040dddb9b8699e642d434322e6c9311ab50710ae6e1dcfb49cd5573b21e39150,Metadata:&PodSandboxMetadata{Name:kindnet-sggtt,Uid:116fdcff-31e2-4138-907e-17265f19795
a,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726853766718156334,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-sggtt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 116fdcff-31e2-4138-907e-17265f19795a,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-20T17:36:06.404625734Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:26746e4572518a57357496479b39372cf85666fb52fa78be30f90c4df9eba034,Metadata:&PodSandboxMetadata{Name:etcd-multinode-592246,Uid:9ea9e608bd76e5a2abeb0f2985e4ffd4,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726853755862975695,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-multinode-592246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ea9e608bd76e5a2abeb0f2985e4ffd4,tier: contr
ol-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.115:2379,kubernetes.io/config.hash: 9ea9e608bd76e5a2abeb0f2985e4ffd4,kubernetes.io/config.seen: 2024-09-20T17:35:55.364812520Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f2e6ce6e57f368303f81a132a6701cff72d48204930ad307391eb837744c1a41,Metadata:&PodSandboxMetadata{Name:kube-apiserver-multinode-592246,Uid:ab3f648581b2c5aff8dec8b8093fa25a,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726853755860569237,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-multinode-592246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab3f648581b2c5aff8dec8b8093fa25a,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.115:8443,kubernetes.io/config.hash: ab3f648581b2c5aff8dec8b8093fa25a,kubernetes.io/config.seen: 2
024-09-20T17:35:55.364814343Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:6dda31a6542e3654d10490db5310878e1401b0733a541a4cd64da9a521c5496a,Metadata:&PodSandboxMetadata{Name:kube-scheduler-multinode-592246,Uid:b329d23e42855b3fc45631c897e94259,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726853755844225273,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-592246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b329d23e42855b3fc45631c897e94259,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b329d23e42855b3fc45631c897e94259,kubernetes.io/config.seen: 2024-09-20T17:35:55.364811060Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:d1137e733ede785ed20953767b9d14df484c80311c56d28fa78c81bed27466a0,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-multinode-592246,Uid:3d64fadc298f2cbad9993623dd59110d,Namespace:kube-s
ystem,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726853755841288131,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-592246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d64fadc298f2cbad9993623dd59110d,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 3d64fadc298f2cbad9993623dd59110d,kubernetes.io/config.seen: 2024-09-20T17:35:55.364806336Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=5ef57c85-e980-4ae9-bce4-8c5ff7ab5922 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 20 17:44:34 multinode-592246 crio[2694]: time="2024-09-20 17:44:34.508549534Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=910c7af0-7863-463a-8550-14430d5d919b name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:44:34 multinode-592246 crio[2694]: time="2024-09-20 17:44:34.508633033Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=910c7af0-7863-463a-8550-14430d5d919b name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:44:34 multinode-592246 crio[2694]: time="2024-09-20 17:44:34.509281102Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b19b5d96c04f0998e9fb45f7ad94c150df0f2ab59724158e66ebb26e7f499c8e,PodSandboxId:5a993de08f9bef4380dbc45284a76dc665caa9a5a496c66fcc7b9f0ae83b9b8c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726854202702510290,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-wpfrr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 05422264-8765-46ab-bdf4-b78921ada4a5,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cbddb2e724ca8bd2c53e9dd5af73952dfe43e5d4ac79bd580e041a6d7aede69,PodSandboxId:ede81570dbdce75634a199a255cb7dd0b3c5c88896631a79b66dc0beece1baae,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726854176395029369,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zfr9g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f2f8be4-982b-4da8-a0fa-321348cd1a9b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29feefea72c1ac5a313ba5ab4a409ea549c57c7b56bd1658bb702180ce6bc03e,PodSandboxId:24d3fe772b46d9ad291951c5d621c9cc41a42a1f1fb74190199b21694e18206e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726854169394764230,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d66d7bc
d-aa8f-480e-8fd3-fb291ed97a09,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ccbb8ac9634336f88fdd6b655a976f2ff3bf355d24c77128abb0afa2d10e57b,PodSandboxId:7262b866f175e3e9cf9fe9ec65c89d8dec9f99d4da80612b4337c9162f7bf3cc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726854169214990857,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cknvs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd34a408-0e2d-4f85-819f-d99de8948804,},A
nnotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79bea98b7a32af8003c2b53e82f3bcedd25d46534ba4c58b1562a1d4dd62899f,PodSandboxId:6b0a562ee52e5ac9a7f5f030e301e935e51cbda8efd24205ccccc430f531ffdb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726854169317714960,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-sggtt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 116fdcff-31e2-4138-907e-17265f19795a,},Annotations:map[string]string{io.ku
bernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a49b273fff83658fbf7e813b47a71246aa5dc45e49d5cc2db55d579f621cd95,PodSandboxId:842ed051ffcd16391da8d46e90365e0a91454cc5a12a35c6368692e68b72d517,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726854169164374591,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-592246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d64fadc298f2cbad9993623dd59110d,},Annotations:map[string
]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d35c011af5ed95ef938176fd7f24330921876bc721a2c8d4a93fd8a65d020d0,PodSandboxId:63174a48a0af89838bfd988930e0e65dfece3fa5159362c3b81a7ab54b71b226,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726854169142256115,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-592246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ea9e608bd76e5a2abeb0f2985e4ffd4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3
fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa7a4ac935e0470da3205f280946a5075ad00d4d398edf2ecbd0d6b72f3fa634,PodSandboxId:9fc32f59cbb6f67f9be565b8132e9a611918e5dcaea3f931e987cbb0e3a02bab,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726854169108363071,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-592246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b329d23e42855b3fc45631c897e94259,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:203ca48efd7a7e366da2e2947cb259f02447634e8f2a44df78dcefd8909c9a0e,PodSandboxId:be318076a6d8ffd3459a6204eccf5cc88b4fe5fa0bfafd4ef5eb8a25b8f9a893,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726854169019232745,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-592246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab3f648581b2c5aff8dec8b8093fa25a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06e751c0e9afd49236f121119e05987ce6dec537fd429e6ca9bfafba0bb41a64,PodSandboxId:ede81570dbdce75634a199a255cb7dd0b3c5c88896631a79b66dc0beece1baae,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726854162778718962,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zfr9g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f2f8be4-982b-4da8-a0fa-321348cd1a9b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b77d864bd8491e5247f3ce9904092adc651e1ff2977b3112136c18674b7a794,PodSandboxId:fec0ed249777dfa09c803c80bea9d91d294a5f73d90a5915f377d90589c47027,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726853836843972851,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-wpfrr,i
o.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 05422264-8765-46ab-bdf4-b78921ada4a5,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c45d2aa0f63816d71c71ef16db9b7b26dc627f72d6ef7258b93c23005c445f9d,PodSandboxId:6b207f97a82ec1ea60f569be1a050c8bceeab08e59757c223205f217fd460602,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726853779304248510,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: d66d7bcd-aa8f-480e-8fd3-fb291ed97a09,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33cbb2c4dce5862b3f2c4ce603b1d94a12c4e89fe1a398e64490f33c2a491720,PodSandboxId:040dddb9b8699e642d434322e6c9311ab50710ae6e1dcfb49cd5573b21e39150,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726853767220444391,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-sggtt,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 116fdcff-31e2-4138-907e-17265f19795a,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43d580bf9876b78f6a151a49a9fc863e01628a121c31c8106030c76a6949e273,PodSandboxId:2ae710703257fa4fbfa8124e9f96c4da3483703c856c19977e12bbd696a7cff6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726853767060741978,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cknvs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd34a408-0e2d-4f85-819f
-d99de8948804,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18612a28ae5022d41607cfde2996aff0acffaa060ef96c055b522b167dc900d3,PodSandboxId:6dda31a6542e3654d10490db5310878e1401b0733a541a4cd64da9a521c5496a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726853756103764943,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-592246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b329d23e42855b3fc45631c897e94259,}
,Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a4606e22266020766dc8dbc189c0f65cdd349f031c39b0a7d106cf0781cbee8,PodSandboxId:26746e4572518a57357496479b39372cf85666fb52fa78be30f90c4df9eba034,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726853756093875934,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-592246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ea9e608bd76e5a2abeb0f2985e4ffd4,},Annotations:map[string]string{io.kubernetes.
container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33ec6262554fcd5394f13c012ea27e8a9bf53faf30d5cb85237b4a3496811c0e,PodSandboxId:f2e6ce6e57f368303f81a132a6701cff72d48204930ad307391eb837744c1a41,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726853756031895520,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-592246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab3f648581b2c5aff8dec8b8093fa25a,},Annotations:map[string]string{io.kubernetes.container.hash:
7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca5e246374ae005eb084399c79df21d8dc78d6218c643826ddfec1d4d1ff26d8,PodSandboxId:d1137e733ede785ed20953767b9d14df484c80311c56d28fa78c81bed27466a0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726853755986219359,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-592246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d64fadc298f2cbad9993623dd59110d,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=910c7af0-7863-463a-8550-14430d5d919b name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:44:34 multinode-592246 crio[2694]: time="2024-09-20 17:44:34.544982313Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bcc521a6-100e-4f80-ad35-ff2012f6d08d name=/runtime.v1.RuntimeService/Version
	Sep 20 17:44:34 multinode-592246 crio[2694]: time="2024-09-20 17:44:34.545070720Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bcc521a6-100e-4f80-ad35-ff2012f6d08d name=/runtime.v1.RuntimeService/Version
	Sep 20 17:44:34 multinode-592246 crio[2694]: time="2024-09-20 17:44:34.545981328Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3fb99391-1c81-415e-99c1-20ee14255ed8 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 17:44:34 multinode-592246 crio[2694]: time="2024-09-20 17:44:34.546690608Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726854274546661194,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3fb99391-1c81-415e-99c1-20ee14255ed8 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 17:44:34 multinode-592246 crio[2694]: time="2024-09-20 17:44:34.547215690Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c71b7ec1-ca84-4c0e-9e34-239d10146d88 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:44:34 multinode-592246 crio[2694]: time="2024-09-20 17:44:34.547275932Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c71b7ec1-ca84-4c0e-9e34-239d10146d88 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:44:34 multinode-592246 crio[2694]: time="2024-09-20 17:44:34.547842847Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b19b5d96c04f0998e9fb45f7ad94c150df0f2ab59724158e66ebb26e7f499c8e,PodSandboxId:5a993de08f9bef4380dbc45284a76dc665caa9a5a496c66fcc7b9f0ae83b9b8c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726854202702510290,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-wpfrr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 05422264-8765-46ab-bdf4-b78921ada4a5,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cbddb2e724ca8bd2c53e9dd5af73952dfe43e5d4ac79bd580e041a6d7aede69,PodSandboxId:ede81570dbdce75634a199a255cb7dd0b3c5c88896631a79b66dc0beece1baae,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726854176395029369,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zfr9g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f2f8be4-982b-4da8-a0fa-321348cd1a9b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29feefea72c1ac5a313ba5ab4a409ea549c57c7b56bd1658bb702180ce6bc03e,PodSandboxId:24d3fe772b46d9ad291951c5d621c9cc41a42a1f1fb74190199b21694e18206e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726854169394764230,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d66d7bc
d-aa8f-480e-8fd3-fb291ed97a09,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ccbb8ac9634336f88fdd6b655a976f2ff3bf355d24c77128abb0afa2d10e57b,PodSandboxId:7262b866f175e3e9cf9fe9ec65c89d8dec9f99d4da80612b4337c9162f7bf3cc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726854169214990857,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cknvs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd34a408-0e2d-4f85-819f-d99de8948804,},A
nnotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79bea98b7a32af8003c2b53e82f3bcedd25d46534ba4c58b1562a1d4dd62899f,PodSandboxId:6b0a562ee52e5ac9a7f5f030e301e935e51cbda8efd24205ccccc430f531ffdb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726854169317714960,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-sggtt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 116fdcff-31e2-4138-907e-17265f19795a,},Annotations:map[string]string{io.ku
bernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a49b273fff83658fbf7e813b47a71246aa5dc45e49d5cc2db55d579f621cd95,PodSandboxId:842ed051ffcd16391da8d46e90365e0a91454cc5a12a35c6368692e68b72d517,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726854169164374591,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-592246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d64fadc298f2cbad9993623dd59110d,},Annotations:map[string
]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d35c011af5ed95ef938176fd7f24330921876bc721a2c8d4a93fd8a65d020d0,PodSandboxId:63174a48a0af89838bfd988930e0e65dfece3fa5159362c3b81a7ab54b71b226,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726854169142256115,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-592246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ea9e608bd76e5a2abeb0f2985e4ffd4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3
fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa7a4ac935e0470da3205f280946a5075ad00d4d398edf2ecbd0d6b72f3fa634,PodSandboxId:9fc32f59cbb6f67f9be565b8132e9a611918e5dcaea3f931e987cbb0e3a02bab,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726854169108363071,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-592246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b329d23e42855b3fc45631c897e94259,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:203ca48efd7a7e366da2e2947cb259f02447634e8f2a44df78dcefd8909c9a0e,PodSandboxId:be318076a6d8ffd3459a6204eccf5cc88b4fe5fa0bfafd4ef5eb8a25b8f9a893,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726854169019232745,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-592246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab3f648581b2c5aff8dec8b8093fa25a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06e751c0e9afd49236f121119e05987ce6dec537fd429e6ca9bfafba0bb41a64,PodSandboxId:ede81570dbdce75634a199a255cb7dd0b3c5c88896631a79b66dc0beece1baae,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726854162778718962,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zfr9g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f2f8be4-982b-4da8-a0fa-321348cd1a9b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b77d864bd8491e5247f3ce9904092adc651e1ff2977b3112136c18674b7a794,PodSandboxId:fec0ed249777dfa09c803c80bea9d91d294a5f73d90a5915f377d90589c47027,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726853836843972851,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-wpfrr,i
o.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 05422264-8765-46ab-bdf4-b78921ada4a5,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c45d2aa0f63816d71c71ef16db9b7b26dc627f72d6ef7258b93c23005c445f9d,PodSandboxId:6b207f97a82ec1ea60f569be1a050c8bceeab08e59757c223205f217fd460602,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726853779304248510,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: d66d7bcd-aa8f-480e-8fd3-fb291ed97a09,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33cbb2c4dce5862b3f2c4ce603b1d94a12c4e89fe1a398e64490f33c2a491720,PodSandboxId:040dddb9b8699e642d434322e6c9311ab50710ae6e1dcfb49cd5573b21e39150,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726853767220444391,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-sggtt,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 116fdcff-31e2-4138-907e-17265f19795a,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43d580bf9876b78f6a151a49a9fc863e01628a121c31c8106030c76a6949e273,PodSandboxId:2ae710703257fa4fbfa8124e9f96c4da3483703c856c19977e12bbd696a7cff6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726853767060741978,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cknvs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd34a408-0e2d-4f85-819f
-d99de8948804,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18612a28ae5022d41607cfde2996aff0acffaa060ef96c055b522b167dc900d3,PodSandboxId:6dda31a6542e3654d10490db5310878e1401b0733a541a4cd64da9a521c5496a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726853756103764943,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-592246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b329d23e42855b3fc45631c897e94259,}
,Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a4606e22266020766dc8dbc189c0f65cdd349f031c39b0a7d106cf0781cbee8,PodSandboxId:26746e4572518a57357496479b39372cf85666fb52fa78be30f90c4df9eba034,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726853756093875934,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-592246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ea9e608bd76e5a2abeb0f2985e4ffd4,},Annotations:map[string]string{io.kubernetes.
container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33ec6262554fcd5394f13c012ea27e8a9bf53faf30d5cb85237b4a3496811c0e,PodSandboxId:f2e6ce6e57f368303f81a132a6701cff72d48204930ad307391eb837744c1a41,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726853756031895520,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-592246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab3f648581b2c5aff8dec8b8093fa25a,},Annotations:map[string]string{io.kubernetes.container.hash:
7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca5e246374ae005eb084399c79df21d8dc78d6218c643826ddfec1d4d1ff26d8,PodSandboxId:d1137e733ede785ed20953767b9d14df484c80311c56d28fa78c81bed27466a0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726853755986219359,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-592246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d64fadc298f2cbad9993623dd59110d,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c71b7ec1-ca84-4c0e-9e34-239d10146d88 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:44:34 multinode-592246 crio[2694]: time="2024-09-20 17:44:34.591776125Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=78108b72-b8cf-407a-aa63-cf43f2a8734c name=/runtime.v1.RuntimeService/Version
	Sep 20 17:44:34 multinode-592246 crio[2694]: time="2024-09-20 17:44:34.591873100Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=78108b72-b8cf-407a-aa63-cf43f2a8734c name=/runtime.v1.RuntimeService/Version
	Sep 20 17:44:34 multinode-592246 crio[2694]: time="2024-09-20 17:44:34.593390179Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ad675cbe-5009-4e85-b61f-99f74fa94d1d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 17:44:34 multinode-592246 crio[2694]: time="2024-09-20 17:44:34.594559709Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726854274594529719,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ad675cbe-5009-4e85-b61f-99f74fa94d1d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 17:44:34 multinode-592246 crio[2694]: time="2024-09-20 17:44:34.595292306Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b8e22b27-6d00-48de-a461-97f6f1d68cc1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:44:34 multinode-592246 crio[2694]: time="2024-09-20 17:44:34.595349242Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b8e22b27-6d00-48de-a461-97f6f1d68cc1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:44:34 multinode-592246 crio[2694]: time="2024-09-20 17:44:34.595694519Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b19b5d96c04f0998e9fb45f7ad94c150df0f2ab59724158e66ebb26e7f499c8e,PodSandboxId:5a993de08f9bef4380dbc45284a76dc665caa9a5a496c66fcc7b9f0ae83b9b8c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726854202702510290,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-wpfrr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 05422264-8765-46ab-bdf4-b78921ada4a5,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cbddb2e724ca8bd2c53e9dd5af73952dfe43e5d4ac79bd580e041a6d7aede69,PodSandboxId:ede81570dbdce75634a199a255cb7dd0b3c5c88896631a79b66dc0beece1baae,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726854176395029369,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zfr9g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f2f8be4-982b-4da8-a0fa-321348cd1a9b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29feefea72c1ac5a313ba5ab4a409ea549c57c7b56bd1658bb702180ce6bc03e,PodSandboxId:24d3fe772b46d9ad291951c5d621c9cc41a42a1f1fb74190199b21694e18206e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726854169394764230,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d66d7bc
d-aa8f-480e-8fd3-fb291ed97a09,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ccbb8ac9634336f88fdd6b655a976f2ff3bf355d24c77128abb0afa2d10e57b,PodSandboxId:7262b866f175e3e9cf9fe9ec65c89d8dec9f99d4da80612b4337c9162f7bf3cc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726854169214990857,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cknvs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd34a408-0e2d-4f85-819f-d99de8948804,},A
nnotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79bea98b7a32af8003c2b53e82f3bcedd25d46534ba4c58b1562a1d4dd62899f,PodSandboxId:6b0a562ee52e5ac9a7f5f030e301e935e51cbda8efd24205ccccc430f531ffdb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726854169317714960,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-sggtt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 116fdcff-31e2-4138-907e-17265f19795a,},Annotations:map[string]string{io.ku
bernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a49b273fff83658fbf7e813b47a71246aa5dc45e49d5cc2db55d579f621cd95,PodSandboxId:842ed051ffcd16391da8d46e90365e0a91454cc5a12a35c6368692e68b72d517,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726854169164374591,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-592246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d64fadc298f2cbad9993623dd59110d,},Annotations:map[string
]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d35c011af5ed95ef938176fd7f24330921876bc721a2c8d4a93fd8a65d020d0,PodSandboxId:63174a48a0af89838bfd988930e0e65dfece3fa5159362c3b81a7ab54b71b226,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726854169142256115,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-592246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ea9e608bd76e5a2abeb0f2985e4ffd4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3
fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa7a4ac935e0470da3205f280946a5075ad00d4d398edf2ecbd0d6b72f3fa634,PodSandboxId:9fc32f59cbb6f67f9be565b8132e9a611918e5dcaea3f931e987cbb0e3a02bab,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726854169108363071,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-592246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b329d23e42855b3fc45631c897e94259,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:203ca48efd7a7e366da2e2947cb259f02447634e8f2a44df78dcefd8909c9a0e,PodSandboxId:be318076a6d8ffd3459a6204eccf5cc88b4fe5fa0bfafd4ef5eb8a25b8f9a893,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726854169019232745,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-592246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab3f648581b2c5aff8dec8b8093fa25a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06e751c0e9afd49236f121119e05987ce6dec537fd429e6ca9bfafba0bb41a64,PodSandboxId:ede81570dbdce75634a199a255cb7dd0b3c5c88896631a79b66dc0beece1baae,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726854162778718962,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zfr9g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f2f8be4-982b-4da8-a0fa-321348cd1a9b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b77d864bd8491e5247f3ce9904092adc651e1ff2977b3112136c18674b7a794,PodSandboxId:fec0ed249777dfa09c803c80bea9d91d294a5f73d90a5915f377d90589c47027,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726853836843972851,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-wpfrr,i
o.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 05422264-8765-46ab-bdf4-b78921ada4a5,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c45d2aa0f63816d71c71ef16db9b7b26dc627f72d6ef7258b93c23005c445f9d,PodSandboxId:6b207f97a82ec1ea60f569be1a050c8bceeab08e59757c223205f217fd460602,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726853779304248510,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: d66d7bcd-aa8f-480e-8fd3-fb291ed97a09,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33cbb2c4dce5862b3f2c4ce603b1d94a12c4e89fe1a398e64490f33c2a491720,PodSandboxId:040dddb9b8699e642d434322e6c9311ab50710ae6e1dcfb49cd5573b21e39150,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726853767220444391,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-sggtt,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 116fdcff-31e2-4138-907e-17265f19795a,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43d580bf9876b78f6a151a49a9fc863e01628a121c31c8106030c76a6949e273,PodSandboxId:2ae710703257fa4fbfa8124e9f96c4da3483703c856c19977e12bbd696a7cff6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726853767060741978,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cknvs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd34a408-0e2d-4f85-819f
-d99de8948804,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18612a28ae5022d41607cfde2996aff0acffaa060ef96c055b522b167dc900d3,PodSandboxId:6dda31a6542e3654d10490db5310878e1401b0733a541a4cd64da9a521c5496a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726853756103764943,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-592246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b329d23e42855b3fc45631c897e94259,}
,Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a4606e22266020766dc8dbc189c0f65cdd349f031c39b0a7d106cf0781cbee8,PodSandboxId:26746e4572518a57357496479b39372cf85666fb52fa78be30f90c4df9eba034,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726853756093875934,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-592246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ea9e608bd76e5a2abeb0f2985e4ffd4,},Annotations:map[string]string{io.kubernetes.
container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33ec6262554fcd5394f13c012ea27e8a9bf53faf30d5cb85237b4a3496811c0e,PodSandboxId:f2e6ce6e57f368303f81a132a6701cff72d48204930ad307391eb837744c1a41,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726853756031895520,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-592246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab3f648581b2c5aff8dec8b8093fa25a,},Annotations:map[string]string{io.kubernetes.container.hash:
7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca5e246374ae005eb084399c79df21d8dc78d6218c643826ddfec1d4d1ff26d8,PodSandboxId:d1137e733ede785ed20953767b9d14df484c80311c56d28fa78c81bed27466a0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726853755986219359,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-592246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d64fadc298f2cbad9993623dd59110d,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b8e22b27-6d00-48de-a461-97f6f1d68cc1 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	b19b5d96c04f0       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   5a993de08f9be       busybox-7dff88458-wpfrr
	3cbddb2e724ca       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      About a minute ago   Running             coredns                   2                   ede81570dbdce       coredns-7c65d6cfc9-zfr9g
	29feefea72c1a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   24d3fe772b46d       storage-provisioner
	79bea98b7a32a       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      About a minute ago   Running             kindnet-cni               1                   6b0a562ee52e5       kindnet-sggtt
	8ccbb8ac96343       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      About a minute ago   Running             kube-proxy                1                   7262b866f175e       kube-proxy-cknvs
	2a49b273fff83       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      About a minute ago   Running             kube-controller-manager   1                   842ed051ffcd1       kube-controller-manager-multinode-592246
	1d35c011af5ed       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      About a minute ago   Running             etcd                      1                   63174a48a0af8       etcd-multinode-592246
	aa7a4ac935e04       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      About a minute ago   Running             kube-scheduler            1                   9fc32f59cbb6f       kube-scheduler-multinode-592246
	203ca48efd7a7       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      About a minute ago   Running             kube-apiserver            1                   be318076a6d8f       kube-apiserver-multinode-592246
	06e751c0e9afd       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      About a minute ago   Exited              coredns                   1                   ede81570dbdce       coredns-7c65d6cfc9-zfr9g
	5b77d864bd849       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   7 minutes ago        Exited              busybox                   0                   fec0ed249777d       busybox-7dff88458-wpfrr
	c45d2aa0f6381       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      8 minutes ago        Exited              storage-provisioner       0                   6b207f97a82ec       storage-provisioner
	33cbb2c4dce58       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      8 minutes ago        Exited              kindnet-cni               0                   040dddb9b8699       kindnet-sggtt
	43d580bf9876b       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      8 minutes ago        Exited              kube-proxy                0                   2ae710703257f       kube-proxy-cknvs
	18612a28ae502       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      8 minutes ago        Exited              kube-scheduler            0                   6dda31a6542e3       kube-scheduler-multinode-592246
	9a4606e222660       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      8 minutes ago        Exited              etcd                      0                   26746e4572518       etcd-multinode-592246
	33ec6262554fc       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      8 minutes ago        Exited              kube-apiserver            0                   f2e6ce6e57f36       kube-apiserver-multinode-592246
	ca5e246374ae0       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      8 minutes ago        Exited              kube-controller-manager   0                   d1137e733ede7       kube-controller-manager-multinode-592246
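	The table above is the node-side container inventory as reported by the CRI runtime, with the pre-restart attempt-0 containers shown as Exited next to their restarted attempt-1 replacements. As a minimal sketch (assuming shell access to the node, for example via "minikube ssh -p multinode-592246", and that crictl is present on the VM, which is typical for these minikube images but not confirmed by this report), an equivalent listing can be pulled straight from the CRI-O socket named in the node annotations:

	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a

	The -a flag includes exited containers, which is what makes both the old and the restarted attempts visible in one listing.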
	
	
	==> coredns [06e751c0e9afd49236f121119e05987ce6dec537fd429e6ca9bfafba0bb41a64] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:38238 - 10616 "HINFO IN 8267272908449185641.7413770237771884312. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012775253s
	
	
	==> coredns [3cbddb2e724ca8bd2c53e9dd5af73952dfe43e5d4ac79bd580e041a6d7aede69] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:54171 - 9641 "HINFO IN 8912997293967558746.8180543232414713051. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013222727s
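	Both coredns logs come from the same pod, coredns-7c65d6cfc9-zfr9g: the first container logged connection-refused errors against the API server at 10.96.0.1:443 during the restart and was then terminated, while the second is the currently running attempt. A hedged way to pull the same two logs via kubectl (assuming the kubeconfig context is named after the profile, as in the other kubectl invocations recorded in this report):

	    kubectl --context multinode-592246 -n kube-system logs coredns-7c65d6cfc9-zfr9g --previous
	    kubectl --context multinode-592246 -n kube-system logs coredns-7c65d6cfc9-zfr9g

	The --previous flag returns the log of the last terminated instance of the container, i.e. the exited attempt shown first.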
	
	
	==> describe nodes <==
	Name:               multinode-592246
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-592246
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0626f22cf0d915d75e291a5bce701f94395056e1
	                    minikube.k8s.io/name=multinode-592246
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_20T17_36_02_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 17:35:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-592246
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 17:44:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 17:42:55 +0000   Fri, 20 Sep 2024 17:35:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 17:42:55 +0000   Fri, 20 Sep 2024 17:35:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 17:42:55 +0000   Fri, 20 Sep 2024 17:35:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 17:42:55 +0000   Fri, 20 Sep 2024 17:36:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.115
	  Hostname:    multinode-592246
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8f4fab7aa1d046888118c1e3e32ba809
	  System UUID:                8f4fab7a-a1d0-4688-8118-c1e3e32ba809
	  Boot ID:                    9c4b19b5-bf62-47d5-aca0-a031c994d070
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-wpfrr                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m21s
	  kube-system                 coredns-7c65d6cfc9-zfr9g                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m28s
	  kube-system                 etcd-multinode-592246                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m33s
	  kube-system                 kindnet-sggtt                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m28s
	  kube-system                 kube-apiserver-multinode-592246             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m33s
	  kube-system                 kube-controller-manager-multinode-592246    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m33s
	  kube-system                 kube-proxy-cknvs                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m28s
	  kube-system                 kube-scheduler-multinode-592246             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m33s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m27s                  kube-proxy       
	  Normal  Starting                 102s                   kube-proxy       
	  Normal  Starting                 8m39s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m39s (x8 over 8m39s)  kubelet          Node multinode-592246 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m39s (x8 over 8m39s)  kubelet          Node multinode-592246 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m39s (x7 over 8m39s)  kubelet          Node multinode-592246 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m39s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    8m33s                  kubelet          Node multinode-592246 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  8m33s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m33s                  kubelet          Node multinode-592246 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     8m33s                  kubelet          Node multinode-592246 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m33s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           8m29s                  node-controller  Node multinode-592246 event: Registered Node multinode-592246 in Controller
	  Normal  NodeReady                8m16s                  kubelet          Node multinode-592246 status is now: NodeReady
	  Normal  RegisteredNode           99s                    node-controller  Node multinode-592246 event: Registered Node multinode-592246 in Controller
	  Normal  Starting                 99s                    kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  99s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  99s                    kubelet          Node multinode-592246 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    99s                    kubelet          Node multinode-592246 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     99s                    kubelet          Node multinode-592246 status is now: NodeHasSufficientPID
	
	
	Name:               multinode-592246-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-592246-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0626f22cf0d915d75e291a5bce701f94395056e1
	                    minikube.k8s.io/name=multinode-592246
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_20T17_43_33_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 17:43:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-592246-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 17:44:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 17:44:03 +0000   Fri, 20 Sep 2024 17:43:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 17:44:03 +0000   Fri, 20 Sep 2024 17:43:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 17:44:03 +0000   Fri, 20 Sep 2024 17:43:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 17:44:03 +0000   Fri, 20 Sep 2024 17:43:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.170
	  Hostname:    multinode-592246-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b9ba005550b34d8fa33427ca08d3b8fb
	  System UUID:                b9ba0055-50b3-4d8f-a334-27ca08d3b8fb
	  Boot ID:                    c8a8322b-e940-4971-b2dc-f9147d893d89
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-7854z    0 (0%)        0 (0%)      0 (0%)           0 (0%)         67s
	  kube-system                 kindnet-w5zt6              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m43s
	  kube-system                 kube-proxy-v8z58           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m43s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 7m39s                  kube-proxy  
	  Normal  Starting                 57s                    kube-proxy  
	  Normal  NodeHasSufficientMemory  7m43s (x2 over 7m44s)  kubelet     Node multinode-592246-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m43s (x2 over 7m44s)  kubelet     Node multinode-592246-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m43s (x2 over 7m44s)  kubelet     Node multinode-592246-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m43s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                7m24s                  kubelet     Node multinode-592246-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  62s (x2 over 62s)      kubelet     Node multinode-592246-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    62s (x2 over 62s)      kubelet     Node multinode-592246-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     62s (x2 over 62s)      kubelet     Node multinode-592246-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  62s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                42s                    kubelet     Node multinode-592246-m02 status is now: NodeReady
	
	
	Name:               multinode-592246-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-592246-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0626f22cf0d915d75e291a5bce701f94395056e1
	                    minikube.k8s.io/name=multinode-592246
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_20T17_44_12_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 17:44:12 +0000
	Taints:             node.kubernetes.io/not-ready:NoExecute
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-592246-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 17:44:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 17:44:31 +0000   Fri, 20 Sep 2024 17:44:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 17:44:31 +0000   Fri, 20 Sep 2024 17:44:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 17:44:31 +0000   Fri, 20 Sep 2024 17:44:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 17:44:31 +0000   Fri, 20 Sep 2024 17:44:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.38
	  Hostname:    multinode-592246-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7788b8731f9e4702bc4eea2dc394a664
	  System UUID:                7788b873-1f9e-4702-bc4e-ea2dc394a664
	  Boot ID:                    6e0d47a0-90b9-4461-8bbe-5dcebced4df8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-mkw76       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m45s
	  kube-system                 kube-proxy-nr2d6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 6m39s                  kube-proxy  
	  Normal  Starting                 18s                    kube-proxy  
	  Normal  Starting                 5m49s                  kube-proxy  
	  Normal  NodeHasSufficientMemory  6m45s (x2 over 6m45s)  kubelet     Node multinode-592246-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m45s (x2 over 6m45s)  kubelet     Node multinode-592246-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m45s (x2 over 6m45s)  kubelet     Node multinode-592246-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m45s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m24s                  kubelet     Node multinode-592246-m03 status is now: NodeReady
	  Normal  NodeHasSufficientPID     5m54s (x2 over 5m54s)  kubelet     Node multinode-592246-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m54s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    5m54s (x2 over 5m54s)  kubelet     Node multinode-592246-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  5m54s (x2 over 5m54s)  kubelet     Node multinode-592246-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m34s                  kubelet     Node multinode-592246-m03 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  23s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  22s (x2 over 23s)      kubelet     Node multinode-592246-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x2 over 23s)      kubelet     Node multinode-592246-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x2 over 23s)      kubelet     Node multinode-592246-m03 status is now: NodeHasSufficientPID
	  Normal  NodeReady                3s                     kubelet     Node multinode-592246-m03 status is now: NodeReady
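	The three node descriptions above are standard kubectl describe output for the control-plane node and the two workers; note the node.kubernetes.io/not-ready:NoExecute taint on multinode-592246-m03, which had only just re-registered (NodeReady 3s) when this snapshot was taken. They could be regenerated against this cluster with something like the following (a sketch; the context name is assumed to match the profile name, as with the other kubectl commands in this report):

	    kubectl --context multinode-592246 describe node multinode-592246 multinode-592246-m02 multinode-592246-m03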
	
	
	==> dmesg <==
	[  +0.160772] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.152669] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.293329] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[  +4.186472] systemd-fstab-generator[745]: Ignoring "noauto" option for root device
	[  +3.821716] systemd-fstab-generator[877]: Ignoring "noauto" option for root device
	[  +0.067034] kauditd_printk_skb: 158 callbacks suppressed
	[Sep20 17:36] systemd-fstab-generator[1213]: Ignoring "noauto" option for root device
	[  +0.094284] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.125591] systemd-fstab-generator[1316]: Ignoring "noauto" option for root device
	[  +0.147888] kauditd_printk_skb: 21 callbacks suppressed
	[ +12.772092] kauditd_printk_skb: 60 callbacks suppressed
	[Sep20 17:37] kauditd_printk_skb: 14 callbacks suppressed
	[Sep20 17:42] systemd-fstab-generator[2620]: Ignoring "noauto" option for root device
	[  +0.159068] systemd-fstab-generator[2632]: Ignoring "noauto" option for root device
	[  +0.175970] systemd-fstab-generator[2646]: Ignoring "noauto" option for root device
	[  +0.140154] systemd-fstab-generator[2658]: Ignoring "noauto" option for root device
	[  +0.300253] systemd-fstab-generator[2686]: Ignoring "noauto" option for root device
	[  +4.278628] systemd-fstab-generator[2778]: Ignoring "noauto" option for root device
	[  +0.080784] kauditd_printk_skb: 100 callbacks suppressed
	[  +6.691154] kauditd_printk_skb: 22 callbacks suppressed
	[  +5.881424] systemd-fstab-generator[3638]: Ignoring "noauto" option for root device
	[  +0.104124] kauditd_printk_skb: 62 callbacks suppressed
	[Sep20 17:43] kauditd_printk_skb: 21 callbacks suppressed
	[  +2.163575] systemd-fstab-generator[3804]: Ignoring "noauto" option for root device
	[ +15.301461] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [1d35c011af5ed95ef938176fd7f24330921876bc721a2c8d4a93fd8a65d020d0] <==
	{"level":"info","ts":"2024-09-20T17:42:49.621639Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"efb3de1b79640a9c","local-member-id":"c7abbacde39fb9a4","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T17:42:49.621684Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T17:42:49.623672Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T17:42:49.630487Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-20T17:42:49.630784Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.115:2380"}
	{"level":"info","ts":"2024-09-20T17:42:49.630802Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.115:2380"}
	{"level":"info","ts":"2024-09-20T17:42:49.630970Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"c7abbacde39fb9a4","initial-advertise-peer-urls":["https://192.168.39.115:2380"],"listen-peer-urls":["https://192.168.39.115:2380"],"advertise-client-urls":["https://192.168.39.115:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.115:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-20T17:42:49.630988Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-20T17:42:50.975624Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c7abbacde39fb9a4 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-20T17:42:50.975696Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c7abbacde39fb9a4 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-20T17:42:50.975739Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c7abbacde39fb9a4 received MsgPreVoteResp from c7abbacde39fb9a4 at term 2"}
	{"level":"info","ts":"2024-09-20T17:42:50.975765Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c7abbacde39fb9a4 became candidate at term 3"}
	{"level":"info","ts":"2024-09-20T17:42:50.975771Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c7abbacde39fb9a4 received MsgVoteResp from c7abbacde39fb9a4 at term 3"}
	{"level":"info","ts":"2024-09-20T17:42:50.975780Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c7abbacde39fb9a4 became leader at term 3"}
	{"level":"info","ts":"2024-09-20T17:42:50.975787Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c7abbacde39fb9a4 elected leader c7abbacde39fb9a4 at term 3"}
	{"level":"info","ts":"2024-09-20T17:42:50.978404Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"c7abbacde39fb9a4","local-member-attributes":"{Name:multinode-592246 ClientURLs:[https://192.168.39.115:2379]}","request-path":"/0/members/c7abbacde39fb9a4/attributes","cluster-id":"efb3de1b79640a9c","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-20T17:42:50.978605Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T17:42:50.979097Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T17:42:50.979272Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-20T17:42:50.979317Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-20T17:42:50.980083Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T17:42:50.980100Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T17:42:50.981057Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.115:2379"}
	{"level":"info","ts":"2024-09-20T17:42:50.981147Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	2024/09/20 17:42:53 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	
	
	==> etcd [9a4606e22266020766dc8dbc189c0f65cdd349f031c39b0a7d106cf0781cbee8] <==
	{"level":"info","ts":"2024-09-20T17:36:51.128353Z","caller":"traceutil/trace.go:171","msg":"trace[860051481] linearizableReadLoop","detail":"{readStateIndex:498; appliedIndex:497; }","duration":"168.793627ms","start":"2024-09-20T17:36:50.959529Z","end":"2024-09-20T17:36:51.128323Z","steps":["trace[860051481] 'read index received'  (duration: 168.558585ms)","trace[860051481] 'applied index is now lower than readState.Index'  (duration: 234.144µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-20T17:36:51.128772Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"169.219843ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-592246-m02\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T17:36:51.128881Z","caller":"traceutil/trace.go:171","msg":"trace[2020512061] range","detail":"{range_begin:/registry/minions/multinode-592246-m02; range_end:; response_count:0; response_revision:475; }","duration":"169.360757ms","start":"2024-09-20T17:36:50.959508Z","end":"2024-09-20T17:36:51.128868Z","steps":["trace[2020512061] 'agreement among raft nodes before linearized reading'  (duration: 168.929618ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T17:36:56.491340Z","caller":"traceutil/trace.go:171","msg":"trace[538718766] linearizableReadLoop","detail":"{readStateIndex:539; appliedIndex:538; }","duration":"155.431394ms","start":"2024-09-20T17:36:56.335885Z","end":"2024-09-20T17:36:56.491316Z","steps":["trace[538718766] 'read index received'  (duration: 155.132392ms)","trace[538718766] 'applied index is now lower than readState.Index'  (duration: 297.572µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-20T17:36:56.491565Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"155.646356ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T17:36:56.491608Z","caller":"traceutil/trace.go:171","msg":"trace[1760439891] transaction","detail":"{read_only:false; response_revision:514; number_of_response:1; }","duration":"236.03247ms","start":"2024-09-20T17:36:56.255561Z","end":"2024-09-20T17:36:56.491594Z","steps":["trace[1760439891] 'process raft request'  (duration: 235.547097ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T17:36:56.491629Z","caller":"traceutil/trace.go:171","msg":"trace[2102876701] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:514; }","duration":"155.740939ms","start":"2024-09-20T17:36:56.335879Z","end":"2024-09-20T17:36:56.491620Z","steps":["trace[2102876701] 'agreement among raft nodes before linearized reading'  (duration: 155.626359ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T17:36:56.781324Z","caller":"traceutil/trace.go:171","msg":"trace[828030985] linearizableReadLoop","detail":"{readStateIndex:540; appliedIndex:539; }","duration":"241.570321ms","start":"2024-09-20T17:36:56.539742Z","end":"2024-09-20T17:36:56.781312Z","steps":["trace[828030985] 'read index received'  (duration: 241.35053ms)","trace[828030985] 'applied index is now lower than readState.Index'  (duration: 218.259µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-20T17:36:56.781622Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"241.879547ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-592246-m02\" ","response":"range_response_count:1 size:2894"}
	{"level":"info","ts":"2024-09-20T17:36:56.781712Z","caller":"traceutil/trace.go:171","msg":"trace[772908242] range","detail":"{range_begin:/registry/minions/multinode-592246-m02; range_end:; response_count:1; response_revision:515; }","duration":"241.981914ms","start":"2024-09-20T17:36:56.539721Z","end":"2024-09-20T17:36:56.781703Z","steps":["trace[772908242] 'agreement among raft nodes before linearized reading'  (duration: 241.764021ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T17:36:56.782007Z","caller":"traceutil/trace.go:171","msg":"trace[1059746576] transaction","detail":"{read_only:false; response_revision:515; number_of_response:1; }","duration":"281.562442ms","start":"2024-09-20T17:36:56.500436Z","end":"2024-09-20T17:36:56.781999Z","steps":["trace[1059746576] 'process raft request'  (duration: 280.708253ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T17:37:49.485050Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"149.722919ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T17:37:49.486324Z","caller":"traceutil/trace.go:171","msg":"trace[899354476] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:612; }","duration":"151.065478ms","start":"2024-09-20T17:37:49.335246Z","end":"2024-09-20T17:37:49.486311Z","steps":["trace[899354476] 'range keys from in-memory index tree'  (duration: 149.70888ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T17:37:49.485388Z","caller":"traceutil/trace.go:171","msg":"trace[107722675] transaction","detail":"{read_only:false; response_revision:613; number_of_response:1; }","duration":"216.922529ms","start":"2024-09-20T17:37:49.268429Z","end":"2024-09-20T17:37:49.485351Z","steps":["trace[107722675] 'process raft request'  (duration: 211.382834ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T17:37:53.833399Z","caller":"traceutil/trace.go:171","msg":"trace[1442306491] transaction","detail":"{read_only:false; response_revision:647; number_of_response:1; }","duration":"214.711513ms","start":"2024-09-20T17:37:53.618674Z","end":"2024-09-20T17:37:53.833385Z","steps":["trace[1442306491] 'process raft request'  (duration: 214.577593ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T17:41:05.719111Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-20T17:41:05.719315Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"multinode-592246","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.115:2380"],"advertise-client-urls":["https://192.168.39.115:2379"]}
	{"level":"warn","ts":"2024-09-20T17:41:05.719451Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-20T17:41:05.719576Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-20T17:41:05.788737Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.115:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-20T17:41:05.788842Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.115:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-20T17:41:05.790448Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"c7abbacde39fb9a4","current-leader-member-id":"c7abbacde39fb9a4"}
	{"level":"info","ts":"2024-09-20T17:41:05.794128Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.115:2380"}
	{"level":"info","ts":"2024-09-20T17:41:05.794302Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.115:2380"}
	{"level":"info","ts":"2024-09-20T17:41:05.794313Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"multinode-592246","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.115:2380"],"advertise-client-urls":["https://192.168.39.115:2379"]}
	
	
	==> kernel <==
	 17:44:35 up 9 min,  0 users,  load average: 0.30, 0.29, 0.14
	Linux multinode-592246 5.10.207 #1 SMP Fri Sep 20 03:13:51 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [33cbb2c4dce5862b3f2c4ce603b1d94a12c4e89fe1a398e64490f33c2a491720] <==
	I0920 17:40:18.252126       1 main.go:322] Node multinode-592246-m03 has CIDR [10.244.4.0/24] 
	I0920 17:40:28.255328       1 main.go:295] Handling node with IPs: map[192.168.39.115:{}]
	I0920 17:40:28.255392       1 main.go:299] handling current node
	I0920 17:40:28.255412       1 main.go:295] Handling node with IPs: map[192.168.39.170:{}]
	I0920 17:40:28.255418       1 main.go:322] Node multinode-592246-m02 has CIDR [10.244.1.0/24] 
	I0920 17:40:28.255578       1 main.go:295] Handling node with IPs: map[192.168.39.38:{}]
	I0920 17:40:28.255595       1 main.go:322] Node multinode-592246-m03 has CIDR [10.244.4.0/24] 
	I0920 17:40:38.248167       1 main.go:295] Handling node with IPs: map[192.168.39.115:{}]
	I0920 17:40:38.248375       1 main.go:299] handling current node
	I0920 17:40:38.248409       1 main.go:295] Handling node with IPs: map[192.168.39.170:{}]
	I0920 17:40:38.248416       1 main.go:322] Node multinode-592246-m02 has CIDR [10.244.1.0/24] 
	I0920 17:40:38.248553       1 main.go:295] Handling node with IPs: map[192.168.39.38:{}]
	I0920 17:40:38.248574       1 main.go:322] Node multinode-592246-m03 has CIDR [10.244.4.0/24] 
	I0920 17:40:48.253607       1 main.go:295] Handling node with IPs: map[192.168.39.38:{}]
	I0920 17:40:48.253716       1 main.go:322] Node multinode-592246-m03 has CIDR [10.244.4.0/24] 
	I0920 17:40:48.253927       1 main.go:295] Handling node with IPs: map[192.168.39.115:{}]
	I0920 17:40:48.253949       1 main.go:299] handling current node
	I0920 17:40:48.253972       1 main.go:295] Handling node with IPs: map[192.168.39.170:{}]
	I0920 17:40:48.253977       1 main.go:322] Node multinode-592246-m02 has CIDR [10.244.1.0/24] 
	I0920 17:40:58.256897       1 main.go:295] Handling node with IPs: map[192.168.39.115:{}]
	I0920 17:40:58.256998       1 main.go:299] handling current node
	I0920 17:40:58.257037       1 main.go:295] Handling node with IPs: map[192.168.39.170:{}]
	I0920 17:40:58.257046       1 main.go:322] Node multinode-592246-m02 has CIDR [10.244.1.0/24] 
	I0920 17:40:58.257273       1 main.go:295] Handling node with IPs: map[192.168.39.38:{}]
	I0920 17:40:58.257295       1 main.go:322] Node multinode-592246-m03 has CIDR [10.244.4.0/24] 
	
	
	==> kindnet [79bea98b7a32af8003c2b53e82f3bcedd25d46534ba4c58b1562a1d4dd62899f] <==
	I0920 17:43:50.393547       1 main.go:322] Node multinode-592246-m03 has CIDR [10.244.4.0/24] 
	I0920 17:44:00.396361       1 main.go:295] Handling node with IPs: map[192.168.39.115:{}]
	I0920 17:44:00.396530       1 main.go:299] handling current node
	I0920 17:44:00.396565       1 main.go:295] Handling node with IPs: map[192.168.39.170:{}]
	I0920 17:44:00.396590       1 main.go:322] Node multinode-592246-m02 has CIDR [10.244.1.0/24] 
	I0920 17:44:00.396805       1 main.go:295] Handling node with IPs: map[192.168.39.38:{}]
	I0920 17:44:00.396908       1 main.go:322] Node multinode-592246-m03 has CIDR [10.244.4.0/24] 
	I0920 17:44:10.398374       1 main.go:295] Handling node with IPs: map[192.168.39.115:{}]
	I0920 17:44:10.398425       1 main.go:299] handling current node
	I0920 17:44:10.398446       1 main.go:295] Handling node with IPs: map[192.168.39.170:{}]
	I0920 17:44:10.398453       1 main.go:322] Node multinode-592246-m02 has CIDR [10.244.1.0/24] 
	I0920 17:44:10.398621       1 main.go:295] Handling node with IPs: map[192.168.39.38:{}]
	I0920 17:44:10.398629       1 main.go:322] Node multinode-592246-m03 has CIDR [10.244.4.0/24] 
	I0920 17:44:20.393389       1 main.go:295] Handling node with IPs: map[192.168.39.115:{}]
	I0920 17:44:20.393456       1 main.go:299] handling current node
	I0920 17:44:20.393472       1 main.go:295] Handling node with IPs: map[192.168.39.170:{}]
	I0920 17:44:20.393478       1 main.go:322] Node multinode-592246-m02 has CIDR [10.244.1.0/24] 
	I0920 17:44:20.393672       1 main.go:295] Handling node with IPs: map[192.168.39.38:{}]
	I0920 17:44:20.393689       1 main.go:322] Node multinode-592246-m03 has CIDR [10.244.2.0/24] 
	I0920 17:44:30.400389       1 main.go:295] Handling node with IPs: map[192.168.39.115:{}]
	I0920 17:44:30.400465       1 main.go:299] handling current node
	I0920 17:44:30.400499       1 main.go:295] Handling node with IPs: map[192.168.39.170:{}]
	I0920 17:44:30.400506       1 main.go:322] Node multinode-592246-m02 has CIDR [10.244.1.0/24] 
	I0920 17:44:30.400784       1 main.go:295] Handling node with IPs: map[192.168.39.38:{}]
	I0920 17:44:30.400808       1 main.go:322] Node multinode-592246-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [203ca48efd7a7e366da2e2947cb259f02447634e8f2a44df78dcefd8909c9a0e] <==
	I0920 17:42:52.370555       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0920 17:42:52.375966       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0920 17:42:52.382604       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0920 17:42:52.385077       1 policy_source.go:224] refreshing policies
	I0920 17:42:52.390083       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0920 17:42:52.393460       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0920 17:42:52.403370       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0920 17:42:52.403791       1 aggregator.go:171] initial CRD sync complete...
	I0920 17:42:52.404075       1 autoregister_controller.go:144] Starting autoregister controller
	I0920 17:42:52.404128       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0920 17:42:52.404857       1 cache.go:39] Caches are synced for autoregister controller
	I0920 17:42:52.470862       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	E0920 17:42:53.122236       1 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E0920 17:42:53.122283       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 6.499µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E0920 17:42:53.123442       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E0920 17:42:53.124760       1 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E0920 17:42:53.126130       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="3.886743ms" method="PATCH" path="/api/v1/namespaces/kube-system/events/etcd-multinode-592246.17f7049fdaca5f60" result=null
	I0920 17:42:53.271289       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0920 17:42:55.791782       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0920 17:42:55.812925       1 controller.go:615] quota admission added evaluator for: endpoints
	I0920 17:42:55.953631       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0920 17:42:55.963758       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0920 17:42:55.975918       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0920 17:42:56.072245       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0920 17:42:56.095224       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-apiserver [33ec6262554fcd5394f13c012ea27e8a9bf53faf30d5cb85237b4a3496811c0e] <==
	I0920 17:41:05.746928       1 apiservice_controller.go:134] Shutting down APIServiceRegistrationController
	I0920 17:41:05.746942       1 system_namespaces_controller.go:76] Shutting down system namespaces controller
	I0920 17:41:05.746963       1 customresource_discovery_controller.go:328] Shutting down DiscoveryController
	I0920 17:41:05.746993       1 autoregister_controller.go:168] Shutting down autoregister controller
	I0920 17:41:05.747017       1 remote_available_controller.go:427] Shutting down RemoteAvailability controller
	I0920 17:41:05.747113       1 crdregistration_controller.go:145] Shutting down crd-autoregister controller
	I0920 17:41:05.747136       1 crd_finalizer.go:281] Shutting down CRDFinalizer
	I0920 17:41:05.747240       1 establishing_controller.go:92] Shutting down EstablishingController
	I0920 17:41:05.747268       1 controller.go:120] Shutting down OpenAPI V3 controller
	I0920 17:41:05.747354       1 controller.go:170] Shutting down OpenAPI controller
	I0920 17:41:05.747423       1 gc_controller.go:91] Shutting down apiserver lease garbage collector
	I0920 17:41:05.747434       1 local_available_controller.go:172] Shutting down LocalAvailability controller
	I0920 17:41:05.747447       1 apiapproval_controller.go:201] Shutting down KubernetesAPIApprovalPolicyConformantConditionController
	I0920 17:41:05.747509       1 nonstructuralschema_controller.go:207] Shutting down NonStructuralSchemaConditionController
	I0920 17:41:05.747537       1 naming_controller.go:305] Shutting down NamingConditionController
	I0920 17:41:05.747554       1 apf_controller.go:389] Shutting down API Priority and Fairness config worker
	I0920 17:41:05.747805       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I0920 17:41:05.747838       1 controller.go:84] Shutting down OpenAPI AggregationController
	I0920 17:41:05.751394       1 dynamic_cafile_content.go:174] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0920 17:41:05.755050       1 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0920 17:41:05.755141       1 dynamic_cafile_content.go:174] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0920 17:41:05.756679       1 dynamic_serving_content.go:149] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0920 17:41:05.756975       1 controller.go:157] Shutting down quota evaluator
	I0920 17:41:05.757020       1 controller.go:176] quota evaluator worker shutdown
	I0920 17:41:05.757885       1 secure_serving.go:258] Stopped listening on [::]:8443
	
	
	==> kube-controller-manager [2a49b273fff83658fbf7e813b47a71246aa5dc45e49d5cc2db55d579f621cd95] <==
	I0920 17:43:52.348111       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-592246-m02"
	I0920 17:43:52.366875       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-592246-m02"
	I0920 17:43:52.378542       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="231.893µs"
	I0920 17:43:52.395934       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="36.89µs"
	I0920 17:43:55.749159       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-592246-m02"
	I0920 17:43:56.838548       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="10.847869ms"
	I0920 17:43:56.838830       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="51.84µs"
	I0920 17:44:03.094482       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-592246-m02"
	I0920 17:44:10.541979       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-592246-m03"
	I0920 17:44:10.572713       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-592246-m03"
	I0920 17:44:10.810436       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-592246-m03"
	I0920 17:44:10.810976       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-592246-m02"
	I0920 17:44:12.044105       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-592246-m03\" does not exist"
	I0920 17:44:12.044268       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-592246-m02"
	I0920 17:44:12.059221       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-592246-m03" podCIDRs=["10.244.2.0/24"]
	I0920 17:44:12.059277       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-592246-m03"
	I0920 17:44:12.059342       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-592246-m03"
	I0920 17:44:12.070952       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-592246-m03"
	I0920 17:44:12.334090       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-592246-m03"
	I0920 17:44:12.676390       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-592246-m03"
	I0920 17:44:15.834574       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-592246-m03"
	I0920 17:44:22.291722       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-592246-m03"
	I0920 17:44:31.631492       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-592246-m02"
	I0920 17:44:31.631626       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-592246-m03"
	I0920 17:44:31.652814       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-592246-m03"
	
	
	==> kube-controller-manager [ca5e246374ae005eb084399c79df21d8dc78d6218c643826ddfec1d4d1ff26d8] <==
	I0920 17:38:39.321880       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-592246-m03"
	I0920 17:38:39.322224       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-592246-m02"
	I0920 17:38:40.460454       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-592246-m02"
	I0920 17:38:40.460653       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-592246-m03\" does not exist"
	I0920 17:38:40.477844       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-592246-m03" podCIDRs=["10.244.4.0/24"]
	I0920 17:38:40.477886       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-592246-m03"
	I0920 17:38:40.477913       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-592246-m03"
	I0920 17:38:40.500958       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-592246-m03"
	I0920 17:38:40.565384       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-592246-m03"
	I0920 17:38:40.877043       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-592246-m03"
	I0920 17:38:41.195467       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-592246-m03"
	I0920 17:38:50.677031       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-592246-m03"
	I0920 17:39:00.170910       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-592246-m02"
	I0920 17:39:00.171429       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-592246-m03"
	I0920 17:39:00.184131       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-592246-m03"
	I0920 17:39:00.555815       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-592246-m03"
	I0920 17:39:45.577434       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-592246-m03"
	I0920 17:39:45.577714       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-592246-m02"
	I0920 17:39:45.579853       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-592246-m03"
	I0920 17:39:45.607460       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-592246-m02"
	I0920 17:39:45.611985       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-592246-m03"
	I0920 17:39:45.646132       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="11.500365ms"
	I0920 17:39:45.647120       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="23.068µs"
	I0920 17:39:50.735658       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-592246-m02"
	I0920 17:40:00.817761       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-592246-m03"
	
	
	==> kube-proxy [43d580bf9876b78f6a151a49a9fc863e01628a121c31c8106030c76a6949e273] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0920 17:36:07.426655       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0920 17:36:07.477866       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.115"]
	E0920 17:36:07.478051       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 17:36:07.577099       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0920 17:36:07.577148       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0920 17:36:07.577222       1 server_linux.go:169] "Using iptables Proxier"
	I0920 17:36:07.579737       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 17:36:07.580038       1 server.go:483] "Version info" version="v1.31.1"
	I0920 17:36:07.580061       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 17:36:07.581632       1 config.go:199] "Starting service config controller"
	I0920 17:36:07.581683       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 17:36:07.581732       1 config.go:105] "Starting endpoint slice config controller"
	I0920 17:36:07.581749       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 17:36:07.582339       1 config.go:328] "Starting node config controller"
	I0920 17:36:07.582364       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 17:36:07.682406       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0920 17:36:07.682494       1 shared_informer.go:320] Caches are synced for service config
	I0920 17:36:07.682584       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [8ccbb8ac9634336f88fdd6b655a976f2ff3bf355d24c77128abb0afa2d10e57b] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0920 17:42:50.094041       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0920 17:42:52.385916       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.115"]
	E0920 17:42:52.386551       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 17:42:52.503824       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0920 17:42:52.503921       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0920 17:42:52.503960       1 server_linux.go:169] "Using iptables Proxier"
	I0920 17:42:52.506732       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 17:42:52.507121       1 server.go:483] "Version info" version="v1.31.1"
	I0920 17:42:52.507747       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 17:42:52.509809       1 config.go:199] "Starting service config controller"
	I0920 17:42:52.509922       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 17:42:52.510028       1 config.go:105] "Starting endpoint slice config controller"
	I0920 17:42:52.510115       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 17:42:52.511644       1 config.go:328] "Starting node config controller"
	I0920 17:42:52.511716       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 17:42:52.612344       1 shared_informer.go:320] Caches are synced for node config
	I0920 17:42:52.612531       1 shared_informer.go:320] Caches are synced for service config
	I0920 17:42:52.612545       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [18612a28ae5022d41607cfde2996aff0acffaa060ef96c055b522b167dc900d3] <==
	E0920 17:35:58.675350       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 17:35:58.674082       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0920 17:35:58.675498       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0920 17:35:59.497242       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0920 17:35:59.497355       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 17:35:59.503164       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0920 17:35:59.503336       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 17:35:59.534618       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0920 17:35:59.534709       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 17:35:59.576662       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0920 17:35:59.576784       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0920 17:35:59.690862       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0920 17:35:59.690955       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 17:35:59.763228       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0920 17:35:59.763312       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 17:35:59.804468       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0920 17:35:59.804570       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 17:35:59.935401       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0920 17:35:59.935863       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0920 17:35:59.983417       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0920 17:35:59.983586       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 17:35:59.988632       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0920 17:35:59.988679       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0920 17:36:01.760804       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0920 17:41:05.718162       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [aa7a4ac935e0470da3205f280946a5075ad00d4d398edf2ecbd0d6b72f3fa634] <==
	I0920 17:42:50.515286       1 serving.go:386] Generated self-signed cert in-memory
	W0920 17:42:52.315248       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0920 17:42:52.315328       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0920 17:42:52.315338       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0920 17:42:52.315350       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0920 17:42:52.383988       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0920 17:42:52.384034       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 17:42:52.402775       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0920 17:42:52.402834       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0920 17:42:52.403107       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0920 17:42:52.403289       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0920 17:42:52.503016       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 20 17:43:05 multinode-592246 kubelet[3645]: E0920 17:43:05.238650    3645 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726854185238047095,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:43:05 multinode-592246 kubelet[3645]: E0920 17:43:05.238682    3645 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726854185238047095,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:43:15 multinode-592246 kubelet[3645]: E0920 17:43:15.241407    3645 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726854195240101315,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:43:15 multinode-592246 kubelet[3645]: E0920 17:43:15.241861    3645 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726854195240101315,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:43:25 multinode-592246 kubelet[3645]: E0920 17:43:25.243375    3645 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726854205242938113,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:43:25 multinode-592246 kubelet[3645]: E0920 17:43:25.243427    3645 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726854205242938113,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:43:35 multinode-592246 kubelet[3645]: E0920 17:43:35.245454    3645 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726854215245069981,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:43:35 multinode-592246 kubelet[3645]: E0920 17:43:35.245497    3645 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726854215245069981,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:43:45 multinode-592246 kubelet[3645]: E0920 17:43:45.246961    3645 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726854225246508692,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:43:45 multinode-592246 kubelet[3645]: E0920 17:43:45.248761    3645 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726854225246508692,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:43:55 multinode-592246 kubelet[3645]: E0920 17:43:55.251283    3645 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726854235250495462,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:43:55 multinode-592246 kubelet[3645]: E0920 17:43:55.251588    3645 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726854235250495462,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:43:55 multinode-592246 kubelet[3645]: E0920 17:43:55.260458    3645 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 20 17:43:55 multinode-592246 kubelet[3645]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 20 17:43:55 multinode-592246 kubelet[3645]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 20 17:43:55 multinode-592246 kubelet[3645]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 20 17:43:55 multinode-592246 kubelet[3645]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 20 17:44:05 multinode-592246 kubelet[3645]: E0920 17:44:05.259095    3645 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726854245254871452,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:44:05 multinode-592246 kubelet[3645]: E0920 17:44:05.259219    3645 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726854245254871452,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:44:15 multinode-592246 kubelet[3645]: E0920 17:44:15.266996    3645 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726854255262280972,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:44:15 multinode-592246 kubelet[3645]: E0920 17:44:15.267064    3645 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726854255262280972,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:44:25 multinode-592246 kubelet[3645]: E0920 17:44:25.274730    3645 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726854265269846376,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:44:25 multinode-592246 kubelet[3645]: E0920 17:44:25.275387    3645 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726854265269846376,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:44:35 multinode-592246 kubelet[3645]: E0920 17:44:35.277632    3645 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726854275276949245,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:44:35 multinode-592246 kubelet[3645]: E0920 17:44:35.277703    3645 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726854275276949245,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0920 17:44:34.132050   47145 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19672-8777/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-592246 -n multinode-592246
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-592246 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (333.25s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (144.96s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-592246 stop
E0920 17:44:43.001974   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/functional-945494/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-592246 stop: exit status 82 (2m0.500284842s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-592246-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-592246 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-592246 status
E0920 17:46:39.932169   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/functional-945494/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:351: (dbg) Done: out/minikube-linux-amd64 -p multinode-592246 status: (18.79542951s)
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-592246 status --alsologtostderr
multinode_test.go:358: (dbg) Done: out/minikube-linux-amd64 -p multinode-592246 status --alsologtostderr: (3.391612325s)
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-linux-amd64 -p multinode-592246 status --alsologtostderr": 
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-linux-amd64 -p multinode-592246 status --alsologtostderr": 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-592246 -n multinode-592246
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-592246 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-592246 logs -n 25: (1.588205291s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-592246 ssh -n                                                                 | multinode-592246 | jenkins | v1.34.0 | 20 Sep 24 17:38 UTC | 20 Sep 24 17:38 UTC |
	|         | multinode-592246-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-592246 cp multinode-592246-m02:/home/docker/cp-test.txt                       | multinode-592246 | jenkins | v1.34.0 | 20 Sep 24 17:38 UTC | 20 Sep 24 17:38 UTC |
	|         | multinode-592246:/home/docker/cp-test_multinode-592246-m02_multinode-592246.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-592246 ssh -n                                                                 | multinode-592246 | jenkins | v1.34.0 | 20 Sep 24 17:38 UTC | 20 Sep 24 17:38 UTC |
	|         | multinode-592246-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-592246 ssh -n multinode-592246 sudo cat                                       | multinode-592246 | jenkins | v1.34.0 | 20 Sep 24 17:38 UTC | 20 Sep 24 17:38 UTC |
	|         | /home/docker/cp-test_multinode-592246-m02_multinode-592246.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-592246 cp multinode-592246-m02:/home/docker/cp-test.txt                       | multinode-592246 | jenkins | v1.34.0 | 20 Sep 24 17:38 UTC | 20 Sep 24 17:38 UTC |
	|         | multinode-592246-m03:/home/docker/cp-test_multinode-592246-m02_multinode-592246-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-592246 ssh -n                                                                 | multinode-592246 | jenkins | v1.34.0 | 20 Sep 24 17:38 UTC | 20 Sep 24 17:38 UTC |
	|         | multinode-592246-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-592246 ssh -n multinode-592246-m03 sudo cat                                   | multinode-592246 | jenkins | v1.34.0 | 20 Sep 24 17:38 UTC | 20 Sep 24 17:38 UTC |
	|         | /home/docker/cp-test_multinode-592246-m02_multinode-592246-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-592246 cp testdata/cp-test.txt                                                | multinode-592246 | jenkins | v1.34.0 | 20 Sep 24 17:38 UTC | 20 Sep 24 17:38 UTC |
	|         | multinode-592246-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-592246 ssh -n                                                                 | multinode-592246 | jenkins | v1.34.0 | 20 Sep 24 17:38 UTC | 20 Sep 24 17:38 UTC |
	|         | multinode-592246-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-592246 cp multinode-592246-m03:/home/docker/cp-test.txt                       | multinode-592246 | jenkins | v1.34.0 | 20 Sep 24 17:38 UTC | 20 Sep 24 17:38 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3830266903/001/cp-test_multinode-592246-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-592246 ssh -n                                                                 | multinode-592246 | jenkins | v1.34.0 | 20 Sep 24 17:38 UTC | 20 Sep 24 17:38 UTC |
	|         | multinode-592246-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-592246 cp multinode-592246-m03:/home/docker/cp-test.txt                       | multinode-592246 | jenkins | v1.34.0 | 20 Sep 24 17:38 UTC | 20 Sep 24 17:38 UTC |
	|         | multinode-592246:/home/docker/cp-test_multinode-592246-m03_multinode-592246.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-592246 ssh -n                                                                 | multinode-592246 | jenkins | v1.34.0 | 20 Sep 24 17:38 UTC | 20 Sep 24 17:38 UTC |
	|         | multinode-592246-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-592246 ssh -n multinode-592246 sudo cat                                       | multinode-592246 | jenkins | v1.34.0 | 20 Sep 24 17:38 UTC | 20 Sep 24 17:38 UTC |
	|         | /home/docker/cp-test_multinode-592246-m03_multinode-592246.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-592246 cp multinode-592246-m03:/home/docker/cp-test.txt                       | multinode-592246 | jenkins | v1.34.0 | 20 Sep 24 17:38 UTC | 20 Sep 24 17:38 UTC |
	|         | multinode-592246-m02:/home/docker/cp-test_multinode-592246-m03_multinode-592246-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-592246 ssh -n                                                                 | multinode-592246 | jenkins | v1.34.0 | 20 Sep 24 17:38 UTC | 20 Sep 24 17:38 UTC |
	|         | multinode-592246-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-592246 ssh -n multinode-592246-m02 sudo cat                                   | multinode-592246 | jenkins | v1.34.0 | 20 Sep 24 17:38 UTC | 20 Sep 24 17:38 UTC |
	|         | /home/docker/cp-test_multinode-592246-m03_multinode-592246-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-592246 node stop m03                                                          | multinode-592246 | jenkins | v1.34.0 | 20 Sep 24 17:38 UTC | 20 Sep 24 17:38 UTC |
	| node    | multinode-592246 node start                                                             | multinode-592246 | jenkins | v1.34.0 | 20 Sep 24 17:38 UTC | 20 Sep 24 17:39 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-592246                                                                | multinode-592246 | jenkins | v1.34.0 | 20 Sep 24 17:39 UTC |                     |
	| stop    | -p multinode-592246                                                                     | multinode-592246 | jenkins | v1.34.0 | 20 Sep 24 17:39 UTC |                     |
	| start   | -p multinode-592246                                                                     | multinode-592246 | jenkins | v1.34.0 | 20 Sep 24 17:41 UTC | 20 Sep 24 17:44 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-592246                                                                | multinode-592246 | jenkins | v1.34.0 | 20 Sep 24 17:44 UTC |                     |
	| node    | multinode-592246 node delete                                                            | multinode-592246 | jenkins | v1.34.0 | 20 Sep 24 17:44 UTC | 20 Sep 24 17:44 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-592246 stop                                                                   | multinode-592246 | jenkins | v1.34.0 | 20 Sep 24 17:44 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 17:41:04
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 17:41:04.775899   46025 out.go:345] Setting OutFile to fd 1 ...
	I0920 17:41:04.776019   46025 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:41:04.776027   46025 out.go:358] Setting ErrFile to fd 2...
	I0920 17:41:04.776032   46025 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:41:04.776228   46025 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-8777/.minikube/bin
	I0920 17:41:04.776882   46025 out.go:352] Setting JSON to false
	I0920 17:41:04.777816   46025 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5008,"bootTime":1726849057,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 17:41:04.777928   46025 start.go:139] virtualization: kvm guest
	I0920 17:41:04.780468   46025 out.go:177] * [multinode-592246] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 17:41:04.782007   46025 notify.go:220] Checking for updates...
	I0920 17:41:04.782061   46025 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 17:41:04.783438   46025 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 17:41:04.784720   46025 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19672-8777/kubeconfig
	I0920 17:41:04.785982   46025 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-8777/.minikube
	I0920 17:41:04.787298   46025 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 17:41:04.788639   46025 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 17:41:04.790380   46025 config.go:182] Loaded profile config "multinode-592246": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 17:41:04.790500   46025 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 17:41:04.790995   46025 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:41:04.791067   46025 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:41:04.807205   46025 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41099
	I0920 17:41:04.807705   46025 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:41:04.808348   46025 main.go:141] libmachine: Using API Version  1
	I0920 17:41:04.808365   46025 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:41:04.808727   46025 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:41:04.808917   46025 main.go:141] libmachine: (multinode-592246) Calling .DriverName
	I0920 17:41:04.846738   46025 out.go:177] * Using the kvm2 driver based on existing profile
	I0920 17:41:04.848040   46025 start.go:297] selected driver: kvm2
	I0920 17:41:04.848060   46025 start.go:901] validating driver "kvm2" against &{Name:multinode-592246 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-592246 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.115 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.170 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.38 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 17:41:04.848208   46025 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 17:41:04.848562   46025 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 17:41:04.848668   46025 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19672-8777/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 17:41:04.864156   46025 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 17:41:04.864910   46025 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 17:41:04.864944   46025 cni.go:84] Creating CNI manager for ""
	I0920 17:41:04.865010   46025 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0920 17:41:04.865062   46025 start.go:340] cluster config:
	{Name:multinode-592246 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-592246 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.115 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.170 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.38 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 17:41:04.865191   46025 iso.go:125] acquiring lock: {Name:mkba95ef0488e46f622333e9f317f43def93040b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 17:41:04.867437   46025 out.go:177] * Starting "multinode-592246" primary control-plane node in "multinode-592246" cluster
	I0920 17:41:04.869049   46025 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 17:41:04.869120   46025 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0920 17:41:04.869134   46025 cache.go:56] Caching tarball of preloaded images
	I0920 17:41:04.869256   46025 preload.go:172] Found /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 17:41:04.869269   46025 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0920 17:41:04.869395   46025 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/multinode-592246/config.json ...
	I0920 17:41:04.869632   46025 start.go:360] acquireMachinesLock for multinode-592246: {Name:mkfeedb385cf08b5d2aa00913e85815d02a180c2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 17:41:04.869687   46025 start.go:364] duration metric: took 33.418µs to acquireMachinesLock for "multinode-592246"
	I0920 17:41:04.869704   46025 start.go:96] Skipping create...Using existing machine configuration
	I0920 17:41:04.869710   46025 fix.go:54] fixHost starting: 
	I0920 17:41:04.870023   46025 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:41:04.870073   46025 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:41:04.884965   46025 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35873
	I0920 17:41:04.885479   46025 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:41:04.886129   46025 main.go:141] libmachine: Using API Version  1
	I0920 17:41:04.886155   46025 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:41:04.886499   46025 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:41:04.886714   46025 main.go:141] libmachine: (multinode-592246) Calling .DriverName
	I0920 17:41:04.886838   46025 main.go:141] libmachine: (multinode-592246) Calling .GetState
	I0920 17:41:04.888728   46025 fix.go:112] recreateIfNeeded on multinode-592246: state=Running err=<nil>
	W0920 17:41:04.888771   46025 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 17:41:04.891125   46025 out.go:177] * Updating the running kvm2 "multinode-592246" VM ...
	I0920 17:41:04.892701   46025 machine.go:93] provisionDockerMachine start ...
	I0920 17:41:04.892728   46025 main.go:141] libmachine: (multinode-592246) Calling .DriverName
	I0920 17:41:04.892976   46025 main.go:141] libmachine: (multinode-592246) Calling .GetSSHHostname
	I0920 17:41:04.896148   46025 main.go:141] libmachine: (multinode-592246) DBG | domain multinode-592246 has defined MAC address 52:54:00:15:f5:5b in network mk-multinode-592246
	I0920 17:41:04.896626   46025 main.go:141] libmachine: (multinode-592246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:f5:5b", ip: ""} in network mk-multinode-592246: {Iface:virbr1 ExpiryTime:2024-09-20 18:35:34 +0000 UTC Type:0 Mac:52:54:00:15:f5:5b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:multinode-592246 Clientid:01:52:54:00:15:f5:5b}
	I0920 17:41:04.896654   46025 main.go:141] libmachine: (multinode-592246) DBG | domain multinode-592246 has defined IP address 192.168.39.115 and MAC address 52:54:00:15:f5:5b in network mk-multinode-592246
	I0920 17:41:04.896816   46025 main.go:141] libmachine: (multinode-592246) Calling .GetSSHPort
	I0920 17:41:04.896998   46025 main.go:141] libmachine: (multinode-592246) Calling .GetSSHKeyPath
	I0920 17:41:04.897142   46025 main.go:141] libmachine: (multinode-592246) Calling .GetSSHKeyPath
	I0920 17:41:04.897236   46025 main.go:141] libmachine: (multinode-592246) Calling .GetSSHUsername
	I0920 17:41:04.897405   46025 main.go:141] libmachine: Using SSH client type: native
	I0920 17:41:04.897638   46025 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.115 22 <nil> <nil>}
	I0920 17:41:04.897650   46025 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 17:41:05.015262   46025 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-592246
	
	I0920 17:41:05.015293   46025 main.go:141] libmachine: (multinode-592246) Calling .GetMachineName
	I0920 17:41:05.015623   46025 buildroot.go:166] provisioning hostname "multinode-592246"
	I0920 17:41:05.015647   46025 main.go:141] libmachine: (multinode-592246) Calling .GetMachineName
	I0920 17:41:05.015836   46025 main.go:141] libmachine: (multinode-592246) Calling .GetSSHHostname
	I0920 17:41:05.018677   46025 main.go:141] libmachine: (multinode-592246) DBG | domain multinode-592246 has defined MAC address 52:54:00:15:f5:5b in network mk-multinode-592246
	I0920 17:41:05.019049   46025 main.go:141] libmachine: (multinode-592246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:f5:5b", ip: ""} in network mk-multinode-592246: {Iface:virbr1 ExpiryTime:2024-09-20 18:35:34 +0000 UTC Type:0 Mac:52:54:00:15:f5:5b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:multinode-592246 Clientid:01:52:54:00:15:f5:5b}
	I0920 17:41:05.019078   46025 main.go:141] libmachine: (multinode-592246) DBG | domain multinode-592246 has defined IP address 192.168.39.115 and MAC address 52:54:00:15:f5:5b in network mk-multinode-592246
	I0920 17:41:05.019264   46025 main.go:141] libmachine: (multinode-592246) Calling .GetSSHPort
	I0920 17:41:05.019510   46025 main.go:141] libmachine: (multinode-592246) Calling .GetSSHKeyPath
	I0920 17:41:05.019663   46025 main.go:141] libmachine: (multinode-592246) Calling .GetSSHKeyPath
	I0920 17:41:05.019809   46025 main.go:141] libmachine: (multinode-592246) Calling .GetSSHUsername
	I0920 17:41:05.019961   46025 main.go:141] libmachine: Using SSH client type: native
	I0920 17:41:05.020139   46025 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.115 22 <nil> <nil>}
	I0920 17:41:05.020150   46025 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-592246 && echo "multinode-592246" | sudo tee /etc/hostname
	I0920 17:41:05.149959   46025 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-592246
	
	I0920 17:41:05.149993   46025 main.go:141] libmachine: (multinode-592246) Calling .GetSSHHostname
	I0920 17:41:05.153075   46025 main.go:141] libmachine: (multinode-592246) DBG | domain multinode-592246 has defined MAC address 52:54:00:15:f5:5b in network mk-multinode-592246
	I0920 17:41:05.153497   46025 main.go:141] libmachine: (multinode-592246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:f5:5b", ip: ""} in network mk-multinode-592246: {Iface:virbr1 ExpiryTime:2024-09-20 18:35:34 +0000 UTC Type:0 Mac:52:54:00:15:f5:5b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:multinode-592246 Clientid:01:52:54:00:15:f5:5b}
	I0920 17:41:05.153519   46025 main.go:141] libmachine: (multinode-592246) DBG | domain multinode-592246 has defined IP address 192.168.39.115 and MAC address 52:54:00:15:f5:5b in network mk-multinode-592246
	I0920 17:41:05.153821   46025 main.go:141] libmachine: (multinode-592246) Calling .GetSSHPort
	I0920 17:41:05.154059   46025 main.go:141] libmachine: (multinode-592246) Calling .GetSSHKeyPath
	I0920 17:41:05.154273   46025 main.go:141] libmachine: (multinode-592246) Calling .GetSSHKeyPath
	I0920 17:41:05.154478   46025 main.go:141] libmachine: (multinode-592246) Calling .GetSSHUsername
	I0920 17:41:05.154690   46025 main.go:141] libmachine: Using SSH client type: native
	I0920 17:41:05.154910   46025 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.115 22 <nil> <nil>}
	I0920 17:41:05.154934   46025 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-592246' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-592246/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-592246' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 17:41:05.270956   46025 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 17:41:05.270994   46025 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19672-8777/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-8777/.minikube}
	I0920 17:41:05.271023   46025 buildroot.go:174] setting up certificates
	I0920 17:41:05.271033   46025 provision.go:84] configureAuth start
	I0920 17:41:05.271045   46025 main.go:141] libmachine: (multinode-592246) Calling .GetMachineName
	I0920 17:41:05.271343   46025 main.go:141] libmachine: (multinode-592246) Calling .GetIP
	I0920 17:41:05.274198   46025 main.go:141] libmachine: (multinode-592246) DBG | domain multinode-592246 has defined MAC address 52:54:00:15:f5:5b in network mk-multinode-592246
	I0920 17:41:05.274611   46025 main.go:141] libmachine: (multinode-592246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:f5:5b", ip: ""} in network mk-multinode-592246: {Iface:virbr1 ExpiryTime:2024-09-20 18:35:34 +0000 UTC Type:0 Mac:52:54:00:15:f5:5b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:multinode-592246 Clientid:01:52:54:00:15:f5:5b}
	I0920 17:41:05.274638   46025 main.go:141] libmachine: (multinode-592246) DBG | domain multinode-592246 has defined IP address 192.168.39.115 and MAC address 52:54:00:15:f5:5b in network mk-multinode-592246
	I0920 17:41:05.274774   46025 main.go:141] libmachine: (multinode-592246) Calling .GetSSHHostname
	I0920 17:41:05.277464   46025 main.go:141] libmachine: (multinode-592246) DBG | domain multinode-592246 has defined MAC address 52:54:00:15:f5:5b in network mk-multinode-592246
	I0920 17:41:05.277880   46025 main.go:141] libmachine: (multinode-592246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:f5:5b", ip: ""} in network mk-multinode-592246: {Iface:virbr1 ExpiryTime:2024-09-20 18:35:34 +0000 UTC Type:0 Mac:52:54:00:15:f5:5b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:multinode-592246 Clientid:01:52:54:00:15:f5:5b}
	I0920 17:41:05.277911   46025 main.go:141] libmachine: (multinode-592246) DBG | domain multinode-592246 has defined IP address 192.168.39.115 and MAC address 52:54:00:15:f5:5b in network mk-multinode-592246
	I0920 17:41:05.278039   46025 provision.go:143] copyHostCerts
	I0920 17:41:05.278067   46025 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem
	I0920 17:41:05.278100   46025 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem, removing ...
	I0920 17:41:05.278117   46025 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem
	I0920 17:41:05.278187   46025 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem (1082 bytes)
	I0920 17:41:05.278263   46025 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem
	I0920 17:41:05.278294   46025 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem, removing ...
	I0920 17:41:05.278309   46025 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem
	I0920 17:41:05.278338   46025 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem (1123 bytes)
	I0920 17:41:05.278379   46025 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem
	I0920 17:41:05.278396   46025 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem, removing ...
	I0920 17:41:05.278402   46025 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem
	I0920 17:41:05.278423   46025 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem (1675 bytes)
	I0920 17:41:05.278468   46025 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem org=jenkins.multinode-592246 san=[127.0.0.1 192.168.39.115 localhost minikube multinode-592246]
	I0920 17:41:05.396738   46025 provision.go:177] copyRemoteCerts
	I0920 17:41:05.396806   46025 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 17:41:05.396830   46025 main.go:141] libmachine: (multinode-592246) Calling .GetSSHHostname
	I0920 17:41:05.399987   46025 main.go:141] libmachine: (multinode-592246) DBG | domain multinode-592246 has defined MAC address 52:54:00:15:f5:5b in network mk-multinode-592246
	I0920 17:41:05.400358   46025 main.go:141] libmachine: (multinode-592246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:f5:5b", ip: ""} in network mk-multinode-592246: {Iface:virbr1 ExpiryTime:2024-09-20 18:35:34 +0000 UTC Type:0 Mac:52:54:00:15:f5:5b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:multinode-592246 Clientid:01:52:54:00:15:f5:5b}
	I0920 17:41:05.400388   46025 main.go:141] libmachine: (multinode-592246) DBG | domain multinode-592246 has defined IP address 192.168.39.115 and MAC address 52:54:00:15:f5:5b in network mk-multinode-592246
	I0920 17:41:05.400627   46025 main.go:141] libmachine: (multinode-592246) Calling .GetSSHPort
	I0920 17:41:05.400878   46025 main.go:141] libmachine: (multinode-592246) Calling .GetSSHKeyPath
	I0920 17:41:05.401147   46025 main.go:141] libmachine: (multinode-592246) Calling .GetSSHUsername
	I0920 17:41:05.401327   46025 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/multinode-592246/id_rsa Username:docker}
	I0920 17:41:05.489745   46025 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0920 17:41:05.489866   46025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0920 17:41:05.516428   46025 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0920 17:41:05.516514   46025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 17:41:05.541581   46025 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0920 17:41:05.541670   46025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0920 17:41:05.570087   46025 provision.go:87] duration metric: took 299.037398ms to configureAuth
	I0920 17:41:05.570125   46025 buildroot.go:189] setting minikube options for container-runtime
	I0920 17:41:05.570367   46025 config.go:182] Loaded profile config "multinode-592246": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 17:41:05.570453   46025 main.go:141] libmachine: (multinode-592246) Calling .GetSSHHostname
	I0920 17:41:05.573603   46025 main.go:141] libmachine: (multinode-592246) DBG | domain multinode-592246 has defined MAC address 52:54:00:15:f5:5b in network mk-multinode-592246
	I0920 17:41:05.573960   46025 main.go:141] libmachine: (multinode-592246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:f5:5b", ip: ""} in network mk-multinode-592246: {Iface:virbr1 ExpiryTime:2024-09-20 18:35:34 +0000 UTC Type:0 Mac:52:54:00:15:f5:5b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:multinode-592246 Clientid:01:52:54:00:15:f5:5b}
	I0920 17:41:05.573987   46025 main.go:141] libmachine: (multinode-592246) DBG | domain multinode-592246 has defined IP address 192.168.39.115 and MAC address 52:54:00:15:f5:5b in network mk-multinode-592246
	I0920 17:41:05.574190   46025 main.go:141] libmachine: (multinode-592246) Calling .GetSSHPort
	I0920 17:41:05.574406   46025 main.go:141] libmachine: (multinode-592246) Calling .GetSSHKeyPath
	I0920 17:41:05.574600   46025 main.go:141] libmachine: (multinode-592246) Calling .GetSSHKeyPath
	I0920 17:41:05.574763   46025 main.go:141] libmachine: (multinode-592246) Calling .GetSSHUsername
	I0920 17:41:05.574914   46025 main.go:141] libmachine: Using SSH client type: native
	I0920 17:41:05.575106   46025 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.115 22 <nil> <nil>}
	I0920 17:41:05.575126   46025 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 17:42:36.423601   46025 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 17:42:36.423630   46025 machine.go:96] duration metric: took 1m31.53091135s to provisionDockerMachine
	I0920 17:42:36.423644   46025 start.go:293] postStartSetup for "multinode-592246" (driver="kvm2")
	I0920 17:42:36.423654   46025 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 17:42:36.423673   46025 main.go:141] libmachine: (multinode-592246) Calling .DriverName
	I0920 17:42:36.424043   46025 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 17:42:36.424077   46025 main.go:141] libmachine: (multinode-592246) Calling .GetSSHHostname
	I0920 17:42:36.428263   46025 main.go:141] libmachine: (multinode-592246) DBG | domain multinode-592246 has defined MAC address 52:54:00:15:f5:5b in network mk-multinode-592246
	I0920 17:42:36.428759   46025 main.go:141] libmachine: (multinode-592246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:f5:5b", ip: ""} in network mk-multinode-592246: {Iface:virbr1 ExpiryTime:2024-09-20 18:35:34 +0000 UTC Type:0 Mac:52:54:00:15:f5:5b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:multinode-592246 Clientid:01:52:54:00:15:f5:5b}
	I0920 17:42:36.428795   46025 main.go:141] libmachine: (multinode-592246) DBG | domain multinode-592246 has defined IP address 192.168.39.115 and MAC address 52:54:00:15:f5:5b in network mk-multinode-592246
	I0920 17:42:36.429046   46025 main.go:141] libmachine: (multinode-592246) Calling .GetSSHPort
	I0920 17:42:36.429262   46025 main.go:141] libmachine: (multinode-592246) Calling .GetSSHKeyPath
	I0920 17:42:36.429443   46025 main.go:141] libmachine: (multinode-592246) Calling .GetSSHUsername
	I0920 17:42:36.429599   46025 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/multinode-592246/id_rsa Username:docker}
	I0920 17:42:36.518170   46025 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 17:42:36.522871   46025 command_runner.go:130] > NAME=Buildroot
	I0920 17:42:36.522898   46025 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0920 17:42:36.522903   46025 command_runner.go:130] > ID=buildroot
	I0920 17:42:36.522907   46025 command_runner.go:130] > VERSION_ID=2023.02.9
	I0920 17:42:36.522913   46025 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0920 17:42:36.523062   46025 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 17:42:36.523085   46025 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8777/.minikube/addons for local assets ...
	I0920 17:42:36.523148   46025 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8777/.minikube/files for local assets ...
	I0920 17:42:36.523225   46025 filesync.go:149] local asset: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem -> 159732.pem in /etc/ssl/certs
	I0920 17:42:36.523247   46025 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem -> /etc/ssl/certs/159732.pem
	I0920 17:42:36.523351   46025 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 17:42:36.533404   46025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem --> /etc/ssl/certs/159732.pem (1708 bytes)
	I0920 17:42:36.560789   46025 start.go:296] duration metric: took 137.13163ms for postStartSetup
	I0920 17:42:36.560829   46025 fix.go:56] duration metric: took 1m31.691118587s for fixHost
	I0920 17:42:36.560849   46025 main.go:141] libmachine: (multinode-592246) Calling .GetSSHHostname
	I0920 17:42:36.563832   46025 main.go:141] libmachine: (multinode-592246) DBG | domain multinode-592246 has defined MAC address 52:54:00:15:f5:5b in network mk-multinode-592246
	I0920 17:42:36.564299   46025 main.go:141] libmachine: (multinode-592246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:f5:5b", ip: ""} in network mk-multinode-592246: {Iface:virbr1 ExpiryTime:2024-09-20 18:35:34 +0000 UTC Type:0 Mac:52:54:00:15:f5:5b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:multinode-592246 Clientid:01:52:54:00:15:f5:5b}
	I0920 17:42:36.564336   46025 main.go:141] libmachine: (multinode-592246) DBG | domain multinode-592246 has defined IP address 192.168.39.115 and MAC address 52:54:00:15:f5:5b in network mk-multinode-592246
	I0920 17:42:36.564592   46025 main.go:141] libmachine: (multinode-592246) Calling .GetSSHPort
	I0920 17:42:36.564853   46025 main.go:141] libmachine: (multinode-592246) Calling .GetSSHKeyPath
	I0920 17:42:36.565128   46025 main.go:141] libmachine: (multinode-592246) Calling .GetSSHKeyPath
	I0920 17:42:36.565268   46025 main.go:141] libmachine: (multinode-592246) Calling .GetSSHUsername
	I0920 17:42:36.565446   46025 main.go:141] libmachine: Using SSH client type: native
	I0920 17:42:36.565610   46025 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.115 22 <nil> <nil>}
	I0920 17:42:36.565619   46025 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 17:42:36.678751   46025 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726854156.653403265
	
	I0920 17:42:36.678772   46025 fix.go:216] guest clock: 1726854156.653403265
	I0920 17:42:36.678778   46025 fix.go:229] Guest: 2024-09-20 17:42:36.653403265 +0000 UTC Remote: 2024-09-20 17:42:36.560833204 +0000 UTC m=+91.824072456 (delta=92.570061ms)
	I0920 17:42:36.678798   46025 fix.go:200] guest clock delta is within tolerance: 92.570061ms
	I0920 17:42:36.678804   46025 start.go:83] releasing machines lock for "multinode-592246", held for 1m31.809106382s
	I0920 17:42:36.678826   46025 main.go:141] libmachine: (multinode-592246) Calling .DriverName
	I0920 17:42:36.679086   46025 main.go:141] libmachine: (multinode-592246) Calling .GetIP
	I0920 17:42:36.681941   46025 main.go:141] libmachine: (multinode-592246) DBG | domain multinode-592246 has defined MAC address 52:54:00:15:f5:5b in network mk-multinode-592246
	I0920 17:42:36.682299   46025 main.go:141] libmachine: (multinode-592246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:f5:5b", ip: ""} in network mk-multinode-592246: {Iface:virbr1 ExpiryTime:2024-09-20 18:35:34 +0000 UTC Type:0 Mac:52:54:00:15:f5:5b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:multinode-592246 Clientid:01:52:54:00:15:f5:5b}
	I0920 17:42:36.682327   46025 main.go:141] libmachine: (multinode-592246) DBG | domain multinode-592246 has defined IP address 192.168.39.115 and MAC address 52:54:00:15:f5:5b in network mk-multinode-592246
	I0920 17:42:36.682493   46025 main.go:141] libmachine: (multinode-592246) Calling .DriverName
	I0920 17:42:36.683054   46025 main.go:141] libmachine: (multinode-592246) Calling .DriverName
	I0920 17:42:36.683269   46025 main.go:141] libmachine: (multinode-592246) Calling .DriverName
	I0920 17:42:36.683389   46025 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 17:42:36.683434   46025 main.go:141] libmachine: (multinode-592246) Calling .GetSSHHostname
	I0920 17:42:36.683511   46025 ssh_runner.go:195] Run: cat /version.json
	I0920 17:42:36.683537   46025 main.go:141] libmachine: (multinode-592246) Calling .GetSSHHostname
	I0920 17:42:36.686217   46025 main.go:141] libmachine: (multinode-592246) DBG | domain multinode-592246 has defined MAC address 52:54:00:15:f5:5b in network mk-multinode-592246
	I0920 17:42:36.686685   46025 main.go:141] libmachine: (multinode-592246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:f5:5b", ip: ""} in network mk-multinode-592246: {Iface:virbr1 ExpiryTime:2024-09-20 18:35:34 +0000 UTC Type:0 Mac:52:54:00:15:f5:5b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:multinode-592246 Clientid:01:52:54:00:15:f5:5b}
	I0920 17:42:36.686718   46025 main.go:141] libmachine: (multinode-592246) DBG | domain multinode-592246 has defined IP address 192.168.39.115 and MAC address 52:54:00:15:f5:5b in network mk-multinode-592246
	I0920 17:42:36.686746   46025 main.go:141] libmachine: (multinode-592246) DBG | domain multinode-592246 has defined MAC address 52:54:00:15:f5:5b in network mk-multinode-592246
	I0920 17:42:36.686838   46025 main.go:141] libmachine: (multinode-592246) Calling .GetSSHPort
	I0920 17:42:36.687029   46025 main.go:141] libmachine: (multinode-592246) Calling .GetSSHKeyPath
	I0920 17:42:36.687186   46025 main.go:141] libmachine: (multinode-592246) Calling .GetSSHUsername
	I0920 17:42:36.687307   46025 main.go:141] libmachine: (multinode-592246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:f5:5b", ip: ""} in network mk-multinode-592246: {Iface:virbr1 ExpiryTime:2024-09-20 18:35:34 +0000 UTC Type:0 Mac:52:54:00:15:f5:5b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:multinode-592246 Clientid:01:52:54:00:15:f5:5b}
	I0920 17:42:36.687329   46025 main.go:141] libmachine: (multinode-592246) DBG | domain multinode-592246 has defined IP address 192.168.39.115 and MAC address 52:54:00:15:f5:5b in network mk-multinode-592246
	I0920 17:42:36.687330   46025 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/multinode-592246/id_rsa Username:docker}
	I0920 17:42:36.687513   46025 main.go:141] libmachine: (multinode-592246) Calling .GetSSHPort
	I0920 17:42:36.687684   46025 main.go:141] libmachine: (multinode-592246) Calling .GetSSHKeyPath
	I0920 17:42:36.687866   46025 main.go:141] libmachine: (multinode-592246) Calling .GetSSHUsername
	I0920 17:42:36.688089   46025 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/multinode-592246/id_rsa Username:docker}
	I0920 17:42:36.802985   46025 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0920 17:42:36.803058   46025 command_runner.go:130] > {"iso_version": "v1.34.0-1726784654-19672", "kicbase_version": "v0.0.45-1726589491-19662", "minikube_version": "v1.34.0", "commit": "342ed9b49b7fd0c6b2cb4410be5c5d5251f51ed8"}
	I0920 17:42:36.803204   46025 ssh_runner.go:195] Run: systemctl --version
	I0920 17:42:36.809568   46025 command_runner.go:130] > systemd 252 (252)
	I0920 17:42:36.809604   46025 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0920 17:42:36.809892   46025 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 17:42:36.974820   46025 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0920 17:42:36.980821   46025 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0920 17:42:36.980880   46025 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 17:42:36.980930   46025 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 17:42:36.990884   46025 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0920 17:42:36.990916   46025 start.go:495] detecting cgroup driver to use...
	I0920 17:42:36.990998   46025 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 17:42:37.008567   46025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 17:42:37.023571   46025 docker.go:217] disabling cri-docker service (if available) ...
	I0920 17:42:37.023647   46025 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 17:42:37.038376   46025 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 17:42:37.053240   46025 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 17:42:37.213261   46025 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 17:42:37.359615   46025 docker.go:233] disabling docker service ...
	I0920 17:42:37.359683   46025 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 17:42:37.378108   46025 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 17:42:37.393108   46025 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 17:42:37.532993   46025 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 17:42:37.688371   46025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 17:42:37.704066   46025 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 17:42:37.723665   46025 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0920 17:42:37.723713   46025 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 17:42:37.723766   46025 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:42:37.735327   46025 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 17:42:37.735392   46025 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:42:37.746568   46025 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:42:37.758278   46025 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:42:37.769584   46025 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 17:42:37.781000   46025 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:42:37.792660   46025 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:42:37.803812   46025 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:42:37.815396   46025 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 17:42:37.826186   46025 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0920 17:42:37.826272   46025 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 17:42:37.836458   46025 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 17:42:37.977954   46025 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 17:42:41.751954   46025 ssh_runner.go:235] Completed: sudo systemctl restart crio: (3.773960184s)
	I0920 17:42:41.751985   46025 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 17:42:41.752046   46025 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 17:42:41.756921   46025 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0920 17:42:41.756948   46025 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0920 17:42:41.756954   46025 command_runner.go:130] > Device: 0,22	Inode: 1344        Links: 1
	I0920 17:42:41.756960   46025 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0920 17:42:41.756967   46025 command_runner.go:130] > Access: 2024-09-20 17:42:41.673882032 +0000
	I0920 17:42:41.756977   46025 command_runner.go:130] > Modify: 2024-09-20 17:42:41.603880334 +0000
	I0920 17:42:41.756985   46025 command_runner.go:130] > Change: 2024-09-20 17:42:41.603880334 +0000
	I0920 17:42:41.756992   46025 command_runner.go:130] >  Birth: -
	I0920 17:42:41.757029   46025 start.go:563] Will wait 60s for crictl version
	I0920 17:42:41.757084   46025 ssh_runner.go:195] Run: which crictl
	I0920 17:42:41.761129   46025 command_runner.go:130] > /usr/bin/crictl
	I0920 17:42:41.761187   46025 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 17:42:41.801244   46025 command_runner.go:130] > Version:  0.1.0
	I0920 17:42:41.801267   46025 command_runner.go:130] > RuntimeName:  cri-o
	I0920 17:42:41.801319   46025 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0920 17:42:41.801337   46025 command_runner.go:130] > RuntimeApiVersion:  v1
	I0920 17:42:41.802867   46025 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 17:42:41.802932   46025 ssh_runner.go:195] Run: crio --version
	I0920 17:42:41.834111   46025 command_runner.go:130] > crio version 1.29.1
	I0920 17:42:41.834138   46025 command_runner.go:130] > Version:        1.29.1
	I0920 17:42:41.834156   46025 command_runner.go:130] > GitCommit:      unknown
	I0920 17:42:41.834162   46025 command_runner.go:130] > GitCommitDate:  unknown
	I0920 17:42:41.834169   46025 command_runner.go:130] > GitTreeState:   clean
	I0920 17:42:41.834178   46025 command_runner.go:130] > BuildDate:      2024-09-20T03:55:27Z
	I0920 17:42:41.834183   46025 command_runner.go:130] > GoVersion:      go1.21.6
	I0920 17:42:41.834189   46025 command_runner.go:130] > Compiler:       gc
	I0920 17:42:41.834196   46025 command_runner.go:130] > Platform:       linux/amd64
	I0920 17:42:41.834202   46025 command_runner.go:130] > Linkmode:       dynamic
	I0920 17:42:41.834209   46025 command_runner.go:130] > BuildTags:      
	I0920 17:42:41.834218   46025 command_runner.go:130] >   containers_image_ostree_stub
	I0920 17:42:41.834228   46025 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0920 17:42:41.834232   46025 command_runner.go:130] >   btrfs_noversion
	I0920 17:42:41.834244   46025 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0920 17:42:41.834250   46025 command_runner.go:130] >   libdm_no_deferred_remove
	I0920 17:42:41.834260   46025 command_runner.go:130] >   seccomp
	I0920 17:42:41.834267   46025 command_runner.go:130] > LDFlags:          unknown
	I0920 17:42:41.834274   46025 command_runner.go:130] > SeccompEnabled:   true
	I0920 17:42:41.834284   46025 command_runner.go:130] > AppArmorEnabled:  false
	I0920 17:42:41.834384   46025 ssh_runner.go:195] Run: crio --version
	I0920 17:42:41.864280   46025 command_runner.go:130] > crio version 1.29.1
	I0920 17:42:41.864315   46025 command_runner.go:130] > Version:        1.29.1
	I0920 17:42:41.864324   46025 command_runner.go:130] > GitCommit:      unknown
	I0920 17:42:41.864331   46025 command_runner.go:130] > GitCommitDate:  unknown
	I0920 17:42:41.864338   46025 command_runner.go:130] > GitTreeState:   clean
	I0920 17:42:41.864347   46025 command_runner.go:130] > BuildDate:      2024-09-20T03:55:27Z
	I0920 17:42:41.864352   46025 command_runner.go:130] > GoVersion:      go1.21.6
	I0920 17:42:41.864356   46025 command_runner.go:130] > Compiler:       gc
	I0920 17:42:41.864361   46025 command_runner.go:130] > Platform:       linux/amd64
	I0920 17:42:41.864366   46025 command_runner.go:130] > Linkmode:       dynamic
	I0920 17:42:41.864370   46025 command_runner.go:130] > BuildTags:      
	I0920 17:42:41.864390   46025 command_runner.go:130] >   containers_image_ostree_stub
	I0920 17:42:41.864398   46025 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0920 17:42:41.864404   46025 command_runner.go:130] >   btrfs_noversion
	I0920 17:42:41.864411   46025 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0920 17:42:41.864421   46025 command_runner.go:130] >   libdm_no_deferred_remove
	I0920 17:42:41.864428   46025 command_runner.go:130] >   seccomp
	I0920 17:42:41.864438   46025 command_runner.go:130] > LDFlags:          unknown
	I0920 17:42:41.864444   46025 command_runner.go:130] > SeccompEnabled:   true
	I0920 17:42:41.864451   46025 command_runner.go:130] > AppArmorEnabled:  false
	I0920 17:42:41.867683   46025 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 17:42:41.869190   46025 main.go:141] libmachine: (multinode-592246) Calling .GetIP
	I0920 17:42:41.871805   46025 main.go:141] libmachine: (multinode-592246) DBG | domain multinode-592246 has defined MAC address 52:54:00:15:f5:5b in network mk-multinode-592246
	I0920 17:42:41.872202   46025 main.go:141] libmachine: (multinode-592246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:f5:5b", ip: ""} in network mk-multinode-592246: {Iface:virbr1 ExpiryTime:2024-09-20 18:35:34 +0000 UTC Type:0 Mac:52:54:00:15:f5:5b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:multinode-592246 Clientid:01:52:54:00:15:f5:5b}
	I0920 17:42:41.872229   46025 main.go:141] libmachine: (multinode-592246) DBG | domain multinode-592246 has defined IP address 192.168.39.115 and MAC address 52:54:00:15:f5:5b in network mk-multinode-592246
	I0920 17:42:41.872479   46025 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0920 17:42:41.876805   46025 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0920 17:42:41.876930   46025 kubeadm.go:883] updating cluster {Name:multinode-592246 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-592246 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.115 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.170 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.38 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 17:42:41.878502   46025 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 17:42:41.878563   46025 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 17:42:41.931070   46025 command_runner.go:130] > {
	I0920 17:42:41.931098   46025 command_runner.go:130] >   "images": [
	I0920 17:42:41.931106   46025 command_runner.go:130] >     {
	I0920 17:42:41.931117   46025 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0920 17:42:41.931127   46025 command_runner.go:130] >       "repoTags": [
	I0920 17:42:41.931135   46025 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0920 17:42:41.931141   46025 command_runner.go:130] >       ],
	I0920 17:42:41.931148   46025 command_runner.go:130] >       "repoDigests": [
	I0920 17:42:41.931166   46025 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0920 17:42:41.931182   46025 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0920 17:42:41.931189   46025 command_runner.go:130] >       ],
	I0920 17:42:41.931198   46025 command_runner.go:130] >       "size": "87190579",
	I0920 17:42:41.931204   46025 command_runner.go:130] >       "uid": null,
	I0920 17:42:41.931211   46025 command_runner.go:130] >       "username": "",
	I0920 17:42:41.931221   46025 command_runner.go:130] >       "spec": null,
	I0920 17:42:41.931229   46025 command_runner.go:130] >       "pinned": false
	I0920 17:42:41.931235   46025 command_runner.go:130] >     },
	I0920 17:42:41.931241   46025 command_runner.go:130] >     {
	I0920 17:42:41.931253   46025 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0920 17:42:41.931260   46025 command_runner.go:130] >       "repoTags": [
	I0920 17:42:41.931270   46025 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0920 17:42:41.931278   46025 command_runner.go:130] >       ],
	I0920 17:42:41.931285   46025 command_runner.go:130] >       "repoDigests": [
	I0920 17:42:41.931297   46025 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0920 17:42:41.931311   46025 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0920 17:42:41.931319   46025 command_runner.go:130] >       ],
	I0920 17:42:41.931339   46025 command_runner.go:130] >       "size": "1363676",
	I0920 17:42:41.931348   46025 command_runner.go:130] >       "uid": null,
	I0920 17:42:41.931358   46025 command_runner.go:130] >       "username": "",
	I0920 17:42:41.931367   46025 command_runner.go:130] >       "spec": null,
	I0920 17:42:41.931375   46025 command_runner.go:130] >       "pinned": false
	I0920 17:42:41.931404   46025 command_runner.go:130] >     },
	I0920 17:42:41.931411   46025 command_runner.go:130] >     {
	I0920 17:42:41.931420   46025 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0920 17:42:41.931427   46025 command_runner.go:130] >       "repoTags": [
	I0920 17:42:41.931434   46025 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0920 17:42:41.931438   46025 command_runner.go:130] >       ],
	I0920 17:42:41.931442   46025 command_runner.go:130] >       "repoDigests": [
	I0920 17:42:41.931451   46025 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0920 17:42:41.931459   46025 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0920 17:42:41.931463   46025 command_runner.go:130] >       ],
	I0920 17:42:41.931467   46025 command_runner.go:130] >       "size": "31470524",
	I0920 17:42:41.931471   46025 command_runner.go:130] >       "uid": null,
	I0920 17:42:41.931476   46025 command_runner.go:130] >       "username": "",
	I0920 17:42:41.931479   46025 command_runner.go:130] >       "spec": null,
	I0920 17:42:41.931483   46025 command_runner.go:130] >       "pinned": false
	I0920 17:42:41.931487   46025 command_runner.go:130] >     },
	I0920 17:42:41.931490   46025 command_runner.go:130] >     {
	I0920 17:42:41.931496   46025 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0920 17:42:41.931503   46025 command_runner.go:130] >       "repoTags": [
	I0920 17:42:41.931508   46025 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0920 17:42:41.931512   46025 command_runner.go:130] >       ],
	I0920 17:42:41.931516   46025 command_runner.go:130] >       "repoDigests": [
	I0920 17:42:41.931523   46025 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0920 17:42:41.931535   46025 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0920 17:42:41.931539   46025 command_runner.go:130] >       ],
	I0920 17:42:41.931543   46025 command_runner.go:130] >       "size": "63273227",
	I0920 17:42:41.931547   46025 command_runner.go:130] >       "uid": null,
	I0920 17:42:41.931552   46025 command_runner.go:130] >       "username": "nonroot",
	I0920 17:42:41.931558   46025 command_runner.go:130] >       "spec": null,
	I0920 17:42:41.931562   46025 command_runner.go:130] >       "pinned": false
	I0920 17:42:41.931565   46025 command_runner.go:130] >     },
	I0920 17:42:41.931568   46025 command_runner.go:130] >     {
	I0920 17:42:41.931576   46025 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0920 17:42:41.931582   46025 command_runner.go:130] >       "repoTags": [
	I0920 17:42:41.931586   46025 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0920 17:42:41.931592   46025 command_runner.go:130] >       ],
	I0920 17:42:41.931596   46025 command_runner.go:130] >       "repoDigests": [
	I0920 17:42:41.931603   46025 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0920 17:42:41.931610   46025 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0920 17:42:41.931614   46025 command_runner.go:130] >       ],
	I0920 17:42:41.931620   46025 command_runner.go:130] >       "size": "149009664",
	I0920 17:42:41.931624   46025 command_runner.go:130] >       "uid": {
	I0920 17:42:41.931627   46025 command_runner.go:130] >         "value": "0"
	I0920 17:42:41.931633   46025 command_runner.go:130] >       },
	I0920 17:42:41.931637   46025 command_runner.go:130] >       "username": "",
	I0920 17:42:41.931640   46025 command_runner.go:130] >       "spec": null,
	I0920 17:42:41.931644   46025 command_runner.go:130] >       "pinned": false
	I0920 17:42:41.931648   46025 command_runner.go:130] >     },
	I0920 17:42:41.931651   46025 command_runner.go:130] >     {
	I0920 17:42:41.931657   46025 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0920 17:42:41.931663   46025 command_runner.go:130] >       "repoTags": [
	I0920 17:42:41.931667   46025 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0920 17:42:41.931670   46025 command_runner.go:130] >       ],
	I0920 17:42:41.931674   46025 command_runner.go:130] >       "repoDigests": [
	I0920 17:42:41.931682   46025 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0920 17:42:41.931691   46025 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0920 17:42:41.931697   46025 command_runner.go:130] >       ],
	I0920 17:42:41.931701   46025 command_runner.go:130] >       "size": "95237600",
	I0920 17:42:41.931706   46025 command_runner.go:130] >       "uid": {
	I0920 17:42:41.931710   46025 command_runner.go:130] >         "value": "0"
	I0920 17:42:41.931714   46025 command_runner.go:130] >       },
	I0920 17:42:41.931717   46025 command_runner.go:130] >       "username": "",
	I0920 17:42:41.931722   46025 command_runner.go:130] >       "spec": null,
	I0920 17:42:41.931726   46025 command_runner.go:130] >       "pinned": false
	I0920 17:42:41.931731   46025 command_runner.go:130] >     },
	I0920 17:42:41.931734   46025 command_runner.go:130] >     {
	I0920 17:42:41.931740   46025 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0920 17:42:41.931746   46025 command_runner.go:130] >       "repoTags": [
	I0920 17:42:41.931751   46025 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0920 17:42:41.931756   46025 command_runner.go:130] >       ],
	I0920 17:42:41.931760   46025 command_runner.go:130] >       "repoDigests": [
	I0920 17:42:41.931767   46025 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0920 17:42:41.931778   46025 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0920 17:42:41.931781   46025 command_runner.go:130] >       ],
	I0920 17:42:41.931785   46025 command_runner.go:130] >       "size": "89437508",
	I0920 17:42:41.931788   46025 command_runner.go:130] >       "uid": {
	I0920 17:42:41.931791   46025 command_runner.go:130] >         "value": "0"
	I0920 17:42:41.931795   46025 command_runner.go:130] >       },
	I0920 17:42:41.931798   46025 command_runner.go:130] >       "username": "",
	I0920 17:42:41.931802   46025 command_runner.go:130] >       "spec": null,
	I0920 17:42:41.931806   46025 command_runner.go:130] >       "pinned": false
	I0920 17:42:41.931810   46025 command_runner.go:130] >     },
	I0920 17:42:41.931813   46025 command_runner.go:130] >     {
	I0920 17:42:41.931818   46025 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0920 17:42:41.931823   46025 command_runner.go:130] >       "repoTags": [
	I0920 17:42:41.931827   46025 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0920 17:42:41.931832   46025 command_runner.go:130] >       ],
	I0920 17:42:41.931837   46025 command_runner.go:130] >       "repoDigests": [
	I0920 17:42:41.931874   46025 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0920 17:42:41.931887   46025 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0920 17:42:41.931890   46025 command_runner.go:130] >       ],
	I0920 17:42:41.931895   46025 command_runner.go:130] >       "size": "92733849",
	I0920 17:42:41.931898   46025 command_runner.go:130] >       "uid": null,
	I0920 17:42:41.931906   46025 command_runner.go:130] >       "username": "",
	I0920 17:42:41.931912   46025 command_runner.go:130] >       "spec": null,
	I0920 17:42:41.931917   46025 command_runner.go:130] >       "pinned": false
	I0920 17:42:41.931920   46025 command_runner.go:130] >     },
	I0920 17:42:41.931924   46025 command_runner.go:130] >     {
	I0920 17:42:41.931929   46025 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0920 17:42:41.931933   46025 command_runner.go:130] >       "repoTags": [
	I0920 17:42:41.931938   46025 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0920 17:42:41.931941   46025 command_runner.go:130] >       ],
	I0920 17:42:41.931945   46025 command_runner.go:130] >       "repoDigests": [
	I0920 17:42:41.931955   46025 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0920 17:42:41.931962   46025 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0920 17:42:41.931966   46025 command_runner.go:130] >       ],
	I0920 17:42:41.931971   46025 command_runner.go:130] >       "size": "68420934",
	I0920 17:42:41.931974   46025 command_runner.go:130] >       "uid": {
	I0920 17:42:41.931978   46025 command_runner.go:130] >         "value": "0"
	I0920 17:42:41.931983   46025 command_runner.go:130] >       },
	I0920 17:42:41.931988   46025 command_runner.go:130] >       "username": "",
	I0920 17:42:41.931994   46025 command_runner.go:130] >       "spec": null,
	I0920 17:42:41.931998   46025 command_runner.go:130] >       "pinned": false
	I0920 17:42:41.932001   46025 command_runner.go:130] >     },
	I0920 17:42:41.932004   46025 command_runner.go:130] >     {
	I0920 17:42:41.932010   46025 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0920 17:42:41.932016   46025 command_runner.go:130] >       "repoTags": [
	I0920 17:42:41.932020   46025 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0920 17:42:41.932023   46025 command_runner.go:130] >       ],
	I0920 17:42:41.932027   46025 command_runner.go:130] >       "repoDigests": [
	I0920 17:42:41.932036   46025 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0920 17:42:41.932043   46025 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0920 17:42:41.932049   46025 command_runner.go:130] >       ],
	I0920 17:42:41.932053   46025 command_runner.go:130] >       "size": "742080",
	I0920 17:42:41.932056   46025 command_runner.go:130] >       "uid": {
	I0920 17:42:41.932060   46025 command_runner.go:130] >         "value": "65535"
	I0920 17:42:41.932065   46025 command_runner.go:130] >       },
	I0920 17:42:41.932069   46025 command_runner.go:130] >       "username": "",
	I0920 17:42:41.932075   46025 command_runner.go:130] >       "spec": null,
	I0920 17:42:41.932078   46025 command_runner.go:130] >       "pinned": true
	I0920 17:42:41.932081   46025 command_runner.go:130] >     }
	I0920 17:42:41.932084   46025 command_runner.go:130] >   ]
	I0920 17:42:41.932089   46025 command_runner.go:130] > }
	I0920 17:42:41.932664   46025 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 17:42:41.932686   46025 crio.go:433] Images already preloaded, skipping extraction
	I0920 17:42:41.932744   46025 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 17:42:41.973332   46025 command_runner.go:130] > {
	I0920 17:42:41.973369   46025 command_runner.go:130] >   "images": [
	I0920 17:42:41.973380   46025 command_runner.go:130] >     {
	I0920 17:42:41.973393   46025 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0920 17:42:41.973404   46025 command_runner.go:130] >       "repoTags": [
	I0920 17:42:41.973446   46025 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0920 17:42:41.973465   46025 command_runner.go:130] >       ],
	I0920 17:42:41.973472   46025 command_runner.go:130] >       "repoDigests": [
	I0920 17:42:41.973490   46025 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0920 17:42:41.973502   46025 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0920 17:42:41.973508   46025 command_runner.go:130] >       ],
	I0920 17:42:41.973514   46025 command_runner.go:130] >       "size": "87190579",
	I0920 17:42:41.973521   46025 command_runner.go:130] >       "uid": null,
	I0920 17:42:41.973525   46025 command_runner.go:130] >       "username": "",
	I0920 17:42:41.973539   46025 command_runner.go:130] >       "spec": null,
	I0920 17:42:41.973547   46025 command_runner.go:130] >       "pinned": false
	I0920 17:42:41.973551   46025 command_runner.go:130] >     },
	I0920 17:42:41.973555   46025 command_runner.go:130] >     {
	I0920 17:42:41.973563   46025 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0920 17:42:41.973570   46025 command_runner.go:130] >       "repoTags": [
	I0920 17:42:41.973577   46025 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0920 17:42:41.973581   46025 command_runner.go:130] >       ],
	I0920 17:42:41.973588   46025 command_runner.go:130] >       "repoDigests": [
	I0920 17:42:41.973596   46025 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0920 17:42:41.973607   46025 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0920 17:42:41.973617   46025 command_runner.go:130] >       ],
	I0920 17:42:41.973626   46025 command_runner.go:130] >       "size": "1363676",
	I0920 17:42:41.973633   46025 command_runner.go:130] >       "uid": null,
	I0920 17:42:41.973646   46025 command_runner.go:130] >       "username": "",
	I0920 17:42:41.973656   46025 command_runner.go:130] >       "spec": null,
	I0920 17:42:41.973661   46025 command_runner.go:130] >       "pinned": false
	I0920 17:42:41.973668   46025 command_runner.go:130] >     },
	I0920 17:42:41.973672   46025 command_runner.go:130] >     {
	I0920 17:42:41.973682   46025 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0920 17:42:41.973690   46025 command_runner.go:130] >       "repoTags": [
	I0920 17:42:41.973696   46025 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0920 17:42:41.973702   46025 command_runner.go:130] >       ],
	I0920 17:42:41.973707   46025 command_runner.go:130] >       "repoDigests": [
	I0920 17:42:41.973722   46025 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0920 17:42:41.973738   46025 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0920 17:42:41.973747   46025 command_runner.go:130] >       ],
	I0920 17:42:41.973758   46025 command_runner.go:130] >       "size": "31470524",
	I0920 17:42:41.973766   46025 command_runner.go:130] >       "uid": null,
	I0920 17:42:41.973771   46025 command_runner.go:130] >       "username": "",
	I0920 17:42:41.973778   46025 command_runner.go:130] >       "spec": null,
	I0920 17:42:41.973783   46025 command_runner.go:130] >       "pinned": false
	I0920 17:42:41.973789   46025 command_runner.go:130] >     },
	I0920 17:42:41.973793   46025 command_runner.go:130] >     {
	I0920 17:42:41.973802   46025 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0920 17:42:41.973808   46025 command_runner.go:130] >       "repoTags": [
	I0920 17:42:41.973814   46025 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0920 17:42:41.973819   46025 command_runner.go:130] >       ],
	I0920 17:42:41.973824   46025 command_runner.go:130] >       "repoDigests": [
	I0920 17:42:41.973847   46025 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0920 17:42:41.973869   46025 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0920 17:42:41.973879   46025 command_runner.go:130] >       ],
	I0920 17:42:41.973887   46025 command_runner.go:130] >       "size": "63273227",
	I0920 17:42:41.973891   46025 command_runner.go:130] >       "uid": null,
	I0920 17:42:41.973897   46025 command_runner.go:130] >       "username": "nonroot",
	I0920 17:42:41.973906   46025 command_runner.go:130] >       "spec": null,
	I0920 17:42:41.973913   46025 command_runner.go:130] >       "pinned": false
	I0920 17:42:41.973917   46025 command_runner.go:130] >     },
	I0920 17:42:41.973922   46025 command_runner.go:130] >     {
	I0920 17:42:41.973928   46025 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0920 17:42:41.973938   46025 command_runner.go:130] >       "repoTags": [
	I0920 17:42:41.973946   46025 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0920 17:42:41.973954   46025 command_runner.go:130] >       ],
	I0920 17:42:41.973969   46025 command_runner.go:130] >       "repoDigests": [
	I0920 17:42:41.973979   46025 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0920 17:42:41.973988   46025 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0920 17:42:41.973995   46025 command_runner.go:130] >       ],
	I0920 17:42:41.974000   46025 command_runner.go:130] >       "size": "149009664",
	I0920 17:42:41.974006   46025 command_runner.go:130] >       "uid": {
	I0920 17:42:41.974011   46025 command_runner.go:130] >         "value": "0"
	I0920 17:42:41.974017   46025 command_runner.go:130] >       },
	I0920 17:42:41.974022   46025 command_runner.go:130] >       "username": "",
	I0920 17:42:41.974029   46025 command_runner.go:130] >       "spec": null,
	I0920 17:42:41.974033   46025 command_runner.go:130] >       "pinned": false
	I0920 17:42:41.974039   46025 command_runner.go:130] >     },
	I0920 17:42:41.974044   46025 command_runner.go:130] >     {
	I0920 17:42:41.974053   46025 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0920 17:42:41.974059   46025 command_runner.go:130] >       "repoTags": [
	I0920 17:42:41.974065   46025 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0920 17:42:41.974071   46025 command_runner.go:130] >       ],
	I0920 17:42:41.974076   46025 command_runner.go:130] >       "repoDigests": [
	I0920 17:42:41.974086   46025 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0920 17:42:41.974096   46025 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0920 17:42:41.974105   46025 command_runner.go:130] >       ],
	I0920 17:42:41.974116   46025 command_runner.go:130] >       "size": "95237600",
	I0920 17:42:41.974125   46025 command_runner.go:130] >       "uid": {
	I0920 17:42:41.974135   46025 command_runner.go:130] >         "value": "0"
	I0920 17:42:41.974141   46025 command_runner.go:130] >       },
	I0920 17:42:41.974150   46025 command_runner.go:130] >       "username": "",
	I0920 17:42:41.974160   46025 command_runner.go:130] >       "spec": null,
	I0920 17:42:41.974171   46025 command_runner.go:130] >       "pinned": false
	I0920 17:42:41.974185   46025 command_runner.go:130] >     },
	I0920 17:42:41.974197   46025 command_runner.go:130] >     {
	I0920 17:42:41.974210   46025 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0920 17:42:41.974230   46025 command_runner.go:130] >       "repoTags": [
	I0920 17:42:41.974255   46025 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0920 17:42:41.974266   46025 command_runner.go:130] >       ],
	I0920 17:42:41.974273   46025 command_runner.go:130] >       "repoDigests": [
	I0920 17:42:41.974285   46025 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0920 17:42:41.974297   46025 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0920 17:42:41.974310   46025 command_runner.go:130] >       ],
	I0920 17:42:41.974318   46025 command_runner.go:130] >       "size": "89437508",
	I0920 17:42:41.974325   46025 command_runner.go:130] >       "uid": {
	I0920 17:42:41.974332   46025 command_runner.go:130] >         "value": "0"
	I0920 17:42:41.974340   46025 command_runner.go:130] >       },
	I0920 17:42:41.974352   46025 command_runner.go:130] >       "username": "",
	I0920 17:42:41.974360   46025 command_runner.go:130] >       "spec": null,
	I0920 17:42:41.974371   46025 command_runner.go:130] >       "pinned": false
	I0920 17:42:41.974378   46025 command_runner.go:130] >     },
	I0920 17:42:41.974388   46025 command_runner.go:130] >     {
	I0920 17:42:41.974403   46025 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0920 17:42:41.974414   46025 command_runner.go:130] >       "repoTags": [
	I0920 17:42:41.974423   46025 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0920 17:42:41.974434   46025 command_runner.go:130] >       ],
	I0920 17:42:41.974444   46025 command_runner.go:130] >       "repoDigests": [
	I0920 17:42:41.974470   46025 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0920 17:42:41.974485   46025 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0920 17:42:41.974496   46025 command_runner.go:130] >       ],
	I0920 17:42:41.974509   46025 command_runner.go:130] >       "size": "92733849",
	I0920 17:42:41.974519   46025 command_runner.go:130] >       "uid": null,
	I0920 17:42:41.974527   46025 command_runner.go:130] >       "username": "",
	I0920 17:42:41.974535   46025 command_runner.go:130] >       "spec": null,
	I0920 17:42:41.974542   46025 command_runner.go:130] >       "pinned": false
	I0920 17:42:41.974548   46025 command_runner.go:130] >     },
	I0920 17:42:41.974552   46025 command_runner.go:130] >     {
	I0920 17:42:41.974558   46025 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0920 17:42:41.974562   46025 command_runner.go:130] >       "repoTags": [
	I0920 17:42:41.974567   46025 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0920 17:42:41.974572   46025 command_runner.go:130] >       ],
	I0920 17:42:41.974575   46025 command_runner.go:130] >       "repoDigests": [
	I0920 17:42:41.974583   46025 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0920 17:42:41.974590   46025 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0920 17:42:41.974597   46025 command_runner.go:130] >       ],
	I0920 17:42:41.974601   46025 command_runner.go:130] >       "size": "68420934",
	I0920 17:42:41.974605   46025 command_runner.go:130] >       "uid": {
	I0920 17:42:41.974609   46025 command_runner.go:130] >         "value": "0"
	I0920 17:42:41.974613   46025 command_runner.go:130] >       },
	I0920 17:42:41.974617   46025 command_runner.go:130] >       "username": "",
	I0920 17:42:41.974623   46025 command_runner.go:130] >       "spec": null,
	I0920 17:42:41.974627   46025 command_runner.go:130] >       "pinned": false
	I0920 17:42:41.974634   46025 command_runner.go:130] >     },
	I0920 17:42:41.974638   46025 command_runner.go:130] >     {
	I0920 17:42:41.974645   46025 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0920 17:42:41.974651   46025 command_runner.go:130] >       "repoTags": [
	I0920 17:42:41.974656   46025 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0920 17:42:41.974662   46025 command_runner.go:130] >       ],
	I0920 17:42:41.974667   46025 command_runner.go:130] >       "repoDigests": [
	I0920 17:42:41.974677   46025 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0920 17:42:41.974700   46025 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0920 17:42:41.974707   46025 command_runner.go:130] >       ],
	I0920 17:42:41.974712   46025 command_runner.go:130] >       "size": "742080",
	I0920 17:42:41.974719   46025 command_runner.go:130] >       "uid": {
	I0920 17:42:41.974724   46025 command_runner.go:130] >         "value": "65535"
	I0920 17:42:41.974729   46025 command_runner.go:130] >       },
	I0920 17:42:41.974734   46025 command_runner.go:130] >       "username": "",
	I0920 17:42:41.974740   46025 command_runner.go:130] >       "spec": null,
	I0920 17:42:41.974745   46025 command_runner.go:130] >       "pinned": true
	I0920 17:42:41.974751   46025 command_runner.go:130] >     }
	I0920 17:42:41.974754   46025 command_runner.go:130] >   ]
	I0920 17:42:41.974760   46025 command_runner.go:130] > }
	I0920 17:42:41.974908   46025 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 17:42:41.974921   46025 cache_images.go:84] Images are preloaded, skipping loading
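The two crictl listings above are the preload check: minikube asks the runtime for its image inventory (sudo crictl images --output json) and, since every image required for Kubernetes v1.31.1 on cri-o is already present, it skips extracting the preload tarball. A rough way to reproduce the same check by hand; jq is an assumption here and may not be present in the guest, so treat this as an illustrative sketch only:

    sudo crictl images --output json | jq -r '.images[].repoTags[]' | sort
    # Should include, among others, the images listed above:
    #   registry.k8s.io/coredns/coredns:v1.11.3
    #   registry.k8s.io/etcd:3.5.15-0
    #   registry.k8s.io/kube-apiserver:v1.31.1
    #   registry.k8s.io/kube-controller-manager:v1.31.1
    #   registry.k8s.io/kube-proxy:v1.31.1
    #   registry.k8s.io/kube-scheduler:v1.31.1
    #   registry.k8s.io/pause:3.10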
	I0920 17:42:41.974931   46025 kubeadm.go:934] updating node { 192.168.39.115 8443 v1.31.1 crio true true} ...
	I0920 17:42:41.975033   46025 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-592246 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.115
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-592246 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
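The [Service] override above is the kubelet unit minikube generates for this node: ExecStart is cleared and re-set to the bundled v1.31.1 kubelet with the node name (multinode-592246) and node IP (192.168.39.115) pinned. To see the unit systemd actually loaded you can inspect it from inside the guest; the drop-in path below follows the usual kubeadm convention and is an assumption, not something this log prints:

    # e.g. minikube ssh -p multinode-592246, then:
    systemctl cat kubelet
    # or, assuming the kubeadm-style drop-in location:
    sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf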
	I0920 17:42:41.975101   46025 ssh_runner.go:195] Run: crio config
	I0920 17:42:42.020635   46025 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0920 17:42:42.020670   46025 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0920 17:42:42.020677   46025 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0920 17:42:42.020680   46025 command_runner.go:130] > #
	I0920 17:42:42.020688   46025 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0920 17:42:42.020694   46025 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0920 17:42:42.020701   46025 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0920 17:42:42.020709   46025 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0920 17:42:42.020716   46025 command_runner.go:130] > # reload'.
	I0920 17:42:42.020725   46025 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0920 17:42:42.020736   46025 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0920 17:42:42.020748   46025 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0920 17:42:42.020760   46025 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0920 17:42:42.020765   46025 command_runner.go:130] > [crio]
	I0920 17:42:42.020777   46025 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0920 17:42:42.020788   46025 command_runner.go:130] > # containers images, in this directory.
	I0920 17:42:42.020802   46025 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0920 17:42:42.020825   46025 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0920 17:42:42.020835   46025 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0920 17:42:42.020847   46025 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0920 17:42:42.020858   46025 command_runner.go:130] > # imagestore = ""
	I0920 17:42:42.020869   46025 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0920 17:42:42.020881   46025 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0920 17:42:42.020893   46025 command_runner.go:130] > storage_driver = "overlay"
	I0920 17:42:42.020904   46025 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0920 17:42:42.020914   46025 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0920 17:42:42.020918   46025 command_runner.go:130] > storage_option = [
	I0920 17:42:42.020923   46025 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0920 17:42:42.020927   46025 command_runner.go:130] > ]
	I0920 17:42:42.020933   46025 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0920 17:42:42.020939   46025 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0920 17:42:42.020943   46025 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0920 17:42:42.020948   46025 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0920 17:42:42.020966   46025 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0920 17:42:42.020976   46025 command_runner.go:130] > # always happen on a node reboot
	I0920 17:42:42.020984   46025 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0920 17:42:42.021001   46025 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0920 17:42:42.021013   46025 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0920 17:42:42.021023   46025 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0920 17:42:42.021033   46025 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0920 17:42:42.021046   46025 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0920 17:42:42.021058   46025 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0920 17:42:42.021253   46025 command_runner.go:130] > # internal_wipe = true
	I0920 17:42:42.021280   46025 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0920 17:42:42.021290   46025 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0920 17:42:42.021511   46025 command_runner.go:130] > # internal_repair = false
	I0920 17:42:42.021521   46025 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0920 17:42:42.021528   46025 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0920 17:42:42.021533   46025 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0920 17:42:42.021745   46025 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0920 17:42:42.021765   46025 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0920 17:42:42.021771   46025 command_runner.go:130] > [crio.api]
	I0920 17:42:42.021780   46025 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0920 17:42:42.021997   46025 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0920 17:42:42.022029   46025 command_runner.go:130] > # IP address on which the stream server will listen.
	I0920 17:42:42.022212   46025 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0920 17:42:42.022228   46025 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0920 17:42:42.022237   46025 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0920 17:42:42.022473   46025 command_runner.go:130] > # stream_port = "0"
	I0920 17:42:42.022484   46025 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0920 17:42:42.022693   46025 command_runner.go:130] > # stream_enable_tls = false
	I0920 17:42:42.022703   46025 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0920 17:42:42.022887   46025 command_runner.go:130] > # stream_idle_timeout = ""
	I0920 17:42:42.022898   46025 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0920 17:42:42.022905   46025 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0920 17:42:42.022909   46025 command_runner.go:130] > # minutes.
	I0920 17:42:42.023048   46025 command_runner.go:130] > # stream_tls_cert = ""
	I0920 17:42:42.023061   46025 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0920 17:42:42.023068   46025 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0920 17:42:42.023254   46025 command_runner.go:130] > # stream_tls_key = ""
	I0920 17:42:42.023271   46025 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0920 17:42:42.023282   46025 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0920 17:42:42.023306   46025 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0920 17:42:42.023489   46025 command_runner.go:130] > # stream_tls_ca = ""
	I0920 17:42:42.023502   46025 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0920 17:42:42.023613   46025 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0920 17:42:42.023630   46025 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0920 17:42:42.023715   46025 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0920 17:42:42.023729   46025 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0920 17:42:42.023738   46025 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0920 17:42:42.023747   46025 command_runner.go:130] > [crio.runtime]
	I0920 17:42:42.023757   46025 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0920 17:42:42.023768   46025 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0920 17:42:42.023799   46025 command_runner.go:130] > # "nofile=1024:2048"
	I0920 17:42:42.023816   46025 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0920 17:42:42.023826   46025 command_runner.go:130] > # default_ulimits = [
	I0920 17:42:42.023932   46025 command_runner.go:130] > # ]
	I0920 17:42:42.023950   46025 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0920 17:42:42.024155   46025 command_runner.go:130] > # no_pivot = false
	I0920 17:42:42.024169   46025 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0920 17:42:42.024179   46025 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0920 17:42:42.024370   46025 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0920 17:42:42.024397   46025 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0920 17:42:42.024405   46025 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0920 17:42:42.024416   46025 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0920 17:42:42.024660   46025 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0920 17:42:42.024676   46025 command_runner.go:130] > # Cgroup setting for conmon
	I0920 17:42:42.024687   46025 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0920 17:42:42.024694   46025 command_runner.go:130] > conmon_cgroup = "pod"
	I0920 17:42:42.024704   46025 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0920 17:42:42.024712   46025 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0920 17:42:42.024722   46025 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0920 17:42:42.024732   46025 command_runner.go:130] > conmon_env = [
	I0920 17:42:42.024741   46025 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0920 17:42:42.024750   46025 command_runner.go:130] > ]
	I0920 17:42:42.024758   46025 command_runner.go:130] > # Additional environment variables to set for all the
	I0920 17:42:42.024767   46025 command_runner.go:130] > # containers. These are overridden if set in the
	I0920 17:42:42.024778   46025 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0920 17:42:42.024788   46025 command_runner.go:130] > # default_env = [
	I0920 17:42:42.024794   46025 command_runner.go:130] > # ]
	I0920 17:42:42.024805   46025 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0920 17:42:42.024817   46025 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0920 17:42:42.024838   46025 command_runner.go:130] > # selinux = false
	I0920 17:42:42.024850   46025 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0920 17:42:42.024863   46025 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0920 17:42:42.024877   46025 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0920 17:42:42.024888   46025 command_runner.go:130] > # seccomp_profile = ""
	I0920 17:42:42.024898   46025 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0920 17:42:42.024911   46025 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0920 17:42:42.024924   46025 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0920 17:42:42.024935   46025 command_runner.go:130] > # which might increase security.
	I0920 17:42:42.024945   46025 command_runner.go:130] > # This option is currently deprecated,
	I0920 17:42:42.024957   46025 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0920 17:42:42.024967   46025 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0920 17:42:42.024977   46025 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0920 17:42:42.024989   46025 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0920 17:42:42.024997   46025 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0920 17:42:42.025010   46025 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0920 17:42:42.025020   46025 command_runner.go:130] > # This option supports live configuration reload.
	I0920 17:42:42.025034   46025 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0920 17:42:42.025046   46025 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0920 17:42:42.025055   46025 command_runner.go:130] > # the cgroup blockio controller.
	I0920 17:42:42.025066   46025 command_runner.go:130] > # blockio_config_file = ""
	I0920 17:42:42.025079   46025 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0920 17:42:42.025086   46025 command_runner.go:130] > # blockio parameters.
	I0920 17:42:42.025096   46025 command_runner.go:130] > # blockio_reload = false
	I0920 17:42:42.025106   46025 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0920 17:42:42.025115   46025 command_runner.go:130] > # irqbalance daemon.
	I0920 17:42:42.025126   46025 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0920 17:42:42.025137   46025 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0920 17:42:42.025149   46025 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0920 17:42:42.025162   46025 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0920 17:42:42.025170   46025 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0920 17:42:42.025182   46025 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0920 17:42:42.025190   46025 command_runner.go:130] > # This option supports live configuration reload.
	I0920 17:42:42.025209   46025 command_runner.go:130] > # rdt_config_file = ""
	I0920 17:42:42.025221   46025 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0920 17:42:42.025231   46025 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0920 17:42:42.025273   46025 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0920 17:42:42.025285   46025 command_runner.go:130] > # separate_pull_cgroup = ""
	I0920 17:42:42.025295   46025 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0920 17:42:42.025305   46025 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0920 17:42:42.025314   46025 command_runner.go:130] > # will be added.
	I0920 17:42:42.025321   46025 command_runner.go:130] > # default_capabilities = [
	I0920 17:42:42.025329   46025 command_runner.go:130] > # 	"CHOWN",
	I0920 17:42:42.025336   46025 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0920 17:42:42.025345   46025 command_runner.go:130] > # 	"FSETID",
	I0920 17:42:42.025350   46025 command_runner.go:130] > # 	"FOWNER",
	I0920 17:42:42.025357   46025 command_runner.go:130] > # 	"SETGID",
	I0920 17:42:42.025365   46025 command_runner.go:130] > # 	"SETUID",
	I0920 17:42:42.025370   46025 command_runner.go:130] > # 	"SETPCAP",
	I0920 17:42:42.025377   46025 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0920 17:42:42.025383   46025 command_runner.go:130] > # 	"KILL",
	I0920 17:42:42.025390   46025 command_runner.go:130] > # ]
	I0920 17:42:42.025402   46025 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0920 17:42:42.025415   46025 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0920 17:42:42.025426   46025 command_runner.go:130] > # add_inheritable_capabilities = false
	I0920 17:42:42.025437   46025 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0920 17:42:42.025448   46025 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0920 17:42:42.025455   46025 command_runner.go:130] > default_sysctls = [
	I0920 17:42:42.025469   46025 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0920 17:42:42.025477   46025 command_runner.go:130] > ]
	I0920 17:42:42.025485   46025 command_runner.go:130] > # List of devices on the host that a
	I0920 17:42:42.025494   46025 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0920 17:42:42.025503   46025 command_runner.go:130] > # allowed_devices = [
	I0920 17:42:42.025517   46025 command_runner.go:130] > # 	"/dev/fuse",
	I0920 17:42:42.025525   46025 command_runner.go:130] > # ]
	I0920 17:42:42.025533   46025 command_runner.go:130] > # List of additional devices, specified as
	I0920 17:42:42.025555   46025 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0920 17:42:42.025567   46025 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0920 17:42:42.025579   46025 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0920 17:42:42.025589   46025 command_runner.go:130] > # additional_devices = [
	I0920 17:42:42.025595   46025 command_runner.go:130] > # ]
	I0920 17:42:42.025605   46025 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0920 17:42:42.025614   46025 command_runner.go:130] > # cdi_spec_dirs = [
	I0920 17:42:42.025619   46025 command_runner.go:130] > # 	"/etc/cdi",
	I0920 17:42:42.025627   46025 command_runner.go:130] > # 	"/var/run/cdi",
	I0920 17:42:42.025632   46025 command_runner.go:130] > # ]
	I0920 17:42:42.025645   46025 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0920 17:42:42.025654   46025 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0920 17:42:42.025664   46025 command_runner.go:130] > # Defaults to false.
	I0920 17:42:42.025673   46025 command_runner.go:130] > # device_ownership_from_security_context = false
	I0920 17:42:42.025686   46025 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0920 17:42:42.025700   46025 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0920 17:42:42.025709   46025 command_runner.go:130] > # hooks_dir = [
	I0920 17:42:42.025716   46025 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0920 17:42:42.025724   46025 command_runner.go:130] > # ]
	I0920 17:42:42.025734   46025 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0920 17:42:42.025747   46025 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0920 17:42:42.025757   46025 command_runner.go:130] > # its default mounts from the following two files:
	I0920 17:42:42.025765   46025 command_runner.go:130] > #
	I0920 17:42:42.025775   46025 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0920 17:42:42.025787   46025 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0920 17:42:42.025799   46025 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0920 17:42:42.025807   46025 command_runner.go:130] > #
	I0920 17:42:42.025817   46025 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0920 17:42:42.025830   46025 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0920 17:42:42.025861   46025 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0920 17:42:42.025869   46025 command_runner.go:130] > #      only add mounts it finds in this file.
	I0920 17:42:42.025877   46025 command_runner.go:130] > #
	I0920 17:42:42.025884   46025 command_runner.go:130] > # default_mounts_file = ""
	I0920 17:42:42.025903   46025 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0920 17:42:42.025921   46025 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0920 17:42:42.025930   46025 command_runner.go:130] > pids_limit = 1024
	I0920 17:42:42.025938   46025 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0920 17:42:42.025949   46025 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0920 17:42:42.025959   46025 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0920 17:42:42.025974   46025 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0920 17:42:42.025983   46025 command_runner.go:130] > # log_size_max = -1
	I0920 17:42:42.025993   46025 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0920 17:42:42.026004   46025 command_runner.go:130] > # log_to_journald = false
	I0920 17:42:42.026014   46025 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0920 17:42:42.026027   46025 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0920 17:42:42.026038   46025 command_runner.go:130] > # Path to directory for container attach sockets.
	I0920 17:42:42.026045   46025 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0920 17:42:42.026057   46025 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0920 17:42:42.026064   46025 command_runner.go:130] > # bind_mount_prefix = ""
	I0920 17:42:42.026071   46025 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0920 17:42:42.026080   46025 command_runner.go:130] > # read_only = false
	I0920 17:42:42.026090   46025 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0920 17:42:42.026102   46025 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0920 17:42:42.026112   46025 command_runner.go:130] > # live configuration reload.
	I0920 17:42:42.026118   46025 command_runner.go:130] > # log_level = "info"
	I0920 17:42:42.026130   46025 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0920 17:42:42.026137   46025 command_runner.go:130] > # This option supports live configuration reload.
	I0920 17:42:42.026147   46025 command_runner.go:130] > # log_filter = ""
	I0920 17:42:42.026157   46025 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0920 17:42:42.026170   46025 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0920 17:42:42.026178   46025 command_runner.go:130] > # separated by comma.
	I0920 17:42:42.026188   46025 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0920 17:42:42.026198   46025 command_runner.go:130] > # uid_mappings = ""
	I0920 17:42:42.026208   46025 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0920 17:42:42.026220   46025 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0920 17:42:42.026230   46025 command_runner.go:130] > # separated by comma.
	I0920 17:42:42.026246   46025 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0920 17:42:42.026256   46025 command_runner.go:130] > # gid_mappings = ""
	I0920 17:42:42.026265   46025 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0920 17:42:42.026278   46025 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0920 17:42:42.026290   46025 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0920 17:42:42.026307   46025 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0920 17:42:42.026316   46025 command_runner.go:130] > # minimum_mappable_uid = -1
	I0920 17:42:42.026326   46025 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0920 17:42:42.026337   46025 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0920 17:42:42.026349   46025 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0920 17:42:42.026363   46025 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0920 17:42:42.026372   46025 command_runner.go:130] > # minimum_mappable_gid = -1
	I0920 17:42:42.026381   46025 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0920 17:42:42.026393   46025 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0920 17:42:42.026405   46025 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0920 17:42:42.026414   46025 command_runner.go:130] > # ctr_stop_timeout = 30
	I0920 17:42:42.026423   46025 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0920 17:42:42.026436   46025 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0920 17:42:42.026447   46025 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0920 17:42:42.026459   46025 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0920 17:42:42.026466   46025 command_runner.go:130] > drop_infra_ctr = false
	I0920 17:42:42.026479   46025 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0920 17:42:42.026492   46025 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0920 17:42:42.026503   46025 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0920 17:42:42.026518   46025 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0920 17:42:42.026531   46025 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0920 17:42:42.026541   46025 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0920 17:42:42.026552   46025 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0920 17:42:42.026563   46025 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0920 17:42:42.026573   46025 command_runner.go:130] > # shared_cpuset = ""
	I0920 17:42:42.026582   46025 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0920 17:42:42.026593   46025 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0920 17:42:42.026603   46025 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0920 17:42:42.026617   46025 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0920 17:42:42.026626   46025 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0920 17:42:42.026634   46025 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0920 17:42:42.026646   46025 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0920 17:42:42.026655   46025 command_runner.go:130] > # enable_criu_support = false
	I0920 17:42:42.026662   46025 command_runner.go:130] > # Enable/disable the generation of the container and
	I0920 17:42:42.026679   46025 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0920 17:42:42.026689   46025 command_runner.go:130] > # enable_pod_events = false
	I0920 17:42:42.026699   46025 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0920 17:42:42.026723   46025 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0920 17:42:42.026732   46025 command_runner.go:130] > # default_runtime = "runc"
	I0920 17:42:42.026740   46025 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0920 17:42:42.026754   46025 command_runner.go:130] > # will cause container creation to fail (as opposed to the current behavior of creating them as directories).
	I0920 17:42:42.026773   46025 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0920 17:42:42.026784   46025 command_runner.go:130] > # creation as a file is not desired either.
	I0920 17:42:42.026798   46025 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0920 17:42:42.026808   46025 command_runner.go:130] > # the hostname is being managed dynamically.
	I0920 17:42:42.026815   46025 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0920 17:42:42.026823   46025 command_runner.go:130] > # ]
	I0920 17:42:42.026833   46025 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0920 17:42:42.026845   46025 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0920 17:42:42.026857   46025 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0920 17:42:42.026865   46025 command_runner.go:130] > # Each entry in the table should follow the format:
	I0920 17:42:42.026873   46025 command_runner.go:130] > #
	I0920 17:42:42.026884   46025 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0920 17:42:42.026891   46025 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0920 17:42:42.026918   46025 command_runner.go:130] > # runtime_type = "oci"
	I0920 17:42:42.026929   46025 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0920 17:42:42.026938   46025 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0920 17:42:42.026948   46025 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0920 17:42:42.026960   46025 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0920 17:42:42.026967   46025 command_runner.go:130] > # monitor_env = []
	I0920 17:42:42.026979   46025 command_runner.go:130] > # privileged_without_host_devices = false
	I0920 17:42:42.026986   46025 command_runner.go:130] > # allowed_annotations = []
	I0920 17:42:42.026997   46025 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0920 17:42:42.027003   46025 command_runner.go:130] > # Where:
	I0920 17:42:42.027013   46025 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0920 17:42:42.027025   46025 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0920 17:42:42.027037   46025 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0920 17:42:42.027050   46025 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0920 17:42:42.027059   46025 command_runner.go:130] > #   in $PATH.
	I0920 17:42:42.027070   46025 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0920 17:42:42.027082   46025 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0920 17:42:42.027100   46025 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0920 17:42:42.027109   46025 command_runner.go:130] > #   state.
	I0920 17:42:42.027120   46025 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0920 17:42:42.027133   46025 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0920 17:42:42.027145   46025 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0920 17:42:42.027156   46025 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0920 17:42:42.027168   46025 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0920 17:42:42.027181   46025 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0920 17:42:42.027189   46025 command_runner.go:130] > #   The currently recognized values are:
	I0920 17:42:42.027202   46025 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0920 17:42:42.027217   46025 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0920 17:42:42.027230   46025 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0920 17:42:42.027246   46025 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0920 17:42:42.027261   46025 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0920 17:42:42.027275   46025 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0920 17:42:42.027289   46025 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0920 17:42:42.027302   46025 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0920 17:42:42.027314   46025 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0920 17:42:42.027325   46025 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0920 17:42:42.027335   46025 command_runner.go:130] > #   deprecated option "conmon".
	I0920 17:42:42.027345   46025 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0920 17:42:42.027356   46025 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0920 17:42:42.027367   46025 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0920 17:42:42.027378   46025 command_runner.go:130] > #   should be moved to the container's cgroup
	I0920 17:42:42.027390   46025 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0920 17:42:42.027401   46025 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0920 17:42:42.027416   46025 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0920 17:42:42.027427   46025 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0920 17:42:42.027435   46025 command_runner.go:130] > #
	I0920 17:42:42.027443   46025 command_runner.go:130] > # Using the seccomp notifier feature:
	I0920 17:42:42.027451   46025 command_runner.go:130] > #
	I0920 17:42:42.027461   46025 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0920 17:42:42.027473   46025 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0920 17:42:42.027481   46025 command_runner.go:130] > #
	I0920 17:42:42.027496   46025 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0920 17:42:42.027515   46025 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0920 17:42:42.027524   46025 command_runner.go:130] > #
	I0920 17:42:42.027535   46025 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0920 17:42:42.027543   46025 command_runner.go:130] > # feature.
	I0920 17:42:42.027549   46025 command_runner.go:130] > #
	I0920 17:42:42.027562   46025 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0920 17:42:42.027576   46025 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0920 17:42:42.027588   46025 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0920 17:42:42.027602   46025 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0920 17:42:42.027614   46025 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0920 17:42:42.027618   46025 command_runner.go:130] > #
	I0920 17:42:42.027631   46025 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0920 17:42:42.027643   46025 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0920 17:42:42.027652   46025 command_runner.go:130] > #
	I0920 17:42:42.027680   46025 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0920 17:42:42.027696   46025 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0920 17:42:42.027701   46025 command_runner.go:130] > #
	I0920 17:42:42.027711   46025 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0920 17:42:42.027722   46025 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0920 17:42:42.027727   46025 command_runner.go:130] > # limitation.
	I0920 17:42:42.027736   46025 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0920 17:42:42.027745   46025 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0920 17:42:42.027752   46025 command_runner.go:130] > runtime_type = "oci"
	I0920 17:42:42.027760   46025 command_runner.go:130] > runtime_root = "/run/runc"
	I0920 17:42:42.027769   46025 command_runner.go:130] > runtime_config_path = ""
	I0920 17:42:42.027781   46025 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0920 17:42:42.027791   46025 command_runner.go:130] > monitor_cgroup = "pod"
	I0920 17:42:42.027797   46025 command_runner.go:130] > monitor_exec_cgroup = ""
	I0920 17:42:42.027806   46025 command_runner.go:130] > monitor_env = [
	I0920 17:42:42.027815   46025 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0920 17:42:42.027822   46025 command_runner.go:130] > ]
	I0920 17:42:42.027828   46025 command_runner.go:130] > privileged_without_host_devices = false
	I0920 17:42:42.027840   46025 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0920 17:42:42.027851   46025 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0920 17:42:42.027863   46025 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0920 17:42:42.027880   46025 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I0920 17:42:42.027895   46025 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0920 17:42:42.027906   46025 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0920 17:42:42.027930   46025 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0920 17:42:42.027945   46025 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0920 17:42:42.027957   46025 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0920 17:42:42.027971   46025 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0920 17:42:42.027980   46025 command_runner.go:130] > # Example:
	I0920 17:42:42.027988   46025 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0920 17:42:42.027999   46025 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0920 17:42:42.028008   46025 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0920 17:42:42.028016   46025 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0920 17:42:42.028025   46025 command_runner.go:130] > # cpuset = 0
	I0920 17:42:42.028031   46025 command_runner.go:130] > # cpushares = "0-1"
	I0920 17:42:42.028040   46025 command_runner.go:130] > # Where:
	I0920 17:42:42.028048   46025 command_runner.go:130] > # The workload name is workload-type.
	I0920 17:42:42.028063   46025 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0920 17:42:42.028078   46025 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0920 17:42:42.028092   46025 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0920 17:42:42.028108   46025 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0920 17:42:42.028119   46025 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0920 17:42:42.028131   46025 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0920 17:42:42.028144   46025 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0920 17:42:42.028155   46025 command_runner.go:130] > # Default value is set to true
	I0920 17:42:42.028163   46025 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0920 17:42:42.028174   46025 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0920 17:42:42.028182   46025 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0920 17:42:42.028193   46025 command_runner.go:130] > # Default value is set to 'false'
	I0920 17:42:42.028201   46025 command_runner.go:130] > # disable_hostport_mapping = false
	I0920 17:42:42.028215   46025 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0920 17:42:42.028223   46025 command_runner.go:130] > #
	I0920 17:42:42.028233   46025 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0920 17:42:42.028246   46025 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0920 17:42:42.028259   46025 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0920 17:42:42.028268   46025 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0920 17:42:42.028277   46025 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0920 17:42:42.028283   46025 command_runner.go:130] > [crio.image]
	I0920 17:42:42.028291   46025 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0920 17:42:42.028298   46025 command_runner.go:130] > # default_transport = "docker://"
	I0920 17:42:42.028312   46025 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0920 17:42:42.028322   46025 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0920 17:42:42.028329   46025 command_runner.go:130] > # global_auth_file = ""
	I0920 17:42:42.028336   46025 command_runner.go:130] > # The image used to instantiate infra containers.
	I0920 17:42:42.028343   46025 command_runner.go:130] > # This option supports live configuration reload.
	I0920 17:42:42.028349   46025 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0920 17:42:42.028357   46025 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0920 17:42:42.028365   46025 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0920 17:42:42.028373   46025 command_runner.go:130] > # This option supports live configuration reload.
	I0920 17:42:42.028379   46025 command_runner.go:130] > # pause_image_auth_file = ""
	I0920 17:42:42.028387   46025 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0920 17:42:42.028396   46025 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0920 17:42:42.028407   46025 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0920 17:42:42.028416   46025 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0920 17:42:42.028423   46025 command_runner.go:130] > # pause_command = "/pause"
	I0920 17:42:42.028432   46025 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0920 17:42:42.028441   46025 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0920 17:42:42.028450   46025 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0920 17:42:42.028459   46025 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0920 17:42:42.028467   46025 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0920 17:42:42.028476   46025 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0920 17:42:42.028482   46025 command_runner.go:130] > # pinned_images = [
	I0920 17:42:42.028486   46025 command_runner.go:130] > # ]
	I0920 17:42:42.028494   46025 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0920 17:42:42.028504   46025 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0920 17:42:42.028522   46025 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0920 17:42:42.028534   46025 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0920 17:42:42.028546   46025 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0920 17:42:42.028556   46025 command_runner.go:130] > # signature_policy = ""
	I0920 17:42:42.028565   46025 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0920 17:42:42.028579   46025 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0920 17:42:42.028592   46025 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0920 17:42:42.028602   46025 command_runner.go:130] > # or the concatenated path is nonexistent, then the signature_policy or system
	I0920 17:42:42.028614   46025 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0920 17:42:42.028624   46025 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0920 17:42:42.028636   46025 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0920 17:42:42.028655   46025 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0920 17:42:42.028664   46025 command_runner.go:130] > # changing them here.
	I0920 17:42:42.028670   46025 command_runner.go:130] > # insecure_registries = [
	I0920 17:42:42.028677   46025 command_runner.go:130] > # ]
	I0920 17:42:42.028686   46025 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0920 17:42:42.028696   46025 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0920 17:42:42.028706   46025 command_runner.go:130] > # image_volumes = "mkdir"
	I0920 17:42:42.028714   46025 command_runner.go:130] > # Temporary directory to use for storing big files
	I0920 17:42:42.028723   46025 command_runner.go:130] > # big_files_temporary_dir = ""
	I0920 17:42:42.028734   46025 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0920 17:42:42.028742   46025 command_runner.go:130] > # CNI plugins.
	I0920 17:42:42.028747   46025 command_runner.go:130] > [crio.network]
	I0920 17:42:42.028756   46025 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0920 17:42:42.028771   46025 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0920 17:42:42.028780   46025 command_runner.go:130] > # cni_default_network = ""
	I0920 17:42:42.028789   46025 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0920 17:42:42.028798   46025 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0920 17:42:42.028805   46025 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0920 17:42:42.028813   46025 command_runner.go:130] > # plugin_dirs = [
	I0920 17:42:42.028820   46025 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0920 17:42:42.028827   46025 command_runner.go:130] > # ]
	I0920 17:42:42.028836   46025 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0920 17:42:42.028844   46025 command_runner.go:130] > [crio.metrics]
	I0920 17:42:42.028852   46025 command_runner.go:130] > # Globally enable or disable metrics support.
	I0920 17:42:42.028860   46025 command_runner.go:130] > enable_metrics = true
	I0920 17:42:42.028868   46025 command_runner.go:130] > # Specify enabled metrics collectors.
	I0920 17:42:42.028877   46025 command_runner.go:130] > # Per default all metrics are enabled.
	I0920 17:42:42.028887   46025 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0920 17:42:42.028898   46025 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0920 17:42:42.028907   46025 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0920 17:42:42.028915   46025 command_runner.go:130] > # metrics_collectors = [
	I0920 17:42:42.028924   46025 command_runner.go:130] > # 	"operations",
	I0920 17:42:42.028931   46025 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0920 17:42:42.028941   46025 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0920 17:42:42.028948   46025 command_runner.go:130] > # 	"operations_errors",
	I0920 17:42:42.028957   46025 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0920 17:42:42.028964   46025 command_runner.go:130] > # 	"image_pulls_by_name",
	I0920 17:42:42.028973   46025 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0920 17:42:42.028980   46025 command_runner.go:130] > # 	"image_pulls_failures",
	I0920 17:42:42.028989   46025 command_runner.go:130] > # 	"image_pulls_successes",
	I0920 17:42:42.028996   46025 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0920 17:42:42.029005   46025 command_runner.go:130] > # 	"image_layer_reuse",
	I0920 17:42:42.029018   46025 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0920 17:42:42.029031   46025 command_runner.go:130] > # 	"containers_oom_total",
	I0920 17:42:42.029039   46025 command_runner.go:130] > # 	"containers_oom",
	I0920 17:42:42.029045   46025 command_runner.go:130] > # 	"processes_defunct",
	I0920 17:42:42.029054   46025 command_runner.go:130] > # 	"operations_total",
	I0920 17:42:42.029061   46025 command_runner.go:130] > # 	"operations_latency_seconds",
	I0920 17:42:42.029071   46025 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0920 17:42:42.029077   46025 command_runner.go:130] > # 	"operations_errors_total",
	I0920 17:42:42.029088   46025 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0920 17:42:42.029108   46025 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0920 17:42:42.029118   46025 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0920 17:42:42.029129   46025 command_runner.go:130] > # 	"image_pulls_success_total",
	I0920 17:42:42.029138   46025 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0920 17:42:42.029145   46025 command_runner.go:130] > # 	"containers_oom_count_total",
	I0920 17:42:42.029155   46025 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0920 17:42:42.029163   46025 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0920 17:42:42.029170   46025 command_runner.go:130] > # ]
	I0920 17:42:42.029177   46025 command_runner.go:130] > # The port on which the metrics server will listen.
	I0920 17:42:42.029186   46025 command_runner.go:130] > # metrics_port = 9090
	I0920 17:42:42.029193   46025 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0920 17:42:42.029202   46025 command_runner.go:130] > # metrics_socket = ""
	I0920 17:42:42.029211   46025 command_runner.go:130] > # The certificate for the secure metrics server.
	I0920 17:42:42.029223   46025 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0920 17:42:42.029236   46025 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0920 17:42:42.029245   46025 command_runner.go:130] > # certificate on any modification event.
	I0920 17:42:42.029251   46025 command_runner.go:130] > # metrics_cert = ""
	I0920 17:42:42.029260   46025 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0920 17:42:42.029269   46025 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0920 17:42:42.029278   46025 command_runner.go:130] > # metrics_key = ""
	I0920 17:42:42.029289   46025 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0920 17:42:42.029299   46025 command_runner.go:130] > [crio.tracing]
	I0920 17:42:42.029311   46025 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0920 17:42:42.029321   46025 command_runner.go:130] > # enable_tracing = false
	I0920 17:42:42.029340   46025 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0920 17:42:42.029350   46025 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0920 17:42:42.029362   46025 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0920 17:42:42.029373   46025 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0920 17:42:42.029381   46025 command_runner.go:130] > # CRI-O NRI configuration.
	I0920 17:42:42.029389   46025 command_runner.go:130] > [crio.nri]
	I0920 17:42:42.029399   46025 command_runner.go:130] > # Globally enable or disable NRI.
	I0920 17:42:42.029407   46025 command_runner.go:130] > # enable_nri = false
	I0920 17:42:42.029414   46025 command_runner.go:130] > # NRI socket to listen on.
	I0920 17:42:42.029423   46025 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0920 17:42:42.029429   46025 command_runner.go:130] > # NRI plugin directory to use.
	I0920 17:42:42.029438   46025 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0920 17:42:42.029452   46025 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0920 17:42:42.029463   46025 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0920 17:42:42.029475   46025 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0920 17:42:42.029485   46025 command_runner.go:130] > # nri_disable_connections = false
	I0920 17:42:42.029492   46025 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0920 17:42:42.029501   46025 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0920 17:42:42.029515   46025 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0920 17:42:42.029524   46025 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0920 17:42:42.029537   46025 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0920 17:42:42.029545   46025 command_runner.go:130] > [crio.stats]
	I0920 17:42:42.029555   46025 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0920 17:42:42.029566   46025 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0920 17:42:42.029575   46025 command_runner.go:130] > # stats_collection_period = 0
	I0920 17:42:42.029883   46025 command_runner.go:130] ! time="2024-09-20 17:42:41.985103444Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0920 17:42:42.029910   46025 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
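The seccomp notifier feature described in the configuration dump above requires two pieces: a runtime handler whose allowed_annotations list includes "io.kubernetes.cri-o.seccompNotifierAction", and a pod that carries that annotation while its restartPolicy is set to Never. A minimal sketch of the pod side, built with the Kubernetes Go API (the pod name, container name, image, and command are placeholders, not taken from this run):

	package main

	import (
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	)

	// seccompNotifierPod sketches a pod that opts into the seccomp notifier.
	// The annotation value "stop" asks CRI-O to terminate the workload after the
	// 5 second timeout once a blocked syscall is observed, and RestartPolicyNever
	// keeps the kubelet from restarting the container immediately.
	func seccompNotifierPod() *corev1.Pod {
		return &corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{
				Name: "seccomp-debug", // hypothetical name
				Annotations: map[string]string{
					"io.kubernetes.cri-o.seccompNotifierAction": "stop",
				},
			},
			Spec: corev1.PodSpec{
				RestartPolicy: corev1.RestartPolicyNever,
				Containers: []corev1.Container{{
					Name:    "app",     // hypothetical container
					Image:   "busybox", // hypothetical image
					Command: []string{"sleep", "3600"},
				}},
			},
		}
	}

	func main() { _ = seccompNotifierPod() }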
	I0920 17:42:42.029987   46025 cni.go:84] Creating CNI manager for ""
	I0920 17:42:42.029999   46025 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0920 17:42:42.030051   46025 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 17:42:42.030086   46025 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.115 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-592246 NodeName:multinode-592246 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.115"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.115 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 17:42:42.030261   46025 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.115
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-592246"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.115
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.115"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 17:42:42.030338   46025 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 17:42:42.041470   46025 command_runner.go:130] > kubeadm
	I0920 17:42:42.041497   46025 command_runner.go:130] > kubectl
	I0920 17:42:42.041501   46025 command_runner.go:130] > kubelet
	I0920 17:42:42.041522   46025 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 17:42:42.041576   46025 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 17:42:42.051550   46025 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0920 17:42:42.069658   46025 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 17:42:42.087059   46025 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0920 17:42:42.104520   46025 ssh_runner.go:195] Run: grep 192.168.39.115	control-plane.minikube.internal$ /etc/hosts
	I0920 17:42:42.108429   46025 command_runner.go:130] > 192.168.39.115	control-plane.minikube.internal
	I0920 17:42:42.108518   46025 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 17:42:42.257045   46025 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 17:42:42.272581   46025 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/multinode-592246 for IP: 192.168.39.115
	I0920 17:42:42.272607   46025 certs.go:194] generating shared ca certs ...
	I0920 17:42:42.272623   46025 certs.go:226] acquiring lock for ca certs: {Name:mkc7ef6c737c6bdc3fdd9dcff8f57029c020d8f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:42:42.272775   46025 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key
	I0920 17:42:42.272815   46025 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key
	I0920 17:42:42.272824   46025 certs.go:256] generating profile certs ...
	I0920 17:42:42.272898   46025 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/multinode-592246/client.key
	I0920 17:42:42.272955   46025 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/multinode-592246/apiserver.key.bdc96fd7
	I0920 17:42:42.272989   46025 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/multinode-592246/proxy-client.key
	I0920 17:42:42.272999   46025 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0920 17:42:42.273018   46025 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0920 17:42:42.273033   46025 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0920 17:42:42.273047   46025 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0920 17:42:42.273061   46025 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/multinode-592246/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0920 17:42:42.273071   46025 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/multinode-592246/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0920 17:42:42.273081   46025 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/multinode-592246/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0920 17:42:42.273090   46025 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/multinode-592246/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0920 17:42:42.273140   46025 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973.pem (1338 bytes)
	W0920 17:42:42.273163   46025 certs.go:480] ignoring /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973_empty.pem, impossibly tiny 0 bytes
	I0920 17:42:42.273171   46025 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 17:42:42.273247   46025 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem (1082 bytes)
	I0920 17:42:42.273283   46025 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem (1123 bytes)
	I0920 17:42:42.273309   46025 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem (1675 bytes)
	I0920 17:42:42.273349   46025 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem (1708 bytes)
	I0920 17:42:42.273377   46025 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973.pem -> /usr/share/ca-certificates/15973.pem
	I0920 17:42:42.273392   46025 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem -> /usr/share/ca-certificates/159732.pem
	I0920 17:42:42.273405   46025 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:42:42.274045   46025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 17:42:42.300027   46025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 17:42:42.325247   46025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 17:42:42.350349   46025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0920 17:42:42.376989   46025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/multinode-592246/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0920 17:42:42.402493   46025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/multinode-592246/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 17:42:42.427495   46025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/multinode-592246/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 17:42:42.452423   46025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/multinode-592246/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0920 17:42:42.478154   46025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973.pem --> /usr/share/ca-certificates/15973.pem (1338 bytes)
	I0920 17:42:42.503412   46025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem --> /usr/share/ca-certificates/159732.pem (1708 bytes)
	I0920 17:42:42.527851   46025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 17:42:42.553486   46025 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 17:42:42.584354   46025 ssh_runner.go:195] Run: openssl version
	I0920 17:42:42.603197   46025 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0920 17:42:42.603423   46025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/159732.pem && ln -fs /usr/share/ca-certificates/159732.pem /etc/ssl/certs/159732.pem"
	I0920 17:42:42.649844   46025 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/159732.pem
	I0920 17:42:42.661820   46025 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 20 17:04 /usr/share/ca-certificates/159732.pem
	I0920 17:42:42.661881   46025 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 17:04 /usr/share/ca-certificates/159732.pem
	I0920 17:42:42.661991   46025 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/159732.pem
	I0920 17:42:42.671437   46025 command_runner.go:130] > 3ec20f2e
	I0920 17:42:42.671852   46025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/159732.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 17:42:42.687660   46025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 17:42:42.701020   46025 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:42:42.706033   46025 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 20 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:42:42.706243   46025 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:42:42.706313   46025 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:42:42.715744   46025 command_runner.go:130] > b5213941
	I0920 17:42:42.716007   46025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 17:42:42.729389   46025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15973.pem && ln -fs /usr/share/ca-certificates/15973.pem /etc/ssl/certs/15973.pem"
	I0920 17:42:42.744957   46025 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15973.pem
	I0920 17:42:42.750133   46025 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 20 17:04 /usr/share/ca-certificates/15973.pem
	I0920 17:42:42.750404   46025 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 17:04 /usr/share/ca-certificates/15973.pem
	I0920 17:42:42.750455   46025 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15973.pem
	I0920 17:42:42.756871   46025 command_runner.go:130] > 51391683
	I0920 17:42:42.757156   46025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15973.pem /etc/ssl/certs/51391683.0"
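The ls / openssl -hash / ln sequence above is how each CA gets installed into the node's trust store: OpenSSL-based clients look certificates up by subject hash, so the PEM is linked under /etc/ssl/certs both by its own name and as <hash>.0. A minimal sketch of the same idea follows; it assumes it runs as root on the node with openssl on PATH, and it simplifies the remote `ln -fs` calls to a tolerant os.Symlink.

// installca_sketch.go: minimal sketch of the trust-store linking shown above.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA links the PEM into /etc/ssl/certs under its own name, asks openssl
// for the subject hash, then creates the <hash>.0 symlink used for lookup.
func installCA(pem string) error {
	name := filepath.Base(pem)
	if err := os.Symlink(pem, filepath.Join("/etc/ssl/certs", name)); err != nil && !os.IsExist(err) {
		return err
	}
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem in the log
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	if err := os.Symlink(pem, link); err != nil && !os.IsExist(err) {
		return err
	}
	fmt.Printf("%s -> %s\n", link, pem)
	return nil
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		log.Fatal(err)
	}
}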
	I0920 17:42:42.773414   46025 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 17:42:42.778668   46025 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 17:42:42.778698   46025 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0920 17:42:42.778708   46025 command_runner.go:130] > Device: 253,1	Inode: 7337000     Links: 1
	I0920 17:42:42.778718   46025 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0920 17:42:42.778730   46025 command_runner.go:130] > Access: 2024-09-20 17:35:53.092700305 +0000
	I0920 17:42:42.778737   46025 command_runner.go:130] > Modify: 2024-09-20 17:35:53.092700305 +0000
	I0920 17:42:42.778747   46025 command_runner.go:130] > Change: 2024-09-20 17:35:53.092700305 +0000
	I0920 17:42:42.778753   46025 command_runner.go:130] >  Birth: 2024-09-20 17:35:53.092700305 +0000
	I0920 17:42:42.779015   46025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 17:42:42.786116   46025 command_runner.go:130] > Certificate will not expire
	I0920 17:42:42.786459   46025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 17:42:42.793991   46025 command_runner.go:130] > Certificate will not expire
	I0920 17:42:42.794089   46025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 17:42:42.802589   46025 command_runner.go:130] > Certificate will not expire
	I0920 17:42:42.802874   46025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 17:42:42.810923   46025 command_runner.go:130] > Certificate will not expire
	I0920 17:42:42.811068   46025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 17:42:42.824274   46025 command_runner.go:130] > Certificate will not expire
	I0920 17:42:42.824367   46025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0920 17:42:42.834371   46025 command_runner.go:130] > Certificate will not expire
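Each `openssl x509 -checkend 86400` run above asks whether the certificate expires within the next 24 hours; "Certificate will not expire" means it does not. An equivalent check can be done directly in Go with crypto/x509; this is a minimal sketch, with the path taken from the log and the assumption that the file is readable where the program runs.

// checkend_sketch.go: Go equivalent of `openssl x509 -noout -checkend 86400`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file
// expires inside the given window.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	if soon {
		fmt.Println("Certificate will expire")
	} else {
		fmt.Println("Certificate will not expire")
	}
}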
	I0920 17:42:42.834899   46025 kubeadm.go:392] StartCluster: {Name:multinode-592246 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
1 ClusterName:multinode-592246 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.115 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.170 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.38 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:f
alse istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 17:42:42.835027   46025 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 17:42:42.835093   46025 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 17:42:42.913410   46025 command_runner.go:130] > 06e751c0e9afd49236f121119e05987ce6dec537fd429e6ca9bfafba0bb41a64
	I0920 17:42:42.913433   46025 command_runner.go:130] > aebea8aa76badc1c9b60fc60756c59dd82a7f8fbbc1e86ced5dc5516bf961e35
	I0920 17:42:42.913439   46025 command_runner.go:130] > c45d2aa0f63816d71c71ef16db9b7b26dc627f72d6ef7258b93c23005c445f9d
	I0920 17:42:42.913449   46025 command_runner.go:130] > 33cbb2c4dce5862b3f2c4ce603b1d94a12c4e89fe1a398e64490f33c2a491720
	I0920 17:42:42.913457   46025 command_runner.go:130] > 43d580bf9876b78f6a151a49a9fc863e01628a121c31c8106030c76a6949e273
	I0920 17:42:42.913474   46025 command_runner.go:130] > 18612a28ae5022d41607cfde2996aff0acffaa060ef96c055b522b167dc900d3
	I0920 17:42:42.913483   46025 command_runner.go:130] > 9a4606e22266020766dc8dbc189c0f65cdd349f031c39b0a7d106cf0781cbee8
	I0920 17:42:42.913651   46025 command_runner.go:130] > 33ec6262554fcd5394f13c012ea27e8a9bf53faf30d5cb85237b4a3496811c0e
	I0920 17:42:42.913866   46025 command_runner.go:130] > ca5e246374ae005eb084399c79df21d8dc78d6218c643826ddfec1d4d1ff26d8
	I0920 17:42:42.916558   46025 cri.go:89] found id: "06e751c0e9afd49236f121119e05987ce6dec537fd429e6ca9bfafba0bb41a64"
	I0920 17:42:42.916585   46025 cri.go:89] found id: "aebea8aa76badc1c9b60fc60756c59dd82a7f8fbbc1e86ced5dc5516bf961e35"
	I0920 17:42:42.916592   46025 cri.go:89] found id: "c45d2aa0f63816d71c71ef16db9b7b26dc627f72d6ef7258b93c23005c445f9d"
	I0920 17:42:42.916598   46025 cri.go:89] found id: "33cbb2c4dce5862b3f2c4ce603b1d94a12c4e89fe1a398e64490f33c2a491720"
	I0920 17:42:42.916604   46025 cri.go:89] found id: "43d580bf9876b78f6a151a49a9fc863e01628a121c31c8106030c76a6949e273"
	I0920 17:42:42.916610   46025 cri.go:89] found id: "18612a28ae5022d41607cfde2996aff0acffaa060ef96c055b522b167dc900d3"
	I0920 17:42:42.916615   46025 cri.go:89] found id: "9a4606e22266020766dc8dbc189c0f65cdd349f031c39b0a7d106cf0781cbee8"
	I0920 17:42:42.916621   46025 cri.go:89] found id: "33ec6262554fcd5394f13c012ea27e8a9bf53faf30d5cb85237b4a3496811c0e"
	I0920 17:42:42.916627   46025 cri.go:89] found id: "ca5e246374ae005eb084399c79df21d8dc78d6218c643826ddfec1d4d1ff26d8"
	I0920 17:42:42.916637   46025 cri.go:89] found id: ""
	I0920 17:42:42.916701   46025 ssh_runner.go:195] Run: sudo runc list -f json
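The container discovery above reduces to one crictl call filtered by the kube-system namespace label; its newline-separated output becomes the cri.go "found id" entries. A minimal sketch of that call follows, assuming crictl is installed and configured for the CRI-O socket (root privileges are normally required, hence the sudo in the log).

// crictl_sketch.go: sketch of the kube-system container listing shown above.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Mirrors: crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	out, err := exec.Command("crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		log.Fatal(err)
	}
	for _, id := range strings.Fields(string(out)) {
		fmt.Println("found id:", id) // same shape as the cri.go lines above
	}
}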
	
	
	==> CRI-O <==
	Sep 20 17:47:01 multinode-592246 crio[2694]: time="2024-09-20 17:47:01.696361766Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726854421696331651,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ccf37d44-5935-4e55-ade2-576d8bb22e5a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 17:47:01 multinode-592246 crio[2694]: time="2024-09-20 17:47:01.696948595Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ecb79573-46a8-426a-bffa-75f058931e84 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:47:01 multinode-592246 crio[2694]: time="2024-09-20 17:47:01.697022617Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ecb79573-46a8-426a-bffa-75f058931e84 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:47:01 multinode-592246 crio[2694]: time="2024-09-20 17:47:01.702815441Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b19b5d96c04f0998e9fb45f7ad94c150df0f2ab59724158e66ebb26e7f499c8e,PodSandboxId:5a993de08f9bef4380dbc45284a76dc665caa9a5a496c66fcc7b9f0ae83b9b8c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726854202702510290,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-wpfrr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 05422264-8765-46ab-bdf4-b78921ada4a5,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cbddb2e724ca8bd2c53e9dd5af73952dfe43e5d4ac79bd580e041a6d7aede69,PodSandboxId:ede81570dbdce75634a199a255cb7dd0b3c5c88896631a79b66dc0beece1baae,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726854176395029369,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zfr9g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f2f8be4-982b-4da8-a0fa-321348cd1a9b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29feefea72c1ac5a313ba5ab4a409ea549c57c7b56bd1658bb702180ce6bc03e,PodSandboxId:24d3fe772b46d9ad291951c5d621c9cc41a42a1f1fb74190199b21694e18206e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726854169394764230,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d66d7bc
d-aa8f-480e-8fd3-fb291ed97a09,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ccbb8ac9634336f88fdd6b655a976f2ff3bf355d24c77128abb0afa2d10e57b,PodSandboxId:7262b866f175e3e9cf9fe9ec65c89d8dec9f99d4da80612b4337c9162f7bf3cc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726854169214990857,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cknvs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd34a408-0e2d-4f85-819f-d99de8948804,},A
nnotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79bea98b7a32af8003c2b53e82f3bcedd25d46534ba4c58b1562a1d4dd62899f,PodSandboxId:6b0a562ee52e5ac9a7f5f030e301e935e51cbda8efd24205ccccc430f531ffdb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726854169317714960,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-sggtt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 116fdcff-31e2-4138-907e-17265f19795a,},Annotations:map[string]string{io.ku
bernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a49b273fff83658fbf7e813b47a71246aa5dc45e49d5cc2db55d579f621cd95,PodSandboxId:842ed051ffcd16391da8d46e90365e0a91454cc5a12a35c6368692e68b72d517,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726854169164374591,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-592246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d64fadc298f2cbad9993623dd59110d,},Annotations:map[string
]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d35c011af5ed95ef938176fd7f24330921876bc721a2c8d4a93fd8a65d020d0,PodSandboxId:63174a48a0af89838bfd988930e0e65dfece3fa5159362c3b81a7ab54b71b226,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726854169142256115,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-592246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ea9e608bd76e5a2abeb0f2985e4ffd4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3
fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa7a4ac935e0470da3205f280946a5075ad00d4d398edf2ecbd0d6b72f3fa634,PodSandboxId:9fc32f59cbb6f67f9be565b8132e9a611918e5dcaea3f931e987cbb0e3a02bab,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726854169108363071,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-592246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b329d23e42855b3fc45631c897e94259,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:203ca48efd7a7e366da2e2947cb259f02447634e8f2a44df78dcefd8909c9a0e,PodSandboxId:be318076a6d8ffd3459a6204eccf5cc88b4fe5fa0bfafd4ef5eb8a25b8f9a893,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726854169019232745,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-592246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab3f648581b2c5aff8dec8b8093fa25a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06e751c0e9afd49236f121119e05987ce6dec537fd429e6ca9bfafba0bb41a64,PodSandboxId:ede81570dbdce75634a199a255cb7dd0b3c5c88896631a79b66dc0beece1baae,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726854162778718962,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zfr9g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f2f8be4-982b-4da8-a0fa-321348cd1a9b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b77d864bd8491e5247f3ce9904092adc651e1ff2977b3112136c18674b7a794,PodSandboxId:fec0ed249777dfa09c803c80bea9d91d294a5f73d90a5915f377d90589c47027,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726853836843972851,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-wpfrr,i
o.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 05422264-8765-46ab-bdf4-b78921ada4a5,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c45d2aa0f63816d71c71ef16db9b7b26dc627f72d6ef7258b93c23005c445f9d,PodSandboxId:6b207f97a82ec1ea60f569be1a050c8bceeab08e59757c223205f217fd460602,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726853779304248510,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: d66d7bcd-aa8f-480e-8fd3-fb291ed97a09,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33cbb2c4dce5862b3f2c4ce603b1d94a12c4e89fe1a398e64490f33c2a491720,PodSandboxId:040dddb9b8699e642d434322e6c9311ab50710ae6e1dcfb49cd5573b21e39150,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726853767220444391,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-sggtt,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 116fdcff-31e2-4138-907e-17265f19795a,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43d580bf9876b78f6a151a49a9fc863e01628a121c31c8106030c76a6949e273,PodSandboxId:2ae710703257fa4fbfa8124e9f96c4da3483703c856c19977e12bbd696a7cff6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726853767060741978,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cknvs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd34a408-0e2d-4f85-819f
-d99de8948804,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18612a28ae5022d41607cfde2996aff0acffaa060ef96c055b522b167dc900d3,PodSandboxId:6dda31a6542e3654d10490db5310878e1401b0733a541a4cd64da9a521c5496a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726853756103764943,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-592246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b329d23e42855b3fc45631c897e94259,}
,Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a4606e22266020766dc8dbc189c0f65cdd349f031c39b0a7d106cf0781cbee8,PodSandboxId:26746e4572518a57357496479b39372cf85666fb52fa78be30f90c4df9eba034,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726853756093875934,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-592246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ea9e608bd76e5a2abeb0f2985e4ffd4,},Annotations:map[string]string{io.kubernetes.
container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33ec6262554fcd5394f13c012ea27e8a9bf53faf30d5cb85237b4a3496811c0e,PodSandboxId:f2e6ce6e57f368303f81a132a6701cff72d48204930ad307391eb837744c1a41,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726853756031895520,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-592246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab3f648581b2c5aff8dec8b8093fa25a,},Annotations:map[string]string{io.kubernetes.container.hash:
7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca5e246374ae005eb084399c79df21d8dc78d6218c643826ddfec1d4d1ff26d8,PodSandboxId:d1137e733ede785ed20953767b9d14df484c80311c56d28fa78c81bed27466a0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726853755986219359,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-592246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d64fadc298f2cbad9993623dd59110d,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ecb79573-46a8-426a-bffa-75f058931e84 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:47:01 multinode-592246 crio[2694]: time="2024-09-20 17:47:01.745724859Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6fa64187-5910-4528-b3c2-09874b30f72c name=/runtime.v1.RuntimeService/Version
	Sep 20 17:47:01 multinode-592246 crio[2694]: time="2024-09-20 17:47:01.745810656Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6fa64187-5910-4528-b3c2-09874b30f72c name=/runtime.v1.RuntimeService/Version
	Sep 20 17:47:01 multinode-592246 crio[2694]: time="2024-09-20 17:47:01.747352158Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4034476d-6be4-4051-a8db-c54c82bcac50 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 17:47:01 multinode-592246 crio[2694]: time="2024-09-20 17:47:01.747779530Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726854421747751821,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4034476d-6be4-4051-a8db-c54c82bcac50 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 17:47:01 multinode-592246 crio[2694]: time="2024-09-20 17:47:01.748294414Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ff585979-9572-4e9e-8f67-9c50a844186e name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:47:01 multinode-592246 crio[2694]: time="2024-09-20 17:47:01.748350810Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ff585979-9572-4e9e-8f67-9c50a844186e name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:47:01 multinode-592246 crio[2694]: time="2024-09-20 17:47:01.748703996Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b19b5d96c04f0998e9fb45f7ad94c150df0f2ab59724158e66ebb26e7f499c8e,PodSandboxId:5a993de08f9bef4380dbc45284a76dc665caa9a5a496c66fcc7b9f0ae83b9b8c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726854202702510290,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-wpfrr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 05422264-8765-46ab-bdf4-b78921ada4a5,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cbddb2e724ca8bd2c53e9dd5af73952dfe43e5d4ac79bd580e041a6d7aede69,PodSandboxId:ede81570dbdce75634a199a255cb7dd0b3c5c88896631a79b66dc0beece1baae,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726854176395029369,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zfr9g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f2f8be4-982b-4da8-a0fa-321348cd1a9b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29feefea72c1ac5a313ba5ab4a409ea549c57c7b56bd1658bb702180ce6bc03e,PodSandboxId:24d3fe772b46d9ad291951c5d621c9cc41a42a1f1fb74190199b21694e18206e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726854169394764230,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d66d7bc
d-aa8f-480e-8fd3-fb291ed97a09,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ccbb8ac9634336f88fdd6b655a976f2ff3bf355d24c77128abb0afa2d10e57b,PodSandboxId:7262b866f175e3e9cf9fe9ec65c89d8dec9f99d4da80612b4337c9162f7bf3cc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726854169214990857,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cknvs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd34a408-0e2d-4f85-819f-d99de8948804,},A
nnotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79bea98b7a32af8003c2b53e82f3bcedd25d46534ba4c58b1562a1d4dd62899f,PodSandboxId:6b0a562ee52e5ac9a7f5f030e301e935e51cbda8efd24205ccccc430f531ffdb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726854169317714960,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-sggtt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 116fdcff-31e2-4138-907e-17265f19795a,},Annotations:map[string]string{io.ku
bernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a49b273fff83658fbf7e813b47a71246aa5dc45e49d5cc2db55d579f621cd95,PodSandboxId:842ed051ffcd16391da8d46e90365e0a91454cc5a12a35c6368692e68b72d517,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726854169164374591,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-592246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d64fadc298f2cbad9993623dd59110d,},Annotations:map[string
]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d35c011af5ed95ef938176fd7f24330921876bc721a2c8d4a93fd8a65d020d0,PodSandboxId:63174a48a0af89838bfd988930e0e65dfece3fa5159362c3b81a7ab54b71b226,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726854169142256115,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-592246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ea9e608bd76e5a2abeb0f2985e4ffd4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3
fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa7a4ac935e0470da3205f280946a5075ad00d4d398edf2ecbd0d6b72f3fa634,PodSandboxId:9fc32f59cbb6f67f9be565b8132e9a611918e5dcaea3f931e987cbb0e3a02bab,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726854169108363071,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-592246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b329d23e42855b3fc45631c897e94259,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:203ca48efd7a7e366da2e2947cb259f02447634e8f2a44df78dcefd8909c9a0e,PodSandboxId:be318076a6d8ffd3459a6204eccf5cc88b4fe5fa0bfafd4ef5eb8a25b8f9a893,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726854169019232745,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-592246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab3f648581b2c5aff8dec8b8093fa25a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06e751c0e9afd49236f121119e05987ce6dec537fd429e6ca9bfafba0bb41a64,PodSandboxId:ede81570dbdce75634a199a255cb7dd0b3c5c88896631a79b66dc0beece1baae,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726854162778718962,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zfr9g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f2f8be4-982b-4da8-a0fa-321348cd1a9b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b77d864bd8491e5247f3ce9904092adc651e1ff2977b3112136c18674b7a794,PodSandboxId:fec0ed249777dfa09c803c80bea9d91d294a5f73d90a5915f377d90589c47027,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726853836843972851,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-wpfrr,i
o.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 05422264-8765-46ab-bdf4-b78921ada4a5,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c45d2aa0f63816d71c71ef16db9b7b26dc627f72d6ef7258b93c23005c445f9d,PodSandboxId:6b207f97a82ec1ea60f569be1a050c8bceeab08e59757c223205f217fd460602,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726853779304248510,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: d66d7bcd-aa8f-480e-8fd3-fb291ed97a09,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33cbb2c4dce5862b3f2c4ce603b1d94a12c4e89fe1a398e64490f33c2a491720,PodSandboxId:040dddb9b8699e642d434322e6c9311ab50710ae6e1dcfb49cd5573b21e39150,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726853767220444391,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-sggtt,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 116fdcff-31e2-4138-907e-17265f19795a,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43d580bf9876b78f6a151a49a9fc863e01628a121c31c8106030c76a6949e273,PodSandboxId:2ae710703257fa4fbfa8124e9f96c4da3483703c856c19977e12bbd696a7cff6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726853767060741978,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cknvs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd34a408-0e2d-4f85-819f
-d99de8948804,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18612a28ae5022d41607cfde2996aff0acffaa060ef96c055b522b167dc900d3,PodSandboxId:6dda31a6542e3654d10490db5310878e1401b0733a541a4cd64da9a521c5496a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726853756103764943,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-592246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b329d23e42855b3fc45631c897e94259,}
,Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a4606e22266020766dc8dbc189c0f65cdd349f031c39b0a7d106cf0781cbee8,PodSandboxId:26746e4572518a57357496479b39372cf85666fb52fa78be30f90c4df9eba034,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726853756093875934,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-592246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ea9e608bd76e5a2abeb0f2985e4ffd4,},Annotations:map[string]string{io.kubernetes.
container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33ec6262554fcd5394f13c012ea27e8a9bf53faf30d5cb85237b4a3496811c0e,PodSandboxId:f2e6ce6e57f368303f81a132a6701cff72d48204930ad307391eb837744c1a41,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726853756031895520,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-592246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab3f648581b2c5aff8dec8b8093fa25a,},Annotations:map[string]string{io.kubernetes.container.hash:
7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca5e246374ae005eb084399c79df21d8dc78d6218c643826ddfec1d4d1ff26d8,PodSandboxId:d1137e733ede785ed20953767b9d14df484c80311c56d28fa78c81bed27466a0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726853755986219359,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-592246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d64fadc298f2cbad9993623dd59110d,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ff585979-9572-4e9e-8f67-9c50a844186e name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:47:01 multinode-592246 crio[2694]: time="2024-09-20 17:47:01.790981904Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=461d1622-e4aa-43d2-92e0-13573b310fc9 name=/runtime.v1.RuntimeService/Version
	Sep 20 17:47:01 multinode-592246 crio[2694]: time="2024-09-20 17:47:01.791109209Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=461d1622-e4aa-43d2-92e0-13573b310fc9 name=/runtime.v1.RuntimeService/Version
	Sep 20 17:47:01 multinode-592246 crio[2694]: time="2024-09-20 17:47:01.792637414Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bb289fce-0390-4ed1-8f41-b0650c8dc383 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 17:47:01 multinode-592246 crio[2694]: time="2024-09-20 17:47:01.793094560Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726854421793065919,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bb289fce-0390-4ed1-8f41-b0650c8dc383 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 17:47:01 multinode-592246 crio[2694]: time="2024-09-20 17:47:01.793742042Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=60ab245d-ae90-4c99-860f-36bbafbcf435 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:47:01 multinode-592246 crio[2694]: time="2024-09-20 17:47:01.793818205Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=60ab245d-ae90-4c99-860f-36bbafbcf435 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:47:01 multinode-592246 crio[2694]: time="2024-09-20 17:47:01.794276141Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b19b5d96c04f0998e9fb45f7ad94c150df0f2ab59724158e66ebb26e7f499c8e,PodSandboxId:5a993de08f9bef4380dbc45284a76dc665caa9a5a496c66fcc7b9f0ae83b9b8c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726854202702510290,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-wpfrr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 05422264-8765-46ab-bdf4-b78921ada4a5,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cbddb2e724ca8bd2c53e9dd5af73952dfe43e5d4ac79bd580e041a6d7aede69,PodSandboxId:ede81570dbdce75634a199a255cb7dd0b3c5c88896631a79b66dc0beece1baae,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726854176395029369,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zfr9g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f2f8be4-982b-4da8-a0fa-321348cd1a9b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29feefea72c1ac5a313ba5ab4a409ea549c57c7b56bd1658bb702180ce6bc03e,PodSandboxId:24d3fe772b46d9ad291951c5d621c9cc41a42a1f1fb74190199b21694e18206e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726854169394764230,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d66d7bc
d-aa8f-480e-8fd3-fb291ed97a09,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ccbb8ac9634336f88fdd6b655a976f2ff3bf355d24c77128abb0afa2d10e57b,PodSandboxId:7262b866f175e3e9cf9fe9ec65c89d8dec9f99d4da80612b4337c9162f7bf3cc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726854169214990857,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cknvs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd34a408-0e2d-4f85-819f-d99de8948804,},A
nnotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79bea98b7a32af8003c2b53e82f3bcedd25d46534ba4c58b1562a1d4dd62899f,PodSandboxId:6b0a562ee52e5ac9a7f5f030e301e935e51cbda8efd24205ccccc430f531ffdb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726854169317714960,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-sggtt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 116fdcff-31e2-4138-907e-17265f19795a,},Annotations:map[string]string{io.ku
bernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a49b273fff83658fbf7e813b47a71246aa5dc45e49d5cc2db55d579f621cd95,PodSandboxId:842ed051ffcd16391da8d46e90365e0a91454cc5a12a35c6368692e68b72d517,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726854169164374591,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-592246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d64fadc298f2cbad9993623dd59110d,},Annotations:map[string
]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d35c011af5ed95ef938176fd7f24330921876bc721a2c8d4a93fd8a65d020d0,PodSandboxId:63174a48a0af89838bfd988930e0e65dfece3fa5159362c3b81a7ab54b71b226,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726854169142256115,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-592246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ea9e608bd76e5a2abeb0f2985e4ffd4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3
fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa7a4ac935e0470da3205f280946a5075ad00d4d398edf2ecbd0d6b72f3fa634,PodSandboxId:9fc32f59cbb6f67f9be565b8132e9a611918e5dcaea3f931e987cbb0e3a02bab,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726854169108363071,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-592246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b329d23e42855b3fc45631c897e94259,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:203ca48efd7a7e366da2e2947cb259f02447634e8f2a44df78dcefd8909c9a0e,PodSandboxId:be318076a6d8ffd3459a6204eccf5cc88b4fe5fa0bfafd4ef5eb8a25b8f9a893,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726854169019232745,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-592246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab3f648581b2c5aff8dec8b8093fa25a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06e751c0e9afd49236f121119e05987ce6dec537fd429e6ca9bfafba0bb41a64,PodSandboxId:ede81570dbdce75634a199a255cb7dd0b3c5c88896631a79b66dc0beece1baae,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726854162778718962,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zfr9g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f2f8be4-982b-4da8-a0fa-321348cd1a9b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b77d864bd8491e5247f3ce9904092adc651e1ff2977b3112136c18674b7a794,PodSandboxId:fec0ed249777dfa09c803c80bea9d91d294a5f73d90a5915f377d90589c47027,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726853836843972851,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-wpfrr,i
o.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 05422264-8765-46ab-bdf4-b78921ada4a5,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c45d2aa0f63816d71c71ef16db9b7b26dc627f72d6ef7258b93c23005c445f9d,PodSandboxId:6b207f97a82ec1ea60f569be1a050c8bceeab08e59757c223205f217fd460602,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726853779304248510,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: d66d7bcd-aa8f-480e-8fd3-fb291ed97a09,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33cbb2c4dce5862b3f2c4ce603b1d94a12c4e89fe1a398e64490f33c2a491720,PodSandboxId:040dddb9b8699e642d434322e6c9311ab50710ae6e1dcfb49cd5573b21e39150,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726853767220444391,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-sggtt,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 116fdcff-31e2-4138-907e-17265f19795a,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43d580bf9876b78f6a151a49a9fc863e01628a121c31c8106030c76a6949e273,PodSandboxId:2ae710703257fa4fbfa8124e9f96c4da3483703c856c19977e12bbd696a7cff6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726853767060741978,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cknvs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd34a408-0e2d-4f85-819f
-d99de8948804,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18612a28ae5022d41607cfde2996aff0acffaa060ef96c055b522b167dc900d3,PodSandboxId:6dda31a6542e3654d10490db5310878e1401b0733a541a4cd64da9a521c5496a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726853756103764943,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-592246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b329d23e42855b3fc45631c897e94259,}
,Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a4606e22266020766dc8dbc189c0f65cdd349f031c39b0a7d106cf0781cbee8,PodSandboxId:26746e4572518a57357496479b39372cf85666fb52fa78be30f90c4df9eba034,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726853756093875934,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-592246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ea9e608bd76e5a2abeb0f2985e4ffd4,},Annotations:map[string]string{io.kubernetes.
container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33ec6262554fcd5394f13c012ea27e8a9bf53faf30d5cb85237b4a3496811c0e,PodSandboxId:f2e6ce6e57f368303f81a132a6701cff72d48204930ad307391eb837744c1a41,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726853756031895520,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-592246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab3f648581b2c5aff8dec8b8093fa25a,},Annotations:map[string]string{io.kubernetes.container.hash:
7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca5e246374ae005eb084399c79df21d8dc78d6218c643826ddfec1d4d1ff26d8,PodSandboxId:d1137e733ede785ed20953767b9d14df484c80311c56d28fa78c81bed27466a0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726853755986219359,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-592246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d64fadc298f2cbad9993623dd59110d,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=60ab245d-ae90-4c99-860f-36bbafbcf435 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:47:01 multinode-592246 crio[2694]: time="2024-09-20 17:47:01.836682787Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b83f30ca-bb4c-4a28-9e6f-bfb2cb1d9fbf name=/runtime.v1.RuntimeService/Version
	Sep 20 17:47:01 multinode-592246 crio[2694]: time="2024-09-20 17:47:01.836788279Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b83f30ca-bb4c-4a28-9e6f-bfb2cb1d9fbf name=/runtime.v1.RuntimeService/Version
	Sep 20 17:47:01 multinode-592246 crio[2694]: time="2024-09-20 17:47:01.837974385Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7ed7b29e-1750-4dc6-a69c-f93fab1ef2e5 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 17:47:01 multinode-592246 crio[2694]: time="2024-09-20 17:47:01.838459699Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726854421838435339,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7ed7b29e-1750-4dc6-a69c-f93fab1ef2e5 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 17:47:01 multinode-592246 crio[2694]: time="2024-09-20 17:47:01.839226200Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7c440ce8-e269-4d0c-8683-e472b2f03c0d name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:47:01 multinode-592246 crio[2694]: time="2024-09-20 17:47:01.839308557Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7c440ce8-e269-4d0c-8683-e472b2f03c0d name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:47:01 multinode-592246 crio[2694]: time="2024-09-20 17:47:01.839742476Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b19b5d96c04f0998e9fb45f7ad94c150df0f2ab59724158e66ebb26e7f499c8e,PodSandboxId:5a993de08f9bef4380dbc45284a76dc665caa9a5a496c66fcc7b9f0ae83b9b8c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726854202702510290,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-wpfrr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 05422264-8765-46ab-bdf4-b78921ada4a5,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cbddb2e724ca8bd2c53e9dd5af73952dfe43e5d4ac79bd580e041a6d7aede69,PodSandboxId:ede81570dbdce75634a199a255cb7dd0b3c5c88896631a79b66dc0beece1baae,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726854176395029369,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zfr9g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f2f8be4-982b-4da8-a0fa-321348cd1a9b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29feefea72c1ac5a313ba5ab4a409ea549c57c7b56bd1658bb702180ce6bc03e,PodSandboxId:24d3fe772b46d9ad291951c5d621c9cc41a42a1f1fb74190199b21694e18206e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726854169394764230,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d66d7bc
d-aa8f-480e-8fd3-fb291ed97a09,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ccbb8ac9634336f88fdd6b655a976f2ff3bf355d24c77128abb0afa2d10e57b,PodSandboxId:7262b866f175e3e9cf9fe9ec65c89d8dec9f99d4da80612b4337c9162f7bf3cc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726854169214990857,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cknvs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd34a408-0e2d-4f85-819f-d99de8948804,},A
nnotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79bea98b7a32af8003c2b53e82f3bcedd25d46534ba4c58b1562a1d4dd62899f,PodSandboxId:6b0a562ee52e5ac9a7f5f030e301e935e51cbda8efd24205ccccc430f531ffdb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726854169317714960,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-sggtt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 116fdcff-31e2-4138-907e-17265f19795a,},Annotations:map[string]string{io.ku
bernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a49b273fff83658fbf7e813b47a71246aa5dc45e49d5cc2db55d579f621cd95,PodSandboxId:842ed051ffcd16391da8d46e90365e0a91454cc5a12a35c6368692e68b72d517,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726854169164374591,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-592246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d64fadc298f2cbad9993623dd59110d,},Annotations:map[string
]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d35c011af5ed95ef938176fd7f24330921876bc721a2c8d4a93fd8a65d020d0,PodSandboxId:63174a48a0af89838bfd988930e0e65dfece3fa5159362c3b81a7ab54b71b226,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726854169142256115,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-592246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ea9e608bd76e5a2abeb0f2985e4ffd4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3
fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa7a4ac935e0470da3205f280946a5075ad00d4d398edf2ecbd0d6b72f3fa634,PodSandboxId:9fc32f59cbb6f67f9be565b8132e9a611918e5dcaea3f931e987cbb0e3a02bab,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726854169108363071,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-592246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b329d23e42855b3fc45631c897e94259,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:203ca48efd7a7e366da2e2947cb259f02447634e8f2a44df78dcefd8909c9a0e,PodSandboxId:be318076a6d8ffd3459a6204eccf5cc88b4fe5fa0bfafd4ef5eb8a25b8f9a893,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726854169019232745,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-592246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab3f648581b2c5aff8dec8b8093fa25a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06e751c0e9afd49236f121119e05987ce6dec537fd429e6ca9bfafba0bb41a64,PodSandboxId:ede81570dbdce75634a199a255cb7dd0b3c5c88896631a79b66dc0beece1baae,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726854162778718962,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zfr9g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f2f8be4-982b-4da8-a0fa-321348cd1a9b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b77d864bd8491e5247f3ce9904092adc651e1ff2977b3112136c18674b7a794,PodSandboxId:fec0ed249777dfa09c803c80bea9d91d294a5f73d90a5915f377d90589c47027,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726853836843972851,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-wpfrr,i
o.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 05422264-8765-46ab-bdf4-b78921ada4a5,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c45d2aa0f63816d71c71ef16db9b7b26dc627f72d6ef7258b93c23005c445f9d,PodSandboxId:6b207f97a82ec1ea60f569be1a050c8bceeab08e59757c223205f217fd460602,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726853779304248510,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: d66d7bcd-aa8f-480e-8fd3-fb291ed97a09,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33cbb2c4dce5862b3f2c4ce603b1d94a12c4e89fe1a398e64490f33c2a491720,PodSandboxId:040dddb9b8699e642d434322e6c9311ab50710ae6e1dcfb49cd5573b21e39150,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726853767220444391,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-sggtt,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 116fdcff-31e2-4138-907e-17265f19795a,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43d580bf9876b78f6a151a49a9fc863e01628a121c31c8106030c76a6949e273,PodSandboxId:2ae710703257fa4fbfa8124e9f96c4da3483703c856c19977e12bbd696a7cff6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726853767060741978,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cknvs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd34a408-0e2d-4f85-819f
-d99de8948804,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18612a28ae5022d41607cfde2996aff0acffaa060ef96c055b522b167dc900d3,PodSandboxId:6dda31a6542e3654d10490db5310878e1401b0733a541a4cd64da9a521c5496a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726853756103764943,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-592246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b329d23e42855b3fc45631c897e94259,}
,Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a4606e22266020766dc8dbc189c0f65cdd349f031c39b0a7d106cf0781cbee8,PodSandboxId:26746e4572518a57357496479b39372cf85666fb52fa78be30f90c4df9eba034,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726853756093875934,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-592246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ea9e608bd76e5a2abeb0f2985e4ffd4,},Annotations:map[string]string{io.kubernetes.
container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33ec6262554fcd5394f13c012ea27e8a9bf53faf30d5cb85237b4a3496811c0e,PodSandboxId:f2e6ce6e57f368303f81a132a6701cff72d48204930ad307391eb837744c1a41,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726853756031895520,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-592246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab3f648581b2c5aff8dec8b8093fa25a,},Annotations:map[string]string{io.kubernetes.container.hash:
7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca5e246374ae005eb084399c79df21d8dc78d6218c643826ddfec1d4d1ff26d8,PodSandboxId:d1137e733ede785ed20953767b9d14df484c80311c56d28fa78c81bed27466a0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726853755986219359,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-592246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d64fadc298f2cbad9993623dd59110d,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7c440ce8-e269-4d0c-8683-e472b2f03c0d name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b19b5d96c04f0       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   5a993de08f9be       busybox-7dff88458-wpfrr
	3cbddb2e724ca       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      4 minutes ago       Running             coredns                   2                   ede81570dbdce       coredns-7c65d6cfc9-zfr9g
	29feefea72c1a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       1                   24d3fe772b46d       storage-provisioner
	79bea98b7a32a       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      4 minutes ago       Running             kindnet-cni               1                   6b0a562ee52e5       kindnet-sggtt
	8ccbb8ac96343       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      4 minutes ago       Running             kube-proxy                1                   7262b866f175e       kube-proxy-cknvs
	2a49b273fff83       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      4 minutes ago       Running             kube-controller-manager   1                   842ed051ffcd1       kube-controller-manager-multinode-592246
	1d35c011af5ed       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      4 minutes ago       Running             etcd                      1                   63174a48a0af8       etcd-multinode-592246
	aa7a4ac935e04       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      4 minutes ago       Running             kube-scheduler            1                   9fc32f59cbb6f       kube-scheduler-multinode-592246
	203ca48efd7a7       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      4 minutes ago       Running             kube-apiserver            1                   be318076a6d8f       kube-apiserver-multinode-592246
	06e751c0e9afd       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      4 minutes ago       Exited              coredns                   1                   ede81570dbdce       coredns-7c65d6cfc9-zfr9g
	5b77d864bd849       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   9 minutes ago       Exited              busybox                   0                   fec0ed249777d       busybox-7dff88458-wpfrr
	c45d2aa0f6381       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Exited              storage-provisioner       0                   6b207f97a82ec       storage-provisioner
	33cbb2c4dce58       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      10 minutes ago      Exited              kindnet-cni               0                   040dddb9b8699       kindnet-sggtt
	43d580bf9876b       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      10 minutes ago      Exited              kube-proxy                0                   2ae710703257f       kube-proxy-cknvs
	18612a28ae502       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      11 minutes ago      Exited              kube-scheduler            0                   6dda31a6542e3       kube-scheduler-multinode-592246
	9a4606e222660       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      11 minutes ago      Exited              etcd                      0                   26746e4572518       etcd-multinode-592246
	33ec6262554fc       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      11 minutes ago      Exited              kube-apiserver            0                   f2e6ce6e57f36       kube-apiserver-multinode-592246
	ca5e246374ae0       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      11 minutes ago      Exited              kube-controller-manager   0                   d1137e733ede7       kube-controller-manager-multinode-592246
	
	
	==> coredns [06e751c0e9afd49236f121119e05987ce6dec537fd429e6ca9bfafba0bb41a64] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:38238 - 10616 "HINFO IN 8267272908449185641.7413770237771884312. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012775253s
	
	
	==> coredns [3cbddb2e724ca8bd2c53e9dd5af73952dfe43e5d4ac79bd580e041a6d7aede69] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:54171 - 9641 "HINFO IN 8912997293967558746.8180543232414713051. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013222727s
	
	
	==> describe nodes <==
	Name:               multinode-592246
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-592246
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0626f22cf0d915d75e291a5bce701f94395056e1
	                    minikube.k8s.io/name=multinode-592246
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_20T17_36_02_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 17:35:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-592246
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 17:47:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 17:42:55 +0000   Fri, 20 Sep 2024 17:35:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 17:42:55 +0000   Fri, 20 Sep 2024 17:35:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 17:42:55 +0000   Fri, 20 Sep 2024 17:35:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 17:42:55 +0000   Fri, 20 Sep 2024 17:36:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.115
	  Hostname:    multinode-592246
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8f4fab7aa1d046888118c1e3e32ba809
	  System UUID:                8f4fab7a-a1d0-4688-8118-c1e3e32ba809
	  Boot ID:                    9c4b19b5-bf62-47d5-aca0-a031c994d070
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-wpfrr                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m49s
	  kube-system                 coredns-7c65d6cfc9-zfr9g                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 etcd-multinode-592246                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-sggtt                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-multinode-592246             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-multinode-592246    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-cknvs                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-multinode-592246             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 10m                kube-proxy       
	  Normal  Starting                 4m9s               kube-proxy       
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node multinode-592246 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node multinode-592246 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node multinode-592246 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    11m                kubelet          Node multinode-592246 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  11m                kubelet          Node multinode-592246 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     11m                kubelet          Node multinode-592246 status is now: NodeHasSufficientPID
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           10m                node-controller  Node multinode-592246 event: Registered Node multinode-592246 in Controller
	  Normal  NodeReady                10m                kubelet          Node multinode-592246 status is now: NodeReady
	  Normal  RegisteredNode           4m7s               node-controller  Node multinode-592246 event: Registered Node multinode-592246 in Controller
	  Normal  Starting                 4m7s               kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m7s               kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m7s               kubelet          Node multinode-592246 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m7s               kubelet          Node multinode-592246 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m7s               kubelet          Node multinode-592246 status is now: NodeHasSufficientPID
	
	
	Name:               multinode-592246-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-592246-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0626f22cf0d915d75e291a5bce701f94395056e1
	                    minikube.k8s.io/name=multinode-592246
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_20T17_43_33_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 17:43:32 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-592246-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 17:44:34 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 20 Sep 2024 17:44:03 +0000   Fri, 20 Sep 2024 17:45:15 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 20 Sep 2024 17:44:03 +0000   Fri, 20 Sep 2024 17:45:15 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 20 Sep 2024 17:44:03 +0000   Fri, 20 Sep 2024 17:45:15 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 20 Sep 2024 17:44:03 +0000   Fri, 20 Sep 2024 17:45:15 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.170
	  Hostname:    multinode-592246-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b9ba005550b34d8fa33427ca08d3b8fb
	  System UUID:                b9ba0055-50b3-4d8f-a334-27ca08d3b8fb
	  Boot ID:                    c8a8322b-e940-4971-b2dc-f9147d893d89
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-7854z    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m35s
	  kube-system                 kindnet-w5zt6              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-v8z58           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m24s                  kube-proxy       
	  Normal  Starting                 10m                    kube-proxy       
	  Normal  NodeHasSufficientMemory  10m (x2 over 10m)      kubelet          Node multinode-592246-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x2 over 10m)      kubelet          Node multinode-592246-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x2 over 10m)      kubelet          Node multinode-592246-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m52s                  kubelet          Node multinode-592246-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m30s (x2 over 3m30s)  kubelet          Node multinode-592246-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m30s (x2 over 3m30s)  kubelet          Node multinode-592246-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m30s (x2 over 3m30s)  kubelet          Node multinode-592246-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m30s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m10s                  kubelet          Node multinode-592246-m02 status is now: NodeReady
	  Normal  NodeNotReady             107s                   node-controller  Node multinode-592246-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.160772] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.152669] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.293329] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[  +4.186472] systemd-fstab-generator[745]: Ignoring "noauto" option for root device
	[  +3.821716] systemd-fstab-generator[877]: Ignoring "noauto" option for root device
	[  +0.067034] kauditd_printk_skb: 158 callbacks suppressed
	[Sep20 17:36] systemd-fstab-generator[1213]: Ignoring "noauto" option for root device
	[  +0.094284] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.125591] systemd-fstab-generator[1316]: Ignoring "noauto" option for root device
	[  +0.147888] kauditd_printk_skb: 21 callbacks suppressed
	[ +12.772092] kauditd_printk_skb: 60 callbacks suppressed
	[Sep20 17:37] kauditd_printk_skb: 14 callbacks suppressed
	[Sep20 17:42] systemd-fstab-generator[2620]: Ignoring "noauto" option for root device
	[  +0.159068] systemd-fstab-generator[2632]: Ignoring "noauto" option for root device
	[  +0.175970] systemd-fstab-generator[2646]: Ignoring "noauto" option for root device
	[  +0.140154] systemd-fstab-generator[2658]: Ignoring "noauto" option for root device
	[  +0.300253] systemd-fstab-generator[2686]: Ignoring "noauto" option for root device
	[  +4.278628] systemd-fstab-generator[2778]: Ignoring "noauto" option for root device
	[  +0.080784] kauditd_printk_skb: 100 callbacks suppressed
	[  +6.691154] kauditd_printk_skb: 22 callbacks suppressed
	[  +5.881424] systemd-fstab-generator[3638]: Ignoring "noauto" option for root device
	[  +0.104124] kauditd_printk_skb: 62 callbacks suppressed
	[Sep20 17:43] kauditd_printk_skb: 21 callbacks suppressed
	[  +2.163575] systemd-fstab-generator[3804]: Ignoring "noauto" option for root device
	[ +15.301461] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [1d35c011af5ed95ef938176fd7f24330921876bc721a2c8d4a93fd8a65d020d0] <==
	{"level":"info","ts":"2024-09-20T17:42:49.621639Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"efb3de1b79640a9c","local-member-id":"c7abbacde39fb9a4","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T17:42:49.621684Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T17:42:49.623672Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T17:42:49.630487Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-20T17:42:49.630784Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.115:2380"}
	{"level":"info","ts":"2024-09-20T17:42:49.630802Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.115:2380"}
	{"level":"info","ts":"2024-09-20T17:42:49.630970Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"c7abbacde39fb9a4","initial-advertise-peer-urls":["https://192.168.39.115:2380"],"listen-peer-urls":["https://192.168.39.115:2380"],"advertise-client-urls":["https://192.168.39.115:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.115:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-20T17:42:49.630988Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-20T17:42:50.975624Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c7abbacde39fb9a4 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-20T17:42:50.975696Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c7abbacde39fb9a4 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-20T17:42:50.975739Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c7abbacde39fb9a4 received MsgPreVoteResp from c7abbacde39fb9a4 at term 2"}
	{"level":"info","ts":"2024-09-20T17:42:50.975765Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c7abbacde39fb9a4 became candidate at term 3"}
	{"level":"info","ts":"2024-09-20T17:42:50.975771Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c7abbacde39fb9a4 received MsgVoteResp from c7abbacde39fb9a4 at term 3"}
	{"level":"info","ts":"2024-09-20T17:42:50.975780Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c7abbacde39fb9a4 became leader at term 3"}
	{"level":"info","ts":"2024-09-20T17:42:50.975787Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c7abbacde39fb9a4 elected leader c7abbacde39fb9a4 at term 3"}
	{"level":"info","ts":"2024-09-20T17:42:50.978404Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"c7abbacde39fb9a4","local-member-attributes":"{Name:multinode-592246 ClientURLs:[https://192.168.39.115:2379]}","request-path":"/0/members/c7abbacde39fb9a4/attributes","cluster-id":"efb3de1b79640a9c","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-20T17:42:50.978605Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T17:42:50.979097Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T17:42:50.979272Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-20T17:42:50.979317Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-20T17:42:50.980083Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T17:42:50.980100Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T17:42:50.981057Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.115:2379"}
	{"level":"info","ts":"2024-09-20T17:42:50.981147Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	2024/09/20 17:42:53 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	
	
	==> etcd [9a4606e22266020766dc8dbc189c0f65cdd349f031c39b0a7d106cf0781cbee8] <==
	{"level":"info","ts":"2024-09-20T17:36:51.128353Z","caller":"traceutil/trace.go:171","msg":"trace[860051481] linearizableReadLoop","detail":"{readStateIndex:498; appliedIndex:497; }","duration":"168.793627ms","start":"2024-09-20T17:36:50.959529Z","end":"2024-09-20T17:36:51.128323Z","steps":["trace[860051481] 'read index received'  (duration: 168.558585ms)","trace[860051481] 'applied index is now lower than readState.Index'  (duration: 234.144µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-20T17:36:51.128772Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"169.219843ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-592246-m02\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T17:36:51.128881Z","caller":"traceutil/trace.go:171","msg":"trace[2020512061] range","detail":"{range_begin:/registry/minions/multinode-592246-m02; range_end:; response_count:0; response_revision:475; }","duration":"169.360757ms","start":"2024-09-20T17:36:50.959508Z","end":"2024-09-20T17:36:51.128868Z","steps":["trace[2020512061] 'agreement among raft nodes before linearized reading'  (duration: 168.929618ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T17:36:56.491340Z","caller":"traceutil/trace.go:171","msg":"trace[538718766] linearizableReadLoop","detail":"{readStateIndex:539; appliedIndex:538; }","duration":"155.431394ms","start":"2024-09-20T17:36:56.335885Z","end":"2024-09-20T17:36:56.491316Z","steps":["trace[538718766] 'read index received'  (duration: 155.132392ms)","trace[538718766] 'applied index is now lower than readState.Index'  (duration: 297.572µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-20T17:36:56.491565Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"155.646356ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T17:36:56.491608Z","caller":"traceutil/trace.go:171","msg":"trace[1760439891] transaction","detail":"{read_only:false; response_revision:514; number_of_response:1; }","duration":"236.03247ms","start":"2024-09-20T17:36:56.255561Z","end":"2024-09-20T17:36:56.491594Z","steps":["trace[1760439891] 'process raft request'  (duration: 235.547097ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T17:36:56.491629Z","caller":"traceutil/trace.go:171","msg":"trace[2102876701] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:514; }","duration":"155.740939ms","start":"2024-09-20T17:36:56.335879Z","end":"2024-09-20T17:36:56.491620Z","steps":["trace[2102876701] 'agreement among raft nodes before linearized reading'  (duration: 155.626359ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T17:36:56.781324Z","caller":"traceutil/trace.go:171","msg":"trace[828030985] linearizableReadLoop","detail":"{readStateIndex:540; appliedIndex:539; }","duration":"241.570321ms","start":"2024-09-20T17:36:56.539742Z","end":"2024-09-20T17:36:56.781312Z","steps":["trace[828030985] 'read index received'  (duration: 241.35053ms)","trace[828030985] 'applied index is now lower than readState.Index'  (duration: 218.259µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-20T17:36:56.781622Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"241.879547ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-592246-m02\" ","response":"range_response_count:1 size:2894"}
	{"level":"info","ts":"2024-09-20T17:36:56.781712Z","caller":"traceutil/trace.go:171","msg":"trace[772908242] range","detail":"{range_begin:/registry/minions/multinode-592246-m02; range_end:; response_count:1; response_revision:515; }","duration":"241.981914ms","start":"2024-09-20T17:36:56.539721Z","end":"2024-09-20T17:36:56.781703Z","steps":["trace[772908242] 'agreement among raft nodes before linearized reading'  (duration: 241.764021ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T17:36:56.782007Z","caller":"traceutil/trace.go:171","msg":"trace[1059746576] transaction","detail":"{read_only:false; response_revision:515; number_of_response:1; }","duration":"281.562442ms","start":"2024-09-20T17:36:56.500436Z","end":"2024-09-20T17:36:56.781999Z","steps":["trace[1059746576] 'process raft request'  (duration: 280.708253ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T17:37:49.485050Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"149.722919ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T17:37:49.486324Z","caller":"traceutil/trace.go:171","msg":"trace[899354476] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:612; }","duration":"151.065478ms","start":"2024-09-20T17:37:49.335246Z","end":"2024-09-20T17:37:49.486311Z","steps":["trace[899354476] 'range keys from in-memory index tree'  (duration: 149.70888ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T17:37:49.485388Z","caller":"traceutil/trace.go:171","msg":"trace[107722675] transaction","detail":"{read_only:false; response_revision:613; number_of_response:1; }","duration":"216.922529ms","start":"2024-09-20T17:37:49.268429Z","end":"2024-09-20T17:37:49.485351Z","steps":["trace[107722675] 'process raft request'  (duration: 211.382834ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T17:37:53.833399Z","caller":"traceutil/trace.go:171","msg":"trace[1442306491] transaction","detail":"{read_only:false; response_revision:647; number_of_response:1; }","duration":"214.711513ms","start":"2024-09-20T17:37:53.618674Z","end":"2024-09-20T17:37:53.833385Z","steps":["trace[1442306491] 'process raft request'  (duration: 214.577593ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T17:41:05.719111Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-20T17:41:05.719315Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"multinode-592246","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.115:2380"],"advertise-client-urls":["https://192.168.39.115:2379"]}
	{"level":"warn","ts":"2024-09-20T17:41:05.719451Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-20T17:41:05.719576Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-20T17:41:05.788737Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.115:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-20T17:41:05.788842Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.115:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-20T17:41:05.790448Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"c7abbacde39fb9a4","current-leader-member-id":"c7abbacde39fb9a4"}
	{"level":"info","ts":"2024-09-20T17:41:05.794128Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.115:2380"}
	{"level":"info","ts":"2024-09-20T17:41:05.794302Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.115:2380"}
	{"level":"info","ts":"2024-09-20T17:41:05.794313Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"multinode-592246","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.115:2380"],"advertise-client-urls":["https://192.168.39.115:2379"]}
	
	
	==> kernel <==
	 17:47:02 up 11 min,  0 users,  load average: 0.22, 0.25, 0.15
	Linux multinode-592246 5.10.207 #1 SMP Fri Sep 20 03:13:51 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [33cbb2c4dce5862b3f2c4ce603b1d94a12c4e89fe1a398e64490f33c2a491720] <==
	I0920 17:40:18.252126       1 main.go:322] Node multinode-592246-m03 has CIDR [10.244.4.0/24] 
	I0920 17:40:28.255328       1 main.go:295] Handling node with IPs: map[192.168.39.115:{}]
	I0920 17:40:28.255392       1 main.go:299] handling current node
	I0920 17:40:28.255412       1 main.go:295] Handling node with IPs: map[192.168.39.170:{}]
	I0920 17:40:28.255418       1 main.go:322] Node multinode-592246-m02 has CIDR [10.244.1.0/24] 
	I0920 17:40:28.255578       1 main.go:295] Handling node with IPs: map[192.168.39.38:{}]
	I0920 17:40:28.255595       1 main.go:322] Node multinode-592246-m03 has CIDR [10.244.4.0/24] 
	I0920 17:40:38.248167       1 main.go:295] Handling node with IPs: map[192.168.39.115:{}]
	I0920 17:40:38.248375       1 main.go:299] handling current node
	I0920 17:40:38.248409       1 main.go:295] Handling node with IPs: map[192.168.39.170:{}]
	I0920 17:40:38.248416       1 main.go:322] Node multinode-592246-m02 has CIDR [10.244.1.0/24] 
	I0920 17:40:38.248553       1 main.go:295] Handling node with IPs: map[192.168.39.38:{}]
	I0920 17:40:38.248574       1 main.go:322] Node multinode-592246-m03 has CIDR [10.244.4.0/24] 
	I0920 17:40:48.253607       1 main.go:295] Handling node with IPs: map[192.168.39.38:{}]
	I0920 17:40:48.253716       1 main.go:322] Node multinode-592246-m03 has CIDR [10.244.4.0/24] 
	I0920 17:40:48.253927       1 main.go:295] Handling node with IPs: map[192.168.39.115:{}]
	I0920 17:40:48.253949       1 main.go:299] handling current node
	I0920 17:40:48.253972       1 main.go:295] Handling node with IPs: map[192.168.39.170:{}]
	I0920 17:40:48.253977       1 main.go:322] Node multinode-592246-m02 has CIDR [10.244.1.0/24] 
	I0920 17:40:58.256897       1 main.go:295] Handling node with IPs: map[192.168.39.115:{}]
	I0920 17:40:58.256998       1 main.go:299] handling current node
	I0920 17:40:58.257037       1 main.go:295] Handling node with IPs: map[192.168.39.170:{}]
	I0920 17:40:58.257046       1 main.go:322] Node multinode-592246-m02 has CIDR [10.244.1.0/24] 
	I0920 17:40:58.257273       1 main.go:295] Handling node with IPs: map[192.168.39.38:{}]
	I0920 17:40:58.257295       1 main.go:322] Node multinode-592246-m03 has CIDR [10.244.4.0/24] 
	
	
	==> kindnet [79bea98b7a32af8003c2b53e82f3bcedd25d46534ba4c58b1562a1d4dd62899f] <==
	I0920 17:46:00.392947       1 main.go:322] Node multinode-592246-m02 has CIDR [10.244.1.0/24] 
	I0920 17:46:10.401402       1 main.go:295] Handling node with IPs: map[192.168.39.115:{}]
	I0920 17:46:10.401469       1 main.go:299] handling current node
	I0920 17:46:10.401492       1 main.go:295] Handling node with IPs: map[192.168.39.170:{}]
	I0920 17:46:10.401501       1 main.go:322] Node multinode-592246-m02 has CIDR [10.244.1.0/24] 
	I0920 17:46:20.401489       1 main.go:295] Handling node with IPs: map[192.168.39.115:{}]
	I0920 17:46:20.401554       1 main.go:299] handling current node
	I0920 17:46:20.401571       1 main.go:295] Handling node with IPs: map[192.168.39.170:{}]
	I0920 17:46:20.401577       1 main.go:322] Node multinode-592246-m02 has CIDR [10.244.1.0/24] 
	I0920 17:46:30.399398       1 main.go:295] Handling node with IPs: map[192.168.39.115:{}]
	I0920 17:46:30.399464       1 main.go:299] handling current node
	I0920 17:46:30.399490       1 main.go:295] Handling node with IPs: map[192.168.39.170:{}]
	I0920 17:46:30.399497       1 main.go:322] Node multinode-592246-m02 has CIDR [10.244.1.0/24] 
	I0920 17:46:40.401440       1 main.go:295] Handling node with IPs: map[192.168.39.115:{}]
	I0920 17:46:40.401647       1 main.go:299] handling current node
	I0920 17:46:40.401686       1 main.go:295] Handling node with IPs: map[192.168.39.170:{}]
	I0920 17:46:40.401705       1 main.go:322] Node multinode-592246-m02 has CIDR [10.244.1.0/24] 
	I0920 17:46:50.393285       1 main.go:295] Handling node with IPs: map[192.168.39.115:{}]
	I0920 17:46:50.393478       1 main.go:299] handling current node
	I0920 17:46:50.393524       1 main.go:295] Handling node with IPs: map[192.168.39.170:{}]
	I0920 17:46:50.393543       1 main.go:322] Node multinode-592246-m02 has CIDR [10.244.1.0/24] 
	I0920 17:47:00.399247       1 main.go:295] Handling node with IPs: map[192.168.39.115:{}]
	I0920 17:47:00.399348       1 main.go:299] handling current node
	I0920 17:47:00.399388       1 main.go:295] Handling node with IPs: map[192.168.39.170:{}]
	I0920 17:47:00.399397       1 main.go:322] Node multinode-592246-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [203ca48efd7a7e366da2e2947cb259f02447634e8f2a44df78dcefd8909c9a0e] <==
	I0920 17:42:52.370555       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0920 17:42:52.375966       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0920 17:42:52.382604       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0920 17:42:52.385077       1 policy_source.go:224] refreshing policies
	I0920 17:42:52.390083       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0920 17:42:52.393460       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0920 17:42:52.403370       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0920 17:42:52.403791       1 aggregator.go:171] initial CRD sync complete...
	I0920 17:42:52.404075       1 autoregister_controller.go:144] Starting autoregister controller
	I0920 17:42:52.404128       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0920 17:42:52.404857       1 cache.go:39] Caches are synced for autoregister controller
	I0920 17:42:52.470862       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	E0920 17:42:53.122236       1 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E0920 17:42:53.122283       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 6.499µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E0920 17:42:53.123442       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E0920 17:42:53.124760       1 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E0920 17:42:53.126130       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="3.886743ms" method="PATCH" path="/api/v1/namespaces/kube-system/events/etcd-multinode-592246.17f7049fdaca5f60" result=null
	I0920 17:42:53.271289       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0920 17:42:55.791782       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0920 17:42:55.812925       1 controller.go:615] quota admission added evaluator for: endpoints
	I0920 17:42:55.953631       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0920 17:42:55.963758       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0920 17:42:55.975918       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0920 17:42:56.072245       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0920 17:42:56.095224       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-apiserver [33ec6262554fcd5394f13c012ea27e8a9bf53faf30d5cb85237b4a3496811c0e] <==
	I0920 17:41:05.746928       1 apiservice_controller.go:134] Shutting down APIServiceRegistrationController
	I0920 17:41:05.746942       1 system_namespaces_controller.go:76] Shutting down system namespaces controller
	I0920 17:41:05.746963       1 customresource_discovery_controller.go:328] Shutting down DiscoveryController
	I0920 17:41:05.746993       1 autoregister_controller.go:168] Shutting down autoregister controller
	I0920 17:41:05.747017       1 remote_available_controller.go:427] Shutting down RemoteAvailability controller
	I0920 17:41:05.747113       1 crdregistration_controller.go:145] Shutting down crd-autoregister controller
	I0920 17:41:05.747136       1 crd_finalizer.go:281] Shutting down CRDFinalizer
	I0920 17:41:05.747240       1 establishing_controller.go:92] Shutting down EstablishingController
	I0920 17:41:05.747268       1 controller.go:120] Shutting down OpenAPI V3 controller
	I0920 17:41:05.747354       1 controller.go:170] Shutting down OpenAPI controller
	I0920 17:41:05.747423       1 gc_controller.go:91] Shutting down apiserver lease garbage collector
	I0920 17:41:05.747434       1 local_available_controller.go:172] Shutting down LocalAvailability controller
	I0920 17:41:05.747447       1 apiapproval_controller.go:201] Shutting down KubernetesAPIApprovalPolicyConformantConditionController
	I0920 17:41:05.747509       1 nonstructuralschema_controller.go:207] Shutting down NonStructuralSchemaConditionController
	I0920 17:41:05.747537       1 naming_controller.go:305] Shutting down NamingConditionController
	I0920 17:41:05.747554       1 apf_controller.go:389] Shutting down API Priority and Fairness config worker
	I0920 17:41:05.747805       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I0920 17:41:05.747838       1 controller.go:84] Shutting down OpenAPI AggregationController
	I0920 17:41:05.751394       1 dynamic_cafile_content.go:174] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0920 17:41:05.755050       1 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0920 17:41:05.755141       1 dynamic_cafile_content.go:174] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0920 17:41:05.756679       1 dynamic_serving_content.go:149] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0920 17:41:05.756975       1 controller.go:157] Shutting down quota evaluator
	I0920 17:41:05.757020       1 controller.go:176] quota evaluator worker shutdown
	I0920 17:41:05.757885       1 secure_serving.go:258] Stopped listening on [::]:8443
	
	
	==> kube-controller-manager [2a49b273fff83658fbf7e813b47a71246aa5dc45e49d5cc2db55d579f621cd95] <==
	I0920 17:44:12.059221       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-592246-m03" podCIDRs=["10.244.2.0/24"]
	I0920 17:44:12.059277       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-592246-m03"
	I0920 17:44:12.059342       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-592246-m03"
	I0920 17:44:12.070952       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-592246-m03"
	I0920 17:44:12.334090       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-592246-m03"
	I0920 17:44:12.676390       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-592246-m03"
	I0920 17:44:15.834574       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-592246-m03"
	I0920 17:44:22.291722       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-592246-m03"
	I0920 17:44:31.631492       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-592246-m02"
	I0920 17:44:31.631626       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-592246-m03"
	I0920 17:44:31.652814       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-592246-m03"
	I0920 17:44:35.774828       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-592246-m03"
	I0920 17:44:36.519641       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-592246-m03"
	I0920 17:44:36.534480       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-592246-m03"
	I0920 17:44:37.025103       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-592246-m03"
	I0920 17:44:37.025260       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-592246-m02"
	I0920 17:45:15.797731       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-592246-m02"
	I0920 17:45:15.835015       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-592246-m02"
	I0920 17:45:15.839482       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="23.831316ms"
	I0920 17:45:15.839642       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="58.915µs"
	I0920 17:45:20.946144       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-592246-m02"
	I0920 17:45:35.686587       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-nr2d6"
	I0920 17:45:35.721094       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-nr2d6"
	I0920 17:45:35.721278       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-mkw76"
	I0920 17:45:35.755865       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-mkw76"
	
	
	==> kube-controller-manager [ca5e246374ae005eb084399c79df21d8dc78d6218c643826ddfec1d4d1ff26d8] <==
	I0920 17:38:39.321880       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-592246-m03"
	I0920 17:38:39.322224       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-592246-m02"
	I0920 17:38:40.460454       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-592246-m02"
	I0920 17:38:40.460653       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-592246-m03\" does not exist"
	I0920 17:38:40.477844       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-592246-m03" podCIDRs=["10.244.4.0/24"]
	I0920 17:38:40.477886       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-592246-m03"
	I0920 17:38:40.477913       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-592246-m03"
	I0920 17:38:40.500958       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-592246-m03"
	I0920 17:38:40.565384       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-592246-m03"
	I0920 17:38:40.877043       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-592246-m03"
	I0920 17:38:41.195467       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-592246-m03"
	I0920 17:38:50.677031       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-592246-m03"
	I0920 17:39:00.170910       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-592246-m02"
	I0920 17:39:00.171429       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-592246-m03"
	I0920 17:39:00.184131       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-592246-m03"
	I0920 17:39:00.555815       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-592246-m03"
	I0920 17:39:45.577434       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-592246-m03"
	I0920 17:39:45.577714       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-592246-m02"
	I0920 17:39:45.579853       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-592246-m03"
	I0920 17:39:45.607460       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-592246-m02"
	I0920 17:39:45.611985       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-592246-m03"
	I0920 17:39:45.646132       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="11.500365ms"
	I0920 17:39:45.647120       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="23.068µs"
	I0920 17:39:50.735658       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-592246-m02"
	I0920 17:40:00.817761       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-592246-m03"
	
	
	==> kube-proxy [43d580bf9876b78f6a151a49a9fc863e01628a121c31c8106030c76a6949e273] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0920 17:36:07.426655       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0920 17:36:07.477866       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.115"]
	E0920 17:36:07.478051       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 17:36:07.577099       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0920 17:36:07.577148       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0920 17:36:07.577222       1 server_linux.go:169] "Using iptables Proxier"
	I0920 17:36:07.579737       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 17:36:07.580038       1 server.go:483] "Version info" version="v1.31.1"
	I0920 17:36:07.580061       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 17:36:07.581632       1 config.go:199] "Starting service config controller"
	I0920 17:36:07.581683       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 17:36:07.581732       1 config.go:105] "Starting endpoint slice config controller"
	I0920 17:36:07.581749       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 17:36:07.582339       1 config.go:328] "Starting node config controller"
	I0920 17:36:07.582364       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 17:36:07.682406       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0920 17:36:07.682494       1 shared_informer.go:320] Caches are synced for service config
	I0920 17:36:07.682584       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [8ccbb8ac9634336f88fdd6b655a976f2ff3bf355d24c77128abb0afa2d10e57b] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0920 17:42:50.094041       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0920 17:42:52.385916       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.115"]
	E0920 17:42:52.386551       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 17:42:52.503824       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0920 17:42:52.503921       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0920 17:42:52.503960       1 server_linux.go:169] "Using iptables Proxier"
	I0920 17:42:52.506732       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 17:42:52.507121       1 server.go:483] "Version info" version="v1.31.1"
	I0920 17:42:52.507747       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 17:42:52.509809       1 config.go:199] "Starting service config controller"
	I0920 17:42:52.509922       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 17:42:52.510028       1 config.go:105] "Starting endpoint slice config controller"
	I0920 17:42:52.510115       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 17:42:52.511644       1 config.go:328] "Starting node config controller"
	I0920 17:42:52.511716       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 17:42:52.612344       1 shared_informer.go:320] Caches are synced for node config
	I0920 17:42:52.612531       1 shared_informer.go:320] Caches are synced for service config
	I0920 17:42:52.612545       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [18612a28ae5022d41607cfde2996aff0acffaa060ef96c055b522b167dc900d3] <==
	E0920 17:35:58.675350       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 17:35:58.674082       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0920 17:35:58.675498       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0920 17:35:59.497242       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0920 17:35:59.497355       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 17:35:59.503164       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0920 17:35:59.503336       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 17:35:59.534618       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0920 17:35:59.534709       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 17:35:59.576662       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0920 17:35:59.576784       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0920 17:35:59.690862       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0920 17:35:59.690955       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 17:35:59.763228       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0920 17:35:59.763312       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 17:35:59.804468       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0920 17:35:59.804570       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 17:35:59.935401       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0920 17:35:59.935863       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0920 17:35:59.983417       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0920 17:35:59.983586       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 17:35:59.988632       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0920 17:35:59.988679       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0920 17:36:01.760804       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0920 17:41:05.718162       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [aa7a4ac935e0470da3205f280946a5075ad00d4d398edf2ecbd0d6b72f3fa634] <==
	I0920 17:42:50.515286       1 serving.go:386] Generated self-signed cert in-memory
	W0920 17:42:52.315248       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0920 17:42:52.315328       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0920 17:42:52.315338       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0920 17:42:52.315350       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0920 17:42:52.383988       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0920 17:42:52.384034       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 17:42:52.402775       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0920 17:42:52.402834       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0920 17:42:52.403107       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0920 17:42:52.403289       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0920 17:42:52.503016       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 20 17:45:45 multinode-592246 kubelet[3645]: E0920 17:45:45.294823    3645 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726854345294426161,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:45:55 multinode-592246 kubelet[3645]: E0920 17:45:55.258631    3645 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 20 17:45:55 multinode-592246 kubelet[3645]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 20 17:45:55 multinode-592246 kubelet[3645]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 20 17:45:55 multinode-592246 kubelet[3645]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 20 17:45:55 multinode-592246 kubelet[3645]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 20 17:45:55 multinode-592246 kubelet[3645]: E0920 17:45:55.296369    3645 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726854355295701335,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:45:55 multinode-592246 kubelet[3645]: E0920 17:45:55.296395    3645 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726854355295701335,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:46:05 multinode-592246 kubelet[3645]: E0920 17:46:05.298578    3645 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726854365297888317,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:46:05 multinode-592246 kubelet[3645]: E0920 17:46:05.299041    3645 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726854365297888317,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:46:15 multinode-592246 kubelet[3645]: E0920 17:46:15.301020    3645 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726854375300489747,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:46:15 multinode-592246 kubelet[3645]: E0920 17:46:15.301432    3645 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726854375300489747,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:46:25 multinode-592246 kubelet[3645]: E0920 17:46:25.305689    3645 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726854385304408243,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:46:25 multinode-592246 kubelet[3645]: E0920 17:46:25.306590    3645 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726854385304408243,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:46:35 multinode-592246 kubelet[3645]: E0920 17:46:35.308857    3645 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726854395308163610,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:46:35 multinode-592246 kubelet[3645]: E0920 17:46:35.309583    3645 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726854395308163610,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:46:45 multinode-592246 kubelet[3645]: E0920 17:46:45.311874    3645 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726854405311562462,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:46:45 multinode-592246 kubelet[3645]: E0920 17:46:45.311918    3645 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726854405311562462,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:46:55 multinode-592246 kubelet[3645]: E0920 17:46:55.258505    3645 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 20 17:46:55 multinode-592246 kubelet[3645]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 20 17:46:55 multinode-592246 kubelet[3645]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 20 17:46:55 multinode-592246 kubelet[3645]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 20 17:46:55 multinode-592246 kubelet[3645]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 20 17:46:55 multinode-592246 kubelet[3645]: E0920 17:46:55.313279    3645 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726854415312994852,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:46:55 multinode-592246 kubelet[3645]: E0920 17:46:55.313315    3645 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726854415312994852,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

-- /stdout --
** stderr ** 
	E0920 17:47:01.413886   48010 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19672-8777/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
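The stderr block above ends with `failed to read file .../lastStart.txt: bufio.Scanner: token too long`: the log collector reads lastStart.txt line by line with Go's bufio.Scanner, whose default maximum token size is 64 KiB, so a single oversized line is enough to abort last-start log retrieval. A quick way to confirm that is to measure the longest line in the file (a diagnostic sketch; the path is the one from the error message, and 65536 bytes is Go's default bufio.MaxScanTokenSize):

	# Does any single line in lastStart.txt exceed bufio.Scanner's default
	# 64 KiB (65536-byte) token limit? If so, "token too long" is expected.
	LOG=/home/jenkins/minikube-integration/19672-8777/.minikube/logs/lastStart.txt
	awk '{ if (length($0) > max) max = length($0) } END { print "longest line:", max, "bytes" }' "$LOG"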
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-592246 -n multinode-592246
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-592246 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (144.96s)
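Two failure patterns repeat through the kubelet log above: the eviction-manager pair (`failed to get HasDedicatedImageFs ... missing image stats`), where the kubelet keeps rejecting the ImageFsInfo response CRI-O returns for /var/lib/containers/storage/overlay-images, and the iptables canary error, where the ip6tables `nat` table is unavailable in the guest kernel. Both can be checked from inside the node, independently of the kubelet (a diagnostic sketch, assuming crictl is present in the guest and CRI-O listens on its default socket):

	# From inside the node, e.g. after `minikube ssh -p multinode-592246`:
	# 1) ask CRI-O for its image filesystem stats directly, bypassing the kubelet
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock imagefsinfo
	# 2) check whether the ip6tables nat module is available for the canary chain
	lsmod | grep ip6table_nat || sudo modprobe ip6table_nat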

                                                
                                    
x
+
TestPreload (271.16s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-942835 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0920 17:51:39.931570   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/functional-945494/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:52:43.196980   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-942835 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m7.406881091s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-942835 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-942835 image pull gcr.io/k8s-minikube/busybox: (3.467318142s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-942835
preload_test.go:58: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p test-preload-942835: exit status 82 (2m0.48745932s)

                                                
                                                
-- stdout --
	* Stopping node "test-preload-942835"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
preload_test.go:60: out/minikube-linux-amd64 stop -p test-preload-942835 failed: exit status 82
panic.go:629: *** TestPreload FAILED at 2024-09-20 17:55:25.346963401 +0000 UTC m=+4306.991044951
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-942835 -n test-preload-942835
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-942835 -n test-preload-942835: exit status 3 (18.655920145s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0920 17:55:43.998230   50958 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.108:22: connect: no route to host
	E0920 17:55:43.998249   50958 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.39.108:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "test-preload-942835" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "test-preload-942835" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-942835
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-942835: (1.144644788s)
--- FAIL: TestPreload (271.16s)
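The root failure here is the `minikube stop` above exiting with status 82 (GUEST_STOP_TIMEOUT): after two minutes the kvm2 driver still reported the VM as "Running", and the later status check could no longer reach the guest (`dial tcp 192.168.39.108:22 ... no route to host`). With the kvm2 driver the guest is a plain libvirt domain, so it can be inspected and, if necessary, powered off outside of minikube (a sketch, assuming the libvirt domain carries the profile name, as kvm2-created domains normally do):

	# Inspect the stuck guest directly through libvirt.
	sudo virsh list --all
	sudo virsh domstate test-preload-942835
	sudo virsh shutdown test-preload-942835   # request a graceful ACPI shutdown
	sudo virsh destroy test-preload-942835    # hard power-off as a last resort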

                                                
                                    
x
+
TestKubernetesUpgrade (343.76s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-299508 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-299508 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m30.114687033s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-299508] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19672
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19672-8777/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-8777/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-299508" primary control-plane node in "kubernetes-upgrade-299508" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 17:57:38.691954   52041 out.go:345] Setting OutFile to fd 1 ...
	I0920 17:57:38.692239   52041 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:57:38.692250   52041 out.go:358] Setting ErrFile to fd 2...
	I0920 17:57:38.692254   52041 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:57:38.692495   52041 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-8777/.minikube/bin
	I0920 17:57:38.693053   52041 out.go:352] Setting JSON to false
	I0920 17:57:38.693987   52041 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6002,"bootTime":1726849057,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 17:57:38.694045   52041 start.go:139] virtualization: kvm guest
	I0920 17:57:38.696241   52041 out.go:177] * [kubernetes-upgrade-299508] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 17:57:38.697797   52041 notify.go:220] Checking for updates...
	I0920 17:57:38.697815   52041 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 17:57:38.699156   52041 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 17:57:38.700401   52041 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19672-8777/kubeconfig
	I0920 17:57:38.701495   52041 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-8777/.minikube
	I0920 17:57:38.702610   52041 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 17:57:38.704128   52041 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 17:57:38.705597   52041 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 17:57:38.743054   52041 out.go:177] * Using the kvm2 driver based on user configuration
	I0920 17:57:38.744448   52041 start.go:297] selected driver: kvm2
	I0920 17:57:38.744464   52041 start.go:901] validating driver "kvm2" against <nil>
	I0920 17:57:38.744476   52041 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 17:57:38.745168   52041 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 17:57:38.745236   52041 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19672-8777/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 17:57:38.762635   52041 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 17:57:38.762705   52041 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 17:57:38.763010   52041 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0920 17:57:38.763040   52041 cni.go:84] Creating CNI manager for ""
	I0920 17:57:38.763095   52041 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 17:57:38.763106   52041 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0920 17:57:38.763195   52041 start.go:340] cluster config:
	{Name:kubernetes-upgrade-299508 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-299508 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 17:57:38.763313   52041 iso.go:125] acquiring lock: {Name:mkba95ef0488e46f622333e9f317f43def93040b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 17:57:38.765006   52041 out.go:177] * Starting "kubernetes-upgrade-299508" primary control-plane node in "kubernetes-upgrade-299508" cluster
	I0920 17:57:38.766074   52041 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0920 17:57:38.766116   52041 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0920 17:57:38.766128   52041 cache.go:56] Caching tarball of preloaded images
	I0920 17:57:38.766212   52041 preload.go:172] Found /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 17:57:38.766224   52041 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0920 17:57:38.766528   52041 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/kubernetes-upgrade-299508/config.json ...
	I0920 17:57:38.766548   52041 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/kubernetes-upgrade-299508/config.json: {Name:mk7ac699b36ea4c057fe5fa43577b8c76f0c7905 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:57:38.766680   52041 start.go:360] acquireMachinesLock for kubernetes-upgrade-299508: {Name:mkfeedb385cf08b5d2aa00913e85815d02a180c2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 17:57:38.766709   52041 start.go:364] duration metric: took 15.498µs to acquireMachinesLock for "kubernetes-upgrade-299508"
	I0920 17:57:38.766726   52041 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-299508 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-299508 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 17:57:38.766770   52041 start.go:125] createHost starting for "" (driver="kvm2")
	I0920 17:57:38.768324   52041 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0920 17:57:38.768475   52041 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:57:38.768517   52041 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:57:38.784723   52041 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36291
	I0920 17:57:38.785138   52041 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:57:38.785763   52041 main.go:141] libmachine: Using API Version  1
	I0920 17:57:38.785783   52041 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:57:38.786129   52041 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:57:38.786323   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) Calling .GetMachineName
	I0920 17:57:38.786472   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) Calling .DriverName
	I0920 17:57:38.786657   52041 start.go:159] libmachine.API.Create for "kubernetes-upgrade-299508" (driver="kvm2")
	I0920 17:57:38.786692   52041 client.go:168] LocalClient.Create starting
	I0920 17:57:38.786724   52041 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem
	I0920 17:57:38.786759   52041 main.go:141] libmachine: Decoding PEM data...
	I0920 17:57:38.786778   52041 main.go:141] libmachine: Parsing certificate...
	I0920 17:57:38.786837   52041 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem
	I0920 17:57:38.786863   52041 main.go:141] libmachine: Decoding PEM data...
	I0920 17:57:38.786871   52041 main.go:141] libmachine: Parsing certificate...
	I0920 17:57:38.786889   52041 main.go:141] libmachine: Running pre-create checks...
	I0920 17:57:38.786902   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) Calling .PreCreateCheck
	I0920 17:57:38.787235   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) Calling .GetConfigRaw
	I0920 17:57:38.787656   52041 main.go:141] libmachine: Creating machine...
	I0920 17:57:38.787674   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) Calling .Create
	I0920 17:57:38.787797   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) Creating KVM machine...
	I0920 17:57:38.789035   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | found existing default KVM network
	I0920 17:57:38.789710   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | I0920 17:57:38.789571   52106 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000123a30}
	I0920 17:57:38.789746   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | created network xml: 
	I0920 17:57:38.789780   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | <network>
	I0920 17:57:38.789803   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG |   <name>mk-kubernetes-upgrade-299508</name>
	I0920 17:57:38.789818   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG |   <dns enable='no'/>
	I0920 17:57:38.789828   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG |   
	I0920 17:57:38.789853   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0920 17:57:38.789864   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG |     <dhcp>
	I0920 17:57:38.789874   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0920 17:57:38.789885   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG |     </dhcp>
	I0920 17:57:38.789893   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG |   </ip>
	I0920 17:57:38.789902   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG |   
	I0920 17:57:38.789912   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | </network>
	I0920 17:57:38.789937   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | 
	I0920 17:57:38.795875   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | trying to create private KVM network mk-kubernetes-upgrade-299508 192.168.39.0/24...
	I0920 17:57:38.876028   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | private KVM network mk-kubernetes-upgrade-299508 192.168.39.0/24 created
	I0920 17:57:38.876074   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | I0920 17:57:38.875985   52106 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19672-8777/.minikube
	I0920 17:57:38.876088   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) Setting up store path in /home/jenkins/minikube-integration/19672-8777/.minikube/machines/kubernetes-upgrade-299508 ...
	I0920 17:57:38.876116   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) Building disk image from file:///home/jenkins/minikube-integration/19672-8777/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso
	I0920 17:57:38.876138   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) Downloading /home/jenkins/minikube-integration/19672-8777/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19672-8777/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso...
	I0920 17:57:39.143613   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | I0920 17:57:39.143473   52106 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/kubernetes-upgrade-299508/id_rsa...
	I0920 17:57:39.460022   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | I0920 17:57:39.459872   52106 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/kubernetes-upgrade-299508/kubernetes-upgrade-299508.rawdisk...
	I0920 17:57:39.460059   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | Writing magic tar header
	I0920 17:57:39.460080   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | Writing SSH key tar header
	I0920 17:57:39.460093   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | I0920 17:57:39.460048   52106 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19672-8777/.minikube/machines/kubernetes-upgrade-299508 ...
	I0920 17:57:39.460207   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/kubernetes-upgrade-299508
	I0920 17:57:39.460234   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-8777/.minikube/machines
	I0920 17:57:39.460249   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) Setting executable bit set on /home/jenkins/minikube-integration/19672-8777/.minikube/machines/kubernetes-upgrade-299508 (perms=drwx------)
	I0920 17:57:39.460259   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-8777/.minikube
	I0920 17:57:39.460276   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) Setting executable bit set on /home/jenkins/minikube-integration/19672-8777/.minikube/machines (perms=drwxr-xr-x)
	I0920 17:57:39.460294   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) Setting executable bit set on /home/jenkins/minikube-integration/19672-8777/.minikube (perms=drwxr-xr-x)
	I0920 17:57:39.460304   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) Setting executable bit set on /home/jenkins/minikube-integration/19672-8777 (perms=drwxrwxr-x)
	I0920 17:57:39.460314   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-8777
	I0920 17:57:39.460337   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0920 17:57:39.460349   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | Checking permissions on dir: /home/jenkins
	I0920 17:57:39.460359   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0920 17:57:39.460370   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0920 17:57:39.460385   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) Creating domain...
	I0920 17:57:39.460394   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | Checking permissions on dir: /home
	I0920 17:57:39.460413   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | Skipping /home - not owner
	I0920 17:57:39.461710   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) define libvirt domain using xml: 
	I0920 17:57:39.461733   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) <domain type='kvm'>
	I0920 17:57:39.461744   52041 main.go:141] libmachine: (kubernetes-upgrade-299508)   <name>kubernetes-upgrade-299508</name>
	I0920 17:57:39.461755   52041 main.go:141] libmachine: (kubernetes-upgrade-299508)   <memory unit='MiB'>2200</memory>
	I0920 17:57:39.461776   52041 main.go:141] libmachine: (kubernetes-upgrade-299508)   <vcpu>2</vcpu>
	I0920 17:57:39.461788   52041 main.go:141] libmachine: (kubernetes-upgrade-299508)   <features>
	I0920 17:57:39.461796   52041 main.go:141] libmachine: (kubernetes-upgrade-299508)     <acpi/>
	I0920 17:57:39.461802   52041 main.go:141] libmachine: (kubernetes-upgrade-299508)     <apic/>
	I0920 17:57:39.461811   52041 main.go:141] libmachine: (kubernetes-upgrade-299508)     <pae/>
	I0920 17:57:39.461819   52041 main.go:141] libmachine: (kubernetes-upgrade-299508)     
	I0920 17:57:39.461864   52041 main.go:141] libmachine: (kubernetes-upgrade-299508)   </features>
	I0920 17:57:39.461889   52041 main.go:141] libmachine: (kubernetes-upgrade-299508)   <cpu mode='host-passthrough'>
	I0920 17:57:39.461903   52041 main.go:141] libmachine: (kubernetes-upgrade-299508)   
	I0920 17:57:39.461914   52041 main.go:141] libmachine: (kubernetes-upgrade-299508)   </cpu>
	I0920 17:57:39.461926   52041 main.go:141] libmachine: (kubernetes-upgrade-299508)   <os>
	I0920 17:57:39.461940   52041 main.go:141] libmachine: (kubernetes-upgrade-299508)     <type>hvm</type>
	I0920 17:57:39.461952   52041 main.go:141] libmachine: (kubernetes-upgrade-299508)     <boot dev='cdrom'/>
	I0920 17:57:39.461963   52041 main.go:141] libmachine: (kubernetes-upgrade-299508)     <boot dev='hd'/>
	I0920 17:57:39.461974   52041 main.go:141] libmachine: (kubernetes-upgrade-299508)     <bootmenu enable='no'/>
	I0920 17:57:39.461984   52041 main.go:141] libmachine: (kubernetes-upgrade-299508)   </os>
	I0920 17:57:39.461995   52041 main.go:141] libmachine: (kubernetes-upgrade-299508)   <devices>
	I0920 17:57:39.462012   52041 main.go:141] libmachine: (kubernetes-upgrade-299508)     <disk type='file' device='cdrom'>
	I0920 17:57:39.462030   52041 main.go:141] libmachine: (kubernetes-upgrade-299508)       <source file='/home/jenkins/minikube-integration/19672-8777/.minikube/machines/kubernetes-upgrade-299508/boot2docker.iso'/>
	I0920 17:57:39.462042   52041 main.go:141] libmachine: (kubernetes-upgrade-299508)       <target dev='hdc' bus='scsi'/>
	I0920 17:57:39.462055   52041 main.go:141] libmachine: (kubernetes-upgrade-299508)       <readonly/>
	I0920 17:57:39.462066   52041 main.go:141] libmachine: (kubernetes-upgrade-299508)     </disk>
	I0920 17:57:39.462087   52041 main.go:141] libmachine: (kubernetes-upgrade-299508)     <disk type='file' device='disk'>
	I0920 17:57:39.462100   52041 main.go:141] libmachine: (kubernetes-upgrade-299508)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0920 17:57:39.462116   52041 main.go:141] libmachine: (kubernetes-upgrade-299508)       <source file='/home/jenkins/minikube-integration/19672-8777/.minikube/machines/kubernetes-upgrade-299508/kubernetes-upgrade-299508.rawdisk'/>
	I0920 17:57:39.462123   52041 main.go:141] libmachine: (kubernetes-upgrade-299508)       <target dev='hda' bus='virtio'/>
	I0920 17:57:39.462131   52041 main.go:141] libmachine: (kubernetes-upgrade-299508)     </disk>
	I0920 17:57:39.462137   52041 main.go:141] libmachine: (kubernetes-upgrade-299508)     <interface type='network'>
	I0920 17:57:39.462146   52041 main.go:141] libmachine: (kubernetes-upgrade-299508)       <source network='mk-kubernetes-upgrade-299508'/>
	I0920 17:57:39.462153   52041 main.go:141] libmachine: (kubernetes-upgrade-299508)       <model type='virtio'/>
	I0920 17:57:39.462163   52041 main.go:141] libmachine: (kubernetes-upgrade-299508)     </interface>
	I0920 17:57:39.462171   52041 main.go:141] libmachine: (kubernetes-upgrade-299508)     <interface type='network'>
	I0920 17:57:39.462180   52041 main.go:141] libmachine: (kubernetes-upgrade-299508)       <source network='default'/>
	I0920 17:57:39.462187   52041 main.go:141] libmachine: (kubernetes-upgrade-299508)       <model type='virtio'/>
	I0920 17:57:39.462194   52041 main.go:141] libmachine: (kubernetes-upgrade-299508)     </interface>
	I0920 17:57:39.462214   52041 main.go:141] libmachine: (kubernetes-upgrade-299508)     <serial type='pty'>
	I0920 17:57:39.462228   52041 main.go:141] libmachine: (kubernetes-upgrade-299508)       <target port='0'/>
	I0920 17:57:39.462237   52041 main.go:141] libmachine: (kubernetes-upgrade-299508)     </serial>
	I0920 17:57:39.462248   52041 main.go:141] libmachine: (kubernetes-upgrade-299508)     <console type='pty'>
	I0920 17:57:39.462257   52041 main.go:141] libmachine: (kubernetes-upgrade-299508)       <target type='serial' port='0'/>
	I0920 17:57:39.462280   52041 main.go:141] libmachine: (kubernetes-upgrade-299508)     </console>
	I0920 17:57:39.462293   52041 main.go:141] libmachine: (kubernetes-upgrade-299508)     <rng model='virtio'>
	I0920 17:57:39.462305   52041 main.go:141] libmachine: (kubernetes-upgrade-299508)       <backend model='random'>/dev/random</backend>
	I0920 17:57:39.462321   52041 main.go:141] libmachine: (kubernetes-upgrade-299508)     </rng>
	I0920 17:57:39.462331   52041 main.go:141] libmachine: (kubernetes-upgrade-299508)     
	I0920 17:57:39.462340   52041 main.go:141] libmachine: (kubernetes-upgrade-299508)     
	I0920 17:57:39.462350   52041 main.go:141] libmachine: (kubernetes-upgrade-299508)   </devices>
	I0920 17:57:39.462362   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) </domain>
	I0920 17:57:39.462370   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) 
	I0920 17:57:39.466777   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | domain kubernetes-upgrade-299508 has defined MAC address 52:54:00:24:7c:f7 in network default
	I0920 17:57:39.467402   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) Ensuring networks are active...
	I0920 17:57:39.467422   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | domain kubernetes-upgrade-299508 has defined MAC address 52:54:00:90:2b:40 in network mk-kubernetes-upgrade-299508
	I0920 17:57:39.468160   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) Ensuring network default is active
	I0920 17:57:39.468511   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) Ensuring network mk-kubernetes-upgrade-299508 is active
	I0920 17:57:39.469391   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) Getting domain xml...
	I0920 17:57:39.470120   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) Creating domain...
	I0920 17:57:40.905155   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) Waiting to get IP...
	I0920 17:57:40.906183   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | domain kubernetes-upgrade-299508 has defined MAC address 52:54:00:90:2b:40 in network mk-kubernetes-upgrade-299508
	I0920 17:57:40.906537   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | unable to find current IP address of domain kubernetes-upgrade-299508 in network mk-kubernetes-upgrade-299508
	I0920 17:57:40.906585   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | I0920 17:57:40.906520   52106 retry.go:31] will retry after 258.330665ms: waiting for machine to come up
	I0920 17:57:41.166992   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | domain kubernetes-upgrade-299508 has defined MAC address 52:54:00:90:2b:40 in network mk-kubernetes-upgrade-299508
	I0920 17:57:41.167503   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | unable to find current IP address of domain kubernetes-upgrade-299508 in network mk-kubernetes-upgrade-299508
	I0920 17:57:41.167549   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | I0920 17:57:41.167477   52106 retry.go:31] will retry after 306.097332ms: waiting for machine to come up
	I0920 17:57:41.475607   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | domain kubernetes-upgrade-299508 has defined MAC address 52:54:00:90:2b:40 in network mk-kubernetes-upgrade-299508
	I0920 17:57:41.475944   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | unable to find current IP address of domain kubernetes-upgrade-299508 in network mk-kubernetes-upgrade-299508
	I0920 17:57:41.475969   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | I0920 17:57:41.475904   52106 retry.go:31] will retry after 421.507387ms: waiting for machine to come up
	I0920 17:57:41.899653   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | domain kubernetes-upgrade-299508 has defined MAC address 52:54:00:90:2b:40 in network mk-kubernetes-upgrade-299508
	I0920 17:57:41.900090   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | unable to find current IP address of domain kubernetes-upgrade-299508 in network mk-kubernetes-upgrade-299508
	I0920 17:57:41.900116   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | I0920 17:57:41.900051   52106 retry.go:31] will retry after 381.848884ms: waiting for machine to come up
	I0920 17:57:42.283573   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | domain kubernetes-upgrade-299508 has defined MAC address 52:54:00:90:2b:40 in network mk-kubernetes-upgrade-299508
	I0920 17:57:42.284025   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | unable to find current IP address of domain kubernetes-upgrade-299508 in network mk-kubernetes-upgrade-299508
	I0920 17:57:42.284056   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | I0920 17:57:42.283949   52106 retry.go:31] will retry after 463.74475ms: waiting for machine to come up
	I0920 17:57:42.749547   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | domain kubernetes-upgrade-299508 has defined MAC address 52:54:00:90:2b:40 in network mk-kubernetes-upgrade-299508
	I0920 17:57:42.750004   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | unable to find current IP address of domain kubernetes-upgrade-299508 in network mk-kubernetes-upgrade-299508
	I0920 17:57:42.750033   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | I0920 17:57:42.749952   52106 retry.go:31] will retry after 631.39505ms: waiting for machine to come up
	I0920 17:57:43.382677   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | domain kubernetes-upgrade-299508 has defined MAC address 52:54:00:90:2b:40 in network mk-kubernetes-upgrade-299508
	I0920 17:57:43.383103   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | unable to find current IP address of domain kubernetes-upgrade-299508 in network mk-kubernetes-upgrade-299508
	I0920 17:57:43.383138   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | I0920 17:57:43.383043   52106 retry.go:31] will retry after 840.394267ms: waiting for machine to come up
	I0920 17:57:44.225270   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | domain kubernetes-upgrade-299508 has defined MAC address 52:54:00:90:2b:40 in network mk-kubernetes-upgrade-299508
	I0920 17:57:44.225749   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | unable to find current IP address of domain kubernetes-upgrade-299508 in network mk-kubernetes-upgrade-299508
	I0920 17:57:44.225774   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | I0920 17:57:44.225705   52106 retry.go:31] will retry after 1.360881293s: waiting for machine to come up
	I0920 17:57:45.587833   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | domain kubernetes-upgrade-299508 has defined MAC address 52:54:00:90:2b:40 in network mk-kubernetes-upgrade-299508
	I0920 17:57:45.588278   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | unable to find current IP address of domain kubernetes-upgrade-299508 in network mk-kubernetes-upgrade-299508
	I0920 17:57:45.588307   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | I0920 17:57:45.588206   52106 retry.go:31] will retry after 1.524304601s: waiting for machine to come up
	I0920 17:57:47.114475   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | domain kubernetes-upgrade-299508 has defined MAC address 52:54:00:90:2b:40 in network mk-kubernetes-upgrade-299508
	I0920 17:57:47.115019   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | unable to find current IP address of domain kubernetes-upgrade-299508 in network mk-kubernetes-upgrade-299508
	I0920 17:57:47.115041   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | I0920 17:57:47.114989   52106 retry.go:31] will retry after 1.641679018s: waiting for machine to come up
	I0920 17:57:48.758624   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | domain kubernetes-upgrade-299508 has defined MAC address 52:54:00:90:2b:40 in network mk-kubernetes-upgrade-299508
	I0920 17:57:48.759051   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | unable to find current IP address of domain kubernetes-upgrade-299508 in network mk-kubernetes-upgrade-299508
	I0920 17:57:48.759070   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | I0920 17:57:48.759028   52106 retry.go:31] will retry after 2.115974928s: waiting for machine to come up
	I0920 17:57:50.876985   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | domain kubernetes-upgrade-299508 has defined MAC address 52:54:00:90:2b:40 in network mk-kubernetes-upgrade-299508
	I0920 17:57:50.877516   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | unable to find current IP address of domain kubernetes-upgrade-299508 in network mk-kubernetes-upgrade-299508
	I0920 17:57:50.877546   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | I0920 17:57:50.877466   52106 retry.go:31] will retry after 2.455479581s: waiting for machine to come up
	I0920 17:57:53.334886   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | domain kubernetes-upgrade-299508 has defined MAC address 52:54:00:90:2b:40 in network mk-kubernetes-upgrade-299508
	I0920 17:57:53.335257   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | unable to find current IP address of domain kubernetes-upgrade-299508 in network mk-kubernetes-upgrade-299508
	I0920 17:57:53.335300   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | I0920 17:57:53.335208   52106 retry.go:31] will retry after 3.667669619s: waiting for machine to come up
	I0920 17:57:57.006571   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | domain kubernetes-upgrade-299508 has defined MAC address 52:54:00:90:2b:40 in network mk-kubernetes-upgrade-299508
	I0920 17:57:57.007042   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | unable to find current IP address of domain kubernetes-upgrade-299508 in network mk-kubernetes-upgrade-299508
	I0920 17:57:57.007095   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | I0920 17:57:57.007007   52106 retry.go:31] will retry after 4.17147496s: waiting for machine to come up
	I0920 17:58:01.180950   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | domain kubernetes-upgrade-299508 has defined MAC address 52:54:00:90:2b:40 in network mk-kubernetes-upgrade-299508
	I0920 17:58:01.181316   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | domain kubernetes-upgrade-299508 has current primary IP address 192.168.39.69 and MAC address 52:54:00:90:2b:40 in network mk-kubernetes-upgrade-299508
	I0920 17:58:01.181333   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) Found IP for machine: 192.168.39.69
	I0920 17:58:01.181341   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) Reserving static IP address...
	I0920 17:58:01.181651   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-299508", mac: "52:54:00:90:2b:40", ip: "192.168.39.69"} in network mk-kubernetes-upgrade-299508
	I0920 17:58:01.263267   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | Getting to WaitForSSH function...
	I0920 17:58:01.263300   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) Reserved static IP address: 192.168.39.69
	I0920 17:58:01.263313   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) Waiting for SSH to be available...
	I0920 17:58:01.266264   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | domain kubernetes-upgrade-299508 has defined MAC address 52:54:00:90:2b:40 in network mk-kubernetes-upgrade-299508
	I0920 17:58:01.266710   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:2b:40", ip: ""} in network mk-kubernetes-upgrade-299508: {Iface:virbr1 ExpiryTime:2024-09-20 18:57:54 +0000 UTC Type:0 Mac:52:54:00:90:2b:40 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:minikube Clientid:01:52:54:00:90:2b:40}
	I0920 17:58:01.266741   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | domain kubernetes-upgrade-299508 has defined IP address 192.168.39.69 and MAC address 52:54:00:90:2b:40 in network mk-kubernetes-upgrade-299508
	I0920 17:58:01.266908   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | Using SSH client type: external
	I0920 17:58:01.266934   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | Using SSH private key: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/kubernetes-upgrade-299508/id_rsa (-rw-------)
	I0920 17:58:01.266963   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.69 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19672-8777/.minikube/machines/kubernetes-upgrade-299508/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 17:58:01.266977   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | About to run SSH command:
	I0920 17:58:01.266990   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | exit 0
	I0920 17:58:01.393991   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | SSH cmd err, output: <nil>: 
	I0920 17:58:01.394285   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) KVM machine creation complete!
	I0920 17:58:01.394581   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) Calling .GetConfigRaw
	I0920 17:58:01.395196   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) Calling .DriverName
	I0920 17:58:01.395414   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) Calling .DriverName
	I0920 17:58:01.395545   52041 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0920 17:58:01.395562   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) Calling .GetState
	I0920 17:58:01.396904   52041 main.go:141] libmachine: Detecting operating system of created instance...
	I0920 17:58:01.396918   52041 main.go:141] libmachine: Waiting for SSH to be available...
	I0920 17:58:01.396922   52041 main.go:141] libmachine: Getting to WaitForSSH function...
	I0920 17:58:01.396928   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) Calling .GetSSHHostname
	I0920 17:58:01.399842   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | domain kubernetes-upgrade-299508 has defined MAC address 52:54:00:90:2b:40 in network mk-kubernetes-upgrade-299508
	I0920 17:58:01.400286   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:2b:40", ip: ""} in network mk-kubernetes-upgrade-299508: {Iface:virbr1 ExpiryTime:2024-09-20 18:57:54 +0000 UTC Type:0 Mac:52:54:00:90:2b:40 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:kubernetes-upgrade-299508 Clientid:01:52:54:00:90:2b:40}
	I0920 17:58:01.400317   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | domain kubernetes-upgrade-299508 has defined IP address 192.168.39.69 and MAC address 52:54:00:90:2b:40 in network mk-kubernetes-upgrade-299508
	I0920 17:58:01.400522   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) Calling .GetSSHPort
	I0920 17:58:01.400696   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) Calling .GetSSHKeyPath
	I0920 17:58:01.400883   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) Calling .GetSSHKeyPath
	I0920 17:58:01.401041   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) Calling .GetSSHUsername
	I0920 17:58:01.401226   52041 main.go:141] libmachine: Using SSH client type: native
	I0920 17:58:01.401452   52041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I0920 17:58:01.401465   52041 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0920 17:58:01.509228   52041 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 17:58:01.509260   52041 main.go:141] libmachine: Detecting the provisioner...
	I0920 17:58:01.509271   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) Calling .GetSSHHostname
	I0920 17:58:01.512311   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | domain kubernetes-upgrade-299508 has defined MAC address 52:54:00:90:2b:40 in network mk-kubernetes-upgrade-299508
	I0920 17:58:01.512746   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:2b:40", ip: ""} in network mk-kubernetes-upgrade-299508: {Iface:virbr1 ExpiryTime:2024-09-20 18:57:54 +0000 UTC Type:0 Mac:52:54:00:90:2b:40 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:kubernetes-upgrade-299508 Clientid:01:52:54:00:90:2b:40}
	I0920 17:58:01.512777   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | domain kubernetes-upgrade-299508 has defined IP address 192.168.39.69 and MAC address 52:54:00:90:2b:40 in network mk-kubernetes-upgrade-299508
	I0920 17:58:01.512968   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) Calling .GetSSHPort
	I0920 17:58:01.513191   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) Calling .GetSSHKeyPath
	I0920 17:58:01.513362   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) Calling .GetSSHKeyPath
	I0920 17:58:01.513539   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) Calling .GetSSHUsername
	I0920 17:58:01.513735   52041 main.go:141] libmachine: Using SSH client type: native
	I0920 17:58:01.513989   52041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I0920 17:58:01.514002   52041 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0920 17:58:01.626726   52041 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0920 17:58:01.626808   52041 main.go:141] libmachine: found compatible host: buildroot
	I0920 17:58:01.626817   52041 main.go:141] libmachine: Provisioning with buildroot...
	I0920 17:58:01.626825   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) Calling .GetMachineName
	I0920 17:58:01.627057   52041 buildroot.go:166] provisioning hostname "kubernetes-upgrade-299508"
	I0920 17:58:01.627085   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) Calling .GetMachineName
	I0920 17:58:01.627257   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) Calling .GetSSHHostname
	I0920 17:58:01.630008   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | domain kubernetes-upgrade-299508 has defined MAC address 52:54:00:90:2b:40 in network mk-kubernetes-upgrade-299508
	I0920 17:58:01.630424   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:2b:40", ip: ""} in network mk-kubernetes-upgrade-299508: {Iface:virbr1 ExpiryTime:2024-09-20 18:57:54 +0000 UTC Type:0 Mac:52:54:00:90:2b:40 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:kubernetes-upgrade-299508 Clientid:01:52:54:00:90:2b:40}
	I0920 17:58:01.630452   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | domain kubernetes-upgrade-299508 has defined IP address 192.168.39.69 and MAC address 52:54:00:90:2b:40 in network mk-kubernetes-upgrade-299508
	I0920 17:58:01.630645   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) Calling .GetSSHPort
	I0920 17:58:01.630798   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) Calling .GetSSHKeyPath
	I0920 17:58:01.630967   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) Calling .GetSSHKeyPath
	I0920 17:58:01.631072   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) Calling .GetSSHUsername
	I0920 17:58:01.631218   52041 main.go:141] libmachine: Using SSH client type: native
	I0920 17:58:01.631424   52041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I0920 17:58:01.631442   52041 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-299508 && echo "kubernetes-upgrade-299508" | sudo tee /etc/hostname
	I0920 17:58:01.752171   52041 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-299508
	
	I0920 17:58:01.752204   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) Calling .GetSSHHostname
	I0920 17:58:01.755122   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | domain kubernetes-upgrade-299508 has defined MAC address 52:54:00:90:2b:40 in network mk-kubernetes-upgrade-299508
	I0920 17:58:01.755526   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:2b:40", ip: ""} in network mk-kubernetes-upgrade-299508: {Iface:virbr1 ExpiryTime:2024-09-20 18:57:54 +0000 UTC Type:0 Mac:52:54:00:90:2b:40 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:kubernetes-upgrade-299508 Clientid:01:52:54:00:90:2b:40}
	I0920 17:58:01.755558   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | domain kubernetes-upgrade-299508 has defined IP address 192.168.39.69 and MAC address 52:54:00:90:2b:40 in network mk-kubernetes-upgrade-299508
	I0920 17:58:01.755878   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) Calling .GetSSHPort
	I0920 17:58:01.756131   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) Calling .GetSSHKeyPath
	I0920 17:58:01.756335   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) Calling .GetSSHKeyPath
	I0920 17:58:01.756515   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) Calling .GetSSHUsername
	I0920 17:58:01.756698   52041 main.go:141] libmachine: Using SSH client type: native
	I0920 17:58:01.756895   52041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I0920 17:58:01.756915   52041 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-299508' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-299508/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-299508' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 17:58:01.880511   52041 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 17:58:01.880542   52041 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19672-8777/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-8777/.minikube}
	I0920 17:58:01.880585   52041 buildroot.go:174] setting up certificates
	I0920 17:58:01.880599   52041 provision.go:84] configureAuth start
	I0920 17:58:01.880615   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) Calling .GetMachineName
	I0920 17:58:01.880901   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) Calling .GetIP
	I0920 17:58:01.883601   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | domain kubernetes-upgrade-299508 has defined MAC address 52:54:00:90:2b:40 in network mk-kubernetes-upgrade-299508
	I0920 17:58:01.884052   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:2b:40", ip: ""} in network mk-kubernetes-upgrade-299508: {Iface:virbr1 ExpiryTime:2024-09-20 18:57:54 +0000 UTC Type:0 Mac:52:54:00:90:2b:40 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:kubernetes-upgrade-299508 Clientid:01:52:54:00:90:2b:40}
	I0920 17:58:01.884086   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | domain kubernetes-upgrade-299508 has defined IP address 192.168.39.69 and MAC address 52:54:00:90:2b:40 in network mk-kubernetes-upgrade-299508
	I0920 17:58:01.884323   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) Calling .GetSSHHostname
	I0920 17:58:01.887026   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | domain kubernetes-upgrade-299508 has defined MAC address 52:54:00:90:2b:40 in network mk-kubernetes-upgrade-299508
	I0920 17:58:01.887395   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:2b:40", ip: ""} in network mk-kubernetes-upgrade-299508: {Iface:virbr1 ExpiryTime:2024-09-20 18:57:54 +0000 UTC Type:0 Mac:52:54:00:90:2b:40 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:kubernetes-upgrade-299508 Clientid:01:52:54:00:90:2b:40}
	I0920 17:58:01.887417   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | domain kubernetes-upgrade-299508 has defined IP address 192.168.39.69 and MAC address 52:54:00:90:2b:40 in network mk-kubernetes-upgrade-299508
	I0920 17:58:01.887591   52041 provision.go:143] copyHostCerts
	I0920 17:58:01.887669   52041 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem, removing ...
	I0920 17:58:01.887691   52041 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem
	I0920 17:58:01.887778   52041 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem (1675 bytes)
	I0920 17:58:01.887890   52041 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem, removing ...
	I0920 17:58:01.887901   52041 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem
	I0920 17:58:01.887944   52041 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem (1082 bytes)
	I0920 17:58:01.888060   52041 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem, removing ...
	I0920 17:58:01.888073   52041 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem
	I0920 17:58:01.888111   52041 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem (1123 bytes)
	I0920 17:58:01.888184   52041 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-299508 san=[127.0.0.1 192.168.39.69 kubernetes-upgrade-299508 localhost minikube]
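	(Editor's note on the step above: the server certificate is issued with both DNS and IP subject alternative names so one cert is valid for the node name, localhost, and the VM's address. Below is a minimal sketch of generating such a certificate with the Go standard library; it is self-signed for brevity, whereas the real flow signs with ca.pem/ca-key.pem, and the SAN values are copied from the log line above.)

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Key pair for the server certificate.
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }

        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-299508"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs from the log: 127.0.0.1 192.168.39.69 kubernetes-upgrade-299508 localhost minikube
            DNSNames:    []string{"kubernetes-upgrade-299508", "localhost", "minikube"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.69")},
        }

        // Self-signed here for the sketch; minikube signs with its own CA instead.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }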
	I0920 17:58:02.049732   52041 provision.go:177] copyRemoteCerts
	I0920 17:58:02.049796   52041 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 17:58:02.049821   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) Calling .GetSSHHostname
	I0920 17:58:02.052994   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | domain kubernetes-upgrade-299508 has defined MAC address 52:54:00:90:2b:40 in network mk-kubernetes-upgrade-299508
	I0920 17:58:02.053549   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:2b:40", ip: ""} in network mk-kubernetes-upgrade-299508: {Iface:virbr1 ExpiryTime:2024-09-20 18:57:54 +0000 UTC Type:0 Mac:52:54:00:90:2b:40 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:kubernetes-upgrade-299508 Clientid:01:52:54:00:90:2b:40}
	I0920 17:58:02.053603   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | domain kubernetes-upgrade-299508 has defined IP address 192.168.39.69 and MAC address 52:54:00:90:2b:40 in network mk-kubernetes-upgrade-299508
	I0920 17:58:02.053998   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) Calling .GetSSHPort
	I0920 17:58:02.054225   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) Calling .GetSSHKeyPath
	I0920 17:58:02.054410   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) Calling .GetSSHUsername
	I0920 17:58:02.054562   52041 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/kubernetes-upgrade-299508/id_rsa Username:docker}
	I0920 17:58:02.154905   52041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0920 17:58:02.188319   52041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0920 17:58:02.222020   52041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 17:58:02.247490   52041 provision.go:87] duration metric: took 366.862619ms to configureAuth
	I0920 17:58:02.247528   52041 buildroot.go:189] setting minikube options for container-runtime
	I0920 17:58:02.247755   52041 config.go:182] Loaded profile config "kubernetes-upgrade-299508": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0920 17:58:02.247893   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) Calling .GetSSHHostname
	I0920 17:58:02.250942   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | domain kubernetes-upgrade-299508 has defined MAC address 52:54:00:90:2b:40 in network mk-kubernetes-upgrade-299508
	I0920 17:58:02.251380   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:2b:40", ip: ""} in network mk-kubernetes-upgrade-299508: {Iface:virbr1 ExpiryTime:2024-09-20 18:57:54 +0000 UTC Type:0 Mac:52:54:00:90:2b:40 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:kubernetes-upgrade-299508 Clientid:01:52:54:00:90:2b:40}
	I0920 17:58:02.251421   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | domain kubernetes-upgrade-299508 has defined IP address 192.168.39.69 and MAC address 52:54:00:90:2b:40 in network mk-kubernetes-upgrade-299508
	I0920 17:58:02.251744   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) Calling .GetSSHPort
	I0920 17:58:02.251988   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) Calling .GetSSHKeyPath
	I0920 17:58:02.252225   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) Calling .GetSSHKeyPath
	I0920 17:58:02.252456   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) Calling .GetSSHUsername
	I0920 17:58:02.252659   52041 main.go:141] libmachine: Using SSH client type: native
	I0920 17:58:02.252911   52041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I0920 17:58:02.252947   52041 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 17:58:02.487598   52041 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 17:58:02.487658   52041 main.go:141] libmachine: Checking connection to Docker...
	I0920 17:58:02.487670   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) Calling .GetURL
	I0920 17:58:02.489161   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | Using libvirt version 6000000
	I0920 17:58:02.491556   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | domain kubernetes-upgrade-299508 has defined MAC address 52:54:00:90:2b:40 in network mk-kubernetes-upgrade-299508
	I0920 17:58:02.491866   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:2b:40", ip: ""} in network mk-kubernetes-upgrade-299508: {Iface:virbr1 ExpiryTime:2024-09-20 18:57:54 +0000 UTC Type:0 Mac:52:54:00:90:2b:40 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:kubernetes-upgrade-299508 Clientid:01:52:54:00:90:2b:40}
	I0920 17:58:02.491909   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | domain kubernetes-upgrade-299508 has defined IP address 192.168.39.69 and MAC address 52:54:00:90:2b:40 in network mk-kubernetes-upgrade-299508
	I0920 17:58:02.492004   52041 main.go:141] libmachine: Docker is up and running!
	I0920 17:58:02.492021   52041 main.go:141] libmachine: Reticulating splines...
	I0920 17:58:02.492029   52041 client.go:171] duration metric: took 23.705327882s to LocalClient.Create
	I0920 17:58:02.492062   52041 start.go:167] duration metric: took 23.705408537s to libmachine.API.Create "kubernetes-upgrade-299508"
	I0920 17:58:02.492072   52041 start.go:293] postStartSetup for "kubernetes-upgrade-299508" (driver="kvm2")
	I0920 17:58:02.492081   52041 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 17:58:02.492113   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) Calling .DriverName
	I0920 17:58:02.492353   52041 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 17:58:02.492377   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) Calling .GetSSHHostname
	I0920 17:58:02.494602   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | domain kubernetes-upgrade-299508 has defined MAC address 52:54:00:90:2b:40 in network mk-kubernetes-upgrade-299508
	I0920 17:58:02.494913   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:2b:40", ip: ""} in network mk-kubernetes-upgrade-299508: {Iface:virbr1 ExpiryTime:2024-09-20 18:57:54 +0000 UTC Type:0 Mac:52:54:00:90:2b:40 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:kubernetes-upgrade-299508 Clientid:01:52:54:00:90:2b:40}
	I0920 17:58:02.494940   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | domain kubernetes-upgrade-299508 has defined IP address 192.168.39.69 and MAC address 52:54:00:90:2b:40 in network mk-kubernetes-upgrade-299508
	I0920 17:58:02.495077   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) Calling .GetSSHPort
	I0920 17:58:02.495248   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) Calling .GetSSHKeyPath
	I0920 17:58:02.495426   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) Calling .GetSSHUsername
	I0920 17:58:02.495552   52041 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/kubernetes-upgrade-299508/id_rsa Username:docker}
	I0920 17:58:02.580346   52041 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 17:58:02.585183   52041 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 17:58:02.585214   52041 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8777/.minikube/addons for local assets ...
	I0920 17:58:02.585297   52041 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8777/.minikube/files for local assets ...
	I0920 17:58:02.585397   52041 filesync.go:149] local asset: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem -> 159732.pem in /etc/ssl/certs
	I0920 17:58:02.585522   52041 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 17:58:02.595528   52041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem --> /etc/ssl/certs/159732.pem (1708 bytes)
	I0920 17:58:02.620657   52041 start.go:296] duration metric: took 128.572947ms for postStartSetup
	I0920 17:58:02.620710   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) Calling .GetConfigRaw
	I0920 17:58:02.621431   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) Calling .GetIP
	I0920 17:58:02.624491   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | domain kubernetes-upgrade-299508 has defined MAC address 52:54:00:90:2b:40 in network mk-kubernetes-upgrade-299508
	I0920 17:58:02.624900   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:2b:40", ip: ""} in network mk-kubernetes-upgrade-299508: {Iface:virbr1 ExpiryTime:2024-09-20 18:57:54 +0000 UTC Type:0 Mac:52:54:00:90:2b:40 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:kubernetes-upgrade-299508 Clientid:01:52:54:00:90:2b:40}
	I0920 17:58:02.624930   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | domain kubernetes-upgrade-299508 has defined IP address 192.168.39.69 and MAC address 52:54:00:90:2b:40 in network mk-kubernetes-upgrade-299508
	I0920 17:58:02.625140   52041 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/kubernetes-upgrade-299508/config.json ...
	I0920 17:58:02.625326   52041 start.go:128] duration metric: took 23.858547309s to createHost
	I0920 17:58:02.625349   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) Calling .GetSSHHostname
	I0920 17:58:02.628356   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | domain kubernetes-upgrade-299508 has defined MAC address 52:54:00:90:2b:40 in network mk-kubernetes-upgrade-299508
	I0920 17:58:02.628716   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:2b:40", ip: ""} in network mk-kubernetes-upgrade-299508: {Iface:virbr1 ExpiryTime:2024-09-20 18:57:54 +0000 UTC Type:0 Mac:52:54:00:90:2b:40 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:kubernetes-upgrade-299508 Clientid:01:52:54:00:90:2b:40}
	I0920 17:58:02.628746   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | domain kubernetes-upgrade-299508 has defined IP address 192.168.39.69 and MAC address 52:54:00:90:2b:40 in network mk-kubernetes-upgrade-299508
	I0920 17:58:02.628895   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) Calling .GetSSHPort
	I0920 17:58:02.629073   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) Calling .GetSSHKeyPath
	I0920 17:58:02.629236   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) Calling .GetSSHKeyPath
	I0920 17:58:02.629343   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) Calling .GetSSHUsername
	I0920 17:58:02.629470   52041 main.go:141] libmachine: Using SSH client type: native
	I0920 17:58:02.629652   52041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I0920 17:58:02.629674   52041 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 17:58:02.738494   52041 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726855082.708768009
	
	I0920 17:58:02.738521   52041 fix.go:216] guest clock: 1726855082.708768009
	I0920 17:58:02.738531   52041 fix.go:229] Guest: 2024-09-20 17:58:02.708768009 +0000 UTC Remote: 2024-09-20 17:58:02.625337332 +0000 UTC m=+23.978464898 (delta=83.430677ms)
	I0920 17:58:02.738558   52041 fix.go:200] guest clock delta is within tolerance: 83.430677ms
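	(Editor's note: the three lines above compare the guest clock, read over SSH with "date +%s.%N", against the host wall clock and accept the host when the difference is small. A rough sketch of that comparison follows, using the timestamps from the log; the one-second tolerance is an assumption for illustration, not a value taken from minikube.)

    package main

    import (
        "fmt"
        "math"
        "strconv"
        "strings"
        "time"
    )

    // guestClockDelta parses `date +%s.%N` output from the guest and returns how
    // far the guest clock is ahead of (or behind) the reference time.
    func guestClockDelta(guestOut string, ref time.Time) (time.Duration, error) {
        secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
        if err != nil {
            return 0, err
        }
        sec, frac := math.Modf(secs)
        guest := time.Unix(int64(sec), int64(frac*float64(time.Second)))
        return guest.Sub(ref), nil
    }

    func main() {
        // Guest and host timestamps taken from the log lines above.
        host := time.Unix(1726855082, 625337332)
        delta, err := guestClockDelta("1726855082.708768009", host)
        if err != nil {
            panic(err)
        }
        const tolerance = 1 * time.Second // assumed tolerance for this sketch
        fmt.Printf("delta=%v, within tolerance: %v\n", delta, delta.Abs() <= tolerance)
    }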
	I0920 17:58:02.738568   52041 start.go:83] releasing machines lock for "kubernetes-upgrade-299508", held for 23.971849071s
	I0920 17:58:02.738606   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) Calling .DriverName
	I0920 17:58:02.738878   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) Calling .GetIP
	I0920 17:58:02.741918   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | domain kubernetes-upgrade-299508 has defined MAC address 52:54:00:90:2b:40 in network mk-kubernetes-upgrade-299508
	I0920 17:58:02.742292   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:2b:40", ip: ""} in network mk-kubernetes-upgrade-299508: {Iface:virbr1 ExpiryTime:2024-09-20 18:57:54 +0000 UTC Type:0 Mac:52:54:00:90:2b:40 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:kubernetes-upgrade-299508 Clientid:01:52:54:00:90:2b:40}
	I0920 17:58:02.742322   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | domain kubernetes-upgrade-299508 has defined IP address 192.168.39.69 and MAC address 52:54:00:90:2b:40 in network mk-kubernetes-upgrade-299508
	I0920 17:58:02.742489   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) Calling .DriverName
	I0920 17:58:02.743014   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) Calling .DriverName
	I0920 17:58:02.743221   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) Calling .DriverName
	I0920 17:58:02.743320   52041 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 17:58:02.743361   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) Calling .GetSSHHostname
	I0920 17:58:02.743425   52041 ssh_runner.go:195] Run: cat /version.json
	I0920 17:58:02.743450   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) Calling .GetSSHHostname
	I0920 17:58:02.746174   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | domain kubernetes-upgrade-299508 has defined MAC address 52:54:00:90:2b:40 in network mk-kubernetes-upgrade-299508
	I0920 17:58:02.746458   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | domain kubernetes-upgrade-299508 has defined MAC address 52:54:00:90:2b:40 in network mk-kubernetes-upgrade-299508
	I0920 17:58:02.746520   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:2b:40", ip: ""} in network mk-kubernetes-upgrade-299508: {Iface:virbr1 ExpiryTime:2024-09-20 18:57:54 +0000 UTC Type:0 Mac:52:54:00:90:2b:40 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:kubernetes-upgrade-299508 Clientid:01:52:54:00:90:2b:40}
	I0920 17:58:02.746542   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | domain kubernetes-upgrade-299508 has defined IP address 192.168.39.69 and MAC address 52:54:00:90:2b:40 in network mk-kubernetes-upgrade-299508
	I0920 17:58:02.746660   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) Calling .GetSSHPort
	I0920 17:58:02.746844   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) Calling .GetSSHKeyPath
	I0920 17:58:02.746882   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:2b:40", ip: ""} in network mk-kubernetes-upgrade-299508: {Iface:virbr1 ExpiryTime:2024-09-20 18:57:54 +0000 UTC Type:0 Mac:52:54:00:90:2b:40 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:kubernetes-upgrade-299508 Clientid:01:52:54:00:90:2b:40}
	I0920 17:58:02.746916   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | domain kubernetes-upgrade-299508 has defined IP address 192.168.39.69 and MAC address 52:54:00:90:2b:40 in network mk-kubernetes-upgrade-299508
	I0920 17:58:02.747045   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) Calling .GetSSHUsername
	I0920 17:58:02.747089   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) Calling .GetSSHPort
	I0920 17:58:02.747158   52041 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/kubernetes-upgrade-299508/id_rsa Username:docker}
	I0920 17:58:02.747238   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) Calling .GetSSHKeyPath
	I0920 17:58:02.747375   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) Calling .GetSSHUsername
	I0920 17:58:02.747521   52041 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/kubernetes-upgrade-299508/id_rsa Username:docker}
	I0920 17:58:02.835732   52041 ssh_runner.go:195] Run: systemctl --version
	I0920 17:58:02.871732   52041 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 17:58:03.039988   52041 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 17:58:03.045788   52041 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 17:58:03.045878   52041 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 17:58:03.062844   52041 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 17:58:03.062877   52041 start.go:495] detecting cgroup driver to use...
	I0920 17:58:03.062958   52041 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 17:58:03.080770   52041 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 17:58:03.097851   52041 docker.go:217] disabling cri-docker service (if available) ...
	I0920 17:58:03.097926   52041 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 17:58:03.113880   52041 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 17:58:03.129017   52041 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 17:58:03.251049   52041 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 17:58:03.415940   52041 docker.go:233] disabling docker service ...
	I0920 17:58:03.416005   52041 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 17:58:03.432003   52041 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 17:58:03.445907   52041 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 17:58:03.567965   52041 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 17:58:03.687559   52041 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 17:58:03.703548   52041 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 17:58:03.723196   52041 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0920 17:58:03.723269   52041 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:58:03.734374   52041 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 17:58:03.734441   52041 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:58:03.745437   52041 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:58:03.756661   52041 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:58:03.767774   52041 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
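	(Editor's note: the sed invocations above rewrite keys that already exist in /etc/crio/crio.conf.d/02-crio.conf, pause_image and cgroup_manager, and then re-add conmon_cgroup after the cgroup_manager line. Below is a hedged sketch of the same rewrite-in-place idea in Go; the helper name and the in-memory string handling are illustrative, not minikube's implementation.)

    package main

    import (
        "fmt"
        "regexp"
    )

    // setKey replaces any line assigning the given key with key = "value",
    // mirroring sed -i 's|^.*key = .*$|key = "value"|' from the log.
    func setKey(conf, key, value string) string {
        re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
        return re.ReplaceAllString(conf, fmt.Sprintf("%s = %q", key, value))
    }

    func main() {
        // Starting contents are invented for the example.
        conf := "pause_image = \"some/old:image\"\ncgroup_manager = \"systemd\"\n"
        conf = setKey(conf, "pause_image", "registry.k8s.io/pause:3.2")
        conf = setKey(conf, "cgroup_manager", "cgroupfs")
        fmt.Print(conf)
    }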
	I0920 17:58:03.779224   52041 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 17:58:03.789538   52041 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 17:58:03.789617   52041 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 17:58:03.803341   52041 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 17:58:03.813697   52041 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 17:58:03.951564   52041 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 17:58:04.048773   52041 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 17:58:04.048850   52041 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 17:58:04.054203   52041 start.go:563] Will wait 60s for crictl version
	I0920 17:58:04.054272   52041 ssh_runner.go:195] Run: which crictl
	I0920 17:58:04.058318   52041 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 17:58:04.101991   52041 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 17:58:04.102080   52041 ssh_runner.go:195] Run: crio --version
	I0920 17:58:04.134903   52041 ssh_runner.go:195] Run: crio --version
	I0920 17:58:04.167832   52041 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0920 17:58:04.169358   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) Calling .GetIP
	I0920 17:58:04.173789   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | domain kubernetes-upgrade-299508 has defined MAC address 52:54:00:90:2b:40 in network mk-kubernetes-upgrade-299508
	I0920 17:58:04.174231   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:2b:40", ip: ""} in network mk-kubernetes-upgrade-299508: {Iface:virbr1 ExpiryTime:2024-09-20 18:57:54 +0000 UTC Type:0 Mac:52:54:00:90:2b:40 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:kubernetes-upgrade-299508 Clientid:01:52:54:00:90:2b:40}
	I0920 17:58:04.174272   52041 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | domain kubernetes-upgrade-299508 has defined IP address 192.168.39.69 and MAC address 52:54:00:90:2b:40 in network mk-kubernetes-upgrade-299508
	I0920 17:58:04.174562   52041 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0920 17:58:04.179147   52041 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
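	(Editor's note: the bash one-liner above is an idempotent edit: it drops any existing host.minikube.internal entry, appends a fresh "ip<TAB>hostname" line, and copies the result back over /etc/hosts. A small sketch of the same pattern in Go follows, working on a string rather than the real file; the function name is made up for illustration.)

    package main

    import (
        "fmt"
        "strings"
    )

    // upsertHost removes any line already ending in "\t"+hostname and appends a
    // fresh entry, mirroring the grep -v / echo pipeline in the log.
    func upsertHost(hosts, ip, hostname string) string {
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+hostname) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+hostname)
        return strings.Join(kept, "\n") + "\n"
    }

    func main() {
        hosts := "127.0.0.1\tlocalhost\n192.168.39.1\thost.minikube.internal\n"
        fmt.Print(upsertHost(hosts, "192.168.39.1", "host.minikube.internal"))
    }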
	I0920 17:58:04.193128   52041 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-299508 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.20.0 ClusterName:kubernetes-upgrade-299508 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.69 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimiz
ations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 17:58:04.193277   52041 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0920 17:58:04.193367   52041 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 17:58:04.230728   52041 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0920 17:58:04.230808   52041 ssh_runner.go:195] Run: which lz4
	I0920 17:58:04.234966   52041 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 17:58:04.239493   52041 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 17:58:04.239535   52041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0920 17:58:05.843095   52041 crio.go:462] duration metric: took 1.608168143s to copy over tarball
	I0920 17:58:05.843191   52041 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0920 17:58:08.644614   52041 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.801383193s)
	I0920 17:58:08.644643   52041 crio.go:469] duration metric: took 2.801515718s to extract the tarball
	I0920 17:58:08.644649   52041 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0920 17:58:08.687014   52041 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 17:58:08.738946   52041 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0920 17:58:08.738973   52041 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0920 17:58:08.739075   52041 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 17:58:08.739074   52041 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0920 17:58:08.739119   52041 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0920 17:58:08.739129   52041 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0920 17:58:08.739141   52041 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0920 17:58:08.739144   52041 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0920 17:58:08.739059   52041 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0920 17:58:08.739054   52041 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 17:58:08.740702   52041 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0920 17:58:08.740713   52041 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0920 17:58:08.740731   52041 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0920 17:58:08.740736   52041 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 17:58:08.740799   52041 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0920 17:58:08.740702   52041 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0920 17:58:08.740742   52041 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 17:58:08.740753   52041 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0920 17:58:09.012503   52041 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0920 17:58:09.056324   52041 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0920 17:58:09.056379   52041 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0920 17:58:09.056438   52041 ssh_runner.go:195] Run: which crictl
	I0920 17:58:09.061236   52041 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0920 17:58:09.085299   52041 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0920 17:58:09.088038   52041 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0920 17:58:09.094146   52041 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0920 17:58:09.098824   52041 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0920 17:58:09.100341   52041 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0920 17:58:09.103562   52041 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 17:58:09.147173   52041 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0920 17:58:09.235015   52041 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0920 17:58:09.235069   52041 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0920 17:58:09.235075   52041 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0920 17:58:09.235113   52041 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0920 17:58:09.235134   52041 ssh_runner.go:195] Run: which crictl
	I0920 17:58:09.235151   52041 ssh_runner.go:195] Run: which crictl
	I0920 17:58:09.278120   52041 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0920 17:58:09.278173   52041 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0920 17:58:09.278227   52041 ssh_runner.go:195] Run: which crictl
	I0920 17:58:09.278291   52041 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0920 17:58:09.282977   52041 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0920 17:58:09.283023   52041 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0920 17:58:09.283040   52041 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0920 17:58:09.283073   52041 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 17:58:09.283086   52041 ssh_runner.go:195] Run: which crictl
	I0920 17:58:09.283121   52041 ssh_runner.go:195] Run: which crictl
	I0920 17:58:09.304114   52041 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0920 17:58:09.304166   52041 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0920 17:58:09.304199   52041 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0920 17:58:09.304217   52041 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0920 17:58:09.304207   52041 ssh_runner.go:195] Run: which crictl
	I0920 17:58:09.304291   52041 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0920 17:58:09.344106   52041 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0920 17:58:09.344215   52041 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0920 17:58:09.344255   52041 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 17:58:09.406183   52041 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0920 17:58:09.406261   52041 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0920 17:58:09.444290   52041 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0920 17:58:09.444364   52041 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0920 17:58:09.447375   52041 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0920 17:58:09.465502   52041 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 17:58:09.541814   52041 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0920 17:58:09.541889   52041 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0920 17:58:09.588953   52041 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0920 17:58:09.588998   52041 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0920 17:58:09.606464   52041 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 17:58:09.606569   52041 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0920 17:58:09.659314   52041 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0920 17:58:09.659409   52041 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0920 17:58:09.743636   52041 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0920 17:58:09.743735   52041 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0920 17:58:09.746172   52041 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0920 17:58:09.746175   52041 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0920 17:58:09.752334   52041 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0920 17:58:10.043970   52041 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 17:58:10.186864   52041 cache_images.go:92] duration metric: took 1.447870117s to LoadCachedImages
	W0920 17:58:10.186967   52041 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0920 17:58:10.186988   52041 kubeadm.go:934] updating node { 192.168.39.69 8443 v1.20.0 crio true true} ...
	I0920 17:58:10.187117   52041 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-299508 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.69
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-299508 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 17:58:10.187192   52041 ssh_runner.go:195] Run: crio config
	I0920 17:58:10.235618   52041 cni.go:84] Creating CNI manager for ""
	I0920 17:58:10.235647   52041 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 17:58:10.235659   52041 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 17:58:10.235687   52041 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.69 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-299508 NodeName:kubernetes-upgrade-299508 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.69"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.69 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0920 17:58:10.235864   52041 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.69
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-299508"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.69
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.69"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 17:58:10.235938   52041 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0920 17:58:10.246648   52041 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 17:58:10.246721   52041 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 17:58:10.258099   52041 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (432 bytes)
	I0920 17:58:10.276663   52041 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 17:58:10.297767   52041 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0920 17:58:10.318971   52041 ssh_runner.go:195] Run: grep 192.168.39.69	control-plane.minikube.internal$ /etc/hosts
	I0920 17:58:10.323162   52041 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.69	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 17:58:10.336271   52041 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 17:58:10.472525   52041 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 17:58:10.489601   52041 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/kubernetes-upgrade-299508 for IP: 192.168.39.69
	I0920 17:58:10.489633   52041 certs.go:194] generating shared ca certs ...
	I0920 17:58:10.489653   52041 certs.go:226] acquiring lock for ca certs: {Name:mkc7ef6c737c6bdc3fdd9dcff8f57029c020d8f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:58:10.489862   52041 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key
	I0920 17:58:10.489921   52041 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key
	I0920 17:58:10.489936   52041 certs.go:256] generating profile certs ...
	I0920 17:58:10.490007   52041 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/kubernetes-upgrade-299508/client.key
	I0920 17:58:10.490024   52041 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/kubernetes-upgrade-299508/client.crt with IP's: []
	I0920 17:58:10.660496   52041 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/kubernetes-upgrade-299508/client.crt ...
	I0920 17:58:10.660528   52041 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/kubernetes-upgrade-299508/client.crt: {Name:mkb221391cc77499ca58ed3e6d9c409165c9068b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:58:10.660719   52041 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/kubernetes-upgrade-299508/client.key ...
	I0920 17:58:10.660749   52041 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/kubernetes-upgrade-299508/client.key: {Name:mk465c55c3b216b0bfc1808805a5e2705d9e8115 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:58:10.660851   52041 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/kubernetes-upgrade-299508/apiserver.key.ef1fe2e2
	I0920 17:58:10.660873   52041 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/kubernetes-upgrade-299508/apiserver.crt.ef1fe2e2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.69]
	I0920 17:58:11.174006   52041 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/kubernetes-upgrade-299508/apiserver.crt.ef1fe2e2 ...
	I0920 17:58:11.174059   52041 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/kubernetes-upgrade-299508/apiserver.crt.ef1fe2e2: {Name:mk255f1423c0486a959eb2b54e88ea1d244ad3e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:58:11.174273   52041 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/kubernetes-upgrade-299508/apiserver.key.ef1fe2e2 ...
	I0920 17:58:11.174294   52041 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/kubernetes-upgrade-299508/apiserver.key.ef1fe2e2: {Name:mk84fff2a3485f1db1ea057d57dcf0afb6b2b9d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:58:11.174394   52041 certs.go:381] copying /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/kubernetes-upgrade-299508/apiserver.crt.ef1fe2e2 -> /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/kubernetes-upgrade-299508/apiserver.crt
	I0920 17:58:11.174495   52041 certs.go:385] copying /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/kubernetes-upgrade-299508/apiserver.key.ef1fe2e2 -> /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/kubernetes-upgrade-299508/apiserver.key
	I0920 17:58:11.174572   52041 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/kubernetes-upgrade-299508/proxy-client.key
	I0920 17:58:11.174595   52041 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/kubernetes-upgrade-299508/proxy-client.crt with IP's: []
	I0920 17:58:11.417423   52041 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/kubernetes-upgrade-299508/proxy-client.crt ...
	I0920 17:58:11.417450   52041 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/kubernetes-upgrade-299508/proxy-client.crt: {Name:mk01f3480d40c2bc091a7ef9060f42b65af104aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:58:11.417642   52041 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/kubernetes-upgrade-299508/proxy-client.key ...
	I0920 17:58:11.417658   52041 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/kubernetes-upgrade-299508/proxy-client.key: {Name:mk2641098bb806e5cc16ec9388623da835c1d78a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:58:11.417880   52041 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973.pem (1338 bytes)
	W0920 17:58:11.417931   52041 certs.go:480] ignoring /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973_empty.pem, impossibly tiny 0 bytes
	I0920 17:58:11.417942   52041 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 17:58:11.417965   52041 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem (1082 bytes)
	I0920 17:58:11.417988   52041 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem (1123 bytes)
	I0920 17:58:11.418009   52041 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem (1675 bytes)
	I0920 17:58:11.418046   52041 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem (1708 bytes)
	I0920 17:58:11.418585   52041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 17:58:11.449216   52041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 17:58:11.475980   52041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 17:58:11.499427   52041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0920 17:58:11.539304   52041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/kubernetes-upgrade-299508/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0920 17:58:11.567199   52041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/kubernetes-upgrade-299508/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 17:58:11.592638   52041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/kubernetes-upgrade-299508/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 17:58:11.619330   52041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/kubernetes-upgrade-299508/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0920 17:58:11.650807   52041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem --> /usr/share/ca-certificates/159732.pem (1708 bytes)
	I0920 17:58:11.676643   52041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 17:58:11.704092   52041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973.pem --> /usr/share/ca-certificates/15973.pem (1338 bytes)
	I0920 17:58:11.731167   52041 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 17:58:11.749674   52041 ssh_runner.go:195] Run: openssl version
	I0920 17:58:11.756085   52041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/159732.pem && ln -fs /usr/share/ca-certificates/159732.pem /etc/ssl/certs/159732.pem"
	I0920 17:58:11.767668   52041 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/159732.pem
	I0920 17:58:11.772586   52041 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 17:04 /usr/share/ca-certificates/159732.pem
	I0920 17:58:11.772646   52041 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/159732.pem
	I0920 17:58:11.778659   52041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/159732.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 17:58:11.791073   52041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 17:58:11.803748   52041 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:58:11.809031   52041 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:58:11.809118   52041 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:58:11.815565   52041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 17:58:11.827648   52041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15973.pem && ln -fs /usr/share/ca-certificates/15973.pem /etc/ssl/certs/15973.pem"
	I0920 17:58:11.839412   52041 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15973.pem
	I0920 17:58:11.844907   52041 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 17:04 /usr/share/ca-certificates/15973.pem
	I0920 17:58:11.844978   52041 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15973.pem
	I0920 17:58:11.851188   52041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15973.pem /etc/ssl/certs/51391683.0"
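	For reference, the hash-named symlinks created above follow OpenSSL's subject-hash convention; the sketch below reproduces one of them by hand (cert path and example hash value are taken from the log lines above, not re-verified here):

	  CERT=/usr/share/ca-certificates/minikubeCA.pem
	  HASH=$(openssl x509 -hash -noout -in "$CERT")    # prints the subject hash, e.g. b5213941
	  sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"   # ".0" marks the first certificate with this hash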
	I0920 17:58:11.862282   52041 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 17:58:11.866693   52041 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0920 17:58:11.866753   52041 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-299508 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-299508 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.69 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 17:58:11.866846   52041 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 17:58:11.866901   52041 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 17:58:11.910394   52041 cri.go:89] found id: ""
	I0920 17:58:11.910472   52041 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 17:58:11.921090   52041 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 17:58:11.931411   52041 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 17:58:11.941781   52041 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 17:58:11.941810   52041 kubeadm.go:157] found existing configuration files:
	
	I0920 17:58:11.941872   52041 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 17:58:11.951727   52041 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 17:58:11.951798   52041 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 17:58:11.962582   52041 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 17:58:11.972441   52041 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 17:58:11.972500   52041 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 17:58:11.982951   52041 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 17:58:11.992749   52041 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 17:58:11.992817   52041 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 17:58:12.002654   52041 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 17:58:12.012230   52041 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 17:58:12.012296   52041 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
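	The stale-config check above can be summarised as one loop: each kubeconfig is kept only if it already references the expected control-plane endpoint, otherwise it is removed so the next kubeadm init rewrites it (a sketch assuming the same file list and endpoint shown in the log):

	  for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	    if ! sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f"; then
	      sudo rm -f "/etc/kubernetes/$f"   # file missing or stale -> regenerated by kubeadm init
	    fi
	  done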
	I0920 17:58:12.022027   52041 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 17:58:12.308728   52041 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 18:00:09.972302   52041 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0920 18:00:09.972415   52041 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0920 18:00:09.974693   52041 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0920 18:00:09.974793   52041 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 18:00:09.974905   52041 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 18:00:09.975068   52041 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 18:00:09.975222   52041 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0920 18:00:09.975320   52041 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 18:00:10.045779   52041 out.go:235]   - Generating certificates and keys ...
	I0920 18:00:10.045929   52041 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 18:00:10.046006   52041 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 18:00:10.046099   52041 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0920 18:00:10.046180   52041 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0920 18:00:10.046289   52041 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0920 18:00:10.046396   52041 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0920 18:00:10.046487   52041 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0920 18:00:10.046665   52041 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-299508 localhost] and IPs [192.168.39.69 127.0.0.1 ::1]
	I0920 18:00:10.046736   52041 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0920 18:00:10.046931   52041 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-299508 localhost] and IPs [192.168.39.69 127.0.0.1 ::1]
	I0920 18:00:10.047050   52041 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0920 18:00:10.047150   52041 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0920 18:00:10.047221   52041 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0920 18:00:10.047330   52041 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 18:00:10.047411   52041 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 18:00:10.047489   52041 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 18:00:10.047579   52041 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 18:00:10.047652   52041 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 18:00:10.047788   52041 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 18:00:10.047915   52041 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 18:00:10.047978   52041 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 18:00:10.048084   52041 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 18:00:10.151538   52041 out.go:235]   - Booting up control plane ...
	I0920 18:00:10.151701   52041 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 18:00:10.151795   52041 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 18:00:10.151889   52041 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 18:00:10.151992   52041 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 18:00:10.152250   52041 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0920 18:00:10.152342   52041 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0920 18:00:10.152439   52041 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:00:10.152688   52041 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:00:10.152789   52041 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:00:10.153052   52041 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:00:10.153139   52041 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:00:10.153397   52041 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:00:10.153498   52041 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:00:10.153807   52041 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:00:10.153942   52041 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:00:10.154165   52041 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:00:10.154174   52041 kubeadm.go:310] 
	I0920 18:00:10.154216   52041 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0920 18:00:10.154299   52041 kubeadm.go:310] 		timed out waiting for the condition
	I0920 18:00:10.154315   52041 kubeadm.go:310] 
	I0920 18:00:10.154364   52041 kubeadm.go:310] 	This error is likely caused by:
	I0920 18:00:10.154403   52041 kubeadm.go:310] 		- The kubelet is not running
	I0920 18:00:10.154517   52041 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0920 18:00:10.154527   52041 kubeadm.go:310] 
	I0920 18:00:10.154655   52041 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0920 18:00:10.154711   52041 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0920 18:00:10.154778   52041 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0920 18:00:10.154802   52041 kubeadm.go:310] 
	I0920 18:00:10.154996   52041 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0920 18:00:10.155124   52041 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0920 18:00:10.155144   52041 kubeadm.go:310] 
	I0920 18:00:10.155268   52041 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0920 18:00:10.155406   52041 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0920 18:00:10.155531   52041 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0920 18:00:10.155657   52041 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0920 18:00:10.155686   52041 kubeadm.go:310] 
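	The troubleshooting steps kubeadm prints above amount to the following commands on the node (socket path copied from the log; CONTAINERID is a placeholder for whatever the ps listing returns):

	  sudo systemctl status kubelet
	  sudo journalctl -xeu kubelet
	  sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	  sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID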
	W0920 18:00:10.155840   52041 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-299508 localhost] and IPs [192.168.39.69 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-299508 localhost] and IPs [192.168.39.69 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-299508 localhost] and IPs [192.168.39.69 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-299508 localhost] and IPs [192.168.39.69 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0920 18:00:10.155884   52041 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0920 18:00:11.198444   52041 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.042511913s)
	I0920 18:00:11.198551   52041 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:00:11.219598   52041 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 18:00:11.233596   52041 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 18:00:11.233620   52041 kubeadm.go:157] found existing configuration files:
	
	I0920 18:00:11.233674   52041 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 18:00:11.245915   52041 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 18:00:11.246000   52041 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 18:00:11.260018   52041 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 18:00:11.272978   52041 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 18:00:11.273049   52041 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 18:00:11.284076   52041 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 18:00:11.297141   52041 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 18:00:11.297219   52041 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 18:00:11.310331   52041 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 18:00:11.320538   52041 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 18:00:11.320605   52041 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 18:00:11.331347   52041 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 18:00:11.411204   52041 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0920 18:00:11.411417   52041 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 18:00:11.603304   52041 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 18:00:11.603448   52041 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 18:00:11.603609   52041 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0920 18:00:11.821576   52041 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 18:00:11.823908   52041 out.go:235]   - Generating certificates and keys ...
	I0920 18:00:11.824008   52041 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 18:00:11.824090   52041 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 18:00:11.824228   52041 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0920 18:00:11.824327   52041 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0920 18:00:11.824442   52041 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0920 18:00:11.824534   52041 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0920 18:00:11.824629   52041 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0920 18:00:11.824761   52041 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0920 18:00:11.825414   52041 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0920 18:00:11.826052   52041 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0920 18:00:11.826112   52041 kubeadm.go:310] [certs] Using the existing "sa" key
	I0920 18:00:11.826197   52041 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 18:00:12.029670   52041 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 18:00:12.540826   52041 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 18:00:12.683551   52041 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 18:00:12.931371   52041 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 18:00:12.951376   52041 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 18:00:12.951591   52041 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 18:00:12.951650   52041 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 18:00:13.099282   52041 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 18:00:13.101134   52041 out.go:235]   - Booting up control plane ...
	I0920 18:00:13.101263   52041 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 18:00:13.109859   52041 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 18:00:13.111065   52041 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 18:00:13.111886   52041 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 18:00:13.114428   52041 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0920 18:00:53.117306   52041 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0920 18:00:53.117772   52041 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:00:53.118112   52041 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:00:58.119127   52041 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:00:58.119453   52041 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:01:08.119876   52041 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:01:08.120127   52041 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:01:28.119414   52041 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:01:28.119721   52041 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:02:08.119671   52041 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:02:08.119951   52041 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:02:08.119984   52041 kubeadm.go:310] 
	I0920 18:02:08.120056   52041 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0920 18:02:08.120120   52041 kubeadm.go:310] 		timed out waiting for the condition
	I0920 18:02:08.120130   52041 kubeadm.go:310] 
	I0920 18:02:08.120180   52041 kubeadm.go:310] 	This error is likely caused by:
	I0920 18:02:08.120253   52041 kubeadm.go:310] 		- The kubelet is not running
	I0920 18:02:08.120410   52041 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0920 18:02:08.120431   52041 kubeadm.go:310] 
	I0920 18:02:08.120598   52041 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0920 18:02:08.120743   52041 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0920 18:02:08.120804   52041 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0920 18:02:08.120813   52041 kubeadm.go:310] 
	I0920 18:02:08.120954   52041 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0920 18:02:08.121082   52041 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0920 18:02:08.121096   52041 kubeadm.go:310] 
	I0920 18:02:08.121235   52041 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0920 18:02:08.121334   52041 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0920 18:02:08.121451   52041 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0920 18:02:08.121551   52041 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0920 18:02:08.121564   52041 kubeadm.go:310] 
	I0920 18:02:08.121770   52041 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 18:02:08.121932   52041 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0920 18:02:08.122090   52041 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
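	The repeated [kubelet-check] failures above come from kubeadm probing the kubelet's healthz endpoint; the same probe can be run by hand (port 10248 taken from the log; a healthy kubelet answers "ok", while "connection refused" means no kubelet is listening at all):

	  curl -sSL http://localhost:10248/healthz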
	I0920 18:02:08.122097   52041 kubeadm.go:394] duration metric: took 3m56.255346662s to StartCluster
	I0920 18:02:08.122141   52041 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:02:08.122203   52041 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:02:08.171686   52041 cri.go:89] found id: ""
	I0920 18:02:08.171715   52041 logs.go:276] 0 containers: []
	W0920 18:02:08.171726   52041 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:02:08.171734   52041 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:02:08.171798   52041 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:02:08.207713   52041 cri.go:89] found id: ""
	I0920 18:02:08.207740   52041 logs.go:276] 0 containers: []
	W0920 18:02:08.207747   52041 logs.go:278] No container was found matching "etcd"
	I0920 18:02:08.207753   52041 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:02:08.207803   52041 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:02:08.243654   52041 cri.go:89] found id: ""
	I0920 18:02:08.243686   52041 logs.go:276] 0 containers: []
	W0920 18:02:08.243698   52041 logs.go:278] No container was found matching "coredns"
	I0920 18:02:08.243706   52041 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:02:08.243763   52041 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:02:08.278771   52041 cri.go:89] found id: ""
	I0920 18:02:08.278801   52041 logs.go:276] 0 containers: []
	W0920 18:02:08.278809   52041 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:02:08.278815   52041 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:02:08.278864   52041 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:02:08.322269   52041 cri.go:89] found id: ""
	I0920 18:02:08.322299   52041 logs.go:276] 0 containers: []
	W0920 18:02:08.322310   52041 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:02:08.322318   52041 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:02:08.322382   52041 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:02:08.358919   52041 cri.go:89] found id: ""
	I0920 18:02:08.358956   52041 logs.go:276] 0 containers: []
	W0920 18:02:08.358969   52041 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:02:08.358976   52041 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:02:08.359042   52041 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:02:08.395638   52041 cri.go:89] found id: ""
	I0920 18:02:08.395667   52041 logs.go:276] 0 containers: []
	W0920 18:02:08.395675   52041 logs.go:278] No container was found matching "kindnet"
	I0920 18:02:08.395685   52041 logs.go:123] Gathering logs for kubelet ...
	I0920 18:02:08.395696   52041 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:02:08.449528   52041 logs.go:123] Gathering logs for dmesg ...
	I0920 18:02:08.449579   52041 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:02:08.463738   52041 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:02:08.463765   52041 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:02:08.595341   52041 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:02:08.595364   52041 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:02:08.595377   52041 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:02:08.705987   52041 logs.go:123] Gathering logs for container status ...
	I0920 18:02:08.706025   52041 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0920 18:02:08.745784   52041 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0920 18:02:08.745875   52041 out.go:270] * 
	* 
	W0920 18:02:08.745950   52041 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0920 18:02:08.745971   52041 out.go:270] * 
	* 
	W0920 18:02:08.746943   52041 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 18:02:08.749701   52041 out.go:201] 
	W0920 18:02:08.750878   52041 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0920 18:02:08.750939   52041 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0920 18:02:08.750975   52041 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0920 18:02:08.752263   52041 out.go:201] 

                                                
                                                
** /stderr **
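The stderr block above is minikube's K8S_KUBELET_NOT_RUNNING failure path: kubeadm init for v1.20.0 gives up after the kubelet never answers its health probe on 127.0.0.1:10248, and the output itself points at 'journalctl -xeu kubelet' and at retrying with the systemd cgroup driver. A minimal troubleshooting sketch along those lines, assuming shell access to the guest via 'minikube ssh' (everything except the retry flag is taken from the advice printed above):

    # inside the guest (minikube ssh -p kubernetes-upgrade-299508):
    systemctl status kubelet                  # is the unit running at all?
    journalctl -xeu kubelet                   # kubelet start-up errors, e.g. a cgroup-driver mismatch
    curl -sSL http://localhost:10248/healthz  # the same probe kubeadm was polling
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause

    # retry from the host with the cgroup driver the suggestion names:
    out/minikube-linux-amd64 start -p kubernetes-upgrade-299508 --memory=2200 \
      --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio \
      --extra-config=kubelet.cgroup-driver=systemd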
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-299508 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-299508
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-299508: (1.552334601s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-299508 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-299508 status --format={{.Host}}: exit status 7 (63.355472ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
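The status check above uses minikube's Go-template output instead of the human-readable table: --format={{.Host}} prints only the host state ("Stopped"), and a non-zero exit for a stopped profile is expected at this point, which is why the harness marks it "(may be ok)". A small sketch of the same idea, assuming the other documented status fields (Kubelet, APIServer, Kubeconfig) are available in this minikube version:

    # query several components at once; the output is whatever the Go template renders
    out/minikube-linux-amd64 -p kubernetes-upgrade-299508 status \
      --format='host={{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}} kubeconfig={{.Kubeconfig}}'
    echo "exit=$?"   # 0 when everything is running; 7 here because the profile is stopped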
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-299508 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-299508 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (40.708191556s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-299508 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-299508 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-299508 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (100.729594ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-299508] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19672
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19672-8777/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-8777/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-299508
	    minikube start -p kubernetes-upgrade-299508 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2995082 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-299508 --kubernetes-version=v1.31.1
	    

                                                
                                                
** /stderr **
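The downgrade is refused because the existing cluster state was written by v1.31.1; the only supported paths are the three listed above. The harness takes the third one and simply restarts at v1.31.1 (next step below). If the goal really were v1.20.0, suggestion 1 amounts to recreating the profile; a sketch, with jq added only as an illustrative way to read the JSON the test already requests via 'kubectl version --output=json' (jq is an assumption, not part of the harness):

    # confirm what the cluster is actually running before deciding
    kubectl --context kubernetes-upgrade-299508 version --output=json | jq -r '.serverVersion.gitVersion'
    # supported route back to v1.20.0: delete and recreate the profile
    out/minikube-linux-amd64 delete -p kubernetes-upgrade-299508
    out/minikube-linux-amd64 start -p kubernetes-upgrade-299508 --kubernetes-version=v1.20.0 \
      --driver=kvm2 --container-runtime=crio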
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-299508 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-299508 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (27.553642363s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-09-20 18:03:18.869765933 +0000 UTC m=+4780.513847482
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-299508 -n kubernetes-upgrade-299508
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-299508 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-299508 logs -n 25: (1.729882031s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-956160          | force-systemd-flag-956160 | jenkins | v1.34.0 | 20 Sep 24 17:57 UTC | 20 Sep 24 17:59 UTC |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p offline-crio-312889                | offline-crio-312889       | jenkins | v1.34.0 | 20 Sep 24 17:58 UTC | 20 Sep 24 17:58 UTC |
	| start   | -p force-systemd-env-030548           | force-systemd-env-030548  | jenkins | v1.34.0 | 20 Sep 24 17:58 UTC | 20 Sep 24 17:59 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-956160 ssh cat     | force-systemd-flag-956160 | jenkins | v1.34.0 | 20 Sep 24 17:59 UTC | 20 Sep 24 17:59 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-956160          | force-systemd-flag-956160 | jenkins | v1.34.0 | 20 Sep 24 17:59 UTC | 20 Sep 24 17:59 UTC |
	| start   | -p cert-expiration-452691             | cert-expiration-452691    | jenkins | v1.34.0 | 20 Sep 24 17:59 UTC | 20 Sep 24 18:00 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-299391 stop           | minikube                  | jenkins | v1.26.0 | 20 Sep 24 17:59 UTC | 20 Sep 24 17:59 UTC |
	| start   | -p stopped-upgrade-299391             | stopped-upgrade-299391    | jenkins | v1.34.0 | 20 Sep 24 17:59 UTC | 20 Sep 24 18:00 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-030548           | force-systemd-env-030548  | jenkins | v1.34.0 | 20 Sep 24 17:59 UTC | 20 Sep 24 17:59 UTC |
	| start   | -p cert-options-815898                | cert-options-815898       | jenkins | v1.34.0 | 20 Sep 24 17:59 UTC | 20 Sep 24 18:01 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-299391             | stopped-upgrade-299391    | jenkins | v1.34.0 | 20 Sep 24 18:00 UTC | 20 Sep 24 18:00 UTC |
	| start   | -p running-upgrade-267014             | minikube                  | jenkins | v1.26.0 | 20 Sep 24 18:00 UTC | 20 Sep 24 18:01 UTC |
	|         | --memory=2200 --vm-driver=kvm2        |                           |         |         |                     |                     |
	|         |  --container-runtime=crio             |                           |         |         |                     |                     |
	| ssh     | cert-options-815898 ssh               | cert-options-815898       | jenkins | v1.34.0 | 20 Sep 24 18:01 UTC | 20 Sep 24 18:01 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-815898 -- sudo        | cert-options-815898       | jenkins | v1.34.0 | 20 Sep 24 18:01 UTC | 20 Sep 24 18:01 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-815898                | cert-options-815898       | jenkins | v1.34.0 | 20 Sep 24 18:01 UTC | 20 Sep 24 18:01 UTC |
	| start   | -p pause-421146 --memory=2048         | pause-421146              | jenkins | v1.34.0 | 20 Sep 24 18:01 UTC | 20 Sep 24 18:02 UTC |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2              |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p running-upgrade-267014             | running-upgrade-267014    | jenkins | v1.34.0 | 20 Sep 24 18:01 UTC | 20 Sep 24 18:02 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-299508          | kubernetes-upgrade-299508 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	| start   | -p kubernetes-upgrade-299508          | kubernetes-upgrade-299508 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p pause-421146                       | pause-421146              | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-299508          | kubernetes-upgrade-299508 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-299508          | kubernetes-upgrade-299508 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:03 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-267014             | running-upgrade-267014    | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	| start   | -p NoKubernetes-246858                | NoKubernetes-246858       | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC |                     |
	|         | --no-kubernetes                       |                           |         |         |                     |                     |
	|         | --kubernetes-version=1.20             |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-246858                | NoKubernetes-246858       | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 18:02:53
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 18:02:53.220312   58507 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:02:53.220787   58507 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:02:53.220793   58507 out.go:358] Setting ErrFile to fd 2...
	I0920 18:02:53.220799   58507 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:02:53.221198   58507 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-8777/.minikube/bin
	I0920 18:02:53.222250   58507 out.go:352] Setting JSON to false
	I0920 18:02:53.223376   58507 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6316,"bootTime":1726849057,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 18:02:53.223502   58507 start.go:139] virtualization: kvm guest
	I0920 18:02:53.225723   58507 out.go:177] * [NoKubernetes-246858] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 18:02:53.227514   58507 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 18:02:53.227542   58507 notify.go:220] Checking for updates...
	I0920 18:02:53.230262   58507 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 18:02:53.231501   58507 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19672-8777/kubeconfig
	I0920 18:02:53.232661   58507 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-8777/.minikube
	I0920 18:02:53.233985   58507 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 18:02:53.235229   58507 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 18:02:53.237021   58507 config.go:182] Loaded profile config "cert-expiration-452691": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:02:53.237124   58507 config.go:182] Loaded profile config "kubernetes-upgrade-299508": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:02:53.237228   58507 config.go:182] Loaded profile config "pause-421146": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:02:53.237325   58507 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 18:02:53.283275   58507 out.go:177] * Using the kvm2 driver based on user configuration
	I0920 18:02:53.284390   58507 start.go:297] selected driver: kvm2
	I0920 18:02:53.284400   58507 start.go:901] validating driver "kvm2" against <nil>
	I0920 18:02:53.284426   58507 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 18:02:53.284977   58507 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 18:02:53.285086   58507 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19672-8777/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 18:02:53.307483   58507 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 18:02:53.307535   58507 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 18:02:53.308264   58507 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0920 18:02:53.308470   58507 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0920 18:02:53.308498   58507 cni.go:84] Creating CNI manager for ""
	I0920 18:02:53.308562   58507 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 18:02:53.308571   58507 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0920 18:02:53.308638   58507 start.go:340] cluster config:
	{Name:NoKubernetes-246858 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:NoKubernetes-246858 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:02:53.308772   58507 iso.go:125] acquiring lock: {Name:mkba95ef0488e46f622333e9f317f43def93040b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 18:02:53.324698   58507 out.go:177] * Starting "NoKubernetes-246858" primary control-plane node in "NoKubernetes-246858" cluster
	I0920 18:02:51.567316   58367 machine.go:93] provisionDockerMachine start ...
	I0920 18:02:51.567365   58367 main.go:141] libmachine: (kubernetes-upgrade-299508) Calling .DriverName
	I0920 18:02:51.567757   58367 main.go:141] libmachine: (kubernetes-upgrade-299508) Calling .GetSSHHostname
	I0920 18:02:51.571503   58367 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | domain kubernetes-upgrade-299508 has defined MAC address 52:54:00:90:2b:40 in network mk-kubernetes-upgrade-299508
	I0920 18:02:51.571919   58367 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:2b:40", ip: ""} in network mk-kubernetes-upgrade-299508: {Iface:virbr1 ExpiryTime:2024-09-20 19:02:22 +0000 UTC Type:0 Mac:52:54:00:90:2b:40 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:kubernetes-upgrade-299508 Clientid:01:52:54:00:90:2b:40}
	I0920 18:02:51.571949   58367 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | domain kubernetes-upgrade-299508 has defined IP address 192.168.39.69 and MAC address 52:54:00:90:2b:40 in network mk-kubernetes-upgrade-299508
	I0920 18:02:51.572249   58367 main.go:141] libmachine: (kubernetes-upgrade-299508) Calling .GetSSHPort
	I0920 18:02:51.578187   58367 main.go:141] libmachine: (kubernetes-upgrade-299508) Calling .GetSSHKeyPath
	I0920 18:02:51.578386   58367 main.go:141] libmachine: (kubernetes-upgrade-299508) Calling .GetSSHKeyPath
	I0920 18:02:51.578567   58367 main.go:141] libmachine: (kubernetes-upgrade-299508) Calling .GetSSHUsername
	I0920 18:02:51.578789   58367 main.go:141] libmachine: Using SSH client type: native
	I0920 18:02:51.579069   58367 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I0920 18:02:51.579082   58367 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 18:02:51.708082   58367 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-299508
	
	I0920 18:02:51.708114   58367 main.go:141] libmachine: (kubernetes-upgrade-299508) Calling .GetMachineName
	I0920 18:02:51.708414   58367 buildroot.go:166] provisioning hostname "kubernetes-upgrade-299508"
	I0920 18:02:51.708441   58367 main.go:141] libmachine: (kubernetes-upgrade-299508) Calling .GetMachineName
	I0920 18:02:51.708670   58367 main.go:141] libmachine: (kubernetes-upgrade-299508) Calling .GetSSHHostname
	I0920 18:02:51.712041   58367 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | domain kubernetes-upgrade-299508 has defined MAC address 52:54:00:90:2b:40 in network mk-kubernetes-upgrade-299508
	I0920 18:02:51.712500   58367 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:2b:40", ip: ""} in network mk-kubernetes-upgrade-299508: {Iface:virbr1 ExpiryTime:2024-09-20 19:02:22 +0000 UTC Type:0 Mac:52:54:00:90:2b:40 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:kubernetes-upgrade-299508 Clientid:01:52:54:00:90:2b:40}
	I0920 18:02:51.712573   58367 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | domain kubernetes-upgrade-299508 has defined IP address 192.168.39.69 and MAC address 52:54:00:90:2b:40 in network mk-kubernetes-upgrade-299508
	I0920 18:02:51.713047   58367 main.go:141] libmachine: (kubernetes-upgrade-299508) Calling .GetSSHPort
	I0920 18:02:51.713308   58367 main.go:141] libmachine: (kubernetes-upgrade-299508) Calling .GetSSHKeyPath
	I0920 18:02:51.713483   58367 main.go:141] libmachine: (kubernetes-upgrade-299508) Calling .GetSSHKeyPath
	I0920 18:02:51.713685   58367 main.go:141] libmachine: (kubernetes-upgrade-299508) Calling .GetSSHUsername
	I0920 18:02:51.713933   58367 main.go:141] libmachine: Using SSH client type: native
	I0920 18:02:51.714148   58367 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I0920 18:02:51.714166   58367 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-299508 && echo "kubernetes-upgrade-299508" | sudo tee /etc/hostname
	I0920 18:02:51.862435   58367 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-299508
	
	I0920 18:02:51.862468   58367 main.go:141] libmachine: (kubernetes-upgrade-299508) Calling .GetSSHHostname
	I0920 18:02:51.866302   58367 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | domain kubernetes-upgrade-299508 has defined MAC address 52:54:00:90:2b:40 in network mk-kubernetes-upgrade-299508
	I0920 18:02:51.866910   58367 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:2b:40", ip: ""} in network mk-kubernetes-upgrade-299508: {Iface:virbr1 ExpiryTime:2024-09-20 19:02:22 +0000 UTC Type:0 Mac:52:54:00:90:2b:40 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:kubernetes-upgrade-299508 Clientid:01:52:54:00:90:2b:40}
	I0920 18:02:51.866945   58367 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | domain kubernetes-upgrade-299508 has defined IP address 192.168.39.69 and MAC address 52:54:00:90:2b:40 in network mk-kubernetes-upgrade-299508
	I0920 18:02:51.867083   58367 main.go:141] libmachine: (kubernetes-upgrade-299508) Calling .GetSSHPort
	I0920 18:02:51.867282   58367 main.go:141] libmachine: (kubernetes-upgrade-299508) Calling .GetSSHKeyPath
	I0920 18:02:51.867447   58367 main.go:141] libmachine: (kubernetes-upgrade-299508) Calling .GetSSHKeyPath
	I0920 18:02:51.867566   58367 main.go:141] libmachine: (kubernetes-upgrade-299508) Calling .GetSSHUsername
	I0920 18:02:51.867738   58367 main.go:141] libmachine: Using SSH client type: native
	I0920 18:02:51.867974   58367 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I0920 18:02:51.868000   58367 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-299508' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-299508/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-299508' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 18:02:52.003358   58367 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 18:02:52.003393   58367 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19672-8777/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-8777/.minikube}
	I0920 18:02:52.003441   58367 buildroot.go:174] setting up certificates
	I0920 18:02:52.003453   58367 provision.go:84] configureAuth start
	I0920 18:02:52.003469   58367 main.go:141] libmachine: (kubernetes-upgrade-299508) Calling .GetMachineName
	I0920 18:02:52.003788   58367 main.go:141] libmachine: (kubernetes-upgrade-299508) Calling .GetIP
	I0920 18:02:52.007032   58367 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | domain kubernetes-upgrade-299508 has defined MAC address 52:54:00:90:2b:40 in network mk-kubernetes-upgrade-299508
	I0920 18:02:52.198957   58367 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:2b:40", ip: ""} in network mk-kubernetes-upgrade-299508: {Iface:virbr1 ExpiryTime:2024-09-20 19:02:22 +0000 UTC Type:0 Mac:52:54:00:90:2b:40 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:kubernetes-upgrade-299508 Clientid:01:52:54:00:90:2b:40}
	I0920 18:02:52.198992   58367 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | domain kubernetes-upgrade-299508 has defined IP address 192.168.39.69 and MAC address 52:54:00:90:2b:40 in network mk-kubernetes-upgrade-299508
	I0920 18:02:52.199270   58367 main.go:141] libmachine: (kubernetes-upgrade-299508) Calling .GetSSHHostname
	I0920 18:02:52.762212   58367 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | domain kubernetes-upgrade-299508 has defined MAC address 52:54:00:90:2b:40 in network mk-kubernetes-upgrade-299508
	I0920 18:02:52.762655   58367 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:2b:40", ip: ""} in network mk-kubernetes-upgrade-299508: {Iface:virbr1 ExpiryTime:2024-09-20 19:02:22 +0000 UTC Type:0 Mac:52:54:00:90:2b:40 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:kubernetes-upgrade-299508 Clientid:01:52:54:00:90:2b:40}
	I0920 18:02:52.762687   58367 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | domain kubernetes-upgrade-299508 has defined IP address 192.168.39.69 and MAC address 52:54:00:90:2b:40 in network mk-kubernetes-upgrade-299508
	I0920 18:02:52.762876   58367 provision.go:143] copyHostCerts
	I0920 18:02:52.762957   58367 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem, removing ...
	I0920 18:02:52.762977   58367 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem
	I0920 18:02:52.763038   58367 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem (1082 bytes)
	I0920 18:02:52.763179   58367 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem, removing ...
	I0920 18:02:52.763192   58367 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem
	I0920 18:02:52.763222   58367 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem (1123 bytes)
	I0920 18:02:52.763307   58367 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem, removing ...
	I0920 18:02:52.763318   58367 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem
	I0920 18:02:52.763350   58367 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem (1675 bytes)
	I0920 18:02:52.763427   58367 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-299508 san=[127.0.0.1 192.168.39.69 kubernetes-upgrade-299508 localhost minikube]
	I0920 18:02:52.890423   58367 provision.go:177] copyRemoteCerts
	I0920 18:02:52.890509   58367 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 18:02:52.890543   58367 main.go:141] libmachine: (kubernetes-upgrade-299508) Calling .GetSSHHostname
	I0920 18:02:52.893912   58367 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | domain kubernetes-upgrade-299508 has defined MAC address 52:54:00:90:2b:40 in network mk-kubernetes-upgrade-299508
	I0920 18:02:52.894498   58367 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:2b:40", ip: ""} in network mk-kubernetes-upgrade-299508: {Iface:virbr1 ExpiryTime:2024-09-20 19:02:22 +0000 UTC Type:0 Mac:52:54:00:90:2b:40 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:kubernetes-upgrade-299508 Clientid:01:52:54:00:90:2b:40}
	I0920 18:02:52.894534   58367 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | domain kubernetes-upgrade-299508 has defined IP address 192.168.39.69 and MAC address 52:54:00:90:2b:40 in network mk-kubernetes-upgrade-299508
	I0920 18:02:52.894876   58367 main.go:141] libmachine: (kubernetes-upgrade-299508) Calling .GetSSHPort
	I0920 18:02:52.895043   58367 main.go:141] libmachine: (kubernetes-upgrade-299508) Calling .GetSSHKeyPath
	I0920 18:02:52.895201   58367 main.go:141] libmachine: (kubernetes-upgrade-299508) Calling .GetSSHUsername
	I0920 18:02:52.895372   58367 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/kubernetes-upgrade-299508/id_rsa Username:docker}
	I0920 18:02:52.985714   58367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0920 18:02:53.025034   58367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0920 18:02:53.060581   58367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 18:02:53.094351   58367 provision.go:87] duration metric: took 1.090883469s to configureAuth
	I0920 18:02:53.094377   58367 buildroot.go:189] setting minikube options for container-runtime
	I0920 18:02:53.094594   58367 config.go:182] Loaded profile config "kubernetes-upgrade-299508": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:02:53.094689   58367 main.go:141] libmachine: (kubernetes-upgrade-299508) Calling .GetSSHHostname
	I0920 18:02:53.098383   58367 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | domain kubernetes-upgrade-299508 has defined MAC address 52:54:00:90:2b:40 in network mk-kubernetes-upgrade-299508
	I0920 18:02:53.098852   58367 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:2b:40", ip: ""} in network mk-kubernetes-upgrade-299508: {Iface:virbr1 ExpiryTime:2024-09-20 19:02:22 +0000 UTC Type:0 Mac:52:54:00:90:2b:40 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:kubernetes-upgrade-299508 Clientid:01:52:54:00:90:2b:40}
	I0920 18:02:53.098884   58367 main.go:141] libmachine: (kubernetes-upgrade-299508) DBG | domain kubernetes-upgrade-299508 has defined IP address 192.168.39.69 and MAC address 52:54:00:90:2b:40 in network mk-kubernetes-upgrade-299508
	I0920 18:02:53.099223   58367 main.go:141] libmachine: (kubernetes-upgrade-299508) Calling .GetSSHPort
	I0920 18:02:53.099473   58367 main.go:141] libmachine: (kubernetes-upgrade-299508) Calling .GetSSHKeyPath
	I0920 18:02:53.099701   58367 main.go:141] libmachine: (kubernetes-upgrade-299508) Calling .GetSSHKeyPath
	I0920 18:02:53.099876   58367 main.go:141] libmachine: (kubernetes-upgrade-299508) Calling .GetSSHUsername
	I0920 18:02:53.100048   58367 main.go:141] libmachine: Using SSH client type: native
	I0920 18:02:53.100287   58367 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I0920 18:02:53.100316   58367 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 18:02:51.950133   58226 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:02:51.983241   58226 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/pause-421146 for IP: 192.168.50.200
	I0920 18:02:51.983263   58226 certs.go:194] generating shared ca certs ...
	I0920 18:02:51.983283   58226 certs.go:226] acquiring lock for ca certs: {Name:mkc7ef6c737c6bdc3fdd9dcff8f57029c020d8f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:02:51.983473   58226 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key
	I0920 18:02:51.983536   58226 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key
	I0920 18:02:51.983550   58226 certs.go:256] generating profile certs ...
	I0920 18:02:51.983656   58226 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/pause-421146/client.key
	I0920 18:02:51.983744   58226 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/pause-421146/apiserver.key.d69ea9c8
	I0920 18:02:51.983805   58226 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/pause-421146/proxy-client.key
	I0920 18:02:51.983954   58226 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973.pem (1338 bytes)
	W0920 18:02:51.983992   58226 certs.go:480] ignoring /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973_empty.pem, impossibly tiny 0 bytes
	I0920 18:02:51.984003   58226 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 18:02:51.984043   58226 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem (1082 bytes)
	I0920 18:02:51.984077   58226 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem (1123 bytes)
	I0920 18:02:51.984110   58226 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem (1675 bytes)
	I0920 18:02:51.984219   58226 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem (1708 bytes)
	I0920 18:02:51.985028   58226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 18:02:52.023905   58226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 18:02:52.070693   58226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 18:02:52.117448   58226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0920 18:02:52.152420   58226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/pause-421146/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0920 18:02:52.228104   58226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/pause-421146/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 18:02:52.289123   58226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/pause-421146/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 18:02:52.340679   58226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/pause-421146/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0920 18:02:52.432297   58226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem --> /usr/share/ca-certificates/159732.pem (1708 bytes)
	I0920 18:02:52.475771   58226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 18:02:52.529424   58226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973.pem --> /usr/share/ca-certificates/15973.pem (1338 bytes)
	I0920 18:02:52.565175   58226 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 18:02:52.589615   58226 ssh_runner.go:195] Run: openssl version
	I0920 18:02:52.597319   58226 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 18:02:52.612489   58226 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:02:52.619144   58226 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:02:52.619209   58226 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:02:52.625590   58226 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 18:02:52.637604   58226 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15973.pem && ln -fs /usr/share/ca-certificates/15973.pem /etc/ssl/certs/15973.pem"
	I0920 18:02:52.650067   58226 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15973.pem
	I0920 18:02:52.655149   58226 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 17:04 /usr/share/ca-certificates/15973.pem
	I0920 18:02:52.655222   58226 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15973.pem
	I0920 18:02:52.661990   58226 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15973.pem /etc/ssl/certs/51391683.0"
	I0920 18:02:52.673239   58226 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/159732.pem && ln -fs /usr/share/ca-certificates/159732.pem /etc/ssl/certs/159732.pem"
	I0920 18:02:52.689525   58226 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/159732.pem
	I0920 18:02:52.694746   58226 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 17:04 /usr/share/ca-certificates/159732.pem
	I0920 18:02:52.694816   58226 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/159732.pem
	I0920 18:02:52.702906   58226 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/159732.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 18:02:52.718784   58226 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 18:02:52.724656   58226 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 18:02:52.731433   58226 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 18:02:52.740334   58226 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 18:02:52.749768   58226 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 18:02:52.758768   58226 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 18:02:52.766454   58226 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0920 18:02:52.773783   58226 kubeadm.go:392] StartCluster: {Name:pause-421146 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:pause-421146 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.200 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-secur
ity-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:02:52.774034   58226 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 18:02:52.774092   58226 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 18:02:52.851875   58226 cri.go:89] found id: "75d3de29f781282cadcafa18080ca10c150d68ea2643c96305500091f11cd130"
	I0920 18:02:52.851901   58226 cri.go:89] found id: "e65068aabc970c4f3ded7fd5c3bbaef59385603701d9c00ed027e8385212b674"
	I0920 18:02:52.851907   58226 cri.go:89] found id: "6caff0d1adf3a97c67a13c3e8be7618e77ea79c31f750fff59228a7be609fd4d"
	I0920 18:02:52.851912   58226 cri.go:89] found id: "acff960df015881f08db968c9ce85cea196679f2ba2dd86c8c0589365b50502c"
	I0920 18:02:52.851916   58226 cri.go:89] found id: "998f1cab8849745ac4a858a8df2532e7b3bfc179f529b1172684f92fee861d0d"
	I0920 18:02:52.851921   58226 cri.go:89] found id: "81a97c79e3ef43249f4d0285077861826dcf108662f81b26a4e9ea112ebc5f45"
	I0920 18:02:52.851925   58226 cri.go:89] found id: "8f06c6c4d665531dbf0c0d8e2b4477662bd5b99e2ccd0f4db3ac78f78989c26d"
	I0920 18:02:52.851931   58226 cri.go:89] found id: "649042c723b9db769c29dc7735cc090423ca44ef0dd4553705afa66820f21574"
	I0920 18:02:52.851935   58226 cri.go:89] found id: "94163c85f0797bbaa56ddd8fc3acb0c3cd391e24271045f5ae259c7c4c0babf1"
	I0920 18:02:52.851944   58226 cri.go:89] found id: "163b3f69cfd1ee643b766df4f2e5038143ecf7af02628eb719a9f18368721dbc"
	I0920 18:02:52.851951   58226 cri.go:89] found id: "70e2c0d4123ff483cfaa8d13e195a04d53dba0a0fdc7281f30e38dc720038f72"
	I0920 18:02:52.851956   58226 cri.go:89] found id: "0fc32a3094b4e1736505ba6b9eb2d1120b6b4c38afed1b6734045a58d56ba358"
	I0920 18:02:52.851961   58226 cri.go:89] found id: ""
	I0920 18:02:52.852012   58226 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Sep 20 18:03:19 kubernetes-upgrade-299508 crio[2242]: time="2024-09-20 18:03:19.612531732Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855399612266612,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=797ba4dc-4232-48ee-9ac8-1be35a518160 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:03:19 kubernetes-upgrade-299508 crio[2242]: time="2024-09-20 18:03:19.613577506Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=45a0a0b5-5332-4fe7-ab3c-a77874984994 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:03:19 kubernetes-upgrade-299508 crio[2242]: time="2024-09-20 18:03:19.613683645Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=45a0a0b5-5332-4fe7-ab3c-a77874984994 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:03:19 kubernetes-upgrade-299508 crio[2242]: time="2024-09-20 18:03:19.614132500Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4cb930cde2e28b73c9d154731b52bcb4533ee8b705deb4df86dfd074e2053ee3,PodSandboxId:30063be54f9b45d693076bd82698b48d5f1b7bd5bff02696c18145e8bea3e487,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726855396967943471,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xqcc7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4564cece-0f72-4c0d-894a-db118f858084,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96afae658e4a423788a42e325057515d144d10b009c6f13c53f03db83d03f6bd,PodSandboxId:ee19c7262ef3aabc44da54e89e58bdee1e1664ac790890b331ed66f003d0cd0e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726855396815147883,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-s6hjr,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: fa1e96e5-7b44-44fb-8388-d5a9912c804e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64ab810d7a510c2840b638f855d95847bb7c865590232cf57a5478e41514d20f,PodSandboxId:5e002688dbd7afacb821247ea4055d51dad101ae7276db8ea02e7176840ab457,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1726855396444845583,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79a639c2-6c99-47af-b2fd-96943717b6ce,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a4e1a163bee5efa59cff6af4c8121d8baced8806d7381fe7e098aff50b7e45c,PodSandboxId:d3f35ac89846f204344df418af79bf9c37295044a17289fa9ab83320e61a34a7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,C
reatedAt:1726855396305863136,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p4xvv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8431fb54-b28c-485b-96bd-51de3cbd5797,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a920ed93c07cc5d75c488a50f1fd1475d5416e2c8285c051ed2916c21b78d4cd,PodSandboxId:df9b0febded6c11fa747614b3547cb3e8762600be4bf717712a9f9afd23a851b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726855391393763530,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-299508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e81f916c5ade886e8e0e3ab058c1a4ab,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbf87ce35336e3e90df4af93b7beb94ee2b3d289207c3dab3c3d2bfdbbf2147a,PodSandboxId:ee8725bff59fc880f5a033d78595641ca96084a3fa534d6348130bfc7f3b99c1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726855391336686676,Labels:map[string]string{io.kuberne
tes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-299508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce5898d9714800092a0a8c74144333f5,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:722d126d21b9c88fa1969a52f2c51a492c61507214fc6602f4e916353db324c3,PodSandboxId:f0a0a0159c36ab5959e81e4a746f69df92cfa4add0753d97c31c187ebb05aa9c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726855391343092642,Labels:map[string]string{io.kubernetes.c
ontainer.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-299508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c49340a6f972bf85931b86971d9c9fa0,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb63711cff6b682b2accbf144d09679ce19b67084355d97b0a78f060140af22f,PodSandboxId:3ebb64fea43ecb2599c93b6dceea85c964b380f515dc8f189b79472cbfe88009,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726855391329398923,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-299508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e51fcb299a163fe438f099860ee8bd6b,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee50b4169ec8b0796713c2eb6e22919b505c43a4c409a86ba57d1e5900e373c3,PodSandboxId:b5768fbb8d6c003e1d41e4dda746c95649ce8984cea9f0cc1277ff3ac9b05377,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726855371463111076,Labels:map[string]s
tring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79a639c2-6c99-47af-b2fd-96943717b6ce,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f0c05c90ba006aa0ce12b88eb3fe4e70fd16c8749546daea35733a9b498a54a,PodSandboxId:13d264e906dbb5c4bb2d3f017388471b1285a46502a8deaae5d68072e92a5f67,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726855370669145001,Labels:map[string]string{io.kubernetes.conta
iner.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-s6hjr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa1e96e5-7b44-44fb-8388-d5a9912c804e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ed69061b7bb12a62fd1cfe919158cbc887c60089dcaebf036dc67cd6a58aa87,PodSandboxId:174f5d06becb54c63e45cd30637dbb29bddabad1e78097cb775bf8f5bc3b8c56,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedIm
age:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726855370440734039,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xqcc7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4564cece-0f72-4c0d-894a-db118f858084,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2eba3fdc72c486bb8a3a9f5b11df36626a6ba4408d611da7e0f9a8227548a4a8,PodSandboxId:baf09e0180c73d3c209101aaed1debb83ede2dc12fdab631e1ec53f3a02
3748e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726855369810820591,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p4xvv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8431fb54-b28c-485b-96bd-51de3cbd5797,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62d7d3ba9db5dddac4b5f8859a12356b0f8284dbbe365454b4316ff428dce86f,PodSandboxId:cfe41daab32a47779c4a4a27c3d9cee390d758bf6e25e6ef3c56f4696143aa91,Metadata:&ContainerMetadata{
Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726855359058845707,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-299508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e81f916c5ade886e8e0e3ab058c1a4ab,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20e6a51255de6330d2168e21751ad52574a92a7040083e74014c5ad89103f231,PodSandboxId:df53803213f7ee83fdf2615595b8abdbfa2133e07b0ddbc514e373d06ed1ec2b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Imag
e:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726855359002455258,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-299508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c49340a6f972bf85931b86971d9c9fa0,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1b5f528054033fae69eed59e819027d626c0678002580cd7cd3b812530c84b3,PodSandboxId:dee8a6336cc4b21214bdf9de38bbecb14353cb77b094fc26359d283e5cea3eba,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},I
mage:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726855358956459359,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-299508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e51fcb299a163fe438f099860ee8bd6b,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50c7aeed5942e8820c55d2fbe082174e92f79f13313b05fddc6573d44f84e13a,PodSandboxId:8d328dc5b2f70d41781e1c5bc2daa6f691dbe2c6dbd62f21a5063cd413be8729,Metadata:&ContainerMetadata{Name:kube-apiserver,A
ttempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726855358878554794,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-299508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce5898d9714800092a0a8c74144333f5,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=45a0a0b5-5332-4fe7-ab3c-a77874984994 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:03:19 kubernetes-upgrade-299508 crio[2242]: time="2024-09-20 18:03:19.663896114Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=32a07b1a-dbc9-403a-bf19-44e9bfe8c0e2 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:03:19 kubernetes-upgrade-299508 crio[2242]: time="2024-09-20 18:03:19.664238432Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=32a07b1a-dbc9-403a-bf19-44e9bfe8c0e2 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:03:19 kubernetes-upgrade-299508 crio[2242]: time="2024-09-20 18:03:19.665675400Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=719a207c-b2b0-44c9-aae1-277e4ad2bbd8 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:03:19 kubernetes-upgrade-299508 crio[2242]: time="2024-09-20 18:03:19.666116205Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855399666091863,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=719a207c-b2b0-44c9-aae1-277e4ad2bbd8 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:03:19 kubernetes-upgrade-299508 crio[2242]: time="2024-09-20 18:03:19.666821820Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=29006ffe-251d-456a-ac75-37f628187a2b name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:03:19 kubernetes-upgrade-299508 crio[2242]: time="2024-09-20 18:03:19.666914294Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=29006ffe-251d-456a-ac75-37f628187a2b name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:03:19 kubernetes-upgrade-299508 crio[2242]: time="2024-09-20 18:03:19.667419206Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4cb930cde2e28b73c9d154731b52bcb4533ee8b705deb4df86dfd074e2053ee3,PodSandboxId:30063be54f9b45d693076bd82698b48d5f1b7bd5bff02696c18145e8bea3e487,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726855396967943471,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xqcc7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4564cece-0f72-4c0d-894a-db118f858084,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96afae658e4a423788a42e325057515d144d10b009c6f13c53f03db83d03f6bd,PodSandboxId:ee19c7262ef3aabc44da54e89e58bdee1e1664ac790890b331ed66f003d0cd0e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726855396815147883,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-s6hjr,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: fa1e96e5-7b44-44fb-8388-d5a9912c804e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64ab810d7a510c2840b638f855d95847bb7c865590232cf57a5478e41514d20f,PodSandboxId:5e002688dbd7afacb821247ea4055d51dad101ae7276db8ea02e7176840ab457,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1726855396444845583,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79a639c2-6c99-47af-b2fd-96943717b6ce,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a4e1a163bee5efa59cff6af4c8121d8baced8806d7381fe7e098aff50b7e45c,PodSandboxId:d3f35ac89846f204344df418af79bf9c37295044a17289fa9ab83320e61a34a7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,C
reatedAt:1726855396305863136,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p4xvv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8431fb54-b28c-485b-96bd-51de3cbd5797,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a920ed93c07cc5d75c488a50f1fd1475d5416e2c8285c051ed2916c21b78d4cd,PodSandboxId:df9b0febded6c11fa747614b3547cb3e8762600be4bf717712a9f9afd23a851b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726855391393763530,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-299508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e81f916c5ade886e8e0e3ab058c1a4ab,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbf87ce35336e3e90df4af93b7beb94ee2b3d289207c3dab3c3d2bfdbbf2147a,PodSandboxId:ee8725bff59fc880f5a033d78595641ca96084a3fa534d6348130bfc7f3b99c1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726855391336686676,Labels:map[string]string{io.kuberne
tes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-299508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce5898d9714800092a0a8c74144333f5,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:722d126d21b9c88fa1969a52f2c51a492c61507214fc6602f4e916353db324c3,PodSandboxId:f0a0a0159c36ab5959e81e4a746f69df92cfa4add0753d97c31c187ebb05aa9c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726855391343092642,Labels:map[string]string{io.kubernetes.c
ontainer.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-299508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c49340a6f972bf85931b86971d9c9fa0,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb63711cff6b682b2accbf144d09679ce19b67084355d97b0a78f060140af22f,PodSandboxId:3ebb64fea43ecb2599c93b6dceea85c964b380f515dc8f189b79472cbfe88009,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726855391329398923,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-299508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e51fcb299a163fe438f099860ee8bd6b,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee50b4169ec8b0796713c2eb6e22919b505c43a4c409a86ba57d1e5900e373c3,PodSandboxId:b5768fbb8d6c003e1d41e4dda746c95649ce8984cea9f0cc1277ff3ac9b05377,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726855371463111076,Labels:map[string]s
tring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79a639c2-6c99-47af-b2fd-96943717b6ce,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f0c05c90ba006aa0ce12b88eb3fe4e70fd16c8749546daea35733a9b498a54a,PodSandboxId:13d264e906dbb5c4bb2d3f017388471b1285a46502a8deaae5d68072e92a5f67,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726855370669145001,Labels:map[string]string{io.kubernetes.conta
iner.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-s6hjr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa1e96e5-7b44-44fb-8388-d5a9912c804e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ed69061b7bb12a62fd1cfe919158cbc887c60089dcaebf036dc67cd6a58aa87,PodSandboxId:174f5d06becb54c63e45cd30637dbb29bddabad1e78097cb775bf8f5bc3b8c56,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedIm
age:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726855370440734039,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xqcc7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4564cece-0f72-4c0d-894a-db118f858084,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2eba3fdc72c486bb8a3a9f5b11df36626a6ba4408d611da7e0f9a8227548a4a8,PodSandboxId:baf09e0180c73d3c209101aaed1debb83ede2dc12fdab631e1ec53f3a02
3748e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726855369810820591,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p4xvv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8431fb54-b28c-485b-96bd-51de3cbd5797,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62d7d3ba9db5dddac4b5f8859a12356b0f8284dbbe365454b4316ff428dce86f,PodSandboxId:cfe41daab32a47779c4a4a27c3d9cee390d758bf6e25e6ef3c56f4696143aa91,Metadata:&ContainerMetadata{
Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726855359058845707,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-299508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e81f916c5ade886e8e0e3ab058c1a4ab,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20e6a51255de6330d2168e21751ad52574a92a7040083e74014c5ad89103f231,PodSandboxId:df53803213f7ee83fdf2615595b8abdbfa2133e07b0ddbc514e373d06ed1ec2b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Imag
e:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726855359002455258,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-299508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c49340a6f972bf85931b86971d9c9fa0,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1b5f528054033fae69eed59e819027d626c0678002580cd7cd3b812530c84b3,PodSandboxId:dee8a6336cc4b21214bdf9de38bbecb14353cb77b094fc26359d283e5cea3eba,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},I
mage:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726855358956459359,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-299508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e51fcb299a163fe438f099860ee8bd6b,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50c7aeed5942e8820c55d2fbe082174e92f79f13313b05fddc6573d44f84e13a,PodSandboxId:8d328dc5b2f70d41781e1c5bc2daa6f691dbe2c6dbd62f21a5063cd413be8729,Metadata:&ContainerMetadata{Name:kube-apiserver,A
ttempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726855358878554794,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-299508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce5898d9714800092a0a8c74144333f5,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=29006ffe-251d-456a-ac75-37f628187a2b name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:03:19 kubernetes-upgrade-299508 crio[2242]: time="2024-09-20 18:03:19.718414414Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=70453ae6-cf63-4a69-9b09-03b2a23bb443 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:03:19 kubernetes-upgrade-299508 crio[2242]: time="2024-09-20 18:03:19.718508121Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=70453ae6-cf63-4a69-9b09-03b2a23bb443 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:03:19 kubernetes-upgrade-299508 crio[2242]: time="2024-09-20 18:03:19.719846509Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fd44b89f-1772-415d-856e-0fb26da4e6eb name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:03:19 kubernetes-upgrade-299508 crio[2242]: time="2024-09-20 18:03:19.720625091Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855399720595099,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fd44b89f-1772-415d-856e-0fb26da4e6eb name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:03:19 kubernetes-upgrade-299508 crio[2242]: time="2024-09-20 18:03:19.721417684Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=73c9452f-3727-4ad4-9ac6-7cac5832f871 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:03:19 kubernetes-upgrade-299508 crio[2242]: time="2024-09-20 18:03:19.721484909Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=73c9452f-3727-4ad4-9ac6-7cac5832f871 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:03:19 kubernetes-upgrade-299508 crio[2242]: time="2024-09-20 18:03:19.722349383Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4cb930cde2e28b73c9d154731b52bcb4533ee8b705deb4df86dfd074e2053ee3,PodSandboxId:30063be54f9b45d693076bd82698b48d5f1b7bd5bff02696c18145e8bea3e487,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726855396967943471,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xqcc7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4564cece-0f72-4c0d-894a-db118f858084,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96afae658e4a423788a42e325057515d144d10b009c6f13c53f03db83d03f6bd,PodSandboxId:ee19c7262ef3aabc44da54e89e58bdee1e1664ac790890b331ed66f003d0cd0e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726855396815147883,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-s6hjr,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: fa1e96e5-7b44-44fb-8388-d5a9912c804e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64ab810d7a510c2840b638f855d95847bb7c865590232cf57a5478e41514d20f,PodSandboxId:5e002688dbd7afacb821247ea4055d51dad101ae7276db8ea02e7176840ab457,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1726855396444845583,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79a639c2-6c99-47af-b2fd-96943717b6ce,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a4e1a163bee5efa59cff6af4c8121d8baced8806d7381fe7e098aff50b7e45c,PodSandboxId:d3f35ac89846f204344df418af79bf9c37295044a17289fa9ab83320e61a34a7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,C
reatedAt:1726855396305863136,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p4xvv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8431fb54-b28c-485b-96bd-51de3cbd5797,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a920ed93c07cc5d75c488a50f1fd1475d5416e2c8285c051ed2916c21b78d4cd,PodSandboxId:df9b0febded6c11fa747614b3547cb3e8762600be4bf717712a9f9afd23a851b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726855391393763530,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-299508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e81f916c5ade886e8e0e3ab058c1a4ab,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbf87ce35336e3e90df4af93b7beb94ee2b3d289207c3dab3c3d2bfdbbf2147a,PodSandboxId:ee8725bff59fc880f5a033d78595641ca96084a3fa534d6348130bfc7f3b99c1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726855391336686676,Labels:map[string]string{io.kuberne
tes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-299508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce5898d9714800092a0a8c74144333f5,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:722d126d21b9c88fa1969a52f2c51a492c61507214fc6602f4e916353db324c3,PodSandboxId:f0a0a0159c36ab5959e81e4a746f69df92cfa4add0753d97c31c187ebb05aa9c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726855391343092642,Labels:map[string]string{io.kubernetes.c
ontainer.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-299508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c49340a6f972bf85931b86971d9c9fa0,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb63711cff6b682b2accbf144d09679ce19b67084355d97b0a78f060140af22f,PodSandboxId:3ebb64fea43ecb2599c93b6dceea85c964b380f515dc8f189b79472cbfe88009,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726855391329398923,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-299508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e51fcb299a163fe438f099860ee8bd6b,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee50b4169ec8b0796713c2eb6e22919b505c43a4c409a86ba57d1e5900e373c3,PodSandboxId:b5768fbb8d6c003e1d41e4dda746c95649ce8984cea9f0cc1277ff3ac9b05377,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726855371463111076,Labels:map[string]s
tring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79a639c2-6c99-47af-b2fd-96943717b6ce,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f0c05c90ba006aa0ce12b88eb3fe4e70fd16c8749546daea35733a9b498a54a,PodSandboxId:13d264e906dbb5c4bb2d3f017388471b1285a46502a8deaae5d68072e92a5f67,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726855370669145001,Labels:map[string]string{io.kubernetes.conta
iner.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-s6hjr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa1e96e5-7b44-44fb-8388-d5a9912c804e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ed69061b7bb12a62fd1cfe919158cbc887c60089dcaebf036dc67cd6a58aa87,PodSandboxId:174f5d06becb54c63e45cd30637dbb29bddabad1e78097cb775bf8f5bc3b8c56,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedIm
age:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726855370440734039,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xqcc7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4564cece-0f72-4c0d-894a-db118f858084,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2eba3fdc72c486bb8a3a9f5b11df36626a6ba4408d611da7e0f9a8227548a4a8,PodSandboxId:baf09e0180c73d3c209101aaed1debb83ede2dc12fdab631e1ec53f3a02
3748e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726855369810820591,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p4xvv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8431fb54-b28c-485b-96bd-51de3cbd5797,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62d7d3ba9db5dddac4b5f8859a12356b0f8284dbbe365454b4316ff428dce86f,PodSandboxId:cfe41daab32a47779c4a4a27c3d9cee390d758bf6e25e6ef3c56f4696143aa91,Metadata:&ContainerMetadata{
Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726855359058845707,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-299508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e81f916c5ade886e8e0e3ab058c1a4ab,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20e6a51255de6330d2168e21751ad52574a92a7040083e74014c5ad89103f231,PodSandboxId:df53803213f7ee83fdf2615595b8abdbfa2133e07b0ddbc514e373d06ed1ec2b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Imag
e:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726855359002455258,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-299508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c49340a6f972bf85931b86971d9c9fa0,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1b5f528054033fae69eed59e819027d626c0678002580cd7cd3b812530c84b3,PodSandboxId:dee8a6336cc4b21214bdf9de38bbecb14353cb77b094fc26359d283e5cea3eba,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},I
mage:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726855358956459359,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-299508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e51fcb299a163fe438f099860ee8bd6b,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50c7aeed5942e8820c55d2fbe082174e92f79f13313b05fddc6573d44f84e13a,PodSandboxId:8d328dc5b2f70d41781e1c5bc2daa6f691dbe2c6dbd62f21a5063cd413be8729,Metadata:&ContainerMetadata{Name:kube-apiserver,A
ttempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726855358878554794,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-299508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce5898d9714800092a0a8c74144333f5,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=73c9452f-3727-4ad4-9ac6-7cac5832f871 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:03:19 kubernetes-upgrade-299508 crio[2242]: time="2024-09-20 18:03:19.775437162Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=01a47132-4bd9-42ea-aa1d-9a92fcf476e4 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:03:19 kubernetes-upgrade-299508 crio[2242]: time="2024-09-20 18:03:19.775529718Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=01a47132-4bd9-42ea-aa1d-9a92fcf476e4 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:03:19 kubernetes-upgrade-299508 crio[2242]: time="2024-09-20 18:03:19.778681163Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=78e13f7b-0c22-42d6-ade7-d835e42d7685 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:03:19 kubernetes-upgrade-299508 crio[2242]: time="2024-09-20 18:03:19.779327027Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855399779282608,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=78e13f7b-0c22-42d6-ade7-d835e42d7685 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:03:19 kubernetes-upgrade-299508 crio[2242]: time="2024-09-20 18:03:19.780884547Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bf9b6c6f-a11d-4977-83e3-d72f4c25f71f name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:03:19 kubernetes-upgrade-299508 crio[2242]: time="2024-09-20 18:03:19.780996413Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bf9b6c6f-a11d-4977-83e3-d72f4c25f71f name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:03:19 kubernetes-upgrade-299508 crio[2242]: time="2024-09-20 18:03:19.781531816Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4cb930cde2e28b73c9d154731b52bcb4533ee8b705deb4df86dfd074e2053ee3,PodSandboxId:30063be54f9b45d693076bd82698b48d5f1b7bd5bff02696c18145e8bea3e487,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726855396967943471,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xqcc7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4564cece-0f72-4c0d-894a-db118f858084,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96afae658e4a423788a42e325057515d144d10b009c6f13c53f03db83d03f6bd,PodSandboxId:ee19c7262ef3aabc44da54e89e58bdee1e1664ac790890b331ed66f003d0cd0e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726855396815147883,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-s6hjr,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: fa1e96e5-7b44-44fb-8388-d5a9912c804e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64ab810d7a510c2840b638f855d95847bb7c865590232cf57a5478e41514d20f,PodSandboxId:5e002688dbd7afacb821247ea4055d51dad101ae7276db8ea02e7176840ab457,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1726855396444845583,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79a639c2-6c99-47af-b2fd-96943717b6ce,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a4e1a163bee5efa59cff6af4c8121d8baced8806d7381fe7e098aff50b7e45c,PodSandboxId:d3f35ac89846f204344df418af79bf9c37295044a17289fa9ab83320e61a34a7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,C
reatedAt:1726855396305863136,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p4xvv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8431fb54-b28c-485b-96bd-51de3cbd5797,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a920ed93c07cc5d75c488a50f1fd1475d5416e2c8285c051ed2916c21b78d4cd,PodSandboxId:df9b0febded6c11fa747614b3547cb3e8762600be4bf717712a9f9afd23a851b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726855391393763530,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-299508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e81f916c5ade886e8e0e3ab058c1a4ab,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbf87ce35336e3e90df4af93b7beb94ee2b3d289207c3dab3c3d2bfdbbf2147a,PodSandboxId:ee8725bff59fc880f5a033d78595641ca96084a3fa534d6348130bfc7f3b99c1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726855391336686676,Labels:map[string]string{io.kuberne
tes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-299508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce5898d9714800092a0a8c74144333f5,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:722d126d21b9c88fa1969a52f2c51a492c61507214fc6602f4e916353db324c3,PodSandboxId:f0a0a0159c36ab5959e81e4a746f69df92cfa4add0753d97c31c187ebb05aa9c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726855391343092642,Labels:map[string]string{io.kubernetes.c
ontainer.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-299508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c49340a6f972bf85931b86971d9c9fa0,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb63711cff6b682b2accbf144d09679ce19b67084355d97b0a78f060140af22f,PodSandboxId:3ebb64fea43ecb2599c93b6dceea85c964b380f515dc8f189b79472cbfe88009,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726855391329398923,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-299508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e51fcb299a163fe438f099860ee8bd6b,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee50b4169ec8b0796713c2eb6e22919b505c43a4c409a86ba57d1e5900e373c3,PodSandboxId:b5768fbb8d6c003e1d41e4dda746c95649ce8984cea9f0cc1277ff3ac9b05377,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726855371463111076,Labels:map[string]s
tring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79a639c2-6c99-47af-b2fd-96943717b6ce,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f0c05c90ba006aa0ce12b88eb3fe4e70fd16c8749546daea35733a9b498a54a,PodSandboxId:13d264e906dbb5c4bb2d3f017388471b1285a46502a8deaae5d68072e92a5f67,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726855370669145001,Labels:map[string]string{io.kubernetes.conta
iner.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-s6hjr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa1e96e5-7b44-44fb-8388-d5a9912c804e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ed69061b7bb12a62fd1cfe919158cbc887c60089dcaebf036dc67cd6a58aa87,PodSandboxId:174f5d06becb54c63e45cd30637dbb29bddabad1e78097cb775bf8f5bc3b8c56,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedIm
age:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726855370440734039,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xqcc7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4564cece-0f72-4c0d-894a-db118f858084,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2eba3fdc72c486bb8a3a9f5b11df36626a6ba4408d611da7e0f9a8227548a4a8,PodSandboxId:baf09e0180c73d3c209101aaed1debb83ede2dc12fdab631e1ec53f3a02
3748e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726855369810820591,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p4xvv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8431fb54-b28c-485b-96bd-51de3cbd5797,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62d7d3ba9db5dddac4b5f8859a12356b0f8284dbbe365454b4316ff428dce86f,PodSandboxId:cfe41daab32a47779c4a4a27c3d9cee390d758bf6e25e6ef3c56f4696143aa91,Metadata:&ContainerMetadata{
Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726855359058845707,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-299508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e81f916c5ade886e8e0e3ab058c1a4ab,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20e6a51255de6330d2168e21751ad52574a92a7040083e74014c5ad89103f231,PodSandboxId:df53803213f7ee83fdf2615595b8abdbfa2133e07b0ddbc514e373d06ed1ec2b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Imag
e:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726855359002455258,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-299508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c49340a6f972bf85931b86971d9c9fa0,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1b5f528054033fae69eed59e819027d626c0678002580cd7cd3b812530c84b3,PodSandboxId:dee8a6336cc4b21214bdf9de38bbecb14353cb77b094fc26359d283e5cea3eba,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},I
mage:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726855358956459359,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-299508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e51fcb299a163fe438f099860ee8bd6b,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50c7aeed5942e8820c55d2fbe082174e92f79f13313b05fddc6573d44f84e13a,PodSandboxId:8d328dc5b2f70d41781e1c5bc2daa6f691dbe2c6dbd62f21a5063cd413be8729,Metadata:&ContainerMetadata{Name:kube-apiserver,A
ttempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726855358878554794,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-299508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce5898d9714800092a0a8c74144333f5,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bf9b6c6f-a11d-4977-83e3-d72f4c25f71f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	4cb930cde2e28       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   2 seconds ago       Running             coredns                   1                   30063be54f9b4       coredns-7c65d6cfc9-xqcc7
	96afae658e4a4       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   3 seconds ago       Running             coredns                   1                   ee19c7262ef3a       coredns-7c65d6cfc9-s6hjr
	64ab810d7a510       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   3 seconds ago       Running             storage-provisioner       1                   5e002688dbd7a       storage-provisioner
	7a4e1a163bee5       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   3 seconds ago       Running             kube-proxy                1                   d3f35ac89846f       kube-proxy-p4xvv
	a920ed93c07cc       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   8 seconds ago       Running             etcd                      1                   df9b0febded6c       etcd-kubernetes-upgrade-299508
	722d126d21b9c       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   8 seconds ago       Running             kube-scheduler            1                   f0a0a0159c36a       kube-scheduler-kubernetes-upgrade-299508
	cbf87ce35336e       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   8 seconds ago       Running             kube-apiserver            1                   ee8725bff59fc       kube-apiserver-kubernetes-upgrade-299508
	bb63711cff6b6       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   8 seconds ago       Running             kube-controller-manager   1                   3ebb64fea43ec       kube-controller-manager-kubernetes-upgrade-299508
	ee50b4169ec8b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   28 seconds ago      Exited              storage-provisioner       0                   b5768fbb8d6c0       storage-provisioner
	3f0c05c90ba00       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   29 seconds ago      Exited              coredns                   0                   13d264e906dbb       coredns-7c65d6cfc9-s6hjr
	8ed69061b7bb1       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   29 seconds ago      Exited              coredns                   0                   174f5d06becb5       coredns-7c65d6cfc9-xqcc7
	2eba3fdc72c48       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   30 seconds ago      Exited              kube-proxy                0                   baf09e0180c73       kube-proxy-p4xvv
	62d7d3ba9db5d       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   40 seconds ago      Exited              etcd                      0                   cfe41daab32a4       etcd-kubernetes-upgrade-299508
	20e6a51255de6       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   40 seconds ago      Exited              kube-scheduler            0                   df53803213f7e       kube-scheduler-kubernetes-upgrade-299508
	e1b5f52805403       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   40 seconds ago      Exited              kube-controller-manager   0                   dee8a6336cc4b       kube-controller-manager-kubernetes-upgrade-299508
	50c7aeed5942e       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   40 seconds ago      Exited              kube-apiserver            0                   8d328dc5b2f70       kube-apiserver-kubernetes-upgrade-299508
	
	
	==> coredns [3f0c05c90ba006aa0ce12b88eb3fe4e70fd16c8749546daea35733a9b498a54a] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [4cb930cde2e28b73c9d154731b52bcb4533ee8b705deb4df86dfd074e2053ee3] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [8ed69061b7bb12a62fd1cfe919158cbc887c60089dcaebf036dc67cd6a58aa87] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [96afae658e4a423788a42e325057515d144d10b009c6f13c53f03db83d03f6bd] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-299508
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-299508
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 18:02:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-299508
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 18:03:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 18:03:15 +0000   Fri, 20 Sep 2024 18:02:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 18:03:15 +0000   Fri, 20 Sep 2024 18:02:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 18:03:15 +0000   Fri, 20 Sep 2024 18:02:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 18:03:15 +0000   Fri, 20 Sep 2024 18:02:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.69
	  Hostname:    kubernetes-upgrade-299508
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 883967b4fb324214b530ca9287a4edb3
	  System UUID:                883967b4-fb32-4214-b530-ca9287a4edb3
	  Boot ID:                    9d7304ae-7eb1-4502-a171-3615150ca042
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-s6hjr                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     31s
	  kube-system                 coredns-7c65d6cfc9-xqcc7                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     31s
	  kube-system                 etcd-kubernetes-upgrade-299508                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         32s
	  kube-system                 kube-apiserver-kubernetes-upgrade-299508             250m (12%)    0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-299508    200m (10%)    0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-p4xvv                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-scheduler-kubernetes-upgrade-299508             100m (5%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 29s                kube-proxy       
	  Normal  Starting                 3s                 kube-proxy       
	  Normal  NodeHasNoDiskPressure    42s (x8 over 43s)  kubelet          Node kubernetes-upgrade-299508 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     42s (x7 over 43s)  kubelet          Node kubernetes-upgrade-299508 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  42s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  42s (x8 over 43s)  kubelet          Node kubernetes-upgrade-299508 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           32s                node-controller  Node kubernetes-upgrade-299508 event: Registered Node kubernetes-upgrade-299508 in Controller
	  Normal  Starting                 10s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10s (x8 over 10s)  kubelet          Node kubernetes-upgrade-299508 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10s (x8 over 10s)  kubelet          Node kubernetes-upgrade-299508 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10s (x7 over 10s)  kubelet          Node kubernetes-upgrade-299508 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2s                 node-controller  Node kubernetes-upgrade-299508 event: Registered Node kubernetes-upgrade-299508 in Controller
	
	
	==> dmesg <==
	[  +1.581432] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.999578] systemd-fstab-generator[558]: Ignoring "noauto" option for root device
	[  +0.062691] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.053233] systemd-fstab-generator[570]: Ignoring "noauto" option for root device
	[  +0.176934] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.146002] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.308550] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +4.361347] systemd-fstab-generator[716]: Ignoring "noauto" option for root device
	[  +0.069153] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.995119] systemd-fstab-generator[837]: Ignoring "noauto" option for root device
	[ +11.970292] systemd-fstab-generator[1211]: Ignoring "noauto" option for root device
	[  +0.138649] kauditd_printk_skb: 97 callbacks suppressed
	[Sep20 18:03] systemd-fstab-generator[2167]: Ignoring "noauto" option for root device
	[  +0.090234] kauditd_printk_skb: 105 callbacks suppressed
	[  +0.060536] systemd-fstab-generator[2179]: Ignoring "noauto" option for root device
	[  +0.170204] systemd-fstab-generator[2193]: Ignoring "noauto" option for root device
	[  +0.159469] systemd-fstab-generator[2205]: Ignoring "noauto" option for root device
	[  +0.312556] systemd-fstab-generator[2233]: Ignoring "noauto" option for root device
	[  +5.009901] systemd-fstab-generator[2379]: Ignoring "noauto" option for root device
	[  +0.087988] kauditd_printk_skb: 100 callbacks suppressed
	[  +2.036461] systemd-fstab-generator[2500]: Ignoring "noauto" option for root device
	[  +5.573800] kauditd_printk_skb: 74 callbacks suppressed
	[  +1.587834] systemd-fstab-generator[3405]: Ignoring "noauto" option for root device
	
	
	==> etcd [62d7d3ba9db5dddac4b5f8859a12356b0f8284dbbe365454b4316ff428dce86f] <==
	{"level":"info","ts":"2024-09-20T18:02:39.622227Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9199217ddd03919b became leader at term 2"}
	{"level":"info","ts":"2024-09-20T18:02:39.622253Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9199217ddd03919b elected leader 9199217ddd03919b at term 2"}
	{"level":"info","ts":"2024-09-20T18:02:39.626230Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T18:02:39.629389Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"9199217ddd03919b","local-member-attributes":"{Name:kubernetes-upgrade-299508 ClientURLs:[https://192.168.39.69:2379]}","request-path":"/0/members/9199217ddd03919b/attributes","cluster-id":"6c21f62219c1156b","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-20T18:02:39.631085Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T18:02:39.631980Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T18:02:39.635505Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-20T18:02:39.632348Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6c21f62219c1156b","local-member-id":"9199217ddd03919b","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T18:02:39.638408Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T18:02:39.638500Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T18:02:39.632378Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T18:02:39.640127Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T18:02:39.647766Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-20T18:02:39.670121Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-20T18:02:39.683920Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.69:2379"}
	{"level":"info","ts":"2024-09-20T18:02:53.244920Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-20T18:02:53.245107Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"kubernetes-upgrade-299508","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.69:2380"],"advertise-client-urls":["https://192.168.39.69:2379"]}
	{"level":"warn","ts":"2024-09-20T18:02:53.245336Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-20T18:02:53.245469Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-20T18:02:53.319021Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.69:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-20T18:02:53.319198Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.69:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-20T18:02:53.319358Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9199217ddd03919b","current-leader-member-id":"9199217ddd03919b"}
	{"level":"info","ts":"2024-09-20T18:02:53.410993Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.69:2380"}
	{"level":"info","ts":"2024-09-20T18:02:53.411260Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.69:2380"}
	{"level":"info","ts":"2024-09-20T18:02:53.411301Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"kubernetes-upgrade-299508","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.69:2380"],"advertise-client-urls":["https://192.168.39.69:2379"]}
	
	
	==> etcd [a920ed93c07cc5d75c488a50f1fd1475d5416e2c8285c051ed2916c21b78d4cd] <==
	{"level":"info","ts":"2024-09-20T18:03:11.717632Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9199217ddd03919b switched to configuration voters=(10491453631398908315)"}
	{"level":"info","ts":"2024-09-20T18:03:11.717736Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6c21f62219c1156b","local-member-id":"9199217ddd03919b","added-peer-id":"9199217ddd03919b","added-peer-peer-urls":["https://192.168.39.69:2380"]}
	{"level":"info","ts":"2024-09-20T18:03:11.717938Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6c21f62219c1156b","local-member-id":"9199217ddd03919b","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T18:03:11.718000Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T18:03:11.722847Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.69:2380"}
	{"level":"info","ts":"2024-09-20T18:03:11.722948Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.69:2380"}
	{"level":"info","ts":"2024-09-20T18:03:11.709988Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-20T18:03:11.722992Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-20T18:03:11.723063Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-20T18:03:13.387368Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9199217ddd03919b is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-20T18:03:13.387429Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9199217ddd03919b became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-20T18:03:13.387465Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9199217ddd03919b received MsgPreVoteResp from 9199217ddd03919b at term 2"}
	{"level":"info","ts":"2024-09-20T18:03:13.387480Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9199217ddd03919b became candidate at term 3"}
	{"level":"info","ts":"2024-09-20T18:03:13.387486Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9199217ddd03919b received MsgVoteResp from 9199217ddd03919b at term 3"}
	{"level":"info","ts":"2024-09-20T18:03:13.387495Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9199217ddd03919b became leader at term 3"}
	{"level":"info","ts":"2024-09-20T18:03:13.387503Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9199217ddd03919b elected leader 9199217ddd03919b at term 3"}
	{"level":"info","ts":"2024-09-20T18:03:13.392634Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"9199217ddd03919b","local-member-attributes":"{Name:kubernetes-upgrade-299508 ClientURLs:[https://192.168.39.69:2379]}","request-path":"/0/members/9199217ddd03919b/attributes","cluster-id":"6c21f62219c1156b","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-20T18:03:13.392742Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T18:03:13.392906Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T18:03:13.393383Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-20T18:03:13.393418Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-20T18:03:13.394093Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T18:03:13.394309Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T18:03:13.394810Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-20T18:03:13.395474Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.69:2379"}
	
	
	==> kernel <==
	 18:03:20 up 1 min,  0 users,  load average: 1.20, 0.35, 0.12
	Linux kubernetes-upgrade-299508 5.10.207 #1 SMP Fri Sep 20 03:13:51 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [50c7aeed5942e8820c55d2fbe082174e92f79f13313b05fddc6573d44f84e13a] <==
	W0920 18:02:53.281766       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:02:53.281902       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:02:53.281921       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:02:53.275269       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:02:53.275295       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:02:53.275328       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:02:53.275349       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:02:53.282365       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:02:53.275371       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:02:53.282473       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:02:53.275397       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:02:53.275425       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:02:53.275450       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:02:53.275655       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:02:53.275695       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:02:53.275755       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:02:53.275787       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:02:53.275818       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:02:53.275848       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:02:53.275880       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:02:53.275911       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:02:53.275942       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:02:53.275971       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:02:53.276000       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:02:53.275238       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [cbf87ce35336e3e90df4af93b7beb94ee2b3d289207c3dab3c3d2bfdbbf2147a] <==
	I0920 18:03:14.970775       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0920 18:03:14.970935       1 shared_informer.go:320] Caches are synced for configmaps
	I0920 18:03:14.972572       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0920 18:03:14.972704       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0920 18:03:14.973083       1 aggregator.go:171] initial CRD sync complete...
	I0920 18:03:14.973123       1 autoregister_controller.go:144] Starting autoregister controller
	I0920 18:03:14.973133       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0920 18:03:14.973141       1 cache.go:39] Caches are synced for autoregister controller
	I0920 18:03:14.978872       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0920 18:03:14.979504       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0920 18:03:14.979773       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0920 18:03:14.980617       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	E0920 18:03:14.985487       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0920 18:03:14.990495       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0920 18:03:14.995770       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0920 18:03:14.995888       1 policy_source.go:224] refreshing policies
	I0920 18:03:14.997819       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0920 18:03:15.800316       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0920 18:03:17.396347       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0920 18:03:17.411164       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0920 18:03:17.461217       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0920 18:03:17.511353       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0920 18:03:17.521298       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0920 18:03:18.441188       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0920 18:03:18.604924       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [bb63711cff6b682b2accbf144d09679ce19b67084355d97b0a78f060140af22f] <==
	I0920 18:03:18.289111       1 shared_informer.go:320] Caches are synced for taint
	I0920 18:03:18.289156       1 shared_informer.go:320] Caches are synced for GC
	I0920 18:03:18.289214       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0920 18:03:18.289285       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="kubernetes-upgrade-299508"
	I0920 18:03:18.289314       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0920 18:03:18.291588       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0920 18:03:18.298318       1 shared_informer.go:320] Caches are synced for namespace
	I0920 18:03:18.308835       1 shared_informer.go:320] Caches are synced for expand
	I0920 18:03:18.321691       1 shared_informer.go:320] Caches are synced for resource quota
	I0920 18:03:18.325125       1 shared_informer.go:320] Caches are synced for HPA
	I0920 18:03:18.328412       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0920 18:03:18.328473       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0920 18:03:18.335074       1 shared_informer.go:320] Caches are synced for disruption
	I0920 18:03:18.363762       1 shared_informer.go:320] Caches are synced for resource quota
	I0920 18:03:18.387550       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0920 18:03:18.387629       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0920 18:03:18.387642       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0920 18:03:18.387650       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0920 18:03:18.441402       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0920 18:03:18.487511       1 shared_informer.go:320] Caches are synced for attach detach
	I0920 18:03:18.494840       1 shared_informer.go:320] Caches are synced for persistent volume
	I0920 18:03:18.524785       1 shared_informer.go:320] Caches are synced for PV protection
	I0920 18:03:18.914112       1 shared_informer.go:320] Caches are synced for garbage collector
	I0920 18:03:18.914159       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0920 18:03:18.949767       1 shared_informer.go:320] Caches are synced for garbage collector
	
	
	==> kube-controller-manager [e1b5f528054033fae69eed59e819027d626c0678002580cd7cd3b812530c84b3] <==
	I0920 18:02:48.460240       1 shared_informer.go:320] Caches are synced for disruption
	I0920 18:02:48.460301       1 shared_informer.go:320] Caches are synced for job
	I0920 18:02:48.461505       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0920 18:02:48.474297       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="kubernetes-upgrade-299508"
	I0920 18:02:48.504906       1 shared_informer.go:320] Caches are synced for HPA
	I0920 18:02:48.507170       1 shared_informer.go:320] Caches are synced for deployment
	I0920 18:02:48.507254       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0920 18:02:48.507184       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0920 18:02:48.507615       1 shared_informer.go:320] Caches are synced for stateful set
	I0920 18:02:48.509285       1 shared_informer.go:320] Caches are synced for ephemeral
	I0920 18:02:48.509330       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0920 18:02:48.509401       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="kubernetes-upgrade-299508"
	I0920 18:02:48.514144       1 shared_informer.go:320] Caches are synced for resource quota
	I0920 18:02:48.516537       1 shared_informer.go:320] Caches are synced for resource quota
	I0920 18:02:48.965021       1 shared_informer.go:320] Caches are synced for garbage collector
	I0920 18:02:49.005435       1 shared_informer.go:320] Caches are synced for garbage collector
	I0920 18:02:49.005502       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0920 18:02:49.205740       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="88.357462ms"
	I0920 18:02:49.359988       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="154.09184ms"
	I0920 18:02:49.360177       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="71.575µs"
	I0920 18:02:49.368293       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="79.524µs"
	I0920 18:02:49.412349       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="3.874727ms"
	I0920 18:02:50.862233       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="86.421µs"
	I0920 18:02:51.825921       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="106.227µs"
	I0920 18:02:51.923557       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="kubernetes-upgrade-299508"
	
	
	==> kube-proxy [2eba3fdc72c486bb8a3a9f5b11df36626a6ba4408d611da7e0f9a8227548a4a8] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0920 18:02:50.581420       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0920 18:02:50.602963       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.69"]
	E0920 18:02:50.603249       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 18:02:50.766161       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0920 18:02:50.766267       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0920 18:02:50.766316       1 server_linux.go:169] "Using iptables Proxier"
	I0920 18:02:50.877256       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 18:02:50.880558       1 server.go:483] "Version info" version="v1.31.1"
	I0920 18:02:50.880782       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 18:02:50.893972       1 config.go:199] "Starting service config controller"
	I0920 18:02:50.895282       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 18:02:50.895544       1 config.go:105] "Starting endpoint slice config controller"
	I0920 18:02:50.895581       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 18:02:50.899664       1 config.go:328] "Starting node config controller"
	I0920 18:02:50.899746       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 18:02:50.996251       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0920 18:02:50.997383       1 shared_informer.go:320] Caches are synced for service config
	I0920 18:02:51.000448       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [7a4e1a163bee5efa59cff6af4c8121d8baced8806d7381fe7e098aff50b7e45c] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0920 18:03:16.742927       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0920 18:03:16.803656       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.69"]
	E0920 18:03:16.803711       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 18:03:16.922759       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0920 18:03:16.923609       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0920 18:03:16.923753       1 server_linux.go:169] "Using iptables Proxier"
	I0920 18:03:16.932775       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 18:03:16.933158       1 server.go:483] "Version info" version="v1.31.1"
	I0920 18:03:16.933429       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 18:03:16.934978       1 config.go:199] "Starting service config controller"
	I0920 18:03:16.939549       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 18:03:16.937482       1 config.go:105] "Starting endpoint slice config controller"
	I0920 18:03:16.939699       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 18:03:16.938971       1 config.go:328] "Starting node config controller"
	I0920 18:03:16.939710       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 18:03:17.040463       1 shared_informer.go:320] Caches are synced for node config
	I0920 18:03:17.040547       1 shared_informer.go:320] Caches are synced for service config
	I0920 18:03:17.040572       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [20e6a51255de6330d2168e21751ad52574a92a7040083e74014c5ad89103f231] <==
	E0920 18:02:41.642276       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 18:02:41.642391       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0920 18:02:41.642469       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 18:02:41.642583       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0920 18:02:41.642659       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 18:02:41.651582       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0920 18:02:41.651657       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0920 18:02:42.516351       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0920 18:02:42.516693       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 18:02:42.576702       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0920 18:02:42.576823       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 18:02:42.608331       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0920 18:02:42.608400       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0920 18:02:42.731447       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0920 18:02:42.731516       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0920 18:02:42.758754       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0920 18:02:42.758806       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 18:02:42.758921       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0920 18:02:42.758961       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 18:02:42.786639       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0920 18:02:42.786812       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 18:02:42.836418       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0920 18:02:42.836681       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0920 18:02:44.620950       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0920 18:02:53.246860       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [722d126d21b9c88fa1969a52f2c51a492c61507214fc6602f4e916353db324c3] <==
	I0920 18:03:12.271113       1 serving.go:386] Generated self-signed cert in-memory
	W0920 18:03:14.854580       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0920 18:03:14.854618       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0920 18:03:14.854629       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0920 18:03:14.854640       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0920 18:03:14.912768       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0920 18:03:14.912918       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 18:03:14.916184       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0920 18:03:14.916306       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0920 18:03:14.916604       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0920 18:03:14.916929       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0920 18:03:15.017488       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 20 18:03:10 kubernetes-upgrade-299508 kubelet[2508]: I0920 18:03:10.897959    2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce5898d9714800092a0a8c74144333f5-k8s-certs\") pod \"kube-apiserver-kubernetes-upgrade-299508\" (UID: \"ce5898d9714800092a0a8c74144333f5\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-299508"
	Sep 20 18:03:10 kubernetes-upgrade-299508 kubelet[2508]: I0920 18:03:10.897975    2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce5898d9714800092a0a8c74144333f5-usr-share-ca-certificates\") pod \"kube-apiserver-kubernetes-upgrade-299508\" (UID: \"ce5898d9714800092a0a8c74144333f5\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-299508"
	Sep 20 18:03:10 kubernetes-upgrade-299508 kubelet[2508]: I0920 18:03:10.897993    2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e51fcb299a163fe438f099860ee8bd6b-k8s-certs\") pod \"kube-controller-manager-kubernetes-upgrade-299508\" (UID: \"e51fcb299a163fe438f099860ee8bd6b\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-299508"
	Sep 20 18:03:10 kubernetes-upgrade-299508 kubelet[2508]: I0920 18:03:10.898009    2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c49340a6f972bf85931b86971d9c9fa0-kubeconfig\") pod \"kube-scheduler-kubernetes-upgrade-299508\" (UID: \"c49340a6f972bf85931b86971d9c9fa0\") " pod="kube-system/kube-scheduler-kubernetes-upgrade-299508"
	Sep 20 18:03:10 kubernetes-upgrade-299508 kubelet[2508]: E0920 18:03:10.898381    2508 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-299508?timeout=10s\": dial tcp 192.168.39.69:8443: connect: connection refused" interval="400ms"
	Sep 20 18:03:11 kubernetes-upgrade-299508 kubelet[2508]: I0920 18:03:11.057444    2508 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-299508"
	Sep 20 18:03:11 kubernetes-upgrade-299508 kubelet[2508]: E0920 18:03:11.058343    2508 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.69:8443: connect: connection refused" node="kubernetes-upgrade-299508"
	Sep 20 18:03:11 kubernetes-upgrade-299508 kubelet[2508]: E0920 18:03:11.300495    2508 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-299508?timeout=10s\": dial tcp 192.168.39.69:8443: connect: connection refused" interval="800ms"
	Sep 20 18:03:11 kubernetes-upgrade-299508 kubelet[2508]: I0920 18:03:11.460204    2508 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-299508"
	Sep 20 18:03:11 kubernetes-upgrade-299508 kubelet[2508]: E0920 18:03:11.461945    2508 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.69:8443: connect: connection refused" node="kubernetes-upgrade-299508"
	Sep 20 18:03:11 kubernetes-upgrade-299508 kubelet[2508]: W0920 18:03:11.651223    2508 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.69:8443: connect: connection refused
	Sep 20 18:03:11 kubernetes-upgrade-299508 kubelet[2508]: E0920 18:03:11.651335    2508 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.69:8443: connect: connection refused" logger="UnhandledError"
	Sep 20 18:03:12 kubernetes-upgrade-299508 kubelet[2508]: I0920 18:03:12.263455    2508 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-299508"
	Sep 20 18:03:15 kubernetes-upgrade-299508 kubelet[2508]: I0920 18:03:15.057026    2508 kubelet_node_status.go:111] "Node was previously registered" node="kubernetes-upgrade-299508"
	Sep 20 18:03:15 kubernetes-upgrade-299508 kubelet[2508]: I0920 18:03:15.057625    2508 kubelet_node_status.go:75] "Successfully registered node" node="kubernetes-upgrade-299508"
	Sep 20 18:03:15 kubernetes-upgrade-299508 kubelet[2508]: I0920 18:03:15.057744    2508 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 20 18:03:15 kubernetes-upgrade-299508 kubelet[2508]: I0920 18:03:15.060609    2508 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 20 18:03:15 kubernetes-upgrade-299508 kubelet[2508]: I0920 18:03:15.663777    2508 apiserver.go:52] "Watching apiserver"
	Sep 20 18:03:15 kubernetes-upgrade-299508 kubelet[2508]: I0920 18:03:15.689839    2508 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 20 18:03:15 kubernetes-upgrade-299508 kubelet[2508]: I0920 18:03:15.767476    2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/79a639c2-6c99-47af-b2fd-96943717b6ce-tmp\") pod \"storage-provisioner\" (UID: \"79a639c2-6c99-47af-b2fd-96943717b6ce\") " pod="kube-system/storage-provisioner"
	Sep 20 18:03:15 kubernetes-upgrade-299508 kubelet[2508]: I0920 18:03:15.767587    2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8431fb54-b28c-485b-96bd-51de3cbd5797-xtables-lock\") pod \"kube-proxy-p4xvv\" (UID: \"8431fb54-b28c-485b-96bd-51de3cbd5797\") " pod="kube-system/kube-proxy-p4xvv"
	Sep 20 18:03:15 kubernetes-upgrade-299508 kubelet[2508]: I0920 18:03:15.767617    2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8431fb54-b28c-485b-96bd-51de3cbd5797-lib-modules\") pod \"kube-proxy-p4xvv\" (UID: \"8431fb54-b28c-485b-96bd-51de3cbd5797\") " pod="kube-system/kube-proxy-p4xvv"
	Sep 20 18:03:18 kubernetes-upgrade-299508 kubelet[2508]: I0920 18:03:18.900061    2508 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Sep 20 18:03:20 kubernetes-upgrade-299508 kubelet[2508]: E0920 18:03:20.757183    2508 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855400756496605,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:03:20 kubernetes-upgrade-299508 kubelet[2508]: E0920 18:03:20.757222    2508 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855400756496605,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [64ab810d7a510c2840b638f855d95847bb7c865590232cf57a5478e41514d20f] <==
	I0920 18:03:16.678315       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0920 18:03:16.717911       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0920 18:03:16.720262       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	
	
	==> storage-provisioner [ee50b4169ec8b0796713c2eb6e22919b505c43a4c409a86ba57d1e5900e373c3] <==
	I0920 18:02:51.611748       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0920 18:02:51.642250       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0920 18:02:51.642321       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0920 18:02:51.657914       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0920 18:02:51.658133       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-299508_4629cb61-4214-4438-b7c8-840acb680826!
	I0920 18:02:51.659573       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b3f606f7-0049-4143-a761-647103871bc3", APIVersion:"v1", ResourceVersion:"367", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kubernetes-upgrade-299508_4629cb61-4214-4438-b7c8-840acb680826 became leader
	I0920 18:02:51.758507       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-299508_4629cb61-4214-4438-b7c8-840acb680826!
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0920 18:03:19.222391   58823 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19672-8777/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-299508 -n kubernetes-upgrade-299508
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-299508 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-299508" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-299508
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-299508: (1.142845238s)
--- FAIL: TestKubernetesUpgrade (343.76s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (60.94s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-421146 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0920 18:02:43.196477   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-421146 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (56.000030067s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-421146] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19672
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19672-8777/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-8777/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-421146" primary control-plane node in "pause-421146" cluster
	* Updating the running kvm2 "pause-421146" VM ...
	* Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-421146" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 18:02:41.652614   58226 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:02:41.652919   58226 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:02:41.652937   58226 out.go:358] Setting ErrFile to fd 2...
	I0920 18:02:41.652944   58226 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:02:41.653271   58226 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-8777/.minikube/bin
	I0920 18:02:41.654038   58226 out.go:352] Setting JSON to false
	I0920 18:02:41.654996   58226 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6305,"bootTime":1726849057,"procs":224,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 18:02:41.655062   58226 start.go:139] virtualization: kvm guest
	I0920 18:02:41.657438   58226 out.go:177] * [pause-421146] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 18:02:41.658991   58226 notify.go:220] Checking for updates...
	I0920 18:02:41.659013   58226 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 18:02:41.660599   58226 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 18:02:41.661994   58226 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19672-8777/kubeconfig
	I0920 18:02:41.663309   58226 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-8777/.minikube
	I0920 18:02:41.664660   58226 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 18:02:41.666010   58226 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 18:02:41.668171   58226 config.go:182] Loaded profile config "pause-421146": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:02:41.668856   58226 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:02:41.668968   58226 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:02:41.686817   58226 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39559
	I0920 18:02:41.687403   58226 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:02:41.687990   58226 main.go:141] libmachine: Using API Version  1
	I0920 18:02:41.688014   58226 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:02:41.688405   58226 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:02:41.688642   58226 main.go:141] libmachine: (pause-421146) Calling .DriverName
	I0920 18:02:41.688920   58226 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 18:02:41.689356   58226 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:02:41.689396   58226 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:02:41.706915   58226 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34759
	I0920 18:02:41.707431   58226 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:02:41.708002   58226 main.go:141] libmachine: Using API Version  1
	I0920 18:02:41.708025   58226 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:02:41.708457   58226 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:02:41.708815   58226 main.go:141] libmachine: (pause-421146) Calling .DriverName
	I0920 18:02:41.748574   58226 out.go:177] * Using the kvm2 driver based on existing profile
	I0920 18:02:41.749657   58226 start.go:297] selected driver: kvm2
	I0920 18:02:41.749682   58226 start.go:901] validating driver "kvm2" against &{Name:pause-421146 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:pause-421146 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.200 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:02:41.749928   58226 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 18:02:41.750391   58226 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 18:02:41.750510   58226 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19672-8777/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 18:02:41.766737   58226 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 18:02:41.767728   58226 cni.go:84] Creating CNI manager for ""
	I0920 18:02:41.767793   58226 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 18:02:41.767875   58226 start.go:340] cluster config:
	{Name:pause-421146 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:pause-421146 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.200 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:02:41.768064   58226 iso.go:125] acquiring lock: {Name:mkba95ef0488e46f622333e9f317f43def93040b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 18:02:41.770060   58226 out.go:177] * Starting "pause-421146" primary control-plane node in "pause-421146" cluster
	I0920 18:02:41.771337   58226 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 18:02:41.771398   58226 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0920 18:02:41.771408   58226 cache.go:56] Caching tarball of preloaded images
	I0920 18:02:41.771493   58226 preload.go:172] Found /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 18:02:41.771503   58226 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0920 18:02:41.771623   58226 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/pause-421146/config.json ...
	I0920 18:02:41.771808   58226 start.go:360] acquireMachinesLock for pause-421146: {Name:mkfeedb385cf08b5d2aa00913e85815d02a180c2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 18:02:41.771866   58226 start.go:364] duration metric: took 40.098µs to acquireMachinesLock for "pause-421146"
	I0920 18:02:41.771880   58226 start.go:96] Skipping create...Using existing machine configuration
	I0920 18:02:41.771886   58226 fix.go:54] fixHost starting: 
	I0920 18:02:41.772147   58226 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:02:41.772184   58226 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:02:41.791144   58226 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46593
	I0920 18:02:41.791655   58226 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:02:41.792246   58226 main.go:141] libmachine: Using API Version  1
	I0920 18:02:41.792263   58226 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:02:41.792633   58226 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:02:41.792853   58226 main.go:141] libmachine: (pause-421146) Calling .DriverName
	I0920 18:02:41.792997   58226 main.go:141] libmachine: (pause-421146) Calling .GetState
	I0920 18:02:41.794750   58226 fix.go:112] recreateIfNeeded on pause-421146: state=Running err=<nil>
	W0920 18:02:41.794784   58226 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 18:02:41.796712   58226 out.go:177] * Updating the running kvm2 "pause-421146" VM ...
	I0920 18:02:41.797992   58226 machine.go:93] provisionDockerMachine start ...
	I0920 18:02:41.798019   58226 main.go:141] libmachine: (pause-421146) Calling .DriverName
	I0920 18:02:41.798245   58226 main.go:141] libmachine: (pause-421146) Calling .GetSSHHostname
	I0920 18:02:41.801343   58226 main.go:141] libmachine: (pause-421146) DBG | domain pause-421146 has defined MAC address 52:54:00:aa:c7:5d in network mk-pause-421146
	I0920 18:02:41.801927   58226 main.go:141] libmachine: (pause-421146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:c7:5d", ip: ""} in network mk-pause-421146: {Iface:virbr2 ExpiryTime:2024-09-20 19:01:28 +0000 UTC Type:0 Mac:52:54:00:aa:c7:5d Iaid: IPaddr:192.168.50.200 Prefix:24 Hostname:pause-421146 Clientid:01:52:54:00:aa:c7:5d}
	I0920 18:02:41.801952   58226 main.go:141] libmachine: (pause-421146) DBG | domain pause-421146 has defined IP address 192.168.50.200 and MAC address 52:54:00:aa:c7:5d in network mk-pause-421146
	I0920 18:02:41.802040   58226 main.go:141] libmachine: (pause-421146) Calling .GetSSHPort
	I0920 18:02:41.802263   58226 main.go:141] libmachine: (pause-421146) Calling .GetSSHKeyPath
	I0920 18:02:41.802425   58226 main.go:141] libmachine: (pause-421146) Calling .GetSSHKeyPath
	I0920 18:02:41.802586   58226 main.go:141] libmachine: (pause-421146) Calling .GetSSHUsername
	I0920 18:02:41.802756   58226 main.go:141] libmachine: Using SSH client type: native
	I0920 18:02:41.802965   58226 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.200 22 <nil> <nil>}
	I0920 18:02:41.802975   58226 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 18:02:41.922347   58226 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-421146
	
	I0920 18:02:41.922380   58226 main.go:141] libmachine: (pause-421146) Calling .GetMachineName
	I0920 18:02:41.922624   58226 buildroot.go:166] provisioning hostname "pause-421146"
	I0920 18:02:41.922652   58226 main.go:141] libmachine: (pause-421146) Calling .GetMachineName
	I0920 18:02:41.922832   58226 main.go:141] libmachine: (pause-421146) Calling .GetSSHHostname
	I0920 18:02:41.925673   58226 main.go:141] libmachine: (pause-421146) DBG | domain pause-421146 has defined MAC address 52:54:00:aa:c7:5d in network mk-pause-421146
	I0920 18:02:41.926029   58226 main.go:141] libmachine: (pause-421146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:c7:5d", ip: ""} in network mk-pause-421146: {Iface:virbr2 ExpiryTime:2024-09-20 19:01:28 +0000 UTC Type:0 Mac:52:54:00:aa:c7:5d Iaid: IPaddr:192.168.50.200 Prefix:24 Hostname:pause-421146 Clientid:01:52:54:00:aa:c7:5d}
	I0920 18:02:41.926057   58226 main.go:141] libmachine: (pause-421146) DBG | domain pause-421146 has defined IP address 192.168.50.200 and MAC address 52:54:00:aa:c7:5d in network mk-pause-421146
	I0920 18:02:41.926435   58226 main.go:141] libmachine: (pause-421146) Calling .GetSSHPort
	I0920 18:02:41.926629   58226 main.go:141] libmachine: (pause-421146) Calling .GetSSHKeyPath
	I0920 18:02:41.926798   58226 main.go:141] libmachine: (pause-421146) Calling .GetSSHKeyPath
	I0920 18:02:41.926987   58226 main.go:141] libmachine: (pause-421146) Calling .GetSSHUsername
	I0920 18:02:41.927128   58226 main.go:141] libmachine: Using SSH client type: native
	I0920 18:02:41.927342   58226 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.200 22 <nil> <nil>}
	I0920 18:02:41.927356   58226 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-421146 && echo "pause-421146" | sudo tee /etc/hostname
	I0920 18:02:42.066270   58226 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-421146
	
	I0920 18:02:42.066303   58226 main.go:141] libmachine: (pause-421146) Calling .GetSSHHostname
	I0920 18:02:42.069651   58226 main.go:141] libmachine: (pause-421146) DBG | domain pause-421146 has defined MAC address 52:54:00:aa:c7:5d in network mk-pause-421146
	I0920 18:02:42.070133   58226 main.go:141] libmachine: (pause-421146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:c7:5d", ip: ""} in network mk-pause-421146: {Iface:virbr2 ExpiryTime:2024-09-20 19:01:28 +0000 UTC Type:0 Mac:52:54:00:aa:c7:5d Iaid: IPaddr:192.168.50.200 Prefix:24 Hostname:pause-421146 Clientid:01:52:54:00:aa:c7:5d}
	I0920 18:02:42.070271   58226 main.go:141] libmachine: (pause-421146) DBG | domain pause-421146 has defined IP address 192.168.50.200 and MAC address 52:54:00:aa:c7:5d in network mk-pause-421146
	I0920 18:02:42.070479   58226 main.go:141] libmachine: (pause-421146) Calling .GetSSHPort
	I0920 18:02:42.070693   58226 main.go:141] libmachine: (pause-421146) Calling .GetSSHKeyPath
	I0920 18:02:42.070889   58226 main.go:141] libmachine: (pause-421146) Calling .GetSSHKeyPath
	I0920 18:02:42.071072   58226 main.go:141] libmachine: (pause-421146) Calling .GetSSHUsername
	I0920 18:02:42.071281   58226 main.go:141] libmachine: Using SSH client type: native
	I0920 18:02:42.071509   58226 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.200 22 <nil> <nil>}
	I0920 18:02:42.071529   58226 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-421146' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-421146/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-421146' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 18:02:42.194649   58226 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 18:02:42.194672   58226 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19672-8777/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-8777/.minikube}
	I0920 18:02:42.194689   58226 buildroot.go:174] setting up certificates
	I0920 18:02:42.194700   58226 provision.go:84] configureAuth start
	I0920 18:02:42.194712   58226 main.go:141] libmachine: (pause-421146) Calling .GetMachineName
	I0920 18:02:42.194936   58226 main.go:141] libmachine: (pause-421146) Calling .GetIP
	I0920 18:02:42.197279   58226 main.go:141] libmachine: (pause-421146) DBG | domain pause-421146 has defined MAC address 52:54:00:aa:c7:5d in network mk-pause-421146
	I0920 18:02:42.197633   58226 main.go:141] libmachine: (pause-421146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:c7:5d", ip: ""} in network mk-pause-421146: {Iface:virbr2 ExpiryTime:2024-09-20 19:01:28 +0000 UTC Type:0 Mac:52:54:00:aa:c7:5d Iaid: IPaddr:192.168.50.200 Prefix:24 Hostname:pause-421146 Clientid:01:52:54:00:aa:c7:5d}
	I0920 18:02:42.197657   58226 main.go:141] libmachine: (pause-421146) DBG | domain pause-421146 has defined IP address 192.168.50.200 and MAC address 52:54:00:aa:c7:5d in network mk-pause-421146
	I0920 18:02:42.197794   58226 main.go:141] libmachine: (pause-421146) Calling .GetSSHHostname
	I0920 18:02:42.200088   58226 main.go:141] libmachine: (pause-421146) DBG | domain pause-421146 has defined MAC address 52:54:00:aa:c7:5d in network mk-pause-421146
	I0920 18:02:42.200493   58226 main.go:141] libmachine: (pause-421146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:c7:5d", ip: ""} in network mk-pause-421146: {Iface:virbr2 ExpiryTime:2024-09-20 19:01:28 +0000 UTC Type:0 Mac:52:54:00:aa:c7:5d Iaid: IPaddr:192.168.50.200 Prefix:24 Hostname:pause-421146 Clientid:01:52:54:00:aa:c7:5d}
	I0920 18:02:42.200515   58226 main.go:141] libmachine: (pause-421146) DBG | domain pause-421146 has defined IP address 192.168.50.200 and MAC address 52:54:00:aa:c7:5d in network mk-pause-421146
	I0920 18:02:42.200759   58226 provision.go:143] copyHostCerts
	I0920 18:02:42.200836   58226 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem, removing ...
	I0920 18:02:42.200856   58226 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem
	I0920 18:02:42.200910   58226 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem (1082 bytes)
	I0920 18:02:42.201011   58226 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem, removing ...
	I0920 18:02:42.201020   58226 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem
	I0920 18:02:42.201050   58226 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem (1123 bytes)
	I0920 18:02:42.201113   58226 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem, removing ...
	I0920 18:02:42.201119   58226 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem
	I0920 18:02:42.201137   58226 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem (1675 bytes)
	I0920 18:02:42.201204   58226 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem org=jenkins.pause-421146 san=[127.0.0.1 192.168.50.200 localhost minikube pause-421146]
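For reference, the server certificate generated above (org jenkins.pause-421146, SANs 127.0.0.1, 192.168.50.200, localhost, minikube, pause-421146) could be reproduced by hand roughly as follows. minikube does this in Go, so this is only an illustrative openssl sketch: the key/CSR/cert file names and the 365-day validity are placeholders, and the CA files stand in for the ca.pem/ca-key.pem paths from the log.

    # Illustrative openssl equivalent of the server cert minikube generates in Go above.
    # File names and validity period are placeholders, not what minikube actually uses.
    openssl req -new -newkey rsa:2048 -nodes \
      -keyout server-key.pem -out server.csr \
      -subj "/O=jenkins.pause-421146/CN=pause-421146"
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -out server.pem -days 365 \
      -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.50.200,DNS:localhost,DNS:minikube,DNS:pause-421146")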
	I0920 18:02:42.367704   58226 provision.go:177] copyRemoteCerts
	I0920 18:02:42.367762   58226 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 18:02:42.367791   58226 main.go:141] libmachine: (pause-421146) Calling .GetSSHHostname
	I0920 18:02:42.370784   58226 main.go:141] libmachine: (pause-421146) DBG | domain pause-421146 has defined MAC address 52:54:00:aa:c7:5d in network mk-pause-421146
	I0920 18:02:42.371140   58226 main.go:141] libmachine: (pause-421146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:c7:5d", ip: ""} in network mk-pause-421146: {Iface:virbr2 ExpiryTime:2024-09-20 19:01:28 +0000 UTC Type:0 Mac:52:54:00:aa:c7:5d Iaid: IPaddr:192.168.50.200 Prefix:24 Hostname:pause-421146 Clientid:01:52:54:00:aa:c7:5d}
	I0920 18:02:42.371171   58226 main.go:141] libmachine: (pause-421146) DBG | domain pause-421146 has defined IP address 192.168.50.200 and MAC address 52:54:00:aa:c7:5d in network mk-pause-421146
	I0920 18:02:42.371398   58226 main.go:141] libmachine: (pause-421146) Calling .GetSSHPort
	I0920 18:02:42.371579   58226 main.go:141] libmachine: (pause-421146) Calling .GetSSHKeyPath
	I0920 18:02:42.371737   58226 main.go:141] libmachine: (pause-421146) Calling .GetSSHUsername
	I0920 18:02:42.371872   58226 sshutil.go:53] new ssh client: &{IP:192.168.50.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/pause-421146/id_rsa Username:docker}
	I0920 18:02:42.455668   58226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0920 18:02:42.481274   58226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 18:02:42.505913   58226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0920 18:02:42.535402   58226 provision.go:87] duration metric: took 340.686555ms to configureAuth
	I0920 18:02:42.535435   58226 buildroot.go:189] setting minikube options for container-runtime
	I0920 18:02:42.535705   58226 config.go:182] Loaded profile config "pause-421146": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:02:42.535812   58226 main.go:141] libmachine: (pause-421146) Calling .GetSSHHostname
	I0920 18:02:42.538888   58226 main.go:141] libmachine: (pause-421146) DBG | domain pause-421146 has defined MAC address 52:54:00:aa:c7:5d in network mk-pause-421146
	I0920 18:02:42.539284   58226 main.go:141] libmachine: (pause-421146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:c7:5d", ip: ""} in network mk-pause-421146: {Iface:virbr2 ExpiryTime:2024-09-20 19:01:28 +0000 UTC Type:0 Mac:52:54:00:aa:c7:5d Iaid: IPaddr:192.168.50.200 Prefix:24 Hostname:pause-421146 Clientid:01:52:54:00:aa:c7:5d}
	I0920 18:02:42.539314   58226 main.go:141] libmachine: (pause-421146) DBG | domain pause-421146 has defined IP address 192.168.50.200 and MAC address 52:54:00:aa:c7:5d in network mk-pause-421146
	I0920 18:02:42.539486   58226 main.go:141] libmachine: (pause-421146) Calling .GetSSHPort
	I0920 18:02:42.539669   58226 main.go:141] libmachine: (pause-421146) Calling .GetSSHKeyPath
	I0920 18:02:42.539841   58226 main.go:141] libmachine: (pause-421146) Calling .GetSSHKeyPath
	I0920 18:02:42.539976   58226 main.go:141] libmachine: (pause-421146) Calling .GetSSHUsername
	I0920 18:02:42.540160   58226 main.go:141] libmachine: Using SSH client type: native
	I0920 18:02:42.540340   58226 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.200 22 <nil> <nil>}
	I0920 18:02:42.540354   58226 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 18:02:48.105129   58226 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 18:02:48.105162   58226 machine.go:96] duration metric: took 6.307151749s to provisionDockerMachine
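Most of the ~6.3s provisionDockerMachine duration above is the `systemctl restart crio` triggered by the /etc/sysconfig/crio.minikube drop-in a few lines earlier. A hedged sketch of verifying the result on the guest (the expected file content is copied from the log output above, not re-verified):

    # Sketch: confirm the insecure-registry drop-in and that CRI-O came back up after the restart.
    sudo cat /etc/sysconfig/crio.minikube
    # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    systemctl is-active crio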
	I0920 18:02:48.105175   58226 start.go:293] postStartSetup for "pause-421146" (driver="kvm2")
	I0920 18:02:48.105187   58226 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 18:02:48.105214   58226 main.go:141] libmachine: (pause-421146) Calling .DriverName
	I0920 18:02:48.105559   58226 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 18:02:48.105593   58226 main.go:141] libmachine: (pause-421146) Calling .GetSSHHostname
	I0920 18:02:48.109117   58226 main.go:141] libmachine: (pause-421146) DBG | domain pause-421146 has defined MAC address 52:54:00:aa:c7:5d in network mk-pause-421146
	I0920 18:02:48.109551   58226 main.go:141] libmachine: (pause-421146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:c7:5d", ip: ""} in network mk-pause-421146: {Iface:virbr2 ExpiryTime:2024-09-20 19:01:28 +0000 UTC Type:0 Mac:52:54:00:aa:c7:5d Iaid: IPaddr:192.168.50.200 Prefix:24 Hostname:pause-421146 Clientid:01:52:54:00:aa:c7:5d}
	I0920 18:02:48.109577   58226 main.go:141] libmachine: (pause-421146) DBG | domain pause-421146 has defined IP address 192.168.50.200 and MAC address 52:54:00:aa:c7:5d in network mk-pause-421146
	I0920 18:02:48.109842   58226 main.go:141] libmachine: (pause-421146) Calling .GetSSHPort
	I0920 18:02:48.110052   58226 main.go:141] libmachine: (pause-421146) Calling .GetSSHKeyPath
	I0920 18:02:48.110241   58226 main.go:141] libmachine: (pause-421146) Calling .GetSSHUsername
	I0920 18:02:48.110430   58226 sshutil.go:53] new ssh client: &{IP:192.168.50.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/pause-421146/id_rsa Username:docker}
	I0920 18:02:48.197883   58226 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 18:02:48.202636   58226 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 18:02:48.202663   58226 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8777/.minikube/addons for local assets ...
	I0920 18:02:48.202738   58226 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8777/.minikube/files for local assets ...
	I0920 18:02:48.202838   58226 filesync.go:149] local asset: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem -> 159732.pem in /etc/ssl/certs
	I0920 18:02:48.202952   58226 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 18:02:48.212810   58226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem --> /etc/ssl/certs/159732.pem (1708 bytes)
	I0920 18:02:48.241349   58226 start.go:296] duration metric: took 136.147485ms for postStartSetup
	I0920 18:02:48.241389   58226 fix.go:56] duration metric: took 6.469503241s for fixHost
	I0920 18:02:48.241420   58226 main.go:141] libmachine: (pause-421146) Calling .GetSSHHostname
	I0920 18:02:48.244438   58226 main.go:141] libmachine: (pause-421146) DBG | domain pause-421146 has defined MAC address 52:54:00:aa:c7:5d in network mk-pause-421146
	I0920 18:02:48.244842   58226 main.go:141] libmachine: (pause-421146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:c7:5d", ip: ""} in network mk-pause-421146: {Iface:virbr2 ExpiryTime:2024-09-20 19:01:28 +0000 UTC Type:0 Mac:52:54:00:aa:c7:5d Iaid: IPaddr:192.168.50.200 Prefix:24 Hostname:pause-421146 Clientid:01:52:54:00:aa:c7:5d}
	I0920 18:02:48.244882   58226 main.go:141] libmachine: (pause-421146) DBG | domain pause-421146 has defined IP address 192.168.50.200 and MAC address 52:54:00:aa:c7:5d in network mk-pause-421146
	I0920 18:02:48.245138   58226 main.go:141] libmachine: (pause-421146) Calling .GetSSHPort
	I0920 18:02:48.245347   58226 main.go:141] libmachine: (pause-421146) Calling .GetSSHKeyPath
	I0920 18:02:48.245549   58226 main.go:141] libmachine: (pause-421146) Calling .GetSSHKeyPath
	I0920 18:02:48.245704   58226 main.go:141] libmachine: (pause-421146) Calling .GetSSHUsername
	I0920 18:02:48.245874   58226 main.go:141] libmachine: Using SSH client type: native
	I0920 18:02:48.246076   58226 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.200 22 <nil> <nil>}
	I0920 18:02:48.246092   58226 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 18:02:48.363113   58226 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726855368.352343455
	
	I0920 18:02:48.363141   58226 fix.go:216] guest clock: 1726855368.352343455
	I0920 18:02:48.363151   58226 fix.go:229] Guest: 2024-09-20 18:02:48.352343455 +0000 UTC Remote: 2024-09-20 18:02:48.241400578 +0000 UTC m=+6.635117866 (delta=110.942877ms)
	I0920 18:02:48.363201   58226 fix.go:200] guest clock delta is within tolerance: 110.942877ms
	I0920 18:02:48.363208   58226 start.go:83] releasing machines lock for "pause-421146", held for 6.591332653s
	I0920 18:02:48.363237   58226 main.go:141] libmachine: (pause-421146) Calling .DriverName
	I0920 18:02:48.363546   58226 main.go:141] libmachine: (pause-421146) Calling .GetIP
	I0920 18:02:48.366550   58226 main.go:141] libmachine: (pause-421146) DBG | domain pause-421146 has defined MAC address 52:54:00:aa:c7:5d in network mk-pause-421146
	I0920 18:02:48.367061   58226 main.go:141] libmachine: (pause-421146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:c7:5d", ip: ""} in network mk-pause-421146: {Iface:virbr2 ExpiryTime:2024-09-20 19:01:28 +0000 UTC Type:0 Mac:52:54:00:aa:c7:5d Iaid: IPaddr:192.168.50.200 Prefix:24 Hostname:pause-421146 Clientid:01:52:54:00:aa:c7:5d}
	I0920 18:02:48.367091   58226 main.go:141] libmachine: (pause-421146) DBG | domain pause-421146 has defined IP address 192.168.50.200 and MAC address 52:54:00:aa:c7:5d in network mk-pause-421146
	I0920 18:02:48.367330   58226 main.go:141] libmachine: (pause-421146) Calling .DriverName
	I0920 18:02:48.367908   58226 main.go:141] libmachine: (pause-421146) Calling .DriverName
	I0920 18:02:48.368126   58226 main.go:141] libmachine: (pause-421146) Calling .DriverName
	I0920 18:02:48.368242   58226 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 18:02:48.368292   58226 main.go:141] libmachine: (pause-421146) Calling .GetSSHHostname
	I0920 18:02:48.368324   58226 ssh_runner.go:195] Run: cat /version.json
	I0920 18:02:48.368346   58226 main.go:141] libmachine: (pause-421146) Calling .GetSSHHostname
	I0920 18:02:48.371822   58226 main.go:141] libmachine: (pause-421146) DBG | domain pause-421146 has defined MAC address 52:54:00:aa:c7:5d in network mk-pause-421146
	I0920 18:02:48.371957   58226 main.go:141] libmachine: (pause-421146) DBG | domain pause-421146 has defined MAC address 52:54:00:aa:c7:5d in network mk-pause-421146
	I0920 18:02:48.372271   58226 main.go:141] libmachine: (pause-421146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:c7:5d", ip: ""} in network mk-pause-421146: {Iface:virbr2 ExpiryTime:2024-09-20 19:01:28 +0000 UTC Type:0 Mac:52:54:00:aa:c7:5d Iaid: IPaddr:192.168.50.200 Prefix:24 Hostname:pause-421146 Clientid:01:52:54:00:aa:c7:5d}
	I0920 18:02:48.372299   58226 main.go:141] libmachine: (pause-421146) DBG | domain pause-421146 has defined IP address 192.168.50.200 and MAC address 52:54:00:aa:c7:5d in network mk-pause-421146
	I0920 18:02:48.372594   58226 main.go:141] libmachine: (pause-421146) Calling .GetSSHPort
	I0920 18:02:48.372644   58226 main.go:141] libmachine: (pause-421146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:c7:5d", ip: ""} in network mk-pause-421146: {Iface:virbr2 ExpiryTime:2024-09-20 19:01:28 +0000 UTC Type:0 Mac:52:54:00:aa:c7:5d Iaid: IPaddr:192.168.50.200 Prefix:24 Hostname:pause-421146 Clientid:01:52:54:00:aa:c7:5d}
	I0920 18:02:48.372673   58226 main.go:141] libmachine: (pause-421146) DBG | domain pause-421146 has defined IP address 192.168.50.200 and MAC address 52:54:00:aa:c7:5d in network mk-pause-421146
	I0920 18:02:48.372896   58226 main.go:141] libmachine: (pause-421146) Calling .GetSSHPort
	I0920 18:02:48.372905   58226 main.go:141] libmachine: (pause-421146) Calling .GetSSHKeyPath
	I0920 18:02:48.373094   58226 main.go:141] libmachine: (pause-421146) Calling .GetSSHUsername
	I0920 18:02:48.373106   58226 main.go:141] libmachine: (pause-421146) Calling .GetSSHKeyPath
	I0920 18:02:48.373261   58226 main.go:141] libmachine: (pause-421146) Calling .GetSSHUsername
	I0920 18:02:48.373397   58226 sshutil.go:53] new ssh client: &{IP:192.168.50.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/pause-421146/id_rsa Username:docker}
	I0920 18:02:48.373410   58226 sshutil.go:53] new ssh client: &{IP:192.168.50.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/pause-421146/id_rsa Username:docker}
	I0920 18:02:48.494217   58226 ssh_runner.go:195] Run: systemctl --version
	I0920 18:02:48.500970   58226 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 18:02:48.667749   58226 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 18:02:48.674391   58226 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 18:02:48.674481   58226 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 18:02:48.684404   58226 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0920 18:02:48.684436   58226 start.go:495] detecting cgroup driver to use...
	I0920 18:02:48.684505   58226 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 18:02:48.702156   58226 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 18:02:48.722635   58226 docker.go:217] disabling cri-docker service (if available) ...
	I0920 18:02:48.722706   58226 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 18:02:48.743134   58226 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 18:02:48.763588   58226 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 18:02:48.913460   58226 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 18:02:49.067391   58226 docker.go:233] disabling docker service ...
	I0920 18:02:49.067477   58226 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 18:02:49.092522   58226 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 18:02:49.108342   58226 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 18:02:49.257240   58226 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 18:02:49.419344   58226 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 18:02:49.436808   58226 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 18:02:49.462169   58226 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 18:02:49.462247   58226 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:02:49.474416   58226 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 18:02:49.474506   58226 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:02:49.486737   58226 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:02:49.502316   58226 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:02:49.518652   58226 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 18:02:49.534245   58226 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:02:49.548757   58226 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:02:49.565189   58226 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
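The sed edits above pin the pause image, switch the cgroup manager to cgroupfs, put conmon into the "pod" cgroup, and add net.ipv4.ip_unprivileged_port_start=0 to default_sysctls. A sketch for spot-checking the keys that were touched (expected values, taken from the commands above, shown as comments; the drop-in contains more settings than these):

    # Spot-check the keys modified by the sed commands above.
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.10"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",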
	I0920 18:02:49.577820   58226 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 18:02:49.588598   58226 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 18:02:49.599692   58226 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:02:49.749010   58226 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 18:02:50.020522   58226 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 18:02:50.020603   58226 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 18:02:50.027168   58226 start.go:563] Will wait 60s for crictl version
	I0920 18:02:50.027235   58226 ssh_runner.go:195] Run: which crictl
	I0920 18:02:50.033011   58226 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 18:02:50.087457   58226 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 18:02:50.087550   58226 ssh_runner.go:195] Run: crio --version
	I0920 18:02:50.208694   58226 ssh_runner.go:195] Run: crio --version
	I0920 18:02:50.582207   58226 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 18:02:50.583373   58226 main.go:141] libmachine: (pause-421146) Calling .GetIP
	I0920 18:02:50.587131   58226 main.go:141] libmachine: (pause-421146) DBG | domain pause-421146 has defined MAC address 52:54:00:aa:c7:5d in network mk-pause-421146
	I0920 18:02:50.587422   58226 main.go:141] libmachine: (pause-421146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:c7:5d", ip: ""} in network mk-pause-421146: {Iface:virbr2 ExpiryTime:2024-09-20 19:01:28 +0000 UTC Type:0 Mac:52:54:00:aa:c7:5d Iaid: IPaddr:192.168.50.200 Prefix:24 Hostname:pause-421146 Clientid:01:52:54:00:aa:c7:5d}
	I0920 18:02:50.587493   58226 main.go:141] libmachine: (pause-421146) DBG | domain pause-421146 has defined IP address 192.168.50.200 and MAC address 52:54:00:aa:c7:5d in network mk-pause-421146
	I0920 18:02:50.587832   58226 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0920 18:02:50.641142   58226 kubeadm.go:883] updating cluster {Name:pause-421146 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1
ClusterName:pause-421146 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.200 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-se
curity-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 18:02:50.641338   58226 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 18:02:50.641405   58226 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:02:50.918861   58226 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 18:02:50.918893   58226 crio.go:433] Images already preloaded, skipping extraction
	I0920 18:02:50.918951   58226 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:02:51.204010   58226 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 18:02:51.204031   58226 cache_images.go:84] Images are preloaded, skipping loading
	I0920 18:02:51.204038   58226 kubeadm.go:934] updating node { 192.168.50.200 8443 v1.31.1 crio true true} ...
	I0920 18:02:51.204133   58226 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-421146 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.200
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:pause-421146 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
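The [Unit]/[Service] fragment above is the kubelet drop-in that gets copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below. After the daemon-reload, the effective kubelet command line can be confirmed with a sketch like:

    # Sketch: show which ExecStart systemd resolved for kubelet once the drop-in is in place.
    systemctl cat kubelet | grep -A 2 ExecStart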
	I0920 18:02:51.204208   58226 ssh_runner.go:195] Run: crio config
	I0920 18:02:51.483072   58226 cni.go:84] Creating CNI manager for ""
	I0920 18:02:51.483098   58226 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 18:02:51.483110   58226 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 18:02:51.483146   58226 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.200 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-421146 NodeName:pause-421146 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.200"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.200 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kub
ernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 18:02:51.483335   58226 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.200
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-421146"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.200
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.200"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 18:02:51.483406   58226 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 18:02:51.517552   58226 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 18:02:51.517634   58226 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 18:02:51.539588   58226 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0920 18:02:51.573932   58226 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 18:02:51.603720   58226 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
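The kubeadm documents printed above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are what was just copied to /var/tmp/minikube/kubeadm.yaml.new. If needed, such a file can be sanity-checked against the bundled kubeadm binary; a hedged sketch, assuming `kubeadm config validate` (available in recent kubeadm releases) accepts the file as-is:

    # Sketch: validate the generated kubeadm config with the same kubeadm version minikube uses.
    sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new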
	I0920 18:02:51.634915   58226 ssh_runner.go:195] Run: grep 192.168.50.200	control-plane.minikube.internal$ /etc/hosts
	I0920 18:02:51.643051   58226 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:02:51.950133   58226 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:02:51.983241   58226 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/pause-421146 for IP: 192.168.50.200
	I0920 18:02:51.983263   58226 certs.go:194] generating shared ca certs ...
	I0920 18:02:51.983283   58226 certs.go:226] acquiring lock for ca certs: {Name:mkc7ef6c737c6bdc3fdd9dcff8f57029c020d8f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:02:51.983473   58226 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key
	I0920 18:02:51.983536   58226 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key
	I0920 18:02:51.983550   58226 certs.go:256] generating profile certs ...
	I0920 18:02:51.983656   58226 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/pause-421146/client.key
	I0920 18:02:51.983744   58226 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/pause-421146/apiserver.key.d69ea9c8
	I0920 18:02:51.983805   58226 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/pause-421146/proxy-client.key
	I0920 18:02:51.983954   58226 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973.pem (1338 bytes)
	W0920 18:02:51.983992   58226 certs.go:480] ignoring /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973_empty.pem, impossibly tiny 0 bytes
	I0920 18:02:51.984003   58226 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 18:02:51.984043   58226 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem (1082 bytes)
	I0920 18:02:51.984077   58226 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem (1123 bytes)
	I0920 18:02:51.984110   58226 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem (1675 bytes)
	I0920 18:02:51.984219   58226 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem (1708 bytes)
	I0920 18:02:51.985028   58226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 18:02:52.023905   58226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 18:02:52.070693   58226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 18:02:52.117448   58226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0920 18:02:52.152420   58226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/pause-421146/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0920 18:02:52.228104   58226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/pause-421146/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 18:02:52.289123   58226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/pause-421146/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 18:02:52.340679   58226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/pause-421146/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0920 18:02:52.432297   58226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem --> /usr/share/ca-certificates/159732.pem (1708 bytes)
	I0920 18:02:52.475771   58226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 18:02:52.529424   58226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973.pem --> /usr/share/ca-certificates/15973.pem (1338 bytes)
	I0920 18:02:52.565175   58226 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 18:02:52.589615   58226 ssh_runner.go:195] Run: openssl version
	I0920 18:02:52.597319   58226 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 18:02:52.612489   58226 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:02:52.619144   58226 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:02:52.619209   58226 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:02:52.625590   58226 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 18:02:52.637604   58226 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15973.pem && ln -fs /usr/share/ca-certificates/15973.pem /etc/ssl/certs/15973.pem"
	I0920 18:02:52.650067   58226 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15973.pem
	I0920 18:02:52.655149   58226 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 17:04 /usr/share/ca-certificates/15973.pem
	I0920 18:02:52.655222   58226 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15973.pem
	I0920 18:02:52.661990   58226 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15973.pem /etc/ssl/certs/51391683.0"
	I0920 18:02:52.673239   58226 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/159732.pem && ln -fs /usr/share/ca-certificates/159732.pem /etc/ssl/certs/159732.pem"
	I0920 18:02:52.689525   58226 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/159732.pem
	I0920 18:02:52.694746   58226 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 17:04 /usr/share/ca-certificates/159732.pem
	I0920 18:02:52.694816   58226 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/159732.pem
	I0920 18:02:52.702906   58226 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/159732.pem /etc/ssl/certs/3ec20f2e.0"
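The `openssl x509 -hash` / `ln -fs .../<hash>.0` pairs above follow the standard OpenSSL CA directory layout: each CA is symlinked under its subject-hash name so TLS clients can look it up by hash. Reproducing one of the hashes from the log as a sketch:

    # The symlink names above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject hashes.
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # b5213941, per the log
    ls -l /etc/ssl/certs/b5213941.0    # should point at /etc/ssl/certs/minikubeCA.pem, per the ln above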
	I0920 18:02:52.718784   58226 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 18:02:52.724656   58226 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 18:02:52.731433   58226 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 18:02:52.740334   58226 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 18:02:52.749768   58226 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 18:02:52.758768   58226 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 18:02:52.766454   58226 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
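Each openssl call above uses `-checkend 86400`, i.e. "will this certificate still be valid 24 hours from now?"; the exit status (0 = yes, non-zero = expires within the window) is presumably what the caller keys off when deciding whether to regenerate. A minimal sketch of the same check done interactively (path taken from the log):

    # -checkend N exits 0 if the certificate is still valid N seconds from now, non-zero otherwise.
    if sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
      echo "valid for at least another 24h"
    else
      echo "expires within 24h"
    fi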
	I0920 18:02:52.773783   58226 kubeadm.go:392] StartCluster: {Name:pause-421146 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:pause-421146 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.200 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-secur
ity-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:02:52.774034   58226 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 18:02:52.774092   58226 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 18:02:52.851875   58226 cri.go:89] found id: "75d3de29f781282cadcafa18080ca10c150d68ea2643c96305500091f11cd130"
	I0920 18:02:52.851901   58226 cri.go:89] found id: "e65068aabc970c4f3ded7fd5c3bbaef59385603701d9c00ed027e8385212b674"
	I0920 18:02:52.851907   58226 cri.go:89] found id: "6caff0d1adf3a97c67a13c3e8be7618e77ea79c31f750fff59228a7be609fd4d"
	I0920 18:02:52.851912   58226 cri.go:89] found id: "acff960df015881f08db968c9ce85cea196679f2ba2dd86c8c0589365b50502c"
	I0920 18:02:52.851916   58226 cri.go:89] found id: "998f1cab8849745ac4a858a8df2532e7b3bfc179f529b1172684f92fee861d0d"
	I0920 18:02:52.851921   58226 cri.go:89] found id: "81a97c79e3ef43249f4d0285077861826dcf108662f81b26a4e9ea112ebc5f45"
	I0920 18:02:52.851925   58226 cri.go:89] found id: "8f06c6c4d665531dbf0c0d8e2b4477662bd5b99e2ccd0f4db3ac78f78989c26d"
	I0920 18:02:52.851931   58226 cri.go:89] found id: "649042c723b9db769c29dc7735cc090423ca44ef0dd4553705afa66820f21574"
	I0920 18:02:52.851935   58226 cri.go:89] found id: "94163c85f0797bbaa56ddd8fc3acb0c3cd391e24271045f5ae259c7c4c0babf1"
	I0920 18:02:52.851944   58226 cri.go:89] found id: "163b3f69cfd1ee643b766df4f2e5038143ecf7af02628eb719a9f18368721dbc"
	I0920 18:02:52.851951   58226 cri.go:89] found id: "70e2c0d4123ff483cfaa8d13e195a04d53dba0a0fdc7281f30e38dc720038f72"
	I0920 18:02:52.851956   58226 cri.go:89] found id: "0fc32a3094b4e1736505ba6b9eb2d1120b6b4c38afed1b6734045a58d56ba358"
	I0920 18:02:52.851961   58226 cri.go:89] found id: ""
	I0920 18:02:52.852012   58226 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-421146 -n pause-421146
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-421146 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-421146 logs -n 25: (1.618267121s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | force-systemd-flag-956160 ssh cat     | force-systemd-flag-956160 | jenkins | v1.34.0 | 20 Sep 24 17:59 UTC | 20 Sep 24 17:59 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-956160          | force-systemd-flag-956160 | jenkins | v1.34.0 | 20 Sep 24 17:59 UTC | 20 Sep 24 17:59 UTC |
	| start   | -p cert-expiration-452691             | cert-expiration-452691    | jenkins | v1.34.0 | 20 Sep 24 17:59 UTC | 20 Sep 24 18:00 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-299391 stop           | minikube                  | jenkins | v1.26.0 | 20 Sep 24 17:59 UTC | 20 Sep 24 17:59 UTC |
	| start   | -p stopped-upgrade-299391             | stopped-upgrade-299391    | jenkins | v1.34.0 | 20 Sep 24 17:59 UTC | 20 Sep 24 18:00 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-030548           | force-systemd-env-030548  | jenkins | v1.34.0 | 20 Sep 24 17:59 UTC | 20 Sep 24 17:59 UTC |
	| start   | -p cert-options-815898                | cert-options-815898       | jenkins | v1.34.0 | 20 Sep 24 17:59 UTC | 20 Sep 24 18:01 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-299391             | stopped-upgrade-299391    | jenkins | v1.34.0 | 20 Sep 24 18:00 UTC | 20 Sep 24 18:00 UTC |
	| start   | -p running-upgrade-267014             | minikube                  | jenkins | v1.26.0 | 20 Sep 24 18:00 UTC | 20 Sep 24 18:01 UTC |
	|         | --memory=2200 --vm-driver=kvm2        |                           |         |         |                     |                     |
	|         |  --container-runtime=crio             |                           |         |         |                     |                     |
	| ssh     | cert-options-815898 ssh               | cert-options-815898       | jenkins | v1.34.0 | 20 Sep 24 18:01 UTC | 20 Sep 24 18:01 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-815898 -- sudo        | cert-options-815898       | jenkins | v1.34.0 | 20 Sep 24 18:01 UTC | 20 Sep 24 18:01 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-815898                | cert-options-815898       | jenkins | v1.34.0 | 20 Sep 24 18:01 UTC | 20 Sep 24 18:01 UTC |
	| start   | -p pause-421146 --memory=2048         | pause-421146              | jenkins | v1.34.0 | 20 Sep 24 18:01 UTC | 20 Sep 24 18:02 UTC |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2              |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p running-upgrade-267014             | running-upgrade-267014    | jenkins | v1.34.0 | 20 Sep 24 18:01 UTC | 20 Sep 24 18:02 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-299508          | kubernetes-upgrade-299508 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	| start   | -p kubernetes-upgrade-299508          | kubernetes-upgrade-299508 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p pause-421146                       | pause-421146              | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:03 UTC |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-299508          | kubernetes-upgrade-299508 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-299508          | kubernetes-upgrade-299508 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:03 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-267014             | running-upgrade-267014    | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	| start   | -p NoKubernetes-246858                | NoKubernetes-246858       | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC |                     |
	|         | --no-kubernetes                       |                           |         |         |                     |                     |
	|         | --kubernetes-version=1.20             |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-246858                | NoKubernetes-246858       | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p cert-expiration-452691             | cert-expiration-452691    | jenkins | v1.34.0 | 20 Sep 24 18:03 UTC |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-299508          | kubernetes-upgrade-299508 | jenkins | v1.34.0 | 20 Sep 24 18:03 UTC | 20 Sep 24 18:03 UTC |
	| start   | -p auto-833505 --memory=3072          | auto-833505               | jenkins | v1.34.0 | 20 Sep 24 18:03 UTC |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 18:03:22
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 18:03:22.450914   59003 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:03:22.451201   59003 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:03:22.451212   59003 out.go:358] Setting ErrFile to fd 2...
	I0920 18:03:22.451216   59003 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:03:22.451456   59003 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-8777/.minikube/bin
	I0920 18:03:22.452130   59003 out.go:352] Setting JSON to false
	I0920 18:03:22.453116   59003 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6345,"bootTime":1726849057,"procs":225,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 18:03:22.453222   59003 start.go:139] virtualization: kvm guest
	I0920 18:03:22.455588   59003 out.go:177] * [auto-833505] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 18:03:22.457292   59003 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 18:03:22.457342   59003 notify.go:220] Checking for updates...
	I0920 18:03:22.460798   59003 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 18:03:22.462605   59003 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19672-8777/kubeconfig
	I0920 18:03:22.464220   59003 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-8777/.minikube
	I0920 18:03:22.465744   59003 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 18:03:22.467309   59003 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 18:03:22.469537   59003 config.go:182] Loaded profile config "NoKubernetes-246858": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:03:22.469705   59003 config.go:182] Loaded profile config "cert-expiration-452691": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:03:22.469944   59003 config.go:182] Loaded profile config "pause-421146": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:03:22.470067   59003 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 18:03:22.508402   59003 out.go:177] * Using the kvm2 driver based on user configuration
	I0920 18:03:22.509926   59003 start.go:297] selected driver: kvm2
	I0920 18:03:22.509945   59003 start.go:901] validating driver "kvm2" against <nil>
	I0920 18:03:22.509957   59003 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 18:03:22.510873   59003 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 18:03:22.510960   59003 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19672-8777/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 18:03:22.527072   59003 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 18:03:22.527148   59003 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 18:03:22.527403   59003 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 18:03:22.527433   59003 cni.go:84] Creating CNI manager for ""
	I0920 18:03:22.527475   59003 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 18:03:22.527484   59003 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0920 18:03:22.527540   59003 start.go:340] cluster config:
	{Name:auto-833505 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-833505 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:03:22.527632   59003 iso.go:125] acquiring lock: {Name:mkba95ef0488e46f622333e9f317f43def93040b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 18:03:22.529625   59003 out.go:177] * Starting "auto-833505" primary control-plane node in "auto-833505" cluster
	I0920 18:03:18.478888   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | domain NoKubernetes-246858 has defined MAC address 52:54:00:71:4a:f8 in network mk-NoKubernetes-246858
	I0920 18:03:18.479454   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | unable to find current IP address of domain NoKubernetes-246858 in network mk-NoKubernetes-246858
	I0920 18:03:18.479477   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | I0920 18:03:18.479406   58551 retry.go:31] will retry after 4.255494758s: waiting for machine to come up
	I0920 18:03:22.737279   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | domain NoKubernetes-246858 has defined MAC address 52:54:00:71:4a:f8 in network mk-NoKubernetes-246858
	I0920 18:03:22.738031   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | unable to find current IP address of domain NoKubernetes-246858 in network mk-NoKubernetes-246858
	I0920 18:03:22.738041   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | I0920 18:03:22.737987   58551 retry.go:31] will retry after 4.164560114s: waiting for machine to come up
	I0920 18:03:20.647601   58846 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 18:03:20.647643   58846 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0920 18:03:20.647649   58846 cache.go:56] Caching tarball of preloaded images
	I0920 18:03:20.647726   58846 preload.go:172] Found /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 18:03:20.647734   58846 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0920 18:03:20.647855   58846 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/cert-expiration-452691/config.json ...
	I0920 18:03:20.648103   58846 start.go:360] acquireMachinesLock for cert-expiration-452691: {Name:mkfeedb385cf08b5d2aa00913e85815d02a180c2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 18:03:22.692648   58226 pod_ready.go:103] pod "coredns-7c65d6cfc9-kzpzk" in "kube-system" namespace has status "Ready":"False"
	I0920 18:03:25.193160   58226 pod_ready.go:103] pod "coredns-7c65d6cfc9-kzpzk" in "kube-system" namespace has status "Ready":"False"
	I0920 18:03:22.531007   59003 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 18:03:22.531077   59003 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0920 18:03:22.531089   59003 cache.go:56] Caching tarball of preloaded images
	I0920 18:03:22.531218   59003 preload.go:172] Found /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 18:03:22.531234   59003 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0920 18:03:22.531344   59003 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/auto-833505/config.json ...
	I0920 18:03:22.531364   59003 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/auto-833505/config.json: {Name:mkff9238b5c083de46a1bc752a696dc0589463c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:03:22.531560   59003 start.go:360] acquireMachinesLock for auto-833505: {Name:mkfeedb385cf08b5d2aa00913e85815d02a180c2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 18:03:26.907388   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | domain NoKubernetes-246858 has defined MAC address 52:54:00:71:4a:f8 in network mk-NoKubernetes-246858
	I0920 18:03:26.908055   58507 main.go:141] libmachine: (NoKubernetes-246858) Found IP for machine: 192.168.72.119
	I0920 18:03:26.908068   58507 main.go:141] libmachine: (NoKubernetes-246858) Reserving static IP address...
	I0920 18:03:26.908080   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | domain NoKubernetes-246858 has current primary IP address 192.168.72.119 and MAC address 52:54:00:71:4a:f8 in network mk-NoKubernetes-246858
	I0920 18:03:26.908404   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | unable to find host DHCP lease matching {name: "NoKubernetes-246858", mac: "52:54:00:71:4a:f8", ip: "192.168.72.119"} in network mk-NoKubernetes-246858
	I0920 18:03:26.995579   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | Getting to WaitForSSH function...
	I0920 18:03:26.995603   58507 main.go:141] libmachine: (NoKubernetes-246858) Reserved static IP address: 192.168.72.119
	I0920 18:03:26.995641   58507 main.go:141] libmachine: (NoKubernetes-246858) Waiting for SSH to be available...
	I0920 18:03:26.997980   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | domain NoKubernetes-246858 has defined MAC address 52:54:00:71:4a:f8 in network mk-NoKubernetes-246858
	I0920 18:03:26.998332   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:4a:f8", ip: ""} in network mk-NoKubernetes-246858: {Iface:virbr4 ExpiryTime:2024-09-20 19:03:17 +0000 UTC Type:0 Mac:52:54:00:71:4a:f8 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:minikube Clientid:01:52:54:00:71:4a:f8}
	I0920 18:03:26.998349   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | domain NoKubernetes-246858 has defined IP address 192.168.72.119 and MAC address 52:54:00:71:4a:f8 in network mk-NoKubernetes-246858
	I0920 18:03:26.998473   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | Using SSH client type: external
	I0920 18:03:26.998495   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | Using SSH private key: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/NoKubernetes-246858/id_rsa (-rw-------)
	I0920 18:03:26.998582   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.119 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19672-8777/.minikube/machines/NoKubernetes-246858/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 18:03:26.998597   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | About to run SSH command:
	I0920 18:03:26.998610   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | exit 0
	I0920 18:03:27.125807   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | SSH cmd err, output: <nil>: 
	I0920 18:03:27.126122   58507 main.go:141] libmachine: (NoKubernetes-246858) KVM machine creation complete!
	I0920 18:03:27.126367   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetConfigRaw
	I0920 18:03:27.127008   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .DriverName
	I0920 18:03:27.127223   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .DriverName
	I0920 18:03:27.127396   58507 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0920 18:03:27.127404   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetState
	I0920 18:03:27.128998   58507 main.go:141] libmachine: Detecting operating system of created instance...
	I0920 18:03:27.129014   58507 main.go:141] libmachine: Waiting for SSH to be available...
	I0920 18:03:27.129020   58507 main.go:141] libmachine: Getting to WaitForSSH function...
	I0920 18:03:27.129027   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetSSHHostname
	I0920 18:03:27.131934   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | domain NoKubernetes-246858 has defined MAC address 52:54:00:71:4a:f8 in network mk-NoKubernetes-246858
	I0920 18:03:27.132352   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:4a:f8", ip: ""} in network mk-NoKubernetes-246858: {Iface:virbr4 ExpiryTime:2024-09-20 19:03:17 +0000 UTC Type:0 Mac:52:54:00:71:4a:f8 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:NoKubernetes-246858 Clientid:01:52:54:00:71:4a:f8}
	I0920 18:03:27.132373   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | domain NoKubernetes-246858 has defined IP address 192.168.72.119 and MAC address 52:54:00:71:4a:f8 in network mk-NoKubernetes-246858
	I0920 18:03:27.132505   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetSSHPort
	I0920 18:03:27.132643   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetSSHKeyPath
	I0920 18:03:27.132842   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetSSHKeyPath
	I0920 18:03:27.133005   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetSSHUsername
	I0920 18:03:27.133194   58507 main.go:141] libmachine: Using SSH client type: native
	I0920 18:03:27.133382   58507 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.119 22 <nil> <nil>}
	I0920 18:03:27.133387   58507 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0920 18:03:27.245390   58507 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 18:03:27.245404   58507 main.go:141] libmachine: Detecting the provisioner...
	I0920 18:03:27.245447   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetSSHHostname
	I0920 18:03:27.248407   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | domain NoKubernetes-246858 has defined MAC address 52:54:00:71:4a:f8 in network mk-NoKubernetes-246858
	I0920 18:03:27.248880   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:4a:f8", ip: ""} in network mk-NoKubernetes-246858: {Iface:virbr4 ExpiryTime:2024-09-20 19:03:17 +0000 UTC Type:0 Mac:52:54:00:71:4a:f8 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:NoKubernetes-246858 Clientid:01:52:54:00:71:4a:f8}
	I0920 18:03:27.248907   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | domain NoKubernetes-246858 has defined IP address 192.168.72.119 and MAC address 52:54:00:71:4a:f8 in network mk-NoKubernetes-246858
	I0920 18:03:27.249060   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetSSHPort
	I0920 18:03:27.249269   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetSSHKeyPath
	I0920 18:03:27.249424   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetSSHKeyPath
	I0920 18:03:27.249566   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetSSHUsername
	I0920 18:03:27.249714   58507 main.go:141] libmachine: Using SSH client type: native
	I0920 18:03:27.249911   58507 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.119 22 <nil> <nil>}
	I0920 18:03:27.249916   58507 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0920 18:03:27.362973   58507 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0920 18:03:27.363049   58507 main.go:141] libmachine: found compatible host: buildroot
	I0920 18:03:27.363057   58507 main.go:141] libmachine: Provisioning with buildroot...
	I0920 18:03:27.363063   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetMachineName
	I0920 18:03:27.363345   58507 buildroot.go:166] provisioning hostname "NoKubernetes-246858"
	I0920 18:03:27.363365   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetMachineName
	I0920 18:03:27.363582   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetSSHHostname
	I0920 18:03:27.366286   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | domain NoKubernetes-246858 has defined MAC address 52:54:00:71:4a:f8 in network mk-NoKubernetes-246858
	I0920 18:03:27.366665   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:4a:f8", ip: ""} in network mk-NoKubernetes-246858: {Iface:virbr4 ExpiryTime:2024-09-20 19:03:17 +0000 UTC Type:0 Mac:52:54:00:71:4a:f8 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:NoKubernetes-246858 Clientid:01:52:54:00:71:4a:f8}
	I0920 18:03:27.366681   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | domain NoKubernetes-246858 has defined IP address 192.168.72.119 and MAC address 52:54:00:71:4a:f8 in network mk-NoKubernetes-246858
	I0920 18:03:27.366808   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetSSHPort
	I0920 18:03:27.366989   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetSSHKeyPath
	I0920 18:03:27.367131   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetSSHKeyPath
	I0920 18:03:27.367271   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetSSHUsername
	I0920 18:03:27.367431   58507 main.go:141] libmachine: Using SSH client type: native
	I0920 18:03:27.367612   58507 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.119 22 <nil> <nil>}
	I0920 18:03:27.367619   58507 main.go:141] libmachine: About to run SSH command:
	sudo hostname NoKubernetes-246858 && echo "NoKubernetes-246858" | sudo tee /etc/hostname
	I0920 18:03:27.496101   58507 main.go:141] libmachine: SSH cmd err, output: <nil>: NoKubernetes-246858
	
	I0920 18:03:27.496124   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetSSHHostname
	I0920 18:03:27.498852   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | domain NoKubernetes-246858 has defined MAC address 52:54:00:71:4a:f8 in network mk-NoKubernetes-246858
	I0920 18:03:27.499260   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:4a:f8", ip: ""} in network mk-NoKubernetes-246858: {Iface:virbr4 ExpiryTime:2024-09-20 19:03:17 +0000 UTC Type:0 Mac:52:54:00:71:4a:f8 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:NoKubernetes-246858 Clientid:01:52:54:00:71:4a:f8}
	I0920 18:03:27.499298   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | domain NoKubernetes-246858 has defined IP address 192.168.72.119 and MAC address 52:54:00:71:4a:f8 in network mk-NoKubernetes-246858
	I0920 18:03:27.499648   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetSSHPort
	I0920 18:03:27.499842   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetSSHKeyPath
	I0920 18:03:27.499979   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetSSHKeyPath
	I0920 18:03:27.500171   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetSSHUsername
	I0920 18:03:27.500331   58507 main.go:141] libmachine: Using SSH client type: native
	I0920 18:03:27.500496   58507 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.119 22 <nil> <nil>}
	I0920 18:03:27.500505   58507 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sNoKubernetes-246858' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 NoKubernetes-246858/g' /etc/hosts;
				else 
					echo '127.0.1.1 NoKubernetes-246858' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 18:03:27.618885   58507 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 18:03:27.618902   58507 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19672-8777/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-8777/.minikube}
	I0920 18:03:27.618962   58507 buildroot.go:174] setting up certificates
	I0920 18:03:27.618969   58507 provision.go:84] configureAuth start
	I0920 18:03:27.618978   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetMachineName
	I0920 18:03:27.619319   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetIP
	I0920 18:03:27.621826   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | domain NoKubernetes-246858 has defined MAC address 52:54:00:71:4a:f8 in network mk-NoKubernetes-246858
	I0920 18:03:27.622156   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:4a:f8", ip: ""} in network mk-NoKubernetes-246858: {Iface:virbr4 ExpiryTime:2024-09-20 19:03:17 +0000 UTC Type:0 Mac:52:54:00:71:4a:f8 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:NoKubernetes-246858 Clientid:01:52:54:00:71:4a:f8}
	I0920 18:03:27.622177   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | domain NoKubernetes-246858 has defined IP address 192.168.72.119 and MAC address 52:54:00:71:4a:f8 in network mk-NoKubernetes-246858
	I0920 18:03:27.622315   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetSSHHostname
	I0920 18:03:27.624649   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | domain NoKubernetes-246858 has defined MAC address 52:54:00:71:4a:f8 in network mk-NoKubernetes-246858
	I0920 18:03:27.625042   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:4a:f8", ip: ""} in network mk-NoKubernetes-246858: {Iface:virbr4 ExpiryTime:2024-09-20 19:03:17 +0000 UTC Type:0 Mac:52:54:00:71:4a:f8 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:NoKubernetes-246858 Clientid:01:52:54:00:71:4a:f8}
	I0920 18:03:27.625051   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | domain NoKubernetes-246858 has defined IP address 192.168.72.119 and MAC address 52:54:00:71:4a:f8 in network mk-NoKubernetes-246858
	I0920 18:03:27.625248   58507 provision.go:143] copyHostCerts
	I0920 18:03:27.625304   58507 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem, removing ...
	I0920 18:03:27.625321   58507 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem
	I0920 18:03:27.625398   58507 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem (1082 bytes)
	I0920 18:03:27.625551   58507 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem, removing ...
	I0920 18:03:27.625562   58507 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem
	I0920 18:03:27.625605   58507 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem (1123 bytes)
	I0920 18:03:27.625699   58507 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem, removing ...
	I0920 18:03:27.625703   58507 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem
	I0920 18:03:27.625732   58507 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem (1675 bytes)
	I0920 18:03:27.625791   58507 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem org=jenkins.NoKubernetes-246858 san=[127.0.0.1 192.168.72.119 NoKubernetes-246858 localhost minikube]
	I0920 18:03:27.711762   58507 provision.go:177] copyRemoteCerts
	I0920 18:03:27.711821   58507 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 18:03:27.711845   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetSSHHostname
	I0920 18:03:27.714570   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | domain NoKubernetes-246858 has defined MAC address 52:54:00:71:4a:f8 in network mk-NoKubernetes-246858
	I0920 18:03:27.714971   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:4a:f8", ip: ""} in network mk-NoKubernetes-246858: {Iface:virbr4 ExpiryTime:2024-09-20 19:03:17 +0000 UTC Type:0 Mac:52:54:00:71:4a:f8 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:NoKubernetes-246858 Clientid:01:52:54:00:71:4a:f8}
	I0920 18:03:27.714995   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | domain NoKubernetes-246858 has defined IP address 192.168.72.119 and MAC address 52:54:00:71:4a:f8 in network mk-NoKubernetes-246858
	I0920 18:03:27.715155   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetSSHPort
	I0920 18:03:27.715361   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetSSHKeyPath
	I0920 18:03:27.715502   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetSSHUsername
	I0920 18:03:27.715614   58507 sshutil.go:53] new ssh client: &{IP:192.168.72.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/NoKubernetes-246858/id_rsa Username:docker}
	I0920 18:03:27.801169   58507 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0920 18:03:27.825744   58507 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0920 18:03:27.852721   58507 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 18:03:27.879448   58507 provision.go:87] duration metric: took 260.465719ms to configureAuth
	I0920 18:03:27.879471   58507 buildroot.go:189] setting minikube options for container-runtime
	I0920 18:03:27.879648   58507 config.go:182] Loaded profile config "NoKubernetes-246858": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:03:27.879709   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetSSHHostname
	I0920 18:03:27.882736   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | domain NoKubernetes-246858 has defined MAC address 52:54:00:71:4a:f8 in network mk-NoKubernetes-246858
	I0920 18:03:27.883068   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:4a:f8", ip: ""} in network mk-NoKubernetes-246858: {Iface:virbr4 ExpiryTime:2024-09-20 19:03:17 +0000 UTC Type:0 Mac:52:54:00:71:4a:f8 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:NoKubernetes-246858 Clientid:01:52:54:00:71:4a:f8}
	I0920 18:03:27.883090   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | domain NoKubernetes-246858 has defined IP address 192.168.72.119 and MAC address 52:54:00:71:4a:f8 in network mk-NoKubernetes-246858
	I0920 18:03:27.883320   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetSSHPort
	I0920 18:03:27.883505   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetSSHKeyPath
	I0920 18:03:27.883650   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetSSHKeyPath
	I0920 18:03:27.883826   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetSSHUsername
	I0920 18:03:27.883956   58507 main.go:141] libmachine: Using SSH client type: native
	I0920 18:03:27.884112   58507 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.119 22 <nil> <nil>}
	I0920 18:03:27.884128   58507 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 18:03:28.116432   58507 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 18:03:28.116445   58507 main.go:141] libmachine: Checking connection to Docker...
	I0920 18:03:28.116451   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetURL
	I0920 18:03:28.117794   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | Using libvirt version 6000000
	I0920 18:03:28.120302   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | domain NoKubernetes-246858 has defined MAC address 52:54:00:71:4a:f8 in network mk-NoKubernetes-246858
	I0920 18:03:28.120721   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:4a:f8", ip: ""} in network mk-NoKubernetes-246858: {Iface:virbr4 ExpiryTime:2024-09-20 19:03:17 +0000 UTC Type:0 Mac:52:54:00:71:4a:f8 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:NoKubernetes-246858 Clientid:01:52:54:00:71:4a:f8}
	I0920 18:03:28.120750   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | domain NoKubernetes-246858 has defined IP address 192.168.72.119 and MAC address 52:54:00:71:4a:f8 in network mk-NoKubernetes-246858
	I0920 18:03:28.120886   58507 main.go:141] libmachine: Docker is up and running!
	I0920 18:03:28.120894   58507 main.go:141] libmachine: Reticulating splines...
	I0920 18:03:28.120899   58507 client.go:171] duration metric: took 25.979136446s to LocalClient.Create
	I0920 18:03:28.120919   58507 start.go:167] duration metric: took 25.979197926s to libmachine.API.Create "NoKubernetes-246858"
	I0920 18:03:28.120925   58507 start.go:293] postStartSetup for "NoKubernetes-246858" (driver="kvm2")
	I0920 18:03:28.120935   58507 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 18:03:28.120966   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .DriverName
	I0920 18:03:28.121236   58507 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 18:03:28.121257   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetSSHHostname
	I0920 18:03:28.123933   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | domain NoKubernetes-246858 has defined MAC address 52:54:00:71:4a:f8 in network mk-NoKubernetes-246858
	I0920 18:03:28.124359   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:4a:f8", ip: ""} in network mk-NoKubernetes-246858: {Iface:virbr4 ExpiryTime:2024-09-20 19:03:17 +0000 UTC Type:0 Mac:52:54:00:71:4a:f8 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:NoKubernetes-246858 Clientid:01:52:54:00:71:4a:f8}
	I0920 18:03:28.124392   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | domain NoKubernetes-246858 has defined IP address 192.168.72.119 and MAC address 52:54:00:71:4a:f8 in network mk-NoKubernetes-246858
	I0920 18:03:28.124581   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetSSHPort
	I0920 18:03:28.124757   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetSSHKeyPath
	I0920 18:03:28.124944   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetSSHUsername
	I0920 18:03:28.125083   58507 sshutil.go:53] new ssh client: &{IP:192.168.72.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/NoKubernetes-246858/id_rsa Username:docker}
	I0920 18:03:28.213203   58507 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 18:03:28.217617   58507 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 18:03:28.217630   58507 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8777/.minikube/addons for local assets ...
	I0920 18:03:28.217697   58507 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8777/.minikube/files for local assets ...
	I0920 18:03:28.217762   58507 filesync.go:149] local asset: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem -> 159732.pem in /etc/ssl/certs
	I0920 18:03:28.217857   58507 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 18:03:28.370975   58846 start.go:364] duration metric: took 7.722827835s to acquireMachinesLock for "cert-expiration-452691"
	I0920 18:03:28.371021   58846 start.go:96] Skipping create...Using existing machine configuration
	I0920 18:03:28.371027   58846 fix.go:54] fixHost starting: 
	I0920 18:03:28.371428   58846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:03:28.371494   58846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:03:28.389804   58846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37835
	I0920 18:03:28.390250   58846 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:03:28.390822   58846 main.go:141] libmachine: Using API Version  1
	I0920 18:03:28.390842   58846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:03:28.391159   58846 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:03:28.391418   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .DriverName
	I0920 18:03:28.391556   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .GetState
	I0920 18:03:28.393562   58846 fix.go:112] recreateIfNeeded on cert-expiration-452691: state=Running err=<nil>
	W0920 18:03:28.393578   58846 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 18:03:28.396501   58846 out.go:177] * Updating the running kvm2 "cert-expiration-452691" VM ...
	I0920 18:03:28.227687   58507 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem --> /etc/ssl/certs/159732.pem (1708 bytes)
	I0920 18:03:28.251550   58507 start.go:296] duration metric: took 130.612071ms for postStartSetup
	I0920 18:03:28.251603   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetConfigRaw
	I0920 18:03:28.252235   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetIP
	I0920 18:03:28.255155   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | domain NoKubernetes-246858 has defined MAC address 52:54:00:71:4a:f8 in network mk-NoKubernetes-246858
	I0920 18:03:28.255641   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:4a:f8", ip: ""} in network mk-NoKubernetes-246858: {Iface:virbr4 ExpiryTime:2024-09-20 19:03:17 +0000 UTC Type:0 Mac:52:54:00:71:4a:f8 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:NoKubernetes-246858 Clientid:01:52:54:00:71:4a:f8}
	I0920 18:03:28.255660   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | domain NoKubernetes-246858 has defined IP address 192.168.72.119 and MAC address 52:54:00:71:4a:f8 in network mk-NoKubernetes-246858
	I0920 18:03:28.255998   58507 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/NoKubernetes-246858/config.json ...
	I0920 18:03:28.256254   58507 start.go:128] duration metric: took 26.137020322s to createHost
	I0920 18:03:28.256293   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetSSHHostname
	I0920 18:03:28.258648   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | domain NoKubernetes-246858 has defined MAC address 52:54:00:71:4a:f8 in network mk-NoKubernetes-246858
	I0920 18:03:28.259090   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:4a:f8", ip: ""} in network mk-NoKubernetes-246858: {Iface:virbr4 ExpiryTime:2024-09-20 19:03:17 +0000 UTC Type:0 Mac:52:54:00:71:4a:f8 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:NoKubernetes-246858 Clientid:01:52:54:00:71:4a:f8}
	I0920 18:03:28.259101   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | domain NoKubernetes-246858 has defined IP address 192.168.72.119 and MAC address 52:54:00:71:4a:f8 in network mk-NoKubernetes-246858
	I0920 18:03:28.259219   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetSSHPort
	I0920 18:03:28.259416   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetSSHKeyPath
	I0920 18:03:28.259573   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetSSHKeyPath
	I0920 18:03:28.259745   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetSSHUsername
	I0920 18:03:28.259908   58507 main.go:141] libmachine: Using SSH client type: native
	I0920 18:03:28.260067   58507 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.119 22 <nil> <nil>}
	I0920 18:03:28.260071   58507 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 18:03:28.370808   58507 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726855408.329383665
	
	I0920 18:03:28.370820   58507 fix.go:216] guest clock: 1726855408.329383665
	I0920 18:03:28.370826   58507 fix.go:229] Guest: 2024-09-20 18:03:28.329383665 +0000 UTC Remote: 2024-09-20 18:03:28.256278374 +0000 UTC m=+35.079973559 (delta=73.105291ms)
	I0920 18:03:28.370844   58507 fix.go:200] guest clock delta is within tolerance: 73.105291ms
	I0920 18:03:28.370853   58507 start.go:83] releasing machines lock for "NoKubernetes-246858", held for 26.251846012s
	I0920 18:03:28.370890   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .DriverName
	I0920 18:03:28.371141   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetIP
	I0920 18:03:28.374684   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | domain NoKubernetes-246858 has defined MAC address 52:54:00:71:4a:f8 in network mk-NoKubernetes-246858
	I0920 18:03:28.375082   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:4a:f8", ip: ""} in network mk-NoKubernetes-246858: {Iface:virbr4 ExpiryTime:2024-09-20 19:03:17 +0000 UTC Type:0 Mac:52:54:00:71:4a:f8 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:NoKubernetes-246858 Clientid:01:52:54:00:71:4a:f8}
	I0920 18:03:28.375119   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | domain NoKubernetes-246858 has defined IP address 192.168.72.119 and MAC address 52:54:00:71:4a:f8 in network mk-NoKubernetes-246858
	I0920 18:03:28.375350   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .DriverName
	I0920 18:03:28.375935   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .DriverName
	I0920 18:03:28.376094   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .DriverName
	I0920 18:03:28.376200   58507 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 18:03:28.376234   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetSSHHostname
	I0920 18:03:28.376278   58507 ssh_runner.go:195] Run: cat /version.json
	I0920 18:03:28.376296   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetSSHHostname
	I0920 18:03:28.379291   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | domain NoKubernetes-246858 has defined MAC address 52:54:00:71:4a:f8 in network mk-NoKubernetes-246858
	I0920 18:03:28.379650   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:4a:f8", ip: ""} in network mk-NoKubernetes-246858: {Iface:virbr4 ExpiryTime:2024-09-20 19:03:17 +0000 UTC Type:0 Mac:52:54:00:71:4a:f8 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:NoKubernetes-246858 Clientid:01:52:54:00:71:4a:f8}
	I0920 18:03:28.379689   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | domain NoKubernetes-246858 has defined IP address 192.168.72.119 and MAC address 52:54:00:71:4a:f8 in network mk-NoKubernetes-246858
	I0920 18:03:28.379710   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | domain NoKubernetes-246858 has defined MAC address 52:54:00:71:4a:f8 in network mk-NoKubernetes-246858
	I0920 18:03:28.380006   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetSSHPort
	I0920 18:03:28.380179   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetSSHKeyPath
	I0920 18:03:28.380207   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:4a:f8", ip: ""} in network mk-NoKubernetes-246858: {Iface:virbr4 ExpiryTime:2024-09-20 19:03:17 +0000 UTC Type:0 Mac:52:54:00:71:4a:f8 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:NoKubernetes-246858 Clientid:01:52:54:00:71:4a:f8}
	I0920 18:03:28.380223   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | domain NoKubernetes-246858 has defined IP address 192.168.72.119 and MAC address 52:54:00:71:4a:f8 in network mk-NoKubernetes-246858
	I0920 18:03:28.380319   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetSSHUsername
	I0920 18:03:28.380403   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetSSHPort
	I0920 18:03:28.380446   58507 sshutil.go:53] new ssh client: &{IP:192.168.72.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/NoKubernetes-246858/id_rsa Username:docker}
	I0920 18:03:28.380512   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetSSHKeyPath
	I0920 18:03:28.380728   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetSSHUsername
	I0920 18:03:28.380897   58507 sshutil.go:53] new ssh client: &{IP:192.168.72.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/NoKubernetes-246858/id_rsa Username:docker}
	I0920 18:03:28.467306   58507 ssh_runner.go:195] Run: systemctl --version
	I0920 18:03:28.508152   58507 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 18:03:28.678757   58507 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 18:03:28.684863   58507 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 18:03:28.684935   58507 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 18:03:28.700567   58507 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 18:03:28.700580   58507 start.go:495] detecting cgroup driver to use...
	I0920 18:03:28.700652   58507 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 18:03:28.718368   58507 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 18:03:28.733560   58507 docker.go:217] disabling cri-docker service (if available) ...
	I0920 18:03:28.733608   58507 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 18:03:28.748692   58507 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 18:03:28.764821   58507 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 18:03:28.905166   58507 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 18:03:29.072381   58507 docker.go:233] disabling docker service ...
	I0920 18:03:29.072456   58507 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 18:03:29.091906   58507 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 18:03:29.106945   58507 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 18:03:29.256315   58507 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 18:03:29.392872   58507 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 18:03:29.408445   58507 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 18:03:29.429358   58507 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 18:03:29.429422   58507 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:03:29.440130   58507 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 18:03:29.440184   58507 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:03:29.450555   58507 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:03:29.461754   58507 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:03:29.472631   58507 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 18:03:29.483382   58507 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:03:29.493672   58507 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:03:29.510410   58507 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:03:29.520664   58507 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 18:03:29.529795   58507 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 18:03:29.529850   58507 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 18:03:29.541852   58507 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 18:03:29.551114   58507 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:03:29.669021   58507 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 18:03:29.760246   58507 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 18:03:29.760310   58507 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 18:03:29.764834   58507 start.go:563] Will wait 60s for crictl version
	I0920 18:03:29.764879   58507 ssh_runner.go:195] Run: which crictl
	I0920 18:03:29.768782   58507 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 18:03:29.808681   58507 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 18:03:29.808770   58507 ssh_runner.go:195] Run: crio --version
	I0920 18:03:29.837524   58507 ssh_runner.go:195] Run: crio --version
	I0920 18:03:29.868669   58507 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 18:03:28.398711   58846 machine.go:93] provisionDockerMachine start ...
	I0920 18:03:28.398736   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .DriverName
	I0920 18:03:28.398990   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .GetSSHHostname
	I0920 18:03:28.402473   58846 main.go:141] libmachine: (cert-expiration-452691) DBG | domain cert-expiration-452691 has defined MAC address 52:54:00:e4:61:4e in network mk-cert-expiration-452691
	I0920 18:03:28.402970   58846 main.go:141] libmachine: (cert-expiration-452691) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:61:4e", ip: ""} in network mk-cert-expiration-452691: {Iface:virbr3 ExpiryTime:2024-09-20 18:59:52 +0000 UTC Type:0 Mac:52:54:00:e4:61:4e Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:cert-expiration-452691 Clientid:01:52:54:00:e4:61:4e}
	I0920 18:03:28.402992   58846 main.go:141] libmachine: (cert-expiration-452691) DBG | domain cert-expiration-452691 has defined IP address 192.168.61.65 and MAC address 52:54:00:e4:61:4e in network mk-cert-expiration-452691
	I0920 18:03:28.403183   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .GetSSHPort
	I0920 18:03:28.403399   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .GetSSHKeyPath
	I0920 18:03:28.403657   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .GetSSHKeyPath
	I0920 18:03:28.403858   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .GetSSHUsername
	I0920 18:03:28.404033   58846 main.go:141] libmachine: Using SSH client type: native
	I0920 18:03:28.404235   58846 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.65 22 <nil> <nil>}
	I0920 18:03:28.404241   58846 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 18:03:28.521263   58846 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-452691
	
	I0920 18:03:28.521282   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .GetMachineName
	I0920 18:03:28.521609   58846 buildroot.go:166] provisioning hostname "cert-expiration-452691"
	I0920 18:03:28.521629   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .GetMachineName
	I0920 18:03:28.521865   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .GetSSHHostname
	I0920 18:03:28.525174   58846 main.go:141] libmachine: (cert-expiration-452691) DBG | domain cert-expiration-452691 has defined MAC address 52:54:00:e4:61:4e in network mk-cert-expiration-452691
	I0920 18:03:28.525645   58846 main.go:141] libmachine: (cert-expiration-452691) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:61:4e", ip: ""} in network mk-cert-expiration-452691: {Iface:virbr3 ExpiryTime:2024-09-20 18:59:52 +0000 UTC Type:0 Mac:52:54:00:e4:61:4e Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:cert-expiration-452691 Clientid:01:52:54:00:e4:61:4e}
	I0920 18:03:28.525670   58846 main.go:141] libmachine: (cert-expiration-452691) DBG | domain cert-expiration-452691 has defined IP address 192.168.61.65 and MAC address 52:54:00:e4:61:4e in network mk-cert-expiration-452691
	I0920 18:03:28.525885   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .GetSSHPort
	I0920 18:03:28.526072   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .GetSSHKeyPath
	I0920 18:03:28.526212   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .GetSSHKeyPath
	I0920 18:03:28.526375   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .GetSSHUsername
	I0920 18:03:28.526562   58846 main.go:141] libmachine: Using SSH client type: native
	I0920 18:03:28.526834   58846 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.65 22 <nil> <nil>}
	I0920 18:03:28.526854   58846 main.go:141] libmachine: About to run SSH command:
	sudo hostname cert-expiration-452691 && echo "cert-expiration-452691" | sudo tee /etc/hostname
	I0920 18:03:28.658309   58846 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-452691
	
	I0920 18:03:28.658331   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .GetSSHHostname
	I0920 18:03:28.661040   58846 main.go:141] libmachine: (cert-expiration-452691) DBG | domain cert-expiration-452691 has defined MAC address 52:54:00:e4:61:4e in network mk-cert-expiration-452691
	I0920 18:03:28.661350   58846 main.go:141] libmachine: (cert-expiration-452691) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:61:4e", ip: ""} in network mk-cert-expiration-452691: {Iface:virbr3 ExpiryTime:2024-09-20 18:59:52 +0000 UTC Type:0 Mac:52:54:00:e4:61:4e Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:cert-expiration-452691 Clientid:01:52:54:00:e4:61:4e}
	I0920 18:03:28.661385   58846 main.go:141] libmachine: (cert-expiration-452691) DBG | domain cert-expiration-452691 has defined IP address 192.168.61.65 and MAC address 52:54:00:e4:61:4e in network mk-cert-expiration-452691
	I0920 18:03:28.661527   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .GetSSHPort
	I0920 18:03:28.661705   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .GetSSHKeyPath
	I0920 18:03:28.661866   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .GetSSHKeyPath
	I0920 18:03:28.661985   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .GetSSHUsername
	I0920 18:03:28.662134   58846 main.go:141] libmachine: Using SSH client type: native
	I0920 18:03:28.662348   58846 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.65 22 <nil> <nil>}
	I0920 18:03:28.662363   58846 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-452691' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-452691/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-452691' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 18:03:28.776208   58846 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 18:03:28.776229   58846 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19672-8777/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-8777/.minikube}
	I0920 18:03:28.776276   58846 buildroot.go:174] setting up certificates
	I0920 18:03:28.776299   58846 provision.go:84] configureAuth start
	I0920 18:03:28.776308   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .GetMachineName
	I0920 18:03:28.776585   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .GetIP
	I0920 18:03:28.779404   58846 main.go:141] libmachine: (cert-expiration-452691) DBG | domain cert-expiration-452691 has defined MAC address 52:54:00:e4:61:4e in network mk-cert-expiration-452691
	I0920 18:03:28.779799   58846 main.go:141] libmachine: (cert-expiration-452691) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:61:4e", ip: ""} in network mk-cert-expiration-452691: {Iface:virbr3 ExpiryTime:2024-09-20 18:59:52 +0000 UTC Type:0 Mac:52:54:00:e4:61:4e Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:cert-expiration-452691 Clientid:01:52:54:00:e4:61:4e}
	I0920 18:03:28.779832   58846 main.go:141] libmachine: (cert-expiration-452691) DBG | domain cert-expiration-452691 has defined IP address 192.168.61.65 and MAC address 52:54:00:e4:61:4e in network mk-cert-expiration-452691
	I0920 18:03:28.780052   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .GetSSHHostname
	I0920 18:03:28.782564   58846 main.go:141] libmachine: (cert-expiration-452691) DBG | domain cert-expiration-452691 has defined MAC address 52:54:00:e4:61:4e in network mk-cert-expiration-452691
	I0920 18:03:28.782826   58846 main.go:141] libmachine: (cert-expiration-452691) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:61:4e", ip: ""} in network mk-cert-expiration-452691: {Iface:virbr3 ExpiryTime:2024-09-20 18:59:52 +0000 UTC Type:0 Mac:52:54:00:e4:61:4e Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:cert-expiration-452691 Clientid:01:52:54:00:e4:61:4e}
	I0920 18:03:28.782851   58846 main.go:141] libmachine: (cert-expiration-452691) DBG | domain cert-expiration-452691 has defined IP address 192.168.61.65 and MAC address 52:54:00:e4:61:4e in network mk-cert-expiration-452691
	I0920 18:03:28.783006   58846 provision.go:143] copyHostCerts
	I0920 18:03:28.783074   58846 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem, removing ...
	I0920 18:03:28.783092   58846 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem
	I0920 18:03:28.783173   58846 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem (1675 bytes)
	I0920 18:03:28.783359   58846 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem, removing ...
	I0920 18:03:28.783367   58846 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem
	I0920 18:03:28.783409   58846 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem (1082 bytes)
	I0920 18:03:28.783506   58846 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem, removing ...
	I0920 18:03:28.783510   58846 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem
	I0920 18:03:28.783536   58846 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem (1123 bytes)
	I0920 18:03:28.783605   58846 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-452691 san=[127.0.0.1 192.168.61.65 cert-expiration-452691 localhost minikube]
	I0920 18:03:28.988387   58846 provision.go:177] copyRemoteCerts
	I0920 18:03:28.988433   58846 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 18:03:28.988454   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .GetSSHHostname
	I0920 18:03:28.991656   58846 main.go:141] libmachine: (cert-expiration-452691) DBG | domain cert-expiration-452691 has defined MAC address 52:54:00:e4:61:4e in network mk-cert-expiration-452691
	I0920 18:03:28.992093   58846 main.go:141] libmachine: (cert-expiration-452691) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:61:4e", ip: ""} in network mk-cert-expiration-452691: {Iface:virbr3 ExpiryTime:2024-09-20 18:59:52 +0000 UTC Type:0 Mac:52:54:00:e4:61:4e Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:cert-expiration-452691 Clientid:01:52:54:00:e4:61:4e}
	I0920 18:03:28.992117   58846 main.go:141] libmachine: (cert-expiration-452691) DBG | domain cert-expiration-452691 has defined IP address 192.168.61.65 and MAC address 52:54:00:e4:61:4e in network mk-cert-expiration-452691
	I0920 18:03:28.992326   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .GetSSHPort
	I0920 18:03:28.992568   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .GetSSHKeyPath
	I0920 18:03:28.992780   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .GetSSHUsername
	I0920 18:03:28.992933   58846 sshutil.go:53] new ssh client: &{IP:192.168.61.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/cert-expiration-452691/id_rsa Username:docker}
	I0920 18:03:29.084879   58846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0920 18:03:29.114735   58846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0920 18:03:29.144451   58846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 18:03:29.172169   58846 provision.go:87] duration metric: took 395.857334ms to configureAuth
	I0920 18:03:29.172189   58846 buildroot.go:189] setting minikube options for container-runtime
	I0920 18:03:29.172346   58846 config.go:182] Loaded profile config "cert-expiration-452691": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:03:29.172445   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .GetSSHHostname
	I0920 18:03:29.176041   58846 main.go:141] libmachine: (cert-expiration-452691) DBG | domain cert-expiration-452691 has defined MAC address 52:54:00:e4:61:4e in network mk-cert-expiration-452691
	I0920 18:03:29.176598   58846 main.go:141] libmachine: (cert-expiration-452691) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:61:4e", ip: ""} in network mk-cert-expiration-452691: {Iface:virbr3 ExpiryTime:2024-09-20 18:59:52 +0000 UTC Type:0 Mac:52:54:00:e4:61:4e Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:cert-expiration-452691 Clientid:01:52:54:00:e4:61:4e}
	I0920 18:03:29.176621   58846 main.go:141] libmachine: (cert-expiration-452691) DBG | domain cert-expiration-452691 has defined IP address 192.168.61.65 and MAC address 52:54:00:e4:61:4e in network mk-cert-expiration-452691
	I0920 18:03:29.176840   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .GetSSHPort
	I0920 18:03:29.176998   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .GetSSHKeyPath
	I0920 18:03:29.177122   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .GetSSHKeyPath
	I0920 18:03:29.177271   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .GetSSHUsername
	I0920 18:03:29.177508   58846 main.go:141] libmachine: Using SSH client type: native
	I0920 18:03:29.177763   58846 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.65 22 <nil> <nil>}
	I0920 18:03:29.177776   58846 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 18:03:27.695216   58226 pod_ready.go:103] pod "coredns-7c65d6cfc9-kzpzk" in "kube-system" namespace has status "Ready":"False"
	I0920 18:03:28.193952   58226 pod_ready.go:93] pod "coredns-7c65d6cfc9-kzpzk" in "kube-system" namespace has status "Ready":"True"
	I0920 18:03:28.193975   58226 pod_ready.go:82] duration metric: took 7.508201541s for pod "coredns-7c65d6cfc9-kzpzk" in "kube-system" namespace to be "Ready" ...
	I0920 18:03:28.193985   58226 pod_ready.go:79] waiting up to 4m0s for pod "etcd-pause-421146" in "kube-system" namespace to be "Ready" ...
	I0920 18:03:28.199211   58226 pod_ready.go:93] pod "etcd-pause-421146" in "kube-system" namespace has status "Ready":"True"
	I0920 18:03:28.199257   58226 pod_ready.go:82] duration metric: took 5.263845ms for pod "etcd-pause-421146" in "kube-system" namespace to be "Ready" ...
	I0920 18:03:28.199269   58226 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-pause-421146" in "kube-system" namespace to be "Ready" ...
	I0920 18:03:30.208287   58226 pod_ready.go:103] pod "kube-apiserver-pause-421146" in "kube-system" namespace has status "Ready":"False"
	I0920 18:03:31.207617   58226 pod_ready.go:93] pod "kube-apiserver-pause-421146" in "kube-system" namespace has status "Ready":"True"
	I0920 18:03:31.207648   58226 pod_ready.go:82] duration metric: took 3.008369415s for pod "kube-apiserver-pause-421146" in "kube-system" namespace to be "Ready" ...
	I0920 18:03:31.207663   58226 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-pause-421146" in "kube-system" namespace to be "Ready" ...
	I0920 18:03:29.870071   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetIP
	I0920 18:03:29.873286   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | domain NoKubernetes-246858 has defined MAC address 52:54:00:71:4a:f8 in network mk-NoKubernetes-246858
	I0920 18:03:29.873709   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:4a:f8", ip: ""} in network mk-NoKubernetes-246858: {Iface:virbr4 ExpiryTime:2024-09-20 19:03:17 +0000 UTC Type:0 Mac:52:54:00:71:4a:f8 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:NoKubernetes-246858 Clientid:01:52:54:00:71:4a:f8}
	I0920 18:03:29.873740   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | domain NoKubernetes-246858 has defined IP address 192.168.72.119 and MAC address 52:54:00:71:4a:f8 in network mk-NoKubernetes-246858
	I0920 18:03:29.874043   58507 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0920 18:03:29.878339   58507 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 18:03:29.891128   58507 kubeadm.go:883] updating cluster {Name:NoKubernetes-246858 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:NoKubernetes-246858 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.119 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 18:03:29.891222   58507 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 18:03:29.891269   58507 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:03:29.923387   58507 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0920 18:03:29.923457   58507 ssh_runner.go:195] Run: which lz4
	I0920 18:03:29.927177   58507 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 18:03:29.931014   58507 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 18:03:29.931036   58507 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0920 18:03:31.342933   58507 crio.go:462] duration metric: took 1.415782149s to copy over tarball
	I0920 18:03:31.342991   58507 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0920 18:03:33.216801   58226 pod_ready.go:103] pod "kube-controller-manager-pause-421146" in "kube-system" namespace has status "Ready":"False"
	I0920 18:03:34.714689   58226 pod_ready.go:93] pod "kube-controller-manager-pause-421146" in "kube-system" namespace has status "Ready":"True"
	I0920 18:03:34.714723   58226 pod_ready.go:82] duration metric: took 3.507050896s for pod "kube-controller-manager-pause-421146" in "kube-system" namespace to be "Ready" ...
	I0920 18:03:34.714739   58226 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-gcp8x" in "kube-system" namespace to be "Ready" ...
	I0920 18:03:34.720590   58226 pod_ready.go:93] pod "kube-proxy-gcp8x" in "kube-system" namespace has status "Ready":"True"
	I0920 18:03:34.720615   58226 pod_ready.go:82] duration metric: took 5.869378ms for pod "kube-proxy-gcp8x" in "kube-system" namespace to be "Ready" ...
	I0920 18:03:34.720625   58226 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-pause-421146" in "kube-system" namespace to be "Ready" ...
	I0920 18:03:34.726519   58226 pod_ready.go:93] pod "kube-scheduler-pause-421146" in "kube-system" namespace has status "Ready":"True"
	I0920 18:03:34.726547   58226 pod_ready.go:82] duration metric: took 5.91332ms for pod "kube-scheduler-pause-421146" in "kube-system" namespace to be "Ready" ...
	I0920 18:03:34.726556   58226 pod_ready.go:39] duration metric: took 14.04633779s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:03:34.726573   58226 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 18:03:34.743036   58226 ops.go:34] apiserver oom_adj: -16
	I0920 18:03:34.743061   58226 kubeadm.go:597] duration metric: took 41.802732087s to restartPrimaryControlPlane
	I0920 18:03:34.743072   58226 kubeadm.go:394] duration metric: took 41.969300358s to StartCluster
	I0920 18:03:34.743092   58226 settings.go:142] acquiring lock: {Name:mk2a13c58cdc0faf8cddca5d6716175d45db9bfd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:03:34.743184   58226 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19672-8777/kubeconfig
	I0920 18:03:34.744302   58226 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/kubeconfig: {Name:mkf32a4c736808e023459b2f0e40188618a38db1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:03:34.744530   58226 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.200 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 18:03:34.744651   58226 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0920 18:03:34.744798   58226 config.go:182] Loaded profile config "pause-421146": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:03:34.746560   58226 out.go:177] * Enabled addons: 
	I0920 18:03:34.746571   58226 out.go:177] * Verifying Kubernetes components...
	I0920 18:03:35.044142   59003 start.go:364] duration metric: took 12.51255782s to acquireMachinesLock for "auto-833505"
	I0920 18:03:35.044215   59003 start.go:93] Provisioning new machine with config: &{Name:auto-833505 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-833505 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 18:03:35.044312   59003 start.go:125] createHost starting for "" (driver="kvm2")
	I0920 18:03:34.761062   58846 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 18:03:34.761079   58846 machine.go:96] duration metric: took 6.362357366s to provisionDockerMachine
	I0920 18:03:34.761092   58846 start.go:293] postStartSetup for "cert-expiration-452691" (driver="kvm2")
	I0920 18:03:34.761105   58846 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 18:03:34.761126   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .DriverName
	I0920 18:03:34.761638   58846 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 18:03:34.761664   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .GetSSHHostname
	I0920 18:03:34.764776   58846 main.go:141] libmachine: (cert-expiration-452691) DBG | domain cert-expiration-452691 has defined MAC address 52:54:00:e4:61:4e in network mk-cert-expiration-452691
	I0920 18:03:34.765100   58846 main.go:141] libmachine: (cert-expiration-452691) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:61:4e", ip: ""} in network mk-cert-expiration-452691: {Iface:virbr3 ExpiryTime:2024-09-20 18:59:52 +0000 UTC Type:0 Mac:52:54:00:e4:61:4e Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:cert-expiration-452691 Clientid:01:52:54:00:e4:61:4e}
	I0920 18:03:34.765116   58846 main.go:141] libmachine: (cert-expiration-452691) DBG | domain cert-expiration-452691 has defined IP address 192.168.61.65 and MAC address 52:54:00:e4:61:4e in network mk-cert-expiration-452691
	I0920 18:03:34.765343   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .GetSSHPort
	I0920 18:03:34.765534   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .GetSSHKeyPath
	I0920 18:03:34.765699   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .GetSSHUsername
	I0920 18:03:34.765826   58846 sshutil.go:53] new ssh client: &{IP:192.168.61.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/cert-expiration-452691/id_rsa Username:docker}
	I0920 18:03:34.860909   58846 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 18:03:34.865719   58846 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 18:03:34.865736   58846 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8777/.minikube/addons for local assets ...
	I0920 18:03:34.865818   58846 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8777/.minikube/files for local assets ...
	I0920 18:03:34.865962   58846 filesync.go:149] local asset: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem -> 159732.pem in /etc/ssl/certs
	I0920 18:03:34.866062   58846 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 18:03:34.877599   58846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem --> /etc/ssl/certs/159732.pem (1708 bytes)
	I0920 18:03:34.917811   58846 start.go:296] duration metric: took 156.703909ms for postStartSetup
	I0920 18:03:34.917869   58846 fix.go:56] duration metric: took 6.546842303s for fixHost
	I0920 18:03:34.917909   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .GetSSHHostname
	I0920 18:03:34.922026   58846 main.go:141] libmachine: (cert-expiration-452691) DBG | domain cert-expiration-452691 has defined MAC address 52:54:00:e4:61:4e in network mk-cert-expiration-452691
	I0920 18:03:34.922484   58846 main.go:141] libmachine: (cert-expiration-452691) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:61:4e", ip: ""} in network mk-cert-expiration-452691: {Iface:virbr3 ExpiryTime:2024-09-20 18:59:52 +0000 UTC Type:0 Mac:52:54:00:e4:61:4e Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:cert-expiration-452691 Clientid:01:52:54:00:e4:61:4e}
	I0920 18:03:34.922527   58846 main.go:141] libmachine: (cert-expiration-452691) DBG | domain cert-expiration-452691 has defined IP address 192.168.61.65 and MAC address 52:54:00:e4:61:4e in network mk-cert-expiration-452691
	I0920 18:03:34.922771   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .GetSSHPort
	I0920 18:03:34.922984   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .GetSSHKeyPath
	I0920 18:03:34.923206   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .GetSSHKeyPath
	I0920 18:03:34.923395   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .GetSSHUsername
	I0920 18:03:34.923572   58846 main.go:141] libmachine: Using SSH client type: native
	I0920 18:03:34.923781   58846 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.65 22 <nil> <nil>}
	I0920 18:03:34.923786   58846 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 18:03:35.044017   58846 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726855415.027122239
	
	I0920 18:03:35.044030   58846 fix.go:216] guest clock: 1726855415.027122239
	I0920 18:03:35.044037   58846 fix.go:229] Guest: 2024-09-20 18:03:35.027122239 +0000 UTC Remote: 2024-09-20 18:03:34.917872607 +0000 UTC m=+14.430685224 (delta=109.249632ms)
	I0920 18:03:35.044060   58846 fix.go:200] guest clock delta is within tolerance: 109.249632ms
	I0920 18:03:35.044065   58846 start.go:83] releasing machines lock for "cert-expiration-452691", held for 6.673072435s
	I0920 18:03:35.044092   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .DriverName
	I0920 18:03:35.044405   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .GetIP
	I0920 18:03:35.047724   58846 main.go:141] libmachine: (cert-expiration-452691) DBG | domain cert-expiration-452691 has defined MAC address 52:54:00:e4:61:4e in network mk-cert-expiration-452691
	I0920 18:03:35.047980   58846 main.go:141] libmachine: (cert-expiration-452691) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:61:4e", ip: ""} in network mk-cert-expiration-452691: {Iface:virbr3 ExpiryTime:2024-09-20 18:59:52 +0000 UTC Type:0 Mac:52:54:00:e4:61:4e Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:cert-expiration-452691 Clientid:01:52:54:00:e4:61:4e}
	I0920 18:03:35.048013   58846 main.go:141] libmachine: (cert-expiration-452691) DBG | domain cert-expiration-452691 has defined IP address 192.168.61.65 and MAC address 52:54:00:e4:61:4e in network mk-cert-expiration-452691
	I0920 18:03:35.048195   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .DriverName
	I0920 18:03:35.048911   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .DriverName
	I0920 18:03:35.049112   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .DriverName
	I0920 18:03:35.049197   58846 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 18:03:35.049229   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .GetSSHHostname
	I0920 18:03:35.049374   58846 ssh_runner.go:195] Run: cat /version.json
	I0920 18:03:35.049393   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .GetSSHHostname
	I0920 18:03:35.052189   58846 main.go:141] libmachine: (cert-expiration-452691) DBG | domain cert-expiration-452691 has defined MAC address 52:54:00:e4:61:4e in network mk-cert-expiration-452691
	I0920 18:03:35.052500   58846 main.go:141] libmachine: (cert-expiration-452691) DBG | domain cert-expiration-452691 has defined MAC address 52:54:00:e4:61:4e in network mk-cert-expiration-452691
	I0920 18:03:35.052642   58846 main.go:141] libmachine: (cert-expiration-452691) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:61:4e", ip: ""} in network mk-cert-expiration-452691: {Iface:virbr3 ExpiryTime:2024-09-20 18:59:52 +0000 UTC Type:0 Mac:52:54:00:e4:61:4e Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:cert-expiration-452691 Clientid:01:52:54:00:e4:61:4e}
	I0920 18:03:35.052674   58846 main.go:141] libmachine: (cert-expiration-452691) DBG | domain cert-expiration-452691 has defined IP address 192.168.61.65 and MAC address 52:54:00:e4:61:4e in network mk-cert-expiration-452691
	I0920 18:03:35.052926   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .GetSSHPort
	I0920 18:03:35.053109   58846 main.go:141] libmachine: (cert-expiration-452691) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:61:4e", ip: ""} in network mk-cert-expiration-452691: {Iface:virbr3 ExpiryTime:2024-09-20 18:59:52 +0000 UTC Type:0 Mac:52:54:00:e4:61:4e Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:cert-expiration-452691 Clientid:01:52:54:00:e4:61:4e}
	I0920 18:03:35.053133   58846 main.go:141] libmachine: (cert-expiration-452691) DBG | domain cert-expiration-452691 has defined IP address 192.168.61.65 and MAC address 52:54:00:e4:61:4e in network mk-cert-expiration-452691
	I0920 18:03:35.053164   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .GetSSHKeyPath
	I0920 18:03:35.053341   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .GetSSHUsername
	I0920 18:03:35.053355   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .GetSSHPort
	I0920 18:03:35.053547   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .GetSSHKeyPath
	I0920 18:03:35.053540   58846 sshutil.go:53] new ssh client: &{IP:192.168.61.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/cert-expiration-452691/id_rsa Username:docker}
	I0920 18:03:35.053684   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .GetSSHUsername
	I0920 18:03:35.053791   58846 sshutil.go:53] new ssh client: &{IP:192.168.61.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/cert-expiration-452691/id_rsa Username:docker}
	I0920 18:03:35.185207   58846 ssh_runner.go:195] Run: systemctl --version
	I0920 18:03:35.236966   58846 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 18:03:35.441793   58846 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 18:03:35.455299   58846 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 18:03:35.455360   58846 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 18:03:35.467336   58846 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0920 18:03:35.467354   58846 start.go:495] detecting cgroup driver to use...
	I0920 18:03:35.467424   58846 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 18:03:35.490111   58846 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 18:03:35.510612   58846 docker.go:217] disabling cri-docker service (if available) ...
	I0920 18:03:35.510671   58846 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 18:03:34.748314   58226 addons.go:510] duration metric: took 3.666918ms for enable addons: enabled=[]
	I0920 18:03:34.748395   58226 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:03:34.936425   58226 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:03:34.960297   58226 node_ready.go:35] waiting up to 6m0s for node "pause-421146" to be "Ready" ...
	I0920 18:03:34.964467   58226 node_ready.go:49] node "pause-421146" has status "Ready":"True"
	I0920 18:03:34.964497   58226 node_ready.go:38] duration metric: took 4.158459ms for node "pause-421146" to be "Ready" ...
	I0920 18:03:34.964507   58226 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:03:34.973021   58226 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-kzpzk" in "kube-system" namespace to be "Ready" ...
	I0920 18:03:34.979836   58226 pod_ready.go:93] pod "coredns-7c65d6cfc9-kzpzk" in "kube-system" namespace has status "Ready":"True"
	I0920 18:03:34.979868   58226 pod_ready.go:82] duration metric: took 6.81667ms for pod "coredns-7c65d6cfc9-kzpzk" in "kube-system" namespace to be "Ready" ...
	I0920 18:03:34.979880   58226 pod_ready.go:79] waiting up to 6m0s for pod "etcd-pause-421146" in "kube-system" namespace to be "Ready" ...
	I0920 18:03:35.111773   58226 pod_ready.go:93] pod "etcd-pause-421146" in "kube-system" namespace has status "Ready":"True"
	I0920 18:03:35.111803   58226 pod_ready.go:82] duration metric: took 131.914672ms for pod "etcd-pause-421146" in "kube-system" namespace to be "Ready" ...
	I0920 18:03:35.111819   58226 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-pause-421146" in "kube-system" namespace to be "Ready" ...
	I0920 18:03:35.515214   58226 pod_ready.go:93] pod "kube-apiserver-pause-421146" in "kube-system" namespace has status "Ready":"True"
	I0920 18:03:35.515253   58226 pod_ready.go:82] duration metric: took 403.425436ms for pod "kube-apiserver-pause-421146" in "kube-system" namespace to be "Ready" ...
	I0920 18:03:35.515267   58226 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-pause-421146" in "kube-system" namespace to be "Ready" ...
	I0920 18:03:35.913540   58226 pod_ready.go:93] pod "kube-controller-manager-pause-421146" in "kube-system" namespace has status "Ready":"True"
	I0920 18:03:35.913569   58226 pod_ready.go:82] duration metric: took 398.292185ms for pod "kube-controller-manager-pause-421146" in "kube-system" namespace to be "Ready" ...
	I0920 18:03:35.913583   58226 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gcp8x" in "kube-system" namespace to be "Ready" ...
	I0920 18:03:36.312895   58226 pod_ready.go:93] pod "kube-proxy-gcp8x" in "kube-system" namespace has status "Ready":"True"
	I0920 18:03:36.312927   58226 pod_ready.go:82] duration metric: took 399.334933ms for pod "kube-proxy-gcp8x" in "kube-system" namespace to be "Ready" ...
	I0920 18:03:36.312941   58226 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-pause-421146" in "kube-system" namespace to be "Ready" ...
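	(The pod_ready polling in this block is minikube checking the same system pods one could list by hand. A rough kubectl equivalent, using the context name and labels that appear in this log — the selector grouping is an illustration, not minikube's actual command:)

	    kubectl --context pause-421146 -n kube-system get pods \
	      -l 'k8s-app in (kube-dns, kube-proxy)'
	    kubectl --context pause-421146 -n kube-system get pods \
	      -l 'component in (etcd, kube-apiserver, kube-controller-manager, kube-scheduler)'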
	I0920 18:03:35.167499   59003 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0920 18:03:35.167786   59003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:03:35.167850   59003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:03:35.184819   59003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40859
	I0920 18:03:35.185274   59003 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:03:35.186045   59003 main.go:141] libmachine: Using API Version  1
	I0920 18:03:35.186072   59003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:03:35.186550   59003 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:03:35.186798   59003 main.go:141] libmachine: (auto-833505) Calling .GetMachineName
	I0920 18:03:35.186950   59003 main.go:141] libmachine: (auto-833505) Calling .DriverName
	I0920 18:03:35.187165   59003 start.go:159] libmachine.API.Create for "auto-833505" (driver="kvm2")
	I0920 18:03:35.187203   59003 client.go:168] LocalClient.Create starting
	I0920 18:03:35.187238   59003 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem
	I0920 18:03:35.187281   59003 main.go:141] libmachine: Decoding PEM data...
	I0920 18:03:35.187300   59003 main.go:141] libmachine: Parsing certificate...
	I0920 18:03:35.187376   59003 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem
	I0920 18:03:35.187405   59003 main.go:141] libmachine: Decoding PEM data...
	I0920 18:03:35.187432   59003 main.go:141] libmachine: Parsing certificate...
	I0920 18:03:35.187467   59003 main.go:141] libmachine: Running pre-create checks...
	I0920 18:03:35.187486   59003 main.go:141] libmachine: (auto-833505) Calling .PreCreateCheck
	I0920 18:03:35.188057   59003 main.go:141] libmachine: (auto-833505) Calling .GetConfigRaw
	I0920 18:03:35.188572   59003 main.go:141] libmachine: Creating machine...
	I0920 18:03:35.188590   59003 main.go:141] libmachine: (auto-833505) Calling .Create
	I0920 18:03:35.188857   59003 main.go:141] libmachine: (auto-833505) Creating KVM machine...
	I0920 18:03:35.190397   59003 main.go:141] libmachine: (auto-833505) DBG | found existing default KVM network
	I0920 18:03:35.192384   59003 main.go:141] libmachine: (auto-833505) DBG | I0920 18:03:35.192183   59113 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000204c20}
	I0920 18:03:35.192434   59003 main.go:141] libmachine: (auto-833505) DBG | created network xml: 
	I0920 18:03:35.192465   59003 main.go:141] libmachine: (auto-833505) DBG | <network>
	I0920 18:03:35.192479   59003 main.go:141] libmachine: (auto-833505) DBG |   <name>mk-auto-833505</name>
	I0920 18:03:35.192497   59003 main.go:141] libmachine: (auto-833505) DBG |   <dns enable='no'/>
	I0920 18:03:35.192508   59003 main.go:141] libmachine: (auto-833505) DBG |   
	I0920 18:03:35.192517   59003 main.go:141] libmachine: (auto-833505) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0920 18:03:35.192527   59003 main.go:141] libmachine: (auto-833505) DBG |     <dhcp>
	I0920 18:03:35.192535   59003 main.go:141] libmachine: (auto-833505) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0920 18:03:35.192556   59003 main.go:141] libmachine: (auto-833505) DBG |     </dhcp>
	I0920 18:03:35.192583   59003 main.go:141] libmachine: (auto-833505) DBG |   </ip>
	I0920 18:03:35.192596   59003 main.go:141] libmachine: (auto-833505) DBG |   
	I0920 18:03:35.192609   59003 main.go:141] libmachine: (auto-833505) DBG | </network>
	I0920 18:03:35.192619   59003 main.go:141] libmachine: (auto-833505) DBG | 
	I0920 18:03:35.407676   59003 main.go:141] libmachine: (auto-833505) DBG | trying to create private KVM network mk-auto-833505 192.168.39.0/24...
	I0920 18:03:35.513944   59003 main.go:141] libmachine: (auto-833505) Setting up store path in /home/jenkins/minikube-integration/19672-8777/.minikube/machines/auto-833505 ...
	I0920 18:03:35.513991   59003 main.go:141] libmachine: (auto-833505) DBG | private KVM network mk-auto-833505 192.168.39.0/24 created
	I0920 18:03:35.514006   59003 main.go:141] libmachine: (auto-833505) Building disk image from file:///home/jenkins/minikube-integration/19672-8777/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso
	I0920 18:03:35.514053   59003 main.go:141] libmachine: (auto-833505) Downloading /home/jenkins/minikube-integration/19672-8777/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19672-8777/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso...
	I0920 18:03:35.514077   59003 main.go:141] libmachine: (auto-833505) DBG | I0920 18:03:35.506879   59113 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19672-8777/.minikube
	I0920 18:03:35.778642   59003 main.go:141] libmachine: (auto-833505) DBG | I0920 18:03:35.778486   59113 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/auto-833505/id_rsa...
	I0920 18:03:35.955367   59003 main.go:141] libmachine: (auto-833505) DBG | I0920 18:03:35.955200   59113 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/auto-833505/auto-833505.rawdisk...
	I0920 18:03:35.955403   59003 main.go:141] libmachine: (auto-833505) DBG | Writing magic tar header
	I0920 18:03:35.955418   59003 main.go:141] libmachine: (auto-833505) DBG | Writing SSH key tar header
	I0920 18:03:35.955431   59003 main.go:141] libmachine: (auto-833505) DBG | I0920 18:03:35.955340   59113 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19672-8777/.minikube/machines/auto-833505 ...
	I0920 18:03:35.955460   59003 main.go:141] libmachine: (auto-833505) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/auto-833505
	I0920 18:03:35.955530   59003 main.go:141] libmachine: (auto-833505) Setting executable bit set on /home/jenkins/minikube-integration/19672-8777/.minikube/machines/auto-833505 (perms=drwx------)
	I0920 18:03:35.955559   59003 main.go:141] libmachine: (auto-833505) Setting executable bit set on /home/jenkins/minikube-integration/19672-8777/.minikube/machines (perms=drwxr-xr-x)
	I0920 18:03:35.955582   59003 main.go:141] libmachine: (auto-833505) Setting executable bit set on /home/jenkins/minikube-integration/19672-8777/.minikube (perms=drwxr-xr-x)
	I0920 18:03:35.955604   59003 main.go:141] libmachine: (auto-833505) Setting executable bit set on /home/jenkins/minikube-integration/19672-8777 (perms=drwxrwxr-x)
	I0920 18:03:35.955635   59003 main.go:141] libmachine: (auto-833505) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-8777/.minikube/machines
	I0920 18:03:35.955649   59003 main.go:141] libmachine: (auto-833505) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0920 18:03:35.955662   59003 main.go:141] libmachine: (auto-833505) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0920 18:03:35.955673   59003 main.go:141] libmachine: (auto-833505) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-8777/.minikube
	I0920 18:03:35.955685   59003 main.go:141] libmachine: (auto-833505) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-8777
	I0920 18:03:35.955696   59003 main.go:141] libmachine: (auto-833505) Creating domain...
	I0920 18:03:35.955764   59003 main.go:141] libmachine: (auto-833505) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0920 18:03:35.955791   59003 main.go:141] libmachine: (auto-833505) DBG | Checking permissions on dir: /home/jenkins
	I0920 18:03:35.955806   59003 main.go:141] libmachine: (auto-833505) DBG | Checking permissions on dir: /home
	I0920 18:03:35.955821   59003 main.go:141] libmachine: (auto-833505) DBG | Skipping /home - not owner
	I0920 18:03:35.956940   59003 main.go:141] libmachine: (auto-833505) define libvirt domain using xml: 
	I0920 18:03:35.956959   59003 main.go:141] libmachine: (auto-833505) <domain type='kvm'>
	I0920 18:03:35.956982   59003 main.go:141] libmachine: (auto-833505)   <name>auto-833505</name>
	I0920 18:03:35.956989   59003 main.go:141] libmachine: (auto-833505)   <memory unit='MiB'>3072</memory>
	I0920 18:03:35.956997   59003 main.go:141] libmachine: (auto-833505)   <vcpu>2</vcpu>
	I0920 18:03:35.957003   59003 main.go:141] libmachine: (auto-833505)   <features>
	I0920 18:03:35.957018   59003 main.go:141] libmachine: (auto-833505)     <acpi/>
	I0920 18:03:35.957024   59003 main.go:141] libmachine: (auto-833505)     <apic/>
	I0920 18:03:35.957031   59003 main.go:141] libmachine: (auto-833505)     <pae/>
	I0920 18:03:35.957039   59003 main.go:141] libmachine: (auto-833505)     
	I0920 18:03:35.957046   59003 main.go:141] libmachine: (auto-833505)   </features>
	I0920 18:03:35.957053   59003 main.go:141] libmachine: (auto-833505)   <cpu mode='host-passthrough'>
	I0920 18:03:35.957061   59003 main.go:141] libmachine: (auto-833505)   
	I0920 18:03:35.957068   59003 main.go:141] libmachine: (auto-833505)   </cpu>
	I0920 18:03:35.957075   59003 main.go:141] libmachine: (auto-833505)   <os>
	I0920 18:03:35.957092   59003 main.go:141] libmachine: (auto-833505)     <type>hvm</type>
	I0920 18:03:35.957100   59003 main.go:141] libmachine: (auto-833505)     <boot dev='cdrom'/>
	I0920 18:03:35.957106   59003 main.go:141] libmachine: (auto-833505)     <boot dev='hd'/>
	I0920 18:03:35.957118   59003 main.go:141] libmachine: (auto-833505)     <bootmenu enable='no'/>
	I0920 18:03:35.957126   59003 main.go:141] libmachine: (auto-833505)   </os>
	I0920 18:03:35.957136   59003 main.go:141] libmachine: (auto-833505)   <devices>
	I0920 18:03:35.957147   59003 main.go:141] libmachine: (auto-833505)     <disk type='file' device='cdrom'>
	I0920 18:03:35.957161   59003 main.go:141] libmachine: (auto-833505)       <source file='/home/jenkins/minikube-integration/19672-8777/.minikube/machines/auto-833505/boot2docker.iso'/>
	I0920 18:03:35.957175   59003 main.go:141] libmachine: (auto-833505)       <target dev='hdc' bus='scsi'/>
	I0920 18:03:35.957190   59003 main.go:141] libmachine: (auto-833505)       <readonly/>
	I0920 18:03:35.957199   59003 main.go:141] libmachine: (auto-833505)     </disk>
	I0920 18:03:35.957208   59003 main.go:141] libmachine: (auto-833505)     <disk type='file' device='disk'>
	I0920 18:03:35.957228   59003 main.go:141] libmachine: (auto-833505)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0920 18:03:35.957245   59003 main.go:141] libmachine: (auto-833505)       <source file='/home/jenkins/minikube-integration/19672-8777/.minikube/machines/auto-833505/auto-833505.rawdisk'/>
	I0920 18:03:35.957256   59003 main.go:141] libmachine: (auto-833505)       <target dev='hda' bus='virtio'/>
	I0920 18:03:35.957263   59003 main.go:141] libmachine: (auto-833505)     </disk>
	I0920 18:03:35.957277   59003 main.go:141] libmachine: (auto-833505)     <interface type='network'>
	I0920 18:03:35.957289   59003 main.go:141] libmachine: (auto-833505)       <source network='mk-auto-833505'/>
	I0920 18:03:35.957296   59003 main.go:141] libmachine: (auto-833505)       <model type='virtio'/>
	I0920 18:03:35.957306   59003 main.go:141] libmachine: (auto-833505)     </interface>
	I0920 18:03:35.957313   59003 main.go:141] libmachine: (auto-833505)     <interface type='network'>
	I0920 18:03:35.957324   59003 main.go:141] libmachine: (auto-833505)       <source network='default'/>
	I0920 18:03:35.957333   59003 main.go:141] libmachine: (auto-833505)       <model type='virtio'/>
	I0920 18:03:35.957341   59003 main.go:141] libmachine: (auto-833505)     </interface>
	I0920 18:03:35.957354   59003 main.go:141] libmachine: (auto-833505)     <serial type='pty'>
	I0920 18:03:35.957365   59003 main.go:141] libmachine: (auto-833505)       <target port='0'/>
	I0920 18:03:35.957374   59003 main.go:141] libmachine: (auto-833505)     </serial>
	I0920 18:03:35.957382   59003 main.go:141] libmachine: (auto-833505)     <console type='pty'>
	I0920 18:03:35.957391   59003 main.go:141] libmachine: (auto-833505)       <target type='serial' port='0'/>
	I0920 18:03:35.957399   59003 main.go:141] libmachine: (auto-833505)     </console>
	I0920 18:03:35.957409   59003 main.go:141] libmachine: (auto-833505)     <rng model='virtio'>
	I0920 18:03:35.957418   59003 main.go:141] libmachine: (auto-833505)       <backend model='random'>/dev/random</backend>
	I0920 18:03:35.957430   59003 main.go:141] libmachine: (auto-833505)     </rng>
	I0920 18:03:35.957440   59003 main.go:141] libmachine: (auto-833505)     
	I0920 18:03:35.957445   59003 main.go:141] libmachine: (auto-833505)     
	I0920 18:03:35.957453   59003 main.go:141] libmachine: (auto-833505)   </devices>
	I0920 18:03:35.957462   59003 main.go:141] libmachine: (auto-833505) </domain>
	I0920 18:03:35.957471   59003 main.go:141] libmachine: (auto-833505) 
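	(minikube defines the domain above through libvirt, so the same guest and network can be inspected with the standard libvirt CLI. A minimal sketch, assuming virsh is available on the Jenkins host and using the names from this log:)

	    virsh list --all                 # the new domain appears once defined
	    virsh dumpxml auto-833505        # should match the XML printed above
	    virsh net-info mk-auto-833505    # the private network created for it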
	I0920 18:03:35.962905   59003 main.go:141] libmachine: (auto-833505) DBG | domain auto-833505 has defined MAC address 52:54:00:16:9e:25 in network default
	I0920 18:03:35.963648   59003 main.go:141] libmachine: (auto-833505) Ensuring networks are active...
	I0920 18:03:35.963676   59003 main.go:141] libmachine: (auto-833505) DBG | domain auto-833505 has defined MAC address 52:54:00:76:01:26 in network mk-auto-833505
	I0920 18:03:35.964436   59003 main.go:141] libmachine: (auto-833505) Ensuring network default is active
	I0920 18:03:35.964808   59003 main.go:141] libmachine: (auto-833505) Ensuring network mk-auto-833505 is active
	I0920 18:03:35.965525   59003 main.go:141] libmachine: (auto-833505) Getting domain xml...
	I0920 18:03:35.966351   59003 main.go:141] libmachine: (auto-833505) Creating domain...
	I0920 18:03:37.339090   59003 main.go:141] libmachine: (auto-833505) Waiting to get IP...
	I0920 18:03:37.340023   59003 main.go:141] libmachine: (auto-833505) DBG | domain auto-833505 has defined MAC address 52:54:00:76:01:26 in network mk-auto-833505
	I0920 18:03:37.340457   59003 main.go:141] libmachine: (auto-833505) DBG | unable to find current IP address of domain auto-833505 in network mk-auto-833505
	I0920 18:03:37.340519   59003 main.go:141] libmachine: (auto-833505) DBG | I0920 18:03:37.340450   59113 retry.go:31] will retry after 197.031197ms: waiting for machine to come up
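The retry.go line above shows the driver polling for the machine's IP with a growing backoff ("will retry after 197.031197ms"). The following is not minikube's actual retry helper, just an illustrative sketch of the same retry-with-backoff pattern in Go:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// retry calls fn up to maxAttempts times, roughly doubling the delay after
	// each failure, in the spirit of the "will retry after ..." messages above.
	func retry(maxAttempts int, delay time.Duration, fn func() error) error {
		var err error
		for attempt := 1; attempt <= maxAttempts; attempt++ {
			if err = fn(); err == nil {
				return nil
			}
			fmt.Printf("attempt %d failed (%v), retrying after %s\n", attempt, err, delay)
			time.Sleep(delay)
			delay *= 2
		}
		return fmt.Errorf("gave up after %d attempts: %w", maxAttempts, err)
	}

	func main() {
		// Placeholder check standing in for "does the domain have an IP yet?".
		err := retry(5, 200*time.Millisecond, func() error {
			return errors.New("machine not up yet")
		})
		fmt.Println(err)
	}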
	I0920 18:03:36.712671   58226 pod_ready.go:93] pod "kube-scheduler-pause-421146" in "kube-system" namespace has status "Ready":"True"
	I0920 18:03:36.712700   58226 pod_ready.go:82] duration metric: took 399.750704ms for pod "kube-scheduler-pause-421146" in "kube-system" namespace to be "Ready" ...
	I0920 18:03:36.712710   58226 pod_ready.go:39] duration metric: took 1.748192922s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:03:36.712727   58226 api_server.go:52] waiting for apiserver process to appear ...
	I0920 18:03:36.712787   58226 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:03:36.733911   58226 api_server.go:72] duration metric: took 1.989345988s to wait for apiserver process to appear ...
	I0920 18:03:36.733940   58226 api_server.go:88] waiting for apiserver healthz status ...
	I0920 18:03:36.733965   58226 api_server.go:253] Checking apiserver healthz at https://192.168.50.200:8443/healthz ...
	I0920 18:03:36.741503   58226 api_server.go:279] https://192.168.50.200:8443/healthz returned 200:
	ok
	I0920 18:03:36.742848   58226 api_server.go:141] control plane version: v1.31.1
	I0920 18:03:36.742881   58226 api_server.go:131] duration metric: took 8.933025ms to wait for apiserver health ...
	I0920 18:03:36.742893   58226 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 18:03:36.915989   58226 system_pods.go:59] 6 kube-system pods found
	I0920 18:03:36.916038   58226 system_pods.go:61] "coredns-7c65d6cfc9-kzpzk" [31a69166-0bab-4048-ab8b-39726652c348] Running
	I0920 18:03:36.916046   58226 system_pods.go:61] "etcd-pause-421146" [d68394c2-e3fa-4e8b-adee-def6caa2d4f9] Running
	I0920 18:03:36.916053   58226 system_pods.go:61] "kube-apiserver-pause-421146" [0b11bd21-711e-4979-a665-a74c28acf52a] Running
	I0920 18:03:36.916058   58226 system_pods.go:61] "kube-controller-manager-pause-421146" [58041d9e-5b1f-4660-8cf5-d9a801a56b06] Running
	I0920 18:03:36.916064   58226 system_pods.go:61] "kube-proxy-gcp8x" [e95cbd03-c3ac-4381-ba7e-dab67b046217] Running
	I0920 18:03:36.916069   58226 system_pods.go:61] "kube-scheduler-pause-421146" [8c87d091-7f76-432c-b14e-38167da26d6a] Running
	I0920 18:03:36.916077   58226 system_pods.go:74] duration metric: took 173.17525ms to wait for pod list to return data ...
	I0920 18:03:36.916101   58226 default_sa.go:34] waiting for default service account to be created ...
	I0920 18:03:37.112686   58226 default_sa.go:45] found service account: "default"
	I0920 18:03:37.112719   58226 default_sa.go:55] duration metric: took 196.608635ms for default service account to be created ...
	I0920 18:03:37.112732   58226 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 18:03:37.314309   58226 system_pods.go:86] 6 kube-system pods found
	I0920 18:03:37.314347   58226 system_pods.go:89] "coredns-7c65d6cfc9-kzpzk" [31a69166-0bab-4048-ab8b-39726652c348] Running
	I0920 18:03:37.314356   58226 system_pods.go:89] "etcd-pause-421146" [d68394c2-e3fa-4e8b-adee-def6caa2d4f9] Running
	I0920 18:03:37.314362   58226 system_pods.go:89] "kube-apiserver-pause-421146" [0b11bd21-711e-4979-a665-a74c28acf52a] Running
	I0920 18:03:37.314368   58226 system_pods.go:89] "kube-controller-manager-pause-421146" [58041d9e-5b1f-4660-8cf5-d9a801a56b06] Running
	I0920 18:03:37.314373   58226 system_pods.go:89] "kube-proxy-gcp8x" [e95cbd03-c3ac-4381-ba7e-dab67b046217] Running
	I0920 18:03:37.314378   58226 system_pods.go:89] "kube-scheduler-pause-421146" [8c87d091-7f76-432c-b14e-38167da26d6a] Running
	I0920 18:03:37.314387   58226 system_pods.go:126] duration metric: took 201.647874ms to wait for k8s-apps to be running ...
	I0920 18:03:37.314396   58226 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 18:03:37.314451   58226 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:03:37.337255   58226 system_svc.go:56] duration metric: took 22.815149ms WaitForService to wait for kubelet
	I0920 18:03:37.337302   58226 kubeadm.go:582] duration metric: took 2.592741549s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 18:03:37.337336   58226 node_conditions.go:102] verifying NodePressure condition ...
	I0920 18:03:37.513451   58226 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 18:03:37.513486   58226 node_conditions.go:123] node cpu capacity is 2
	I0920 18:03:37.513500   58226 node_conditions.go:105] duration metric: took 176.150709ms to run NodePressure ...
	I0920 18:03:37.513514   58226 start.go:241] waiting for startup goroutines ...
	I0920 18:03:37.513524   58226 start.go:246] waiting for cluster config update ...
	I0920 18:03:37.513536   58226 start.go:255] writing updated cluster config ...
	I0920 18:03:37.513937   58226 ssh_runner.go:195] Run: rm -f paused
	I0920 18:03:37.586056   58226 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 18:03:37.588399   58226 out.go:177] * Done! kubectl is now configured to use "pause-421146" cluster and "default" namespace by default
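The readiness sequence above (apiserver healthz, kube-system pods, default service account, kubelet service, node conditions) all hangs off the apiserver endpoint at https://192.168.50.200:8443. A minimal sketch of the healthz probe itself, assuming the same advertise address and skipping certificate verification, which is only reasonable against a throwaway test VM like this one:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// Endpoint taken from the log above; /healthz is typically readable
		// without credentials on a default minikube cluster.
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.50.200:8443/healthz")
		if err != nil {
			fmt.Println("healthz check failed:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%d: %s\n", resp.StatusCode, body) // the log above saw 200 with body "ok"
	}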
	
	
	==> CRI-O <==
	Sep 20 18:03:38 pause-421146 crio[2065]: time="2024-09-20 18:03:38.380871492Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=932db2e3-419c-47db-987e-29831a2af67b name=/runtime.v1.RuntimeService/Version
	Sep 20 18:03:38 pause-421146 crio[2065]: time="2024-09-20 18:03:38.382631785Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5cb85bae-9ec9-4403-ae5e-4e37749cab2c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:03:38 pause-421146 crio[2065]: time="2024-09-20 18:03:38.383412467Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855418383381636,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5cb85bae-9ec9-4403-ae5e-4e37749cab2c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:03:38 pause-421146 crio[2065]: time="2024-09-20 18:03:38.384411217Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=aadceee3-bc4e-4967-9b63-e1db0bd953a6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:03:38 pause-421146 crio[2065]: time="2024-09-20 18:03:38.384491018Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=aadceee3-bc4e-4967-9b63-e1db0bd953a6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:03:38 pause-421146 crio[2065]: time="2024-09-20 18:03:38.384782528Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:52678e574e5cda3145f732e6a8b665d218fb10e16963e32d3dc9b209b7fccee5,PodSandboxId:a8b1775eebfd2b7231276d653a0caf54e71c433f59227ebc0d95ebf8fd842a7a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726855399523968093,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kzpzk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31a69166-0bab-4048-ab8b-39726652c348,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45767f66e4613897801d10ba7ef85f5203974b47f8a0f18a155269923c318b40,PodSandboxId:cd41238b0b4f051093b2f88ef744f1959b9365163c20500dd5856b10e4ec6879,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726855399478789134,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gcp8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: e95cbd03-c3ac-4381-ba7e-dab67b046217,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2b467124a385a8365850c1338b354a85bd06687fa90cfe3f841200fb6d9b74c,PodSandboxId:f6fd95e87f9f1f244c7052653c6e44dd79a1e9f974608f0b1f5ac0e9b7988f66,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726855395676136249,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-421146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65a92db064
d5149926f0585489dbd5f3,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2148bd1e1d3250b130059d85a48c37cb328f8ccf64f438e9dc63fdeee15ae56c,PodSandboxId:b0816ffaf87e51e03bb5d22fd2d157dbae85b9f04b5ae6d9ae11d54bf3e53ee3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726855395698234766,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-421146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32ebccbb68a7911977451c4f8550
1c3e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a6fc240ff380cc53597f9a78362928ffd5beb3133aef126be73281a5a058d71,PodSandboxId:18cee16364cf7a1b1d46012a7e095da7fef7fdd3e3b3a3ee1ba1b006ad44e673,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726855395662356389,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-421146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d667c1a7b1e7e272754e10db2315607d,},Annotations:map[string]string{io.kubernet
es.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d71881d204e3837d28568eb8b2c2abb6ea017b5b30910bd6ac080e4ccff72d72,PodSandboxId:a53ecbaa10b365d95631f0c9168517b23ccbfd706b42bab952158d51cfd81992,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726855395651764166,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-421146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57da3666fcd469516ca1d325c843ac23,},Annotations:map[string]string{io
.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75d3de29f781282cadcafa18080ca10c150d68ea2643c96305500091f11cd130,PodSandboxId:a8b1775eebfd2b7231276d653a0caf54e71c433f59227ebc0d95ebf8fd842a7a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726855372217768057,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kzpzk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31a69166-0bab-4048-ab8b-39726652c348,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a
204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e65068aabc970c4f3ded7fd5c3bbaef59385603701d9c00ed027e8385212b674,PodSandboxId:b0816ffaf87e51e03bb5d22fd2d157dbae85b9f04b5ae6d9ae11d54bf3e53ee3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726855370853142362,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kuber
netes.pod.name: kube-apiserver-pause-421146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32ebccbb68a7911977451c4f85501c3e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6caff0d1adf3a97c67a13c3e8be7618e77ea79c31f750fff59228a7be609fd4d,PodSandboxId:f6fd95e87f9f1f244c7052653c6e44dd79a1e9f974608f0b1f5ac0e9b7988f66,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726855370797197590,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kub
e-scheduler-pause-421146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65a92db064d5149926f0585489dbd5f3,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:998f1cab8849745ac4a858a8df2532e7b3bfc179f529b1172684f92fee861d0d,PodSandboxId:cd41238b0b4f051093b2f88ef744f1959b9365163c20500dd5856b10e4ec6879,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726855370693994465,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gcp8x,io.kubernetes
.pod.namespace: kube-system,io.kubernetes.pod.uid: e95cbd03-c3ac-4381-ba7e-dab67b046217,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acff960df015881f08db968c9ce85cea196679f2ba2dd86c8c0589365b50502c,PodSandboxId:18cee16364cf7a1b1d46012a7e095da7fef7fdd3e3b3a3ee1ba1b006ad44e673,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726855370718769024,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-421146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: d667c1a7b1e7e272754e10db2315607d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81a97c79e3ef43249f4d0285077861826dcf108662f81b26a4e9ea112ebc5f45,PodSandboxId:a53ecbaa10b365d95631f0c9168517b23ccbfd706b42bab952158d51cfd81992,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726855370542926466,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-421146,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 57da3666fcd469516ca1d325c843ac23,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=aadceee3-bc4e-4967-9b63-e1db0bd953a6 name=/runtime.v1.RuntimeService/ListContainers
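The CRI-O debug entries in this section are the CRI calls (Version, ImageFsInfo, ListPodSandbox, ListContainers) issued against the runtime socket; the same data can be pulled interactively with crictl from inside the VM. A hedged sketch, assuming the pause-421146 profile is still running and that invoking commands through "minikube ssh -- ..." is acceptable:

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		// crictl speaks the same CRI API that produced the responses above:
		// "ps -a" maps to ListContainers, "pods" to ListPodSandbox and
		// "imagefsinfo" to ImageFsInfo.
		for _, sub := range []string{"version", "ps -a", "pods", "imagefsinfo"} {
			args := append([]string{"ssh", "-p", "pause-421146", "--", "sudo", "crictl"},
				strings.Fields(sub)...)
			out, err := exec.Command("minikube", args...).CombinedOutput()
			if err != nil {
				log.Printf("crictl %s failed: %v\n%s", sub, err, out)
				continue
			}
			fmt.Printf("$ crictl %s\n%s\n", sub, out)
		}
	}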
	Sep 20 18:03:38 pause-421146 crio[2065]: time="2024-09-20 18:03:38.432460475Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d5d8c289-ba02-4ff0-a0a3-8acba7b7e605 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:03:38 pause-421146 crio[2065]: time="2024-09-20 18:03:38.432549797Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d5d8c289-ba02-4ff0-a0a3-8acba7b7e605 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:03:38 pause-421146 crio[2065]: time="2024-09-20 18:03:38.433286976Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=49560d13-8b2a-43de-93b0-3a2ae627c874 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 20 18:03:38 pause-421146 crio[2065]: time="2024-09-20 18:03:38.433719774Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:a8b1775eebfd2b7231276d653a0caf54e71c433f59227ebc0d95ebf8fd842a7a,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-kzpzk,Uid:31a69166-0bab-4048-ab8b-39726652c348,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726855370598385223,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-kzpzk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31a69166-0bab-4048-ab8b-39726652c348,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-20T18:02:02.194727396Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f6fd95e87f9f1f244c7052653c6e44dd79a1e9f974608f0b1f5ac0e9b7988f66,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-421146,Uid:65a92db064d5149926f0585489dbd5f3,Namespace:kube-system,
Attempt:1,},State:SANDBOX_READY,CreatedAt:1726855370293227060,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-421146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65a92db064d5149926f0585489dbd5f3,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 65a92db064d5149926f0585489dbd5f3,kubernetes.io/config.seen: 2024-09-20T18:01:57.726112031Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b0816ffaf87e51e03bb5d22fd2d157dbae85b9f04b5ae6d9ae11d54bf3e53ee3,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-421146,Uid:32ebccbb68a7911977451c4f85501c3e,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726855370290778536,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-421146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32ebccbb68a7911977451c4f85501c3e,tier: c
ontrol-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.200:8443,kubernetes.io/config.hash: 32ebccbb68a7911977451c4f85501c3e,kubernetes.io/config.seen: 2024-09-20T18:01:57.726101460Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:cd41238b0b4f051093b2f88ef744f1959b9365163c20500dd5856b10e4ec6879,Metadata:&PodSandboxMetadata{Name:kube-proxy-gcp8x,Uid:e95cbd03-c3ac-4381-ba7e-dab67b046217,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726855370220092257,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-gcp8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e95cbd03-c3ac-4381-ba7e-dab67b046217,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-20T18:02:02.039524583Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:18cee16364cf7a1b1d46012
a7e095da7fef7fdd3e3b3a3ee1ba1b006ad44e673,Metadata:&PodSandboxMetadata{Name:etcd-pause-421146,Uid:d667c1a7b1e7e272754e10db2315607d,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726855370170730579,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-421146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d667c1a7b1e7e272754e10db2315607d,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.200:2379,kubernetes.io/config.hash: d667c1a7b1e7e272754e10db2315607d,kubernetes.io/config.seen: 2024-09-20T18:01:57.726095860Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a53ecbaa10b365d95631f0c9168517b23ccbfd706b42bab952158d51cfd81992,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-421146,Uid:57da3666fcd469516ca1d325c843ac23,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726855370144622992,Labels:map[string]str
ing{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-421146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57da3666fcd469516ca1d325c843ac23,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 57da3666fcd469516ca1d325c843ac23,kubernetes.io/config.seen: 2024-09-20T18:01:57.726103133Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=49560d13-8b2a-43de-93b0-3a2ae627c874 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 20 18:03:38 pause-421146 crio[2065]: time="2024-09-20 18:03:38.434991914Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ed90019a-edb2-4ce9-8b08-e89ab6e2322a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:03:38 pause-421146 crio[2065]: time="2024-09-20 18:03:38.435793089Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a242d170-b0b7-4d5c-9b01-6314fa1e3833 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:03:38 pause-421146 crio[2065]: time="2024-09-20 18:03:38.435895716Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a242d170-b0b7-4d5c-9b01-6314fa1e3833 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:03:38 pause-421146 crio[2065]: time="2024-09-20 18:03:38.436378022Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:52678e574e5cda3145f732e6a8b665d218fb10e16963e32d3dc9b209b7fccee5,PodSandboxId:a8b1775eebfd2b7231276d653a0caf54e71c433f59227ebc0d95ebf8fd842a7a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726855399523968093,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kzpzk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31a69166-0bab-4048-ab8b-39726652c348,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45767f66e4613897801d10ba7ef85f5203974b47f8a0f18a155269923c318b40,PodSandboxId:cd41238b0b4f051093b2f88ef744f1959b9365163c20500dd5856b10e4ec6879,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726855399478789134,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gcp8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: e95cbd03-c3ac-4381-ba7e-dab67b046217,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2b467124a385a8365850c1338b354a85bd06687fa90cfe3f841200fb6d9b74c,PodSandboxId:f6fd95e87f9f1f244c7052653c6e44dd79a1e9f974608f0b1f5ac0e9b7988f66,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726855395676136249,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-421146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65a92db064
d5149926f0585489dbd5f3,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2148bd1e1d3250b130059d85a48c37cb328f8ccf64f438e9dc63fdeee15ae56c,PodSandboxId:b0816ffaf87e51e03bb5d22fd2d157dbae85b9f04b5ae6d9ae11d54bf3e53ee3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726855395698234766,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-421146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32ebccbb68a7911977451c4f8550
1c3e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a6fc240ff380cc53597f9a78362928ffd5beb3133aef126be73281a5a058d71,PodSandboxId:18cee16364cf7a1b1d46012a7e095da7fef7fdd3e3b3a3ee1ba1b006ad44e673,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726855395662356389,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-421146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d667c1a7b1e7e272754e10db2315607d,},Annotations:map[string]string{io.kubernet
es.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d71881d204e3837d28568eb8b2c2abb6ea017b5b30910bd6ac080e4ccff72d72,PodSandboxId:a53ecbaa10b365d95631f0c9168517b23ccbfd706b42bab952158d51cfd81992,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726855395651764166,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-421146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57da3666fcd469516ca1d325c843ac23,},Annotations:map[string]string{io
.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75d3de29f781282cadcafa18080ca10c150d68ea2643c96305500091f11cd130,PodSandboxId:a8b1775eebfd2b7231276d653a0caf54e71c433f59227ebc0d95ebf8fd842a7a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726855372217768057,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kzpzk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31a69166-0bab-4048-ab8b-39726652c348,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a
204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e65068aabc970c4f3ded7fd5c3bbaef59385603701d9c00ed027e8385212b674,PodSandboxId:b0816ffaf87e51e03bb5d22fd2d157dbae85b9f04b5ae6d9ae11d54bf3e53ee3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726855370853142362,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kuber
netes.pod.name: kube-apiserver-pause-421146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32ebccbb68a7911977451c4f85501c3e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6caff0d1adf3a97c67a13c3e8be7618e77ea79c31f750fff59228a7be609fd4d,PodSandboxId:f6fd95e87f9f1f244c7052653c6e44dd79a1e9f974608f0b1f5ac0e9b7988f66,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726855370797197590,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kub
e-scheduler-pause-421146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65a92db064d5149926f0585489dbd5f3,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:998f1cab8849745ac4a858a8df2532e7b3bfc179f529b1172684f92fee861d0d,PodSandboxId:cd41238b0b4f051093b2f88ef744f1959b9365163c20500dd5856b10e4ec6879,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726855370693994465,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gcp8x,io.kubernetes
.pod.namespace: kube-system,io.kubernetes.pod.uid: e95cbd03-c3ac-4381-ba7e-dab67b046217,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acff960df015881f08db968c9ce85cea196679f2ba2dd86c8c0589365b50502c,PodSandboxId:18cee16364cf7a1b1d46012a7e095da7fef7fdd3e3b3a3ee1ba1b006ad44e673,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726855370718769024,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-421146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: d667c1a7b1e7e272754e10db2315607d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81a97c79e3ef43249f4d0285077861826dcf108662f81b26a4e9ea112ebc5f45,PodSandboxId:a53ecbaa10b365d95631f0c9168517b23ccbfd706b42bab952158d51cfd81992,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726855370542926466,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-421146,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 57da3666fcd469516ca1d325c843ac23,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a242d170-b0b7-4d5c-9b01-6314fa1e3833 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:03:38 pause-421146 crio[2065]: time="2024-09-20 18:03:38.436481846Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855418436453283,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ed90019a-edb2-4ce9-8b08-e89ab6e2322a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:03:38 pause-421146 crio[2065]: time="2024-09-20 18:03:38.437743604Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2364383b-277f-4689-a8ab-3f56c2ec887e name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:03:38 pause-421146 crio[2065]: time="2024-09-20 18:03:38.437808719Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2364383b-277f-4689-a8ab-3f56c2ec887e name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:03:38 pause-421146 crio[2065]: time="2024-09-20 18:03:38.438056481Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:52678e574e5cda3145f732e6a8b665d218fb10e16963e32d3dc9b209b7fccee5,PodSandboxId:a8b1775eebfd2b7231276d653a0caf54e71c433f59227ebc0d95ebf8fd842a7a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726855399523968093,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kzpzk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31a69166-0bab-4048-ab8b-39726652c348,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45767f66e4613897801d10ba7ef85f5203974b47f8a0f18a155269923c318b40,PodSandboxId:cd41238b0b4f051093b2f88ef744f1959b9365163c20500dd5856b10e4ec6879,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726855399478789134,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gcp8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: e95cbd03-c3ac-4381-ba7e-dab67b046217,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2b467124a385a8365850c1338b354a85bd06687fa90cfe3f841200fb6d9b74c,PodSandboxId:f6fd95e87f9f1f244c7052653c6e44dd79a1e9f974608f0b1f5ac0e9b7988f66,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726855395676136249,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-421146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65a92db064
d5149926f0585489dbd5f3,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2148bd1e1d3250b130059d85a48c37cb328f8ccf64f438e9dc63fdeee15ae56c,PodSandboxId:b0816ffaf87e51e03bb5d22fd2d157dbae85b9f04b5ae6d9ae11d54bf3e53ee3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726855395698234766,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-421146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32ebccbb68a7911977451c4f8550
1c3e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a6fc240ff380cc53597f9a78362928ffd5beb3133aef126be73281a5a058d71,PodSandboxId:18cee16364cf7a1b1d46012a7e095da7fef7fdd3e3b3a3ee1ba1b006ad44e673,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726855395662356389,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-421146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d667c1a7b1e7e272754e10db2315607d,},Annotations:map[string]string{io.kubernet
es.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d71881d204e3837d28568eb8b2c2abb6ea017b5b30910bd6ac080e4ccff72d72,PodSandboxId:a53ecbaa10b365d95631f0c9168517b23ccbfd706b42bab952158d51cfd81992,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726855395651764166,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-421146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57da3666fcd469516ca1d325c843ac23,},Annotations:map[string]string{io
.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75d3de29f781282cadcafa18080ca10c150d68ea2643c96305500091f11cd130,PodSandboxId:a8b1775eebfd2b7231276d653a0caf54e71c433f59227ebc0d95ebf8fd842a7a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726855372217768057,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kzpzk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31a69166-0bab-4048-ab8b-39726652c348,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a
204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e65068aabc970c4f3ded7fd5c3bbaef59385603701d9c00ed027e8385212b674,PodSandboxId:b0816ffaf87e51e03bb5d22fd2d157dbae85b9f04b5ae6d9ae11d54bf3e53ee3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726855370853142362,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kuber
netes.pod.name: kube-apiserver-pause-421146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32ebccbb68a7911977451c4f85501c3e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6caff0d1adf3a97c67a13c3e8be7618e77ea79c31f750fff59228a7be609fd4d,PodSandboxId:f6fd95e87f9f1f244c7052653c6e44dd79a1e9f974608f0b1f5ac0e9b7988f66,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726855370797197590,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kub
e-scheduler-pause-421146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65a92db064d5149926f0585489dbd5f3,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:998f1cab8849745ac4a858a8df2532e7b3bfc179f529b1172684f92fee861d0d,PodSandboxId:cd41238b0b4f051093b2f88ef744f1959b9365163c20500dd5856b10e4ec6879,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726855370693994465,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gcp8x,io.kubernetes
.pod.namespace: kube-system,io.kubernetes.pod.uid: e95cbd03-c3ac-4381-ba7e-dab67b046217,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acff960df015881f08db968c9ce85cea196679f2ba2dd86c8c0589365b50502c,PodSandboxId:18cee16364cf7a1b1d46012a7e095da7fef7fdd3e3b3a3ee1ba1b006ad44e673,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726855370718769024,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-421146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: d667c1a7b1e7e272754e10db2315607d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81a97c79e3ef43249f4d0285077861826dcf108662f81b26a4e9ea112ebc5f45,PodSandboxId:a53ecbaa10b365d95631f0c9168517b23ccbfd706b42bab952158d51cfd81992,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726855370542926466,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-421146,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 57da3666fcd469516ca1d325c843ac23,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2364383b-277f-4689-a8ab-3f56c2ec887e name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:03:38 pause-421146 crio[2065]: time="2024-09-20 18:03:38.492837020Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a637188b-9ffc-4c4e-acb9-8556b46f28e4 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:03:38 pause-421146 crio[2065]: time="2024-09-20 18:03:38.492938508Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a637188b-9ffc-4c4e-acb9-8556b46f28e4 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:03:38 pause-421146 crio[2065]: time="2024-09-20 18:03:38.494150058Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=64a57692-7aad-4f56-a2f8-aa64d8d9a005 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:03:38 pause-421146 crio[2065]: time="2024-09-20 18:03:38.494618574Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855418494591397,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=64a57692-7aad-4f56-a2f8-aa64d8d9a005 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:03:38 pause-421146 crio[2065]: time="2024-09-20 18:03:38.495224152Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6bfeaba8-e941-4d22-acf0-9b868b8ffe10 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:03:38 pause-421146 crio[2065]: time="2024-09-20 18:03:38.495364191Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6bfeaba8-e941-4d22-acf0-9b868b8ffe10 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:03:38 pause-421146 crio[2065]: time="2024-09-20 18:03:38.495747057Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:52678e574e5cda3145f732e6a8b665d218fb10e16963e32d3dc9b209b7fccee5,PodSandboxId:a8b1775eebfd2b7231276d653a0caf54e71c433f59227ebc0d95ebf8fd842a7a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726855399523968093,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kzpzk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31a69166-0bab-4048-ab8b-39726652c348,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45767f66e4613897801d10ba7ef85f5203974b47f8a0f18a155269923c318b40,PodSandboxId:cd41238b0b4f051093b2f88ef744f1959b9365163c20500dd5856b10e4ec6879,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726855399478789134,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gcp8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: e95cbd03-c3ac-4381-ba7e-dab67b046217,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2b467124a385a8365850c1338b354a85bd06687fa90cfe3f841200fb6d9b74c,PodSandboxId:f6fd95e87f9f1f244c7052653c6e44dd79a1e9f974608f0b1f5ac0e9b7988f66,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726855395676136249,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-421146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65a92db064
d5149926f0585489dbd5f3,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2148bd1e1d3250b130059d85a48c37cb328f8ccf64f438e9dc63fdeee15ae56c,PodSandboxId:b0816ffaf87e51e03bb5d22fd2d157dbae85b9f04b5ae6d9ae11d54bf3e53ee3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726855395698234766,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-421146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32ebccbb68a7911977451c4f8550
1c3e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a6fc240ff380cc53597f9a78362928ffd5beb3133aef126be73281a5a058d71,PodSandboxId:18cee16364cf7a1b1d46012a7e095da7fef7fdd3e3b3a3ee1ba1b006ad44e673,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726855395662356389,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-421146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d667c1a7b1e7e272754e10db2315607d,},Annotations:map[string]string{io.kubernet
es.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d71881d204e3837d28568eb8b2c2abb6ea017b5b30910bd6ac080e4ccff72d72,PodSandboxId:a53ecbaa10b365d95631f0c9168517b23ccbfd706b42bab952158d51cfd81992,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726855395651764166,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-421146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57da3666fcd469516ca1d325c843ac23,},Annotations:map[string]string{io
.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75d3de29f781282cadcafa18080ca10c150d68ea2643c96305500091f11cd130,PodSandboxId:a8b1775eebfd2b7231276d653a0caf54e71c433f59227ebc0d95ebf8fd842a7a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726855372217768057,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kzpzk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31a69166-0bab-4048-ab8b-39726652c348,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a
204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e65068aabc970c4f3ded7fd5c3bbaef59385603701d9c00ed027e8385212b674,PodSandboxId:b0816ffaf87e51e03bb5d22fd2d157dbae85b9f04b5ae6d9ae11d54bf3e53ee3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726855370853142362,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kuber
netes.pod.name: kube-apiserver-pause-421146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32ebccbb68a7911977451c4f85501c3e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6caff0d1adf3a97c67a13c3e8be7618e77ea79c31f750fff59228a7be609fd4d,PodSandboxId:f6fd95e87f9f1f244c7052653c6e44dd79a1e9f974608f0b1f5ac0e9b7988f66,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726855370797197590,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kub
e-scheduler-pause-421146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65a92db064d5149926f0585489dbd5f3,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:998f1cab8849745ac4a858a8df2532e7b3bfc179f529b1172684f92fee861d0d,PodSandboxId:cd41238b0b4f051093b2f88ef744f1959b9365163c20500dd5856b10e4ec6879,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726855370693994465,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gcp8x,io.kubernetes
.pod.namespace: kube-system,io.kubernetes.pod.uid: e95cbd03-c3ac-4381-ba7e-dab67b046217,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acff960df015881f08db968c9ce85cea196679f2ba2dd86c8c0589365b50502c,PodSandboxId:18cee16364cf7a1b1d46012a7e095da7fef7fdd3e3b3a3ee1ba1b006ad44e673,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726855370718769024,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-421146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: d667c1a7b1e7e272754e10db2315607d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81a97c79e3ef43249f4d0285077861826dcf108662f81b26a4e9ea112ebc5f45,PodSandboxId:a53ecbaa10b365d95631f0c9168517b23ccbfd706b42bab952158d51cfd81992,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726855370542926466,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-421146,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 57da3666fcd469516ca1d325c843ac23,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6bfeaba8-e941-4d22-acf0-9b868b8ffe10 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	52678e574e5cd       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   19 seconds ago      Running             coredns                   2                   a8b1775eebfd2       coredns-7c65d6cfc9-kzpzk
	45767f66e4613       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   19 seconds ago      Running             kube-proxy                2                   cd41238b0b4f0       kube-proxy-gcp8x
	2148bd1e1d325       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   22 seconds ago      Running             kube-apiserver            2                   b0816ffaf87e5       kube-apiserver-pause-421146
	c2b467124a385       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   22 seconds ago      Running             kube-scheduler            2                   f6fd95e87f9f1       kube-scheduler-pause-421146
	1a6fc240ff380       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   22 seconds ago      Running             etcd                      2                   18cee16364cf7       etcd-pause-421146
	d71881d204e38       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   22 seconds ago      Running             kube-controller-manager   2                   a53ecbaa10b36       kube-controller-manager-pause-421146
	75d3de29f7812       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   46 seconds ago      Exited              coredns                   1                   a8b1775eebfd2       coredns-7c65d6cfc9-kzpzk
	e65068aabc970       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   47 seconds ago      Exited              kube-apiserver            1                   b0816ffaf87e5       kube-apiserver-pause-421146
	6caff0d1adf3a       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   47 seconds ago      Exited              kube-scheduler            1                   f6fd95e87f9f1       kube-scheduler-pause-421146
	acff960df0158       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   47 seconds ago      Exited              etcd                      1                   18cee16364cf7       etcd-pause-421146
	998f1cab88497       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   47 seconds ago      Exited              kube-proxy                1                   cd41238b0b4f0       kube-proxy-gcp8x
	81a97c79e3ef4       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   48 seconds ago      Exited              kube-controller-manager   1                   a53ecbaa10b36       kube-controller-manager-pause-421146
	
	
	==> coredns [52678e574e5cda3145f732e6a8b665d218fb10e16963e32d3dc9b209b7fccee5] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:50811 - 33353 "HINFO IN 8666522759006030779.9000534369695820997. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020431597s
	
	
	==> coredns [75d3de29f781282cadcafa18080ca10c150d68ea2643c96305500091f11cd130] <==
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:40169 - 12964 "HINFO IN 6180363540378383076.2113421048419907267. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.024052508s
	
	
	==> describe nodes <==
	Name:               pause-421146
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-421146
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0626f22cf0d915d75e291a5bce701f94395056e1
	                    minikube.k8s.io/name=pause-421146
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_20T18_01_59_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 18:01:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-421146
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 18:03:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 18:03:19 +0000   Fri, 20 Sep 2024 18:01:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 18:03:19 +0000   Fri, 20 Sep 2024 18:01:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 18:03:19 +0000   Fri, 20 Sep 2024 18:01:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 18:03:19 +0000   Fri, 20 Sep 2024 18:01:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.200
	  Hostname:    pause-421146
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 75eb7f358fe44d58b6e7a0bb3b5ede52
	  System UUID:                75eb7f35-8fe4-4d58-b6e7-a0bb3b5ede52
	  Boot ID:                    24144fb2-ed29-4a8e-b3b6-7385de781a0d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-kzpzk                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     96s
	  kube-system                 etcd-pause-421146                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         101s
	  kube-system                 kube-apiserver-pause-421146             250m (12%)    0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 kube-controller-manager-pause-421146    200m (10%)    0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 kube-proxy-gcp8x                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 kube-scheduler-pause-421146             100m (5%)     0 (0%)      0 (0%)           0 (0%)         101s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 95s                kube-proxy       
	  Normal  Starting                 19s                kube-proxy       
	  Normal  Starting                 43s                kube-proxy       
	  Normal  Starting                 101s               kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  101s               kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  101s               kubelet          Node pause-421146 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    101s               kubelet          Node pause-421146 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     101s               kubelet          Node pause-421146 status is now: NodeHasSufficientPID
	  Normal  NodeReady                99s                kubelet          Node pause-421146 status is now: NodeReady
	  Normal  RegisteredNode           97s                node-controller  Node pause-421146 event: Registered Node pause-421146 in Controller
	  Normal  RegisteredNode           40s                node-controller  Node pause-421146 event: Registered Node pause-421146 in Controller
	  Normal  Starting                 23s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  23s (x8 over 23s)  kubelet          Node pause-421146 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23s (x8 over 23s)  kubelet          Node pause-421146 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23s (x7 over 23s)  kubelet          Node pause-421146 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           16s                node-controller  Node pause-421146 event: Registered Node pause-421146 in Controller
	
	
	==> dmesg <==
	[  +0.061527] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063031] systemd-fstab-generator[601]: Ignoring "noauto" option for root device
	[  +0.215225] systemd-fstab-generator[615]: Ignoring "noauto" option for root device
	[  +0.135769] systemd-fstab-generator[627]: Ignoring "noauto" option for root device
	[  +0.304925] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[  +4.425917] systemd-fstab-generator[746]: Ignoring "noauto" option for root device
	[  +0.063054] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.640369] systemd-fstab-generator[882]: Ignoring "noauto" option for root device
	[  +0.556219] kauditd_printk_skb: 46 callbacks suppressed
	[  +6.904187] systemd-fstab-generator[1217]: Ignoring "noauto" option for root device
	[  +0.094341] kauditd_printk_skb: 41 callbacks suppressed
	[Sep20 18:02] systemd-fstab-generator[1332]: Ignoring "noauto" option for root device
	[  +0.856108] kauditd_printk_skb: 48 callbacks suppressed
	[ +37.578100] kauditd_printk_skb: 42 callbacks suppressed
	[  +8.666960] systemd-fstab-generator[1989]: Ignoring "noauto" option for root device
	[  +0.146137] systemd-fstab-generator[2001]: Ignoring "noauto" option for root device
	[  +0.193224] systemd-fstab-generator[2017]: Ignoring "noauto" option for root device
	[  +0.163445] systemd-fstab-generator[2029]: Ignoring "noauto" option for root device
	[  +0.327985] systemd-fstab-generator[2058]: Ignoring "noauto" option for root device
	[  +2.093217] systemd-fstab-generator[2616]: Ignoring "noauto" option for root device
	[  +3.506745] kauditd_printk_skb: 195 callbacks suppressed
	[Sep20 18:03] systemd-fstab-generator[3058]: Ignoring "noauto" option for root device
	[  +4.729432] kauditd_printk_skb: 43 callbacks suppressed
	[  +8.397007] kauditd_printk_skb: 4 callbacks suppressed
	[  +6.841746] systemd-fstab-generator[3533]: Ignoring "noauto" option for root device
	
	
	==> etcd [1a6fc240ff380cc53597f9a78362928ffd5beb3133aef126be73281a5a058d71] <==
	{"level":"info","ts":"2024-09-20T18:03:16.077033Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"26c34286e2a4509a","local-member-id":"b6017a462bf4c740","added-peer-id":"b6017a462bf4c740","added-peer-peer-urls":["https://192.168.50.200:2380"]}
	{"level":"info","ts":"2024-09-20T18:03:16.077107Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"26c34286e2a4509a","local-member-id":"b6017a462bf4c740","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T18:03:16.077143Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T18:03:16.081822Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T18:03:16.084173Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-20T18:03:16.084642Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"b6017a462bf4c740","initial-advertise-peer-urls":["https://192.168.50.200:2380"],"listen-peer-urls":["https://192.168.50.200:2380"],"advertise-client-urls":["https://192.168.50.200:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.200:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-20T18:03:16.084682Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-20T18:03:16.084714Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.50.200:2380"}
	{"level":"info","ts":"2024-09-20T18:03:16.084731Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.50.200:2380"}
	{"level":"info","ts":"2024-09-20T18:03:17.650928Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6017a462bf4c740 is starting a new election at term 3"}
	{"level":"info","ts":"2024-09-20T18:03:17.650994Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6017a462bf4c740 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-09-20T18:03:17.651041Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6017a462bf4c740 received MsgPreVoteResp from b6017a462bf4c740 at term 3"}
	{"level":"info","ts":"2024-09-20T18:03:17.651065Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6017a462bf4c740 became candidate at term 4"}
	{"level":"info","ts":"2024-09-20T18:03:17.651073Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6017a462bf4c740 received MsgVoteResp from b6017a462bf4c740 at term 4"}
	{"level":"info","ts":"2024-09-20T18:03:17.651093Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6017a462bf4c740 became leader at term 4"}
	{"level":"info","ts":"2024-09-20T18:03:17.651105Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b6017a462bf4c740 elected leader b6017a462bf4c740 at term 4"}
	{"level":"info","ts":"2024-09-20T18:03:17.657502Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T18:03:17.658348Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"b6017a462bf4c740","local-member-attributes":"{Name:pause-421146 ClientURLs:[https://192.168.50.200:2379]}","request-path":"/0/members/b6017a462bf4c740/attributes","cluster-id":"26c34286e2a4509a","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-20T18:03:17.658596Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T18:03:17.658781Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T18:03:17.658791Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-20T18:03:17.658811Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-20T18:03:17.659411Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T18:03:17.659798Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.200:2379"}
	{"level":"info","ts":"2024-09-20T18:03:17.660165Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [acff960df015881f08db968c9ce85cea196679f2ba2dd86c8c0589365b50502c] <==
	{"level":"info","ts":"2024-09-20T18:02:52.968212Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6017a462bf4c740 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-20T18:02:52.968239Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6017a462bf4c740 received MsgPreVoteResp from b6017a462bf4c740 at term 2"}
	{"level":"info","ts":"2024-09-20T18:02:52.968311Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6017a462bf4c740 became candidate at term 3"}
	{"level":"info","ts":"2024-09-20T18:02:52.968320Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6017a462bf4c740 received MsgVoteResp from b6017a462bf4c740 at term 3"}
	{"level":"info","ts":"2024-09-20T18:02:52.968330Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6017a462bf4c740 became leader at term 3"}
	{"level":"info","ts":"2024-09-20T18:02:52.968339Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b6017a462bf4c740 elected leader b6017a462bf4c740 at term 3"}
	{"level":"info","ts":"2024-09-20T18:02:52.974415Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"b6017a462bf4c740","local-member-attributes":"{Name:pause-421146 ClientURLs:[https://192.168.50.200:2379]}","request-path":"/0/members/b6017a462bf4c740/attributes","cluster-id":"26c34286e2a4509a","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-20T18:02:52.974498Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T18:02:52.976644Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T18:02:52.977654Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T18:02:52.995178Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T18:02:52.999945Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-20T18:02:52.996432Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.200:2379"}
	{"level":"info","ts":"2024-09-20T18:02:53.007709Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-20T18:02:53.007841Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-20T18:03:13.078083Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-20T18:03:13.078201Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"pause-421146","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.200:2380"],"advertise-client-urls":["https://192.168.50.200:2379"]}
	{"level":"warn","ts":"2024-09-20T18:03:13.078375Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-20T18:03:13.078442Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-20T18:03:13.080175Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.50.200:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-20T18:03:13.080222Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.50.200:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-20T18:03:13.080330Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"b6017a462bf4c740","current-leader-member-id":"b6017a462bf4c740"}
	{"level":"info","ts":"2024-09-20T18:03:13.083875Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.50.200:2380"}
	{"level":"info","ts":"2024-09-20T18:03:13.084051Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.50.200:2380"}
	{"level":"info","ts":"2024-09-20T18:03:13.084082Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"pause-421146","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.200:2380"],"advertise-client-urls":["https://192.168.50.200:2379"]}
	
	
	==> kernel <==
	 18:03:39 up 2 min,  0 users,  load average: 1.17, 0.63, 0.24
	Linux pause-421146 5.10.207 #1 SMP Fri Sep 20 03:13:51 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [2148bd1e1d3250b130059d85a48c37cb328f8ccf64f438e9dc63fdeee15ae56c] <==
	I0920 18:03:19.123620       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0920 18:03:19.124552       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0920 18:03:19.123729       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0920 18:03:19.124484       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0920 18:03:19.128348       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0920 18:03:19.128413       1 policy_source.go:224] refreshing policies
	I0920 18:03:19.124504       1 shared_informer.go:320] Caches are synced for configmaps
	I0920 18:03:19.125229       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0920 18:03:19.125245       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0920 18:03:19.125988       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0920 18:03:19.131581       1 aggregator.go:171] initial CRD sync complete...
	I0920 18:03:19.131601       1 autoregister_controller.go:144] Starting autoregister controller
	I0920 18:03:19.131609       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0920 18:03:19.131617       1 cache.go:39] Caches are synced for autoregister controller
	I0920 18:03:19.138005       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0920 18:03:19.182780       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0920 18:03:20.029248       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0920 18:03:20.237214       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.50.200]
	I0920 18:03:20.239159       1 controller.go:615] quota admission added evaluator for: endpoints
	I0920 18:03:20.246571       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0920 18:03:20.499874       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0920 18:03:20.523017       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0920 18:03:20.585380       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0920 18:03:20.638177       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0920 18:03:20.649500       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-apiserver [e65068aabc970c4f3ded7fd5c3bbaef59385603701d9c00ed027e8385212b674] <==
	I0920 18:03:02.869033       1 system_namespaces_controller.go:76] Shutting down system namespaces controller
	I0920 18:03:02.869064       1 customresource_discovery_controller.go:328] Shutting down DiscoveryController
	I0920 18:03:02.869103       1 controller.go:132] Ending legacy_token_tracking_controller
	I0920 18:03:02.869130       1 controller.go:133] Shutting down legacy_token_tracking_controller
	I0920 18:03:02.869170       1 cluster_authentication_trust_controller.go:466] Shutting down cluster_authentication_trust_controller controller
	I0920 18:03:02.869212       1 gc_controller.go:91] Shutting down apiserver lease garbage collector
	I0920 18:03:02.869497       1 dynamic_cafile_content.go:174] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0920 18:03:02.869577       1 establishing_controller.go:92] Shutting down EstablishingController
	I0920 18:03:02.869603       1 naming_controller.go:305] Shutting down NamingConditionController
	I0920 18:03:02.869657       1 controller.go:170] Shutting down OpenAPI controller
	I0920 18:03:02.870078       1 autoregister_controller.go:168] Shutting down autoregister controller
	I0920 18:03:02.870172       1 crdregistration_controller.go:145] Shutting down crd-autoregister controller
	I0920 18:03:02.870219       1 apiapproval_controller.go:201] Shutting down KubernetesAPIApprovalPolicyConformantConditionController
	I0920 18:03:02.870337       1 crd_finalizer.go:281] Shutting down CRDFinalizer
	I0920 18:03:02.870379       1 controller.go:120] Shutting down OpenAPI V3 controller
	I0920 18:03:02.870526       1 dynamic_cafile_content.go:174] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0920 18:03:02.870683       1 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0920 18:03:02.870729       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I0920 18:03:02.870790       1 controller.go:84] Shutting down OpenAPI AggregationController
	I0920 18:03:02.870823       1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"
	I0920 18:03:02.871119       1 dynamic_serving_content.go:149] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0920 18:03:02.871397       1 secure_serving.go:258] Stopped listening on [::]:8443
	I0920 18:03:02.871474       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0920 18:03:02.871717       1 dynamic_serving_content.go:149] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0920 18:03:02.871986       1 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	
	
	==> kube-controller-manager [81a97c79e3ef43249f4d0285077861826dcf108662f81b26a4e9ea112ebc5f45] <==
	I0920 18:02:58.233350       1 shared_informer.go:320] Caches are synced for service account
	I0920 18:02:58.233434       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0920 18:02:58.234389       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0920 18:02:58.234466       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0920 18:02:58.234899       1 shared_informer.go:320] Caches are synced for disruption
	I0920 18:02:58.234948       1 shared_informer.go:320] Caches are synced for PVC protection
	I0920 18:02:58.235033       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0920 18:02:58.235069       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0920 18:02:58.235095       1 shared_informer.go:320] Caches are synced for daemon sets
	I0920 18:02:58.253375       1 shared_informer.go:320] Caches are synced for namespace
	I0920 18:02:58.256952       1 shared_informer.go:320] Caches are synced for ephemeral
	I0920 18:02:58.260463       1 shared_informer.go:320] Caches are synced for deployment
	I0920 18:02:58.264668       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0920 18:02:58.270339       1 shared_informer.go:320] Caches are synced for endpoint
	I0920 18:02:58.283695       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0920 18:02:58.332070       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0920 18:02:58.392317       1 shared_informer.go:320] Caches are synced for attach detach
	I0920 18:02:58.422393       1 shared_informer.go:320] Caches are synced for resource quota
	I0920 18:02:58.436333       1 shared_informer.go:320] Caches are synced for HPA
	I0920 18:02:58.447057       1 shared_informer.go:320] Caches are synced for resource quota
	I0920 18:02:58.500248       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="266.755611ms"
	I0920 18:02:58.501379       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="40.904µs"
	I0920 18:02:58.884964       1 shared_informer.go:320] Caches are synced for garbage collector
	I0920 18:02:58.885030       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0920 18:02:58.892531       1 shared_informer.go:320] Caches are synced for garbage collector
	
	
	==> kube-controller-manager [d71881d204e3837d28568eb8b2c2abb6ea017b5b30910bd6ac080e4ccff72d72] <==
	I0920 18:03:22.452334       1 shared_informer.go:320] Caches are synced for job
	I0920 18:03:22.452864       1 shared_informer.go:320] Caches are synced for expand
	I0920 18:03:22.454016       1 shared_informer.go:320] Caches are synced for TTL
	I0920 18:03:22.467744       1 shared_informer.go:320] Caches are synced for node
	I0920 18:03:22.467836       1 range_allocator.go:171] "Sending events to api server" logger="node-ipam-controller"
	I0920 18:03:22.467875       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0920 18:03:22.467880       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0920 18:03:22.467890       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0920 18:03:22.468007       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="pause-421146"
	I0920 18:03:22.470799       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0920 18:03:22.474426       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0920 18:03:22.501750       1 shared_informer.go:320] Caches are synced for attach detach
	I0920 18:03:22.522228       1 shared_informer.go:320] Caches are synced for cronjob
	I0920 18:03:22.552375       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0920 18:03:22.600804       1 shared_informer.go:320] Caches are synced for stateful set
	I0920 18:03:22.642324       1 shared_informer.go:320] Caches are synced for resource quota
	I0920 18:03:22.644639       1 shared_informer.go:320] Caches are synced for daemon sets
	I0920 18:03:22.658867       1 shared_informer.go:320] Caches are synced for resource quota
	I0920 18:03:22.681635       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0920 18:03:22.703008       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0920 18:03:23.095770       1 shared_informer.go:320] Caches are synced for garbage collector
	I0920 18:03:23.152058       1 shared_informer.go:320] Caches are synced for garbage collector
	I0920 18:03:23.152175       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0920 18:03:27.947442       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="22.107317ms"
	I0920 18:03:27.950956       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="119.783µs"
	
	
	==> kube-proxy [45767f66e4613897801d10ba7ef85f5203974b47f8a0f18a155269923c318b40] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0920 18:03:19.775617       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0920 18:03:19.789574       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.200"]
	E0920 18:03:19.789834       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 18:03:19.830365       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0920 18:03:19.830477       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0920 18:03:19.830523       1 server_linux.go:169] "Using iptables Proxier"
	I0920 18:03:19.833839       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 18:03:19.834382       1 server.go:483] "Version info" version="v1.31.1"
	I0920 18:03:19.834459       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 18:03:19.836571       1 config.go:199] "Starting service config controller"
	I0920 18:03:19.836632       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 18:03:19.836757       1 config.go:328] "Starting node config controller"
	I0920 18:03:19.836786       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 18:03:19.836889       1 config.go:105] "Starting endpoint slice config controller"
	I0920 18:03:19.836986       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 18:03:19.937792       1 shared_informer.go:320] Caches are synced for node config
	I0920 18:03:19.937897       1 shared_informer.go:320] Caches are synced for service config
	I0920 18:03:19.937819       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [998f1cab8849745ac4a858a8df2532e7b3bfc179f529b1172684f92fee861d0d] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0920 18:02:53.185410       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0920 18:02:54.935831       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.200"]
	E0920 18:02:54.936143       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 18:02:54.998740       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0920 18:02:54.998882       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0920 18:02:54.998922       1 server_linux.go:169] "Using iptables Proxier"
	I0920 18:02:55.010235       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 18:02:55.011070       1 server.go:483] "Version info" version="v1.31.1"
	I0920 18:02:55.011343       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 18:02:55.013333       1 config.go:199] "Starting service config controller"
	I0920 18:02:55.013392       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 18:02:55.013449       1 config.go:105] "Starting endpoint slice config controller"
	I0920 18:02:55.013467       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 18:02:55.014083       1 config.go:328] "Starting node config controller"
	I0920 18:02:55.014114       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 18:02:55.114171       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0920 18:02:55.114437       1 shared_informer.go:320] Caches are synced for node config
	I0920 18:02:55.114470       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [6caff0d1adf3a97c67a13c3e8be7618e77ea79c31f750fff59228a7be609fd4d] <==
	I0920 18:02:53.338698       1 serving.go:386] Generated self-signed cert in-memory
	W0920 18:02:54.899155       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0920 18:02:54.899206       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0920 18:02:54.899216       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0920 18:02:54.899228       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0920 18:02:54.940963       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0920 18:02:54.942769       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 18:02:54.945769       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0920 18:02:54.945890       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0920 18:02:54.946072       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0920 18:02:54.946182       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0920 18:02:55.046151       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0920 18:03:12.918961       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0920 18:03:12.919112       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E0920 18:03:12.919244       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [c2b467124a385a8365850c1338b354a85bd06687fa90cfe3f841200fb6d9b74c] <==
	I0920 18:03:16.569979       1 serving.go:386] Generated self-signed cert in-memory
	W0920 18:03:19.085791       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0920 18:03:19.085846       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0920 18:03:19.085856       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0920 18:03:19.085862       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0920 18:03:19.109535       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0920 18:03:19.109570       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 18:03:19.111650       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0920 18:03:19.111812       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0920 18:03:19.111846       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0920 18:03:19.111862       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0920 18:03:19.214550       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 20 18:03:15 pause-421146 kubelet[3065]: I0920 18:03:15.629254    3065 scope.go:117] "RemoveContainer" containerID="81a97c79e3ef43249f4d0285077861826dcf108662f81b26a4e9ea112ebc5f45"
	Sep 20 18:03:15 pause-421146 kubelet[3065]: I0920 18:03:15.631926    3065 scope.go:117] "RemoveContainer" containerID="6caff0d1adf3a97c67a13c3e8be7618e77ea79c31f750fff59228a7be609fd4d"
	Sep 20 18:03:15 pause-421146 kubelet[3065]: I0920 18:03:15.632689    3065 scope.go:117] "RemoveContainer" containerID="acff960df015881f08db968c9ce85cea196679f2ba2dd86c8c0589365b50502c"
	Sep 20 18:03:15 pause-421146 kubelet[3065]: E0920 18:03:15.770325    3065 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-421146?timeout=10s\": dial tcp 192.168.50.200:8443: connect: connection refused" interval="800ms"
	Sep 20 18:03:15 pause-421146 kubelet[3065]: I0920 18:03:15.961792    3065 kubelet_node_status.go:72] "Attempting to register node" node="pause-421146"
	Sep 20 18:03:15 pause-421146 kubelet[3065]: E0920 18:03:15.962879    3065 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.200:8443: connect: connection refused" node="pause-421146"
	Sep 20 18:03:15 pause-421146 kubelet[3065]: W0920 18:03:15.991072    3065 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dpause-421146&limit=500&resourceVersion=0": dial tcp 192.168.50.200:8443: connect: connection refused
	Sep 20 18:03:15 pause-421146 kubelet[3065]: E0920 18:03:15.991158    3065 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dpause-421146&limit=500&resourceVersion=0\": dial tcp 192.168.50.200:8443: connect: connection refused" logger="UnhandledError"
	Sep 20 18:03:16 pause-421146 kubelet[3065]: I0920 18:03:16.765100    3065 kubelet_node_status.go:72] "Attempting to register node" node="pause-421146"
	Sep 20 18:03:19 pause-421146 kubelet[3065]: I0920 18:03:19.135086    3065 apiserver.go:52] "Watching apiserver"
	Sep 20 18:03:19 pause-421146 kubelet[3065]: I0920 18:03:19.165715    3065 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 20 18:03:19 pause-421146 kubelet[3065]: I0920 18:03:19.174609    3065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e95cbd03-c3ac-4381-ba7e-dab67b046217-xtables-lock\") pod \"kube-proxy-gcp8x\" (UID: \"e95cbd03-c3ac-4381-ba7e-dab67b046217\") " pod="kube-system/kube-proxy-gcp8x"
	Sep 20 18:03:19 pause-421146 kubelet[3065]: I0920 18:03:19.174658    3065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e95cbd03-c3ac-4381-ba7e-dab67b046217-lib-modules\") pod \"kube-proxy-gcp8x\" (UID: \"e95cbd03-c3ac-4381-ba7e-dab67b046217\") " pod="kube-system/kube-proxy-gcp8x"
	Sep 20 18:03:19 pause-421146 kubelet[3065]: I0920 18:03:19.185898    3065 kubelet_node_status.go:111] "Node was previously registered" node="pause-421146"
	Sep 20 18:03:19 pause-421146 kubelet[3065]: I0920 18:03:19.186030    3065 kubelet_node_status.go:75] "Successfully registered node" node="pause-421146"
	Sep 20 18:03:19 pause-421146 kubelet[3065]: I0920 18:03:19.186131    3065 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 20 18:03:19 pause-421146 kubelet[3065]: I0920 18:03:19.187196    3065 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 20 18:03:19 pause-421146 kubelet[3065]: I0920 18:03:19.461317    3065 scope.go:117] "RemoveContainer" containerID="998f1cab8849745ac4a858a8df2532e7b3bfc179f529b1172684f92fee861d0d"
	Sep 20 18:03:19 pause-421146 kubelet[3065]: I0920 18:03:19.461536    3065 scope.go:117] "RemoveContainer" containerID="75d3de29f781282cadcafa18080ca10c150d68ea2643c96305500091f11cd130"
	Sep 20 18:03:21 pause-421146 kubelet[3065]: I0920 18:03:21.358446    3065 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Sep 20 18:03:25 pause-421146 kubelet[3065]: E0920 18:03:25.259614    3065 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855405258911971,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:03:25 pause-421146 kubelet[3065]: E0920 18:03:25.259674    3065 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855405258911971,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:03:27 pause-421146 kubelet[3065]: I0920 18:03:27.901365    3065 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Sep 20 18:03:35 pause-421146 kubelet[3065]: E0920 18:03:35.263436    3065 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855415261465865,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:03:35 pause-421146 kubelet[3065]: E0920 18:03:35.263484    3065 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855415261465865,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-421146 -n pause-421146
helpers_test.go:261: (dbg) Run:  kubectl --context pause-421146 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-421146 -n pause-421146
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-421146 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-421146 logs -n 25: (1.556407079s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | force-systemd-flag-956160 ssh cat     | force-systemd-flag-956160 | jenkins | v1.34.0 | 20 Sep 24 17:59 UTC | 20 Sep 24 17:59 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-956160          | force-systemd-flag-956160 | jenkins | v1.34.0 | 20 Sep 24 17:59 UTC | 20 Sep 24 17:59 UTC |
	| start   | -p cert-expiration-452691             | cert-expiration-452691    | jenkins | v1.34.0 | 20 Sep 24 17:59 UTC | 20 Sep 24 18:00 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-299391 stop           | minikube                  | jenkins | v1.26.0 | 20 Sep 24 17:59 UTC | 20 Sep 24 17:59 UTC |
	| start   | -p stopped-upgrade-299391             | stopped-upgrade-299391    | jenkins | v1.34.0 | 20 Sep 24 17:59 UTC | 20 Sep 24 18:00 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-030548           | force-systemd-env-030548  | jenkins | v1.34.0 | 20 Sep 24 17:59 UTC | 20 Sep 24 17:59 UTC |
	| start   | -p cert-options-815898                | cert-options-815898       | jenkins | v1.34.0 | 20 Sep 24 17:59 UTC | 20 Sep 24 18:01 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-299391             | stopped-upgrade-299391    | jenkins | v1.34.0 | 20 Sep 24 18:00 UTC | 20 Sep 24 18:00 UTC |
	| start   | -p running-upgrade-267014             | minikube                  | jenkins | v1.26.0 | 20 Sep 24 18:00 UTC | 20 Sep 24 18:01 UTC |
	|         | --memory=2200 --vm-driver=kvm2        |                           |         |         |                     |                     |
	|         |  --container-runtime=crio             |                           |         |         |                     |                     |
	| ssh     | cert-options-815898 ssh               | cert-options-815898       | jenkins | v1.34.0 | 20 Sep 24 18:01 UTC | 20 Sep 24 18:01 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-815898 -- sudo        | cert-options-815898       | jenkins | v1.34.0 | 20 Sep 24 18:01 UTC | 20 Sep 24 18:01 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-815898                | cert-options-815898       | jenkins | v1.34.0 | 20 Sep 24 18:01 UTC | 20 Sep 24 18:01 UTC |
	| start   | -p pause-421146 --memory=2048         | pause-421146              | jenkins | v1.34.0 | 20 Sep 24 18:01 UTC | 20 Sep 24 18:02 UTC |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2              |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p running-upgrade-267014             | running-upgrade-267014    | jenkins | v1.34.0 | 20 Sep 24 18:01 UTC | 20 Sep 24 18:02 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-299508          | kubernetes-upgrade-299508 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	| start   | -p kubernetes-upgrade-299508          | kubernetes-upgrade-299508 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p pause-421146                       | pause-421146              | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:03 UTC |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-299508          | kubernetes-upgrade-299508 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-299508          | kubernetes-upgrade-299508 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:03 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-267014             | running-upgrade-267014    | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	| start   | -p NoKubernetes-246858                | NoKubernetes-246858       | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC |                     |
	|         | --no-kubernetes                       |                           |         |         |                     |                     |
	|         | --kubernetes-version=1.20             |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-246858                | NoKubernetes-246858       | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p cert-expiration-452691             | cert-expiration-452691    | jenkins | v1.34.0 | 20 Sep 24 18:03 UTC |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-299508          | kubernetes-upgrade-299508 | jenkins | v1.34.0 | 20 Sep 24 18:03 UTC | 20 Sep 24 18:03 UTC |
	| start   | -p auto-833505 --memory=3072          | auto-833505               | jenkins | v1.34.0 | 20 Sep 24 18:03 UTC |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 18:03:22
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 18:03:22.450914   59003 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:03:22.451201   59003 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:03:22.451212   59003 out.go:358] Setting ErrFile to fd 2...
	I0920 18:03:22.451216   59003 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:03:22.451456   59003 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-8777/.minikube/bin
	I0920 18:03:22.452130   59003 out.go:352] Setting JSON to false
	I0920 18:03:22.453116   59003 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6345,"bootTime":1726849057,"procs":225,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 18:03:22.453222   59003 start.go:139] virtualization: kvm guest
	I0920 18:03:22.455588   59003 out.go:177] * [auto-833505] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 18:03:22.457292   59003 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 18:03:22.457342   59003 notify.go:220] Checking for updates...
	I0920 18:03:22.460798   59003 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 18:03:22.462605   59003 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19672-8777/kubeconfig
	I0920 18:03:22.464220   59003 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-8777/.minikube
	I0920 18:03:22.465744   59003 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 18:03:22.467309   59003 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 18:03:22.469537   59003 config.go:182] Loaded profile config "NoKubernetes-246858": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:03:22.469705   59003 config.go:182] Loaded profile config "cert-expiration-452691": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:03:22.469944   59003 config.go:182] Loaded profile config "pause-421146": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:03:22.470067   59003 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 18:03:22.508402   59003 out.go:177] * Using the kvm2 driver based on user configuration
	I0920 18:03:22.509926   59003 start.go:297] selected driver: kvm2
	I0920 18:03:22.509945   59003 start.go:901] validating driver "kvm2" against <nil>
	I0920 18:03:22.509957   59003 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 18:03:22.510873   59003 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 18:03:22.510960   59003 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19672-8777/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 18:03:22.527072   59003 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 18:03:22.527148   59003 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 18:03:22.527403   59003 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 18:03:22.527433   59003 cni.go:84] Creating CNI manager for ""
	I0920 18:03:22.527475   59003 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 18:03:22.527484   59003 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0920 18:03:22.527540   59003 start.go:340] cluster config:
	{Name:auto-833505 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-833505 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgent
PID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:03:22.527632   59003 iso.go:125] acquiring lock: {Name:mkba95ef0488e46f622333e9f317f43def93040b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 18:03:22.529625   59003 out.go:177] * Starting "auto-833505" primary control-plane node in "auto-833505" cluster
	I0920 18:03:18.478888   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | domain NoKubernetes-246858 has defined MAC address 52:54:00:71:4a:f8 in network mk-NoKubernetes-246858
	I0920 18:03:18.479454   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | unable to find current IP address of domain NoKubernetes-246858 in network mk-NoKubernetes-246858
	I0920 18:03:18.479477   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | I0920 18:03:18.479406   58551 retry.go:31] will retry after 4.255494758s: waiting for machine to come up
	I0920 18:03:22.737279   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | domain NoKubernetes-246858 has defined MAC address 52:54:00:71:4a:f8 in network mk-NoKubernetes-246858
	I0920 18:03:22.738031   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | unable to find current IP address of domain NoKubernetes-246858 in network mk-NoKubernetes-246858
	I0920 18:03:22.738041   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | I0920 18:03:22.737987   58551 retry.go:31] will retry after 4.164560114s: waiting for machine to come up
	I0920 18:03:20.647601   58846 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 18:03:20.647643   58846 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0920 18:03:20.647649   58846 cache.go:56] Caching tarball of preloaded images
	I0920 18:03:20.647726   58846 preload.go:172] Found /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 18:03:20.647734   58846 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0920 18:03:20.647855   58846 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/cert-expiration-452691/config.json ...
	I0920 18:03:20.648103   58846 start.go:360] acquireMachinesLock for cert-expiration-452691: {Name:mkfeedb385cf08b5d2aa00913e85815d02a180c2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 18:03:22.692648   58226 pod_ready.go:103] pod "coredns-7c65d6cfc9-kzpzk" in "kube-system" namespace has status "Ready":"False"
	I0920 18:03:25.193160   58226 pod_ready.go:103] pod "coredns-7c65d6cfc9-kzpzk" in "kube-system" namespace has status "Ready":"False"
	I0920 18:03:22.531007   59003 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 18:03:22.531077   59003 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0920 18:03:22.531089   59003 cache.go:56] Caching tarball of preloaded images
	I0920 18:03:22.531218   59003 preload.go:172] Found /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 18:03:22.531234   59003 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0920 18:03:22.531344   59003 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/auto-833505/config.json ...
	I0920 18:03:22.531364   59003 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/auto-833505/config.json: {Name:mkff9238b5c083de46a1bc752a696dc0589463c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:03:22.531560   59003 start.go:360] acquireMachinesLock for auto-833505: {Name:mkfeedb385cf08b5d2aa00913e85815d02a180c2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 18:03:26.907388   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | domain NoKubernetes-246858 has defined MAC address 52:54:00:71:4a:f8 in network mk-NoKubernetes-246858
	I0920 18:03:26.908055   58507 main.go:141] libmachine: (NoKubernetes-246858) Found IP for machine: 192.168.72.119
	I0920 18:03:26.908068   58507 main.go:141] libmachine: (NoKubernetes-246858) Reserving static IP address...
	I0920 18:03:26.908080   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | domain NoKubernetes-246858 has current primary IP address 192.168.72.119 and MAC address 52:54:00:71:4a:f8 in network mk-NoKubernetes-246858
	I0920 18:03:26.908404   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | unable to find host DHCP lease matching {name: "NoKubernetes-246858", mac: "52:54:00:71:4a:f8", ip: "192.168.72.119"} in network mk-NoKubernetes-246858
	I0920 18:03:26.995579   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | Getting to WaitForSSH function...
	I0920 18:03:26.995603   58507 main.go:141] libmachine: (NoKubernetes-246858) Reserved static IP address: 192.168.72.119
	I0920 18:03:26.995641   58507 main.go:141] libmachine: (NoKubernetes-246858) Waiting for SSH to be available...
	I0920 18:03:26.997980   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | domain NoKubernetes-246858 has defined MAC address 52:54:00:71:4a:f8 in network mk-NoKubernetes-246858
	I0920 18:03:26.998332   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:4a:f8", ip: ""} in network mk-NoKubernetes-246858: {Iface:virbr4 ExpiryTime:2024-09-20 19:03:17 +0000 UTC Type:0 Mac:52:54:00:71:4a:f8 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:minikube Clientid:01:52:54:00:71:4a:f8}
	I0920 18:03:26.998349   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | domain NoKubernetes-246858 has defined IP address 192.168.72.119 and MAC address 52:54:00:71:4a:f8 in network mk-NoKubernetes-246858
	I0920 18:03:26.998473   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | Using SSH client type: external
	I0920 18:03:26.998495   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | Using SSH private key: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/NoKubernetes-246858/id_rsa (-rw-------)
	I0920 18:03:26.998582   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.119 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19672-8777/.minikube/machines/NoKubernetes-246858/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 18:03:26.998597   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | About to run SSH command:
	I0920 18:03:26.998610   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | exit 0
	I0920 18:03:27.125807   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | SSH cmd err, output: <nil>: 
	I0920 18:03:27.126122   58507 main.go:141] libmachine: (NoKubernetes-246858) KVM machine creation complete!
	I0920 18:03:27.126367   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetConfigRaw
	I0920 18:03:27.127008   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .DriverName
	I0920 18:03:27.127223   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .DriverName
	I0920 18:03:27.127396   58507 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0920 18:03:27.127404   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetState
	I0920 18:03:27.128998   58507 main.go:141] libmachine: Detecting operating system of created instance...
	I0920 18:03:27.129014   58507 main.go:141] libmachine: Waiting for SSH to be available...
	I0920 18:03:27.129020   58507 main.go:141] libmachine: Getting to WaitForSSH function...
	I0920 18:03:27.129027   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetSSHHostname
	I0920 18:03:27.131934   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | domain NoKubernetes-246858 has defined MAC address 52:54:00:71:4a:f8 in network mk-NoKubernetes-246858
	I0920 18:03:27.132352   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:4a:f8", ip: ""} in network mk-NoKubernetes-246858: {Iface:virbr4 ExpiryTime:2024-09-20 19:03:17 +0000 UTC Type:0 Mac:52:54:00:71:4a:f8 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:NoKubernetes-246858 Clientid:01:52:54:00:71:4a:f8}
	I0920 18:03:27.132373   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | domain NoKubernetes-246858 has defined IP address 192.168.72.119 and MAC address 52:54:00:71:4a:f8 in network mk-NoKubernetes-246858
	I0920 18:03:27.132505   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetSSHPort
	I0920 18:03:27.132643   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetSSHKeyPath
	I0920 18:03:27.132842   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetSSHKeyPath
	I0920 18:03:27.133005   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetSSHUsername
	I0920 18:03:27.133194   58507 main.go:141] libmachine: Using SSH client type: native
	I0920 18:03:27.133382   58507 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.119 22 <nil> <nil>}
	I0920 18:03:27.133387   58507 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0920 18:03:27.245390   58507 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 18:03:27.245404   58507 main.go:141] libmachine: Detecting the provisioner...
	I0920 18:03:27.245447   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetSSHHostname
	I0920 18:03:27.248407   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | domain NoKubernetes-246858 has defined MAC address 52:54:00:71:4a:f8 in network mk-NoKubernetes-246858
	I0920 18:03:27.248880   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:4a:f8", ip: ""} in network mk-NoKubernetes-246858: {Iface:virbr4 ExpiryTime:2024-09-20 19:03:17 +0000 UTC Type:0 Mac:52:54:00:71:4a:f8 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:NoKubernetes-246858 Clientid:01:52:54:00:71:4a:f8}
	I0920 18:03:27.248907   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | domain NoKubernetes-246858 has defined IP address 192.168.72.119 and MAC address 52:54:00:71:4a:f8 in network mk-NoKubernetes-246858
	I0920 18:03:27.249060   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetSSHPort
	I0920 18:03:27.249269   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetSSHKeyPath
	I0920 18:03:27.249424   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetSSHKeyPath
	I0920 18:03:27.249566   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetSSHUsername
	I0920 18:03:27.249714   58507 main.go:141] libmachine: Using SSH client type: native
	I0920 18:03:27.249911   58507 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.119 22 <nil> <nil>}
	I0920 18:03:27.249916   58507 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0920 18:03:27.362973   58507 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0920 18:03:27.363049   58507 main.go:141] libmachine: found compatible host: buildroot
	I0920 18:03:27.363057   58507 main.go:141] libmachine: Provisioning with buildroot...
	I0920 18:03:27.363063   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetMachineName
	I0920 18:03:27.363345   58507 buildroot.go:166] provisioning hostname "NoKubernetes-246858"
	I0920 18:03:27.363365   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetMachineName
	I0920 18:03:27.363582   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetSSHHostname
	I0920 18:03:27.366286   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | domain NoKubernetes-246858 has defined MAC address 52:54:00:71:4a:f8 in network mk-NoKubernetes-246858
	I0920 18:03:27.366665   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:4a:f8", ip: ""} in network mk-NoKubernetes-246858: {Iface:virbr4 ExpiryTime:2024-09-20 19:03:17 +0000 UTC Type:0 Mac:52:54:00:71:4a:f8 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:NoKubernetes-246858 Clientid:01:52:54:00:71:4a:f8}
	I0920 18:03:27.366681   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | domain NoKubernetes-246858 has defined IP address 192.168.72.119 and MAC address 52:54:00:71:4a:f8 in network mk-NoKubernetes-246858
	I0920 18:03:27.366808   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetSSHPort
	I0920 18:03:27.366989   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetSSHKeyPath
	I0920 18:03:27.367131   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetSSHKeyPath
	I0920 18:03:27.367271   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetSSHUsername
	I0920 18:03:27.367431   58507 main.go:141] libmachine: Using SSH client type: native
	I0920 18:03:27.367612   58507 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.119 22 <nil> <nil>}
	I0920 18:03:27.367619   58507 main.go:141] libmachine: About to run SSH command:
	sudo hostname NoKubernetes-246858 && echo "NoKubernetes-246858" | sudo tee /etc/hostname
	I0920 18:03:27.496101   58507 main.go:141] libmachine: SSH cmd err, output: <nil>: NoKubernetes-246858
	
	I0920 18:03:27.496124   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetSSHHostname
	I0920 18:03:27.498852   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | domain NoKubernetes-246858 has defined MAC address 52:54:00:71:4a:f8 in network mk-NoKubernetes-246858
	I0920 18:03:27.499260   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:4a:f8", ip: ""} in network mk-NoKubernetes-246858: {Iface:virbr4 ExpiryTime:2024-09-20 19:03:17 +0000 UTC Type:0 Mac:52:54:00:71:4a:f8 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:NoKubernetes-246858 Clientid:01:52:54:00:71:4a:f8}
	I0920 18:03:27.499298   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | domain NoKubernetes-246858 has defined IP address 192.168.72.119 and MAC address 52:54:00:71:4a:f8 in network mk-NoKubernetes-246858
	I0920 18:03:27.499648   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetSSHPort
	I0920 18:03:27.499842   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetSSHKeyPath
	I0920 18:03:27.499979   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetSSHKeyPath
	I0920 18:03:27.500171   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetSSHUsername
	I0920 18:03:27.500331   58507 main.go:141] libmachine: Using SSH client type: native
	I0920 18:03:27.500496   58507 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.119 22 <nil> <nil>}
	I0920 18:03:27.500505   58507 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sNoKubernetes-246858' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 NoKubernetes-246858/g' /etc/hosts;
				else 
					echo '127.0.1.1 NoKubernetes-246858' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 18:03:27.618885   58507 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 18:03:27.618902   58507 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19672-8777/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-8777/.minikube}
	I0920 18:03:27.618962   58507 buildroot.go:174] setting up certificates
	I0920 18:03:27.618969   58507 provision.go:84] configureAuth start
	I0920 18:03:27.618978   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetMachineName
	I0920 18:03:27.619319   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetIP
	I0920 18:03:27.621826   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | domain NoKubernetes-246858 has defined MAC address 52:54:00:71:4a:f8 in network mk-NoKubernetes-246858
	I0920 18:03:27.622156   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:4a:f8", ip: ""} in network mk-NoKubernetes-246858: {Iface:virbr4 ExpiryTime:2024-09-20 19:03:17 +0000 UTC Type:0 Mac:52:54:00:71:4a:f8 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:NoKubernetes-246858 Clientid:01:52:54:00:71:4a:f8}
	I0920 18:03:27.622177   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | domain NoKubernetes-246858 has defined IP address 192.168.72.119 and MAC address 52:54:00:71:4a:f8 in network mk-NoKubernetes-246858
	I0920 18:03:27.622315   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetSSHHostname
	I0920 18:03:27.624649   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | domain NoKubernetes-246858 has defined MAC address 52:54:00:71:4a:f8 in network mk-NoKubernetes-246858
	I0920 18:03:27.625042   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:4a:f8", ip: ""} in network mk-NoKubernetes-246858: {Iface:virbr4 ExpiryTime:2024-09-20 19:03:17 +0000 UTC Type:0 Mac:52:54:00:71:4a:f8 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:NoKubernetes-246858 Clientid:01:52:54:00:71:4a:f8}
	I0920 18:03:27.625051   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | domain NoKubernetes-246858 has defined IP address 192.168.72.119 and MAC address 52:54:00:71:4a:f8 in network mk-NoKubernetes-246858
	I0920 18:03:27.625248   58507 provision.go:143] copyHostCerts
	I0920 18:03:27.625304   58507 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem, removing ...
	I0920 18:03:27.625321   58507 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem
	I0920 18:03:27.625398   58507 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem (1082 bytes)
	I0920 18:03:27.625551   58507 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem, removing ...
	I0920 18:03:27.625562   58507 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem
	I0920 18:03:27.625605   58507 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem (1123 bytes)
	I0920 18:03:27.625699   58507 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem, removing ...
	I0920 18:03:27.625703   58507 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem
	I0920 18:03:27.625732   58507 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem (1675 bytes)
	I0920 18:03:27.625791   58507 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem org=jenkins.NoKubernetes-246858 san=[127.0.0.1 192.168.72.119 NoKubernetes-246858 localhost minikube]
	I0920 18:03:27.711762   58507 provision.go:177] copyRemoteCerts
	I0920 18:03:27.711821   58507 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 18:03:27.711845   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetSSHHostname
	I0920 18:03:27.714570   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | domain NoKubernetes-246858 has defined MAC address 52:54:00:71:4a:f8 in network mk-NoKubernetes-246858
	I0920 18:03:27.714971   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:4a:f8", ip: ""} in network mk-NoKubernetes-246858: {Iface:virbr4 ExpiryTime:2024-09-20 19:03:17 +0000 UTC Type:0 Mac:52:54:00:71:4a:f8 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:NoKubernetes-246858 Clientid:01:52:54:00:71:4a:f8}
	I0920 18:03:27.714995   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | domain NoKubernetes-246858 has defined IP address 192.168.72.119 and MAC address 52:54:00:71:4a:f8 in network mk-NoKubernetes-246858
	I0920 18:03:27.715155   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetSSHPort
	I0920 18:03:27.715361   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetSSHKeyPath
	I0920 18:03:27.715502   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetSSHUsername
	I0920 18:03:27.715614   58507 sshutil.go:53] new ssh client: &{IP:192.168.72.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/NoKubernetes-246858/id_rsa Username:docker}
	I0920 18:03:27.801169   58507 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0920 18:03:27.825744   58507 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0920 18:03:27.852721   58507 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 18:03:27.879448   58507 provision.go:87] duration metric: took 260.465719ms to configureAuth
	I0920 18:03:27.879471   58507 buildroot.go:189] setting minikube options for container-runtime
	I0920 18:03:27.879648   58507 config.go:182] Loaded profile config "NoKubernetes-246858": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:03:27.879709   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetSSHHostname
	I0920 18:03:27.882736   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | domain NoKubernetes-246858 has defined MAC address 52:54:00:71:4a:f8 in network mk-NoKubernetes-246858
	I0920 18:03:27.883068   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:4a:f8", ip: ""} in network mk-NoKubernetes-246858: {Iface:virbr4 ExpiryTime:2024-09-20 19:03:17 +0000 UTC Type:0 Mac:52:54:00:71:4a:f8 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:NoKubernetes-246858 Clientid:01:52:54:00:71:4a:f8}
	I0920 18:03:27.883090   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | domain NoKubernetes-246858 has defined IP address 192.168.72.119 and MAC address 52:54:00:71:4a:f8 in network mk-NoKubernetes-246858
	I0920 18:03:27.883320   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetSSHPort
	I0920 18:03:27.883505   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetSSHKeyPath
	I0920 18:03:27.883650   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetSSHKeyPath
	I0920 18:03:27.883826   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetSSHUsername
	I0920 18:03:27.883956   58507 main.go:141] libmachine: Using SSH client type: native
	I0920 18:03:27.884112   58507 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.119 22 <nil> <nil>}
	I0920 18:03:27.884128   58507 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 18:03:28.116432   58507 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 18:03:28.116445   58507 main.go:141] libmachine: Checking connection to Docker...
	I0920 18:03:28.116451   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetURL
	I0920 18:03:28.117794   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | Using libvirt version 6000000
	I0920 18:03:28.120302   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | domain NoKubernetes-246858 has defined MAC address 52:54:00:71:4a:f8 in network mk-NoKubernetes-246858
	I0920 18:03:28.120721   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:4a:f8", ip: ""} in network mk-NoKubernetes-246858: {Iface:virbr4 ExpiryTime:2024-09-20 19:03:17 +0000 UTC Type:0 Mac:52:54:00:71:4a:f8 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:NoKubernetes-246858 Clientid:01:52:54:00:71:4a:f8}
	I0920 18:03:28.120750   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | domain NoKubernetes-246858 has defined IP address 192.168.72.119 and MAC address 52:54:00:71:4a:f8 in network mk-NoKubernetes-246858
	I0920 18:03:28.120886   58507 main.go:141] libmachine: Docker is up and running!
	I0920 18:03:28.120894   58507 main.go:141] libmachine: Reticulating splines...
	I0920 18:03:28.120899   58507 client.go:171] duration metric: took 25.979136446s to LocalClient.Create
	I0920 18:03:28.120919   58507 start.go:167] duration metric: took 25.979197926s to libmachine.API.Create "NoKubernetes-246858"
	I0920 18:03:28.120925   58507 start.go:293] postStartSetup for "NoKubernetes-246858" (driver="kvm2")
	I0920 18:03:28.120935   58507 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 18:03:28.120966   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .DriverName
	I0920 18:03:28.121236   58507 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 18:03:28.121257   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetSSHHostname
	I0920 18:03:28.123933   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | domain NoKubernetes-246858 has defined MAC address 52:54:00:71:4a:f8 in network mk-NoKubernetes-246858
	I0920 18:03:28.124359   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:4a:f8", ip: ""} in network mk-NoKubernetes-246858: {Iface:virbr4 ExpiryTime:2024-09-20 19:03:17 +0000 UTC Type:0 Mac:52:54:00:71:4a:f8 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:NoKubernetes-246858 Clientid:01:52:54:00:71:4a:f8}
	I0920 18:03:28.124392   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | domain NoKubernetes-246858 has defined IP address 192.168.72.119 and MAC address 52:54:00:71:4a:f8 in network mk-NoKubernetes-246858
	I0920 18:03:28.124581   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetSSHPort
	I0920 18:03:28.124757   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetSSHKeyPath
	I0920 18:03:28.124944   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetSSHUsername
	I0920 18:03:28.125083   58507 sshutil.go:53] new ssh client: &{IP:192.168.72.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/NoKubernetes-246858/id_rsa Username:docker}
	I0920 18:03:28.213203   58507 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 18:03:28.217617   58507 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 18:03:28.217630   58507 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8777/.minikube/addons for local assets ...
	I0920 18:03:28.217697   58507 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8777/.minikube/files for local assets ...
	I0920 18:03:28.217762   58507 filesync.go:149] local asset: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem -> 159732.pem in /etc/ssl/certs
	I0920 18:03:28.217857   58507 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 18:03:28.370975   58846 start.go:364] duration metric: took 7.722827835s to acquireMachinesLock for "cert-expiration-452691"
	I0920 18:03:28.371021   58846 start.go:96] Skipping create...Using existing machine configuration
	I0920 18:03:28.371027   58846 fix.go:54] fixHost starting: 
	I0920 18:03:28.371428   58846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:03:28.371494   58846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:03:28.389804   58846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37835
	I0920 18:03:28.390250   58846 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:03:28.390822   58846 main.go:141] libmachine: Using API Version  1
	I0920 18:03:28.390842   58846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:03:28.391159   58846 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:03:28.391418   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .DriverName
	I0920 18:03:28.391556   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .GetState
	I0920 18:03:28.393562   58846 fix.go:112] recreateIfNeeded on cert-expiration-452691: state=Running err=<nil>
	W0920 18:03:28.393578   58846 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 18:03:28.396501   58846 out.go:177] * Updating the running kvm2 "cert-expiration-452691" VM ...
	I0920 18:03:28.227687   58507 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem --> /etc/ssl/certs/159732.pem (1708 bytes)
	I0920 18:03:28.251550   58507 start.go:296] duration metric: took 130.612071ms for postStartSetup
	I0920 18:03:28.251603   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetConfigRaw
	I0920 18:03:28.252235   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetIP
	I0920 18:03:28.255155   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | domain NoKubernetes-246858 has defined MAC address 52:54:00:71:4a:f8 in network mk-NoKubernetes-246858
	I0920 18:03:28.255641   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:4a:f8", ip: ""} in network mk-NoKubernetes-246858: {Iface:virbr4 ExpiryTime:2024-09-20 19:03:17 +0000 UTC Type:0 Mac:52:54:00:71:4a:f8 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:NoKubernetes-246858 Clientid:01:52:54:00:71:4a:f8}
	I0920 18:03:28.255660   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | domain NoKubernetes-246858 has defined IP address 192.168.72.119 and MAC address 52:54:00:71:4a:f8 in network mk-NoKubernetes-246858
	I0920 18:03:28.255998   58507 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/NoKubernetes-246858/config.json ...
	I0920 18:03:28.256254   58507 start.go:128] duration metric: took 26.137020322s to createHost
	I0920 18:03:28.256293   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetSSHHostname
	I0920 18:03:28.258648   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | domain NoKubernetes-246858 has defined MAC address 52:54:00:71:4a:f8 in network mk-NoKubernetes-246858
	I0920 18:03:28.259090   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:4a:f8", ip: ""} in network mk-NoKubernetes-246858: {Iface:virbr4 ExpiryTime:2024-09-20 19:03:17 +0000 UTC Type:0 Mac:52:54:00:71:4a:f8 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:NoKubernetes-246858 Clientid:01:52:54:00:71:4a:f8}
	I0920 18:03:28.259101   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | domain NoKubernetes-246858 has defined IP address 192.168.72.119 and MAC address 52:54:00:71:4a:f8 in network mk-NoKubernetes-246858
	I0920 18:03:28.259219   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetSSHPort
	I0920 18:03:28.259416   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetSSHKeyPath
	I0920 18:03:28.259573   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetSSHKeyPath
	I0920 18:03:28.259745   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetSSHUsername
	I0920 18:03:28.259908   58507 main.go:141] libmachine: Using SSH client type: native
	I0920 18:03:28.260067   58507 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.119 22 <nil> <nil>}
	I0920 18:03:28.260071   58507 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 18:03:28.370808   58507 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726855408.329383665
	
	I0920 18:03:28.370820   58507 fix.go:216] guest clock: 1726855408.329383665
	I0920 18:03:28.370826   58507 fix.go:229] Guest: 2024-09-20 18:03:28.329383665 +0000 UTC Remote: 2024-09-20 18:03:28.256278374 +0000 UTC m=+35.079973559 (delta=73.105291ms)
	I0920 18:03:28.370844   58507 fix.go:200] guest clock delta is within tolerance: 73.105291ms
	I0920 18:03:28.370853   58507 start.go:83] releasing machines lock for "NoKubernetes-246858", held for 26.251846012s
	I0920 18:03:28.370890   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .DriverName
	I0920 18:03:28.371141   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetIP
	I0920 18:03:28.374684   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | domain NoKubernetes-246858 has defined MAC address 52:54:00:71:4a:f8 in network mk-NoKubernetes-246858
	I0920 18:03:28.375082   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:4a:f8", ip: ""} in network mk-NoKubernetes-246858: {Iface:virbr4 ExpiryTime:2024-09-20 19:03:17 +0000 UTC Type:0 Mac:52:54:00:71:4a:f8 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:NoKubernetes-246858 Clientid:01:52:54:00:71:4a:f8}
	I0920 18:03:28.375119   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | domain NoKubernetes-246858 has defined IP address 192.168.72.119 and MAC address 52:54:00:71:4a:f8 in network mk-NoKubernetes-246858
	I0920 18:03:28.375350   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .DriverName
	I0920 18:03:28.375935   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .DriverName
	I0920 18:03:28.376094   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .DriverName
	I0920 18:03:28.376200   58507 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 18:03:28.376234   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetSSHHostname
	I0920 18:03:28.376278   58507 ssh_runner.go:195] Run: cat /version.json
	I0920 18:03:28.376296   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetSSHHostname
	I0920 18:03:28.379291   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | domain NoKubernetes-246858 has defined MAC address 52:54:00:71:4a:f8 in network mk-NoKubernetes-246858
	I0920 18:03:28.379650   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:4a:f8", ip: ""} in network mk-NoKubernetes-246858: {Iface:virbr4 ExpiryTime:2024-09-20 19:03:17 +0000 UTC Type:0 Mac:52:54:00:71:4a:f8 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:NoKubernetes-246858 Clientid:01:52:54:00:71:4a:f8}
	I0920 18:03:28.379689   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | domain NoKubernetes-246858 has defined IP address 192.168.72.119 and MAC address 52:54:00:71:4a:f8 in network mk-NoKubernetes-246858
	I0920 18:03:28.379710   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | domain NoKubernetes-246858 has defined MAC address 52:54:00:71:4a:f8 in network mk-NoKubernetes-246858
	I0920 18:03:28.380006   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetSSHPort
	I0920 18:03:28.380179   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetSSHKeyPath
	I0920 18:03:28.380207   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:4a:f8", ip: ""} in network mk-NoKubernetes-246858: {Iface:virbr4 ExpiryTime:2024-09-20 19:03:17 +0000 UTC Type:0 Mac:52:54:00:71:4a:f8 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:NoKubernetes-246858 Clientid:01:52:54:00:71:4a:f8}
	I0920 18:03:28.380223   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | domain NoKubernetes-246858 has defined IP address 192.168.72.119 and MAC address 52:54:00:71:4a:f8 in network mk-NoKubernetes-246858
	I0920 18:03:28.380319   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetSSHUsername
	I0920 18:03:28.380403   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetSSHPort
	I0920 18:03:28.380446   58507 sshutil.go:53] new ssh client: &{IP:192.168.72.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/NoKubernetes-246858/id_rsa Username:docker}
	I0920 18:03:28.380512   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetSSHKeyPath
	I0920 18:03:28.380728   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetSSHUsername
	I0920 18:03:28.380897   58507 sshutil.go:53] new ssh client: &{IP:192.168.72.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/NoKubernetes-246858/id_rsa Username:docker}
	I0920 18:03:28.467306   58507 ssh_runner.go:195] Run: systemctl --version
	I0920 18:03:28.508152   58507 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 18:03:28.678757   58507 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 18:03:28.684863   58507 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 18:03:28.684935   58507 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 18:03:28.700567   58507 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 18:03:28.700580   58507 start.go:495] detecting cgroup driver to use...
	I0920 18:03:28.700652   58507 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 18:03:28.718368   58507 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 18:03:28.733560   58507 docker.go:217] disabling cri-docker service (if available) ...
	I0920 18:03:28.733608   58507 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 18:03:28.748692   58507 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 18:03:28.764821   58507 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 18:03:28.905166   58507 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 18:03:29.072381   58507 docker.go:233] disabling docker service ...
	I0920 18:03:29.072456   58507 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 18:03:29.091906   58507 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 18:03:29.106945   58507 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 18:03:29.256315   58507 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 18:03:29.392872   58507 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 18:03:29.408445   58507 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 18:03:29.429358   58507 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 18:03:29.429422   58507 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:03:29.440130   58507 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 18:03:29.440184   58507 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:03:29.450555   58507 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:03:29.461754   58507 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:03:29.472631   58507 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 18:03:29.483382   58507 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:03:29.493672   58507 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:03:29.510410   58507 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:03:29.520664   58507 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 18:03:29.529795   58507 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 18:03:29.529850   58507 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 18:03:29.541852   58507 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 18:03:29.551114   58507 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:03:29.669021   58507 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 18:03:29.760246   58507 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 18:03:29.760310   58507 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 18:03:29.764834   58507 start.go:563] Will wait 60s for crictl version
	I0920 18:03:29.764879   58507 ssh_runner.go:195] Run: which crictl
	I0920 18:03:29.768782   58507 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 18:03:29.808681   58507 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 18:03:29.808770   58507 ssh_runner.go:195] Run: crio --version
	I0920 18:03:29.837524   58507 ssh_runner.go:195] Run: crio --version
	I0920 18:03:29.868669   58507 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 18:03:28.398711   58846 machine.go:93] provisionDockerMachine start ...
	I0920 18:03:28.398736   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .DriverName
	I0920 18:03:28.398990   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .GetSSHHostname
	I0920 18:03:28.402473   58846 main.go:141] libmachine: (cert-expiration-452691) DBG | domain cert-expiration-452691 has defined MAC address 52:54:00:e4:61:4e in network mk-cert-expiration-452691
	I0920 18:03:28.402970   58846 main.go:141] libmachine: (cert-expiration-452691) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:61:4e", ip: ""} in network mk-cert-expiration-452691: {Iface:virbr3 ExpiryTime:2024-09-20 18:59:52 +0000 UTC Type:0 Mac:52:54:00:e4:61:4e Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:cert-expiration-452691 Clientid:01:52:54:00:e4:61:4e}
	I0920 18:03:28.402992   58846 main.go:141] libmachine: (cert-expiration-452691) DBG | domain cert-expiration-452691 has defined IP address 192.168.61.65 and MAC address 52:54:00:e4:61:4e in network mk-cert-expiration-452691
	I0920 18:03:28.403183   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .GetSSHPort
	I0920 18:03:28.403399   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .GetSSHKeyPath
	I0920 18:03:28.403657   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .GetSSHKeyPath
	I0920 18:03:28.403858   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .GetSSHUsername
	I0920 18:03:28.404033   58846 main.go:141] libmachine: Using SSH client type: native
	I0920 18:03:28.404235   58846 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.65 22 <nil> <nil>}
	I0920 18:03:28.404241   58846 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 18:03:28.521263   58846 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-452691
	
	I0920 18:03:28.521282   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .GetMachineName
	I0920 18:03:28.521609   58846 buildroot.go:166] provisioning hostname "cert-expiration-452691"
	I0920 18:03:28.521629   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .GetMachineName
	I0920 18:03:28.521865   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .GetSSHHostname
	I0920 18:03:28.525174   58846 main.go:141] libmachine: (cert-expiration-452691) DBG | domain cert-expiration-452691 has defined MAC address 52:54:00:e4:61:4e in network mk-cert-expiration-452691
	I0920 18:03:28.525645   58846 main.go:141] libmachine: (cert-expiration-452691) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:61:4e", ip: ""} in network mk-cert-expiration-452691: {Iface:virbr3 ExpiryTime:2024-09-20 18:59:52 +0000 UTC Type:0 Mac:52:54:00:e4:61:4e Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:cert-expiration-452691 Clientid:01:52:54:00:e4:61:4e}
	I0920 18:03:28.525670   58846 main.go:141] libmachine: (cert-expiration-452691) DBG | domain cert-expiration-452691 has defined IP address 192.168.61.65 and MAC address 52:54:00:e4:61:4e in network mk-cert-expiration-452691
	I0920 18:03:28.525885   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .GetSSHPort
	I0920 18:03:28.526072   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .GetSSHKeyPath
	I0920 18:03:28.526212   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .GetSSHKeyPath
	I0920 18:03:28.526375   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .GetSSHUsername
	I0920 18:03:28.526562   58846 main.go:141] libmachine: Using SSH client type: native
	I0920 18:03:28.526834   58846 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.65 22 <nil> <nil>}
	I0920 18:03:28.526854   58846 main.go:141] libmachine: About to run SSH command:
	sudo hostname cert-expiration-452691 && echo "cert-expiration-452691" | sudo tee /etc/hostname
	I0920 18:03:28.658309   58846 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-452691
	
	I0920 18:03:28.658331   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .GetSSHHostname
	I0920 18:03:28.661040   58846 main.go:141] libmachine: (cert-expiration-452691) DBG | domain cert-expiration-452691 has defined MAC address 52:54:00:e4:61:4e in network mk-cert-expiration-452691
	I0920 18:03:28.661350   58846 main.go:141] libmachine: (cert-expiration-452691) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:61:4e", ip: ""} in network mk-cert-expiration-452691: {Iface:virbr3 ExpiryTime:2024-09-20 18:59:52 +0000 UTC Type:0 Mac:52:54:00:e4:61:4e Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:cert-expiration-452691 Clientid:01:52:54:00:e4:61:4e}
	I0920 18:03:28.661385   58846 main.go:141] libmachine: (cert-expiration-452691) DBG | domain cert-expiration-452691 has defined IP address 192.168.61.65 and MAC address 52:54:00:e4:61:4e in network mk-cert-expiration-452691
	I0920 18:03:28.661527   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .GetSSHPort
	I0920 18:03:28.661705   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .GetSSHKeyPath
	I0920 18:03:28.661866   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .GetSSHKeyPath
	I0920 18:03:28.661985   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .GetSSHUsername
	I0920 18:03:28.662134   58846 main.go:141] libmachine: Using SSH client type: native
	I0920 18:03:28.662348   58846 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.65 22 <nil> <nil>}
	I0920 18:03:28.662363   58846 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-452691' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-452691/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-452691' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 18:03:28.776208   58846 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 18:03:28.776229   58846 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19672-8777/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-8777/.minikube}
	I0920 18:03:28.776276   58846 buildroot.go:174] setting up certificates
	I0920 18:03:28.776299   58846 provision.go:84] configureAuth start
	I0920 18:03:28.776308   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .GetMachineName
	I0920 18:03:28.776585   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .GetIP
	I0920 18:03:28.779404   58846 main.go:141] libmachine: (cert-expiration-452691) DBG | domain cert-expiration-452691 has defined MAC address 52:54:00:e4:61:4e in network mk-cert-expiration-452691
	I0920 18:03:28.779799   58846 main.go:141] libmachine: (cert-expiration-452691) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:61:4e", ip: ""} in network mk-cert-expiration-452691: {Iface:virbr3 ExpiryTime:2024-09-20 18:59:52 +0000 UTC Type:0 Mac:52:54:00:e4:61:4e Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:cert-expiration-452691 Clientid:01:52:54:00:e4:61:4e}
	I0920 18:03:28.779832   58846 main.go:141] libmachine: (cert-expiration-452691) DBG | domain cert-expiration-452691 has defined IP address 192.168.61.65 and MAC address 52:54:00:e4:61:4e in network mk-cert-expiration-452691
	I0920 18:03:28.780052   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .GetSSHHostname
	I0920 18:03:28.782564   58846 main.go:141] libmachine: (cert-expiration-452691) DBG | domain cert-expiration-452691 has defined MAC address 52:54:00:e4:61:4e in network mk-cert-expiration-452691
	I0920 18:03:28.782826   58846 main.go:141] libmachine: (cert-expiration-452691) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:61:4e", ip: ""} in network mk-cert-expiration-452691: {Iface:virbr3 ExpiryTime:2024-09-20 18:59:52 +0000 UTC Type:0 Mac:52:54:00:e4:61:4e Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:cert-expiration-452691 Clientid:01:52:54:00:e4:61:4e}
	I0920 18:03:28.782851   58846 main.go:141] libmachine: (cert-expiration-452691) DBG | domain cert-expiration-452691 has defined IP address 192.168.61.65 and MAC address 52:54:00:e4:61:4e in network mk-cert-expiration-452691
	I0920 18:03:28.783006   58846 provision.go:143] copyHostCerts
	I0920 18:03:28.783074   58846 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem, removing ...
	I0920 18:03:28.783092   58846 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem
	I0920 18:03:28.783173   58846 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem (1675 bytes)
	I0920 18:03:28.783359   58846 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem, removing ...
	I0920 18:03:28.783367   58846 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem
	I0920 18:03:28.783409   58846 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem (1082 bytes)
	I0920 18:03:28.783506   58846 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem, removing ...
	I0920 18:03:28.783510   58846 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem
	I0920 18:03:28.783536   58846 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem (1123 bytes)
	I0920 18:03:28.783605   58846 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-452691 san=[127.0.0.1 192.168.61.65 cert-expiration-452691 localhost minikube]
	I0920 18:03:28.988387   58846 provision.go:177] copyRemoteCerts
	I0920 18:03:28.988433   58846 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 18:03:28.988454   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .GetSSHHostname
	I0920 18:03:28.991656   58846 main.go:141] libmachine: (cert-expiration-452691) DBG | domain cert-expiration-452691 has defined MAC address 52:54:00:e4:61:4e in network mk-cert-expiration-452691
	I0920 18:03:28.992093   58846 main.go:141] libmachine: (cert-expiration-452691) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:61:4e", ip: ""} in network mk-cert-expiration-452691: {Iface:virbr3 ExpiryTime:2024-09-20 18:59:52 +0000 UTC Type:0 Mac:52:54:00:e4:61:4e Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:cert-expiration-452691 Clientid:01:52:54:00:e4:61:4e}
	I0920 18:03:28.992117   58846 main.go:141] libmachine: (cert-expiration-452691) DBG | domain cert-expiration-452691 has defined IP address 192.168.61.65 and MAC address 52:54:00:e4:61:4e in network mk-cert-expiration-452691
	I0920 18:03:28.992326   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .GetSSHPort
	I0920 18:03:28.992568   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .GetSSHKeyPath
	I0920 18:03:28.992780   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .GetSSHUsername
	I0920 18:03:28.992933   58846 sshutil.go:53] new ssh client: &{IP:192.168.61.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/cert-expiration-452691/id_rsa Username:docker}
	I0920 18:03:29.084879   58846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0920 18:03:29.114735   58846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0920 18:03:29.144451   58846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 18:03:29.172169   58846 provision.go:87] duration metric: took 395.857334ms to configureAuth
	I0920 18:03:29.172189   58846 buildroot.go:189] setting minikube options for container-runtime
	I0920 18:03:29.172346   58846 config.go:182] Loaded profile config "cert-expiration-452691": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:03:29.172445   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .GetSSHHostname
	I0920 18:03:29.176041   58846 main.go:141] libmachine: (cert-expiration-452691) DBG | domain cert-expiration-452691 has defined MAC address 52:54:00:e4:61:4e in network mk-cert-expiration-452691
	I0920 18:03:29.176598   58846 main.go:141] libmachine: (cert-expiration-452691) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:61:4e", ip: ""} in network mk-cert-expiration-452691: {Iface:virbr3 ExpiryTime:2024-09-20 18:59:52 +0000 UTC Type:0 Mac:52:54:00:e4:61:4e Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:cert-expiration-452691 Clientid:01:52:54:00:e4:61:4e}
	I0920 18:03:29.176621   58846 main.go:141] libmachine: (cert-expiration-452691) DBG | domain cert-expiration-452691 has defined IP address 192.168.61.65 and MAC address 52:54:00:e4:61:4e in network mk-cert-expiration-452691
	I0920 18:03:29.176840   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .GetSSHPort
	I0920 18:03:29.176998   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .GetSSHKeyPath
	I0920 18:03:29.177122   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .GetSSHKeyPath
	I0920 18:03:29.177271   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .GetSSHUsername
	I0920 18:03:29.177508   58846 main.go:141] libmachine: Using SSH client type: native
	I0920 18:03:29.177763   58846 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.65 22 <nil> <nil>}
	I0920 18:03:29.177776   58846 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 18:03:27.695216   58226 pod_ready.go:103] pod "coredns-7c65d6cfc9-kzpzk" in "kube-system" namespace has status "Ready":"False"
	I0920 18:03:28.193952   58226 pod_ready.go:93] pod "coredns-7c65d6cfc9-kzpzk" in "kube-system" namespace has status "Ready":"True"
	I0920 18:03:28.193975   58226 pod_ready.go:82] duration metric: took 7.508201541s for pod "coredns-7c65d6cfc9-kzpzk" in "kube-system" namespace to be "Ready" ...
	I0920 18:03:28.193985   58226 pod_ready.go:79] waiting up to 4m0s for pod "etcd-pause-421146" in "kube-system" namespace to be "Ready" ...
	I0920 18:03:28.199211   58226 pod_ready.go:93] pod "etcd-pause-421146" in "kube-system" namespace has status "Ready":"True"
	I0920 18:03:28.199257   58226 pod_ready.go:82] duration metric: took 5.263845ms for pod "etcd-pause-421146" in "kube-system" namespace to be "Ready" ...
	I0920 18:03:28.199269   58226 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-pause-421146" in "kube-system" namespace to be "Ready" ...
	I0920 18:03:30.208287   58226 pod_ready.go:103] pod "kube-apiserver-pause-421146" in "kube-system" namespace has status "Ready":"False"
	I0920 18:03:31.207617   58226 pod_ready.go:93] pod "kube-apiserver-pause-421146" in "kube-system" namespace has status "Ready":"True"
	I0920 18:03:31.207648   58226 pod_ready.go:82] duration metric: took 3.008369415s for pod "kube-apiserver-pause-421146" in "kube-system" namespace to be "Ready" ...
	I0920 18:03:31.207663   58226 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-pause-421146" in "kube-system" namespace to be "Ready" ...
	I0920 18:03:29.870071   58507 main.go:141] libmachine: (NoKubernetes-246858) Calling .GetIP
	I0920 18:03:29.873286   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | domain NoKubernetes-246858 has defined MAC address 52:54:00:71:4a:f8 in network mk-NoKubernetes-246858
	I0920 18:03:29.873709   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:4a:f8", ip: ""} in network mk-NoKubernetes-246858: {Iface:virbr4 ExpiryTime:2024-09-20 19:03:17 +0000 UTC Type:0 Mac:52:54:00:71:4a:f8 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:NoKubernetes-246858 Clientid:01:52:54:00:71:4a:f8}
	I0920 18:03:29.873740   58507 main.go:141] libmachine: (NoKubernetes-246858) DBG | domain NoKubernetes-246858 has defined IP address 192.168.72.119 and MAC address 52:54:00:71:4a:f8 in network mk-NoKubernetes-246858
	I0920 18:03:29.874043   58507 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0920 18:03:29.878339   58507 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 18:03:29.891128   58507 kubeadm.go:883] updating cluster {Name:NoKubernetes-246858 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:NoKubernetes-246858 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.119 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 18:03:29.891222   58507 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 18:03:29.891269   58507 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:03:29.923387   58507 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0920 18:03:29.923457   58507 ssh_runner.go:195] Run: which lz4
	I0920 18:03:29.927177   58507 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 18:03:29.931014   58507 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 18:03:29.931036   58507 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0920 18:03:31.342933   58507 crio.go:462] duration metric: took 1.415782149s to copy over tarball
	I0920 18:03:31.342991   58507 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0920 18:03:33.216801   58226 pod_ready.go:103] pod "kube-controller-manager-pause-421146" in "kube-system" namespace has status "Ready":"False"
	I0920 18:03:34.714689   58226 pod_ready.go:93] pod "kube-controller-manager-pause-421146" in "kube-system" namespace has status "Ready":"True"
	I0920 18:03:34.714723   58226 pod_ready.go:82] duration metric: took 3.507050896s for pod "kube-controller-manager-pause-421146" in "kube-system" namespace to be "Ready" ...
	I0920 18:03:34.714739   58226 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-gcp8x" in "kube-system" namespace to be "Ready" ...
	I0920 18:03:34.720590   58226 pod_ready.go:93] pod "kube-proxy-gcp8x" in "kube-system" namespace has status "Ready":"True"
	I0920 18:03:34.720615   58226 pod_ready.go:82] duration metric: took 5.869378ms for pod "kube-proxy-gcp8x" in "kube-system" namespace to be "Ready" ...
	I0920 18:03:34.720625   58226 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-pause-421146" in "kube-system" namespace to be "Ready" ...
	I0920 18:03:34.726519   58226 pod_ready.go:93] pod "kube-scheduler-pause-421146" in "kube-system" namespace has status "Ready":"True"
	I0920 18:03:34.726547   58226 pod_ready.go:82] duration metric: took 5.91332ms for pod "kube-scheduler-pause-421146" in "kube-system" namespace to be "Ready" ...
	I0920 18:03:34.726556   58226 pod_ready.go:39] duration metric: took 14.04633779s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:03:34.726573   58226 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 18:03:34.743036   58226 ops.go:34] apiserver oom_adj: -16
	I0920 18:03:34.743061   58226 kubeadm.go:597] duration metric: took 41.802732087s to restartPrimaryControlPlane
	I0920 18:03:34.743072   58226 kubeadm.go:394] duration metric: took 41.969300358s to StartCluster
	I0920 18:03:34.743092   58226 settings.go:142] acquiring lock: {Name:mk2a13c58cdc0faf8cddca5d6716175d45db9bfd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:03:34.743184   58226 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19672-8777/kubeconfig
	I0920 18:03:34.744302   58226 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/kubeconfig: {Name:mkf32a4c736808e023459b2f0e40188618a38db1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:03:34.744530   58226 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.200 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 18:03:34.744651   58226 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0920 18:03:34.744798   58226 config.go:182] Loaded profile config "pause-421146": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:03:34.746560   58226 out.go:177] * Enabled addons: 
	I0920 18:03:34.746571   58226 out.go:177] * Verifying Kubernetes components...
	I0920 18:03:35.044142   59003 start.go:364] duration metric: took 12.51255782s to acquireMachinesLock for "auto-833505"
	I0920 18:03:35.044215   59003 start.go:93] Provisioning new machine with config: &{Name:auto-833505 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-833505 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 18:03:35.044312   59003 start.go:125] createHost starting for "" (driver="kvm2")
	I0920 18:03:34.761062   58846 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 18:03:34.761079   58846 machine.go:96] duration metric: took 6.362357366s to provisionDockerMachine
	I0920 18:03:34.761092   58846 start.go:293] postStartSetup for "cert-expiration-452691" (driver="kvm2")
	I0920 18:03:34.761105   58846 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 18:03:34.761126   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .DriverName
	I0920 18:03:34.761638   58846 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 18:03:34.761664   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .GetSSHHostname
	I0920 18:03:34.764776   58846 main.go:141] libmachine: (cert-expiration-452691) DBG | domain cert-expiration-452691 has defined MAC address 52:54:00:e4:61:4e in network mk-cert-expiration-452691
	I0920 18:03:34.765100   58846 main.go:141] libmachine: (cert-expiration-452691) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:61:4e", ip: ""} in network mk-cert-expiration-452691: {Iface:virbr3 ExpiryTime:2024-09-20 18:59:52 +0000 UTC Type:0 Mac:52:54:00:e4:61:4e Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:cert-expiration-452691 Clientid:01:52:54:00:e4:61:4e}
	I0920 18:03:34.765116   58846 main.go:141] libmachine: (cert-expiration-452691) DBG | domain cert-expiration-452691 has defined IP address 192.168.61.65 and MAC address 52:54:00:e4:61:4e in network mk-cert-expiration-452691
	I0920 18:03:34.765343   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .GetSSHPort
	I0920 18:03:34.765534   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .GetSSHKeyPath
	I0920 18:03:34.765699   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .GetSSHUsername
	I0920 18:03:34.765826   58846 sshutil.go:53] new ssh client: &{IP:192.168.61.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/cert-expiration-452691/id_rsa Username:docker}
	I0920 18:03:34.860909   58846 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 18:03:34.865719   58846 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 18:03:34.865736   58846 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8777/.minikube/addons for local assets ...
	I0920 18:03:34.865818   58846 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8777/.minikube/files for local assets ...
	I0920 18:03:34.865962   58846 filesync.go:149] local asset: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem -> 159732.pem in /etc/ssl/certs
	I0920 18:03:34.866062   58846 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 18:03:34.877599   58846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem --> /etc/ssl/certs/159732.pem (1708 bytes)
	I0920 18:03:34.917811   58846 start.go:296] duration metric: took 156.703909ms for postStartSetup
	I0920 18:03:34.917869   58846 fix.go:56] duration metric: took 6.546842303s for fixHost
	I0920 18:03:34.917909   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .GetSSHHostname
	I0920 18:03:34.922026   58846 main.go:141] libmachine: (cert-expiration-452691) DBG | domain cert-expiration-452691 has defined MAC address 52:54:00:e4:61:4e in network mk-cert-expiration-452691
	I0920 18:03:34.922484   58846 main.go:141] libmachine: (cert-expiration-452691) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:61:4e", ip: ""} in network mk-cert-expiration-452691: {Iface:virbr3 ExpiryTime:2024-09-20 18:59:52 +0000 UTC Type:0 Mac:52:54:00:e4:61:4e Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:cert-expiration-452691 Clientid:01:52:54:00:e4:61:4e}
	I0920 18:03:34.922527   58846 main.go:141] libmachine: (cert-expiration-452691) DBG | domain cert-expiration-452691 has defined IP address 192.168.61.65 and MAC address 52:54:00:e4:61:4e in network mk-cert-expiration-452691
	I0920 18:03:34.922771   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .GetSSHPort
	I0920 18:03:34.922984   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .GetSSHKeyPath
	I0920 18:03:34.923206   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .GetSSHKeyPath
	I0920 18:03:34.923395   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .GetSSHUsername
	I0920 18:03:34.923572   58846 main.go:141] libmachine: Using SSH client type: native
	I0920 18:03:34.923781   58846 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.65 22 <nil> <nil>}
	I0920 18:03:34.923786   58846 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 18:03:35.044017   58846 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726855415.027122239
	
	I0920 18:03:35.044030   58846 fix.go:216] guest clock: 1726855415.027122239
	I0920 18:03:35.044037   58846 fix.go:229] Guest: 2024-09-20 18:03:35.027122239 +0000 UTC Remote: 2024-09-20 18:03:34.917872607 +0000 UTC m=+14.430685224 (delta=109.249632ms)
	I0920 18:03:35.044060   58846 fix.go:200] guest clock delta is within tolerance: 109.249632ms
	I0920 18:03:35.044065   58846 start.go:83] releasing machines lock for "cert-expiration-452691", held for 6.673072435s
	I0920 18:03:35.044092   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .DriverName
	I0920 18:03:35.044405   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .GetIP
	I0920 18:03:35.047724   58846 main.go:141] libmachine: (cert-expiration-452691) DBG | domain cert-expiration-452691 has defined MAC address 52:54:00:e4:61:4e in network mk-cert-expiration-452691
	I0920 18:03:35.047980   58846 main.go:141] libmachine: (cert-expiration-452691) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:61:4e", ip: ""} in network mk-cert-expiration-452691: {Iface:virbr3 ExpiryTime:2024-09-20 18:59:52 +0000 UTC Type:0 Mac:52:54:00:e4:61:4e Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:cert-expiration-452691 Clientid:01:52:54:00:e4:61:4e}
	I0920 18:03:35.048013   58846 main.go:141] libmachine: (cert-expiration-452691) DBG | domain cert-expiration-452691 has defined IP address 192.168.61.65 and MAC address 52:54:00:e4:61:4e in network mk-cert-expiration-452691
	I0920 18:03:35.048195   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .DriverName
	I0920 18:03:35.048911   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .DriverName
	I0920 18:03:35.049112   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .DriverName
	I0920 18:03:35.049197   58846 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 18:03:35.049229   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .GetSSHHostname
	I0920 18:03:35.049374   58846 ssh_runner.go:195] Run: cat /version.json
	I0920 18:03:35.049393   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .GetSSHHostname
	I0920 18:03:35.052189   58846 main.go:141] libmachine: (cert-expiration-452691) DBG | domain cert-expiration-452691 has defined MAC address 52:54:00:e4:61:4e in network mk-cert-expiration-452691
	I0920 18:03:35.052500   58846 main.go:141] libmachine: (cert-expiration-452691) DBG | domain cert-expiration-452691 has defined MAC address 52:54:00:e4:61:4e in network mk-cert-expiration-452691
	I0920 18:03:35.052642   58846 main.go:141] libmachine: (cert-expiration-452691) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:61:4e", ip: ""} in network mk-cert-expiration-452691: {Iface:virbr3 ExpiryTime:2024-09-20 18:59:52 +0000 UTC Type:0 Mac:52:54:00:e4:61:4e Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:cert-expiration-452691 Clientid:01:52:54:00:e4:61:4e}
	I0920 18:03:35.052674   58846 main.go:141] libmachine: (cert-expiration-452691) DBG | domain cert-expiration-452691 has defined IP address 192.168.61.65 and MAC address 52:54:00:e4:61:4e in network mk-cert-expiration-452691
	I0920 18:03:35.052926   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .GetSSHPort
	I0920 18:03:35.053109   58846 main.go:141] libmachine: (cert-expiration-452691) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:61:4e", ip: ""} in network mk-cert-expiration-452691: {Iface:virbr3 ExpiryTime:2024-09-20 18:59:52 +0000 UTC Type:0 Mac:52:54:00:e4:61:4e Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:cert-expiration-452691 Clientid:01:52:54:00:e4:61:4e}
	I0920 18:03:35.053133   58846 main.go:141] libmachine: (cert-expiration-452691) DBG | domain cert-expiration-452691 has defined IP address 192.168.61.65 and MAC address 52:54:00:e4:61:4e in network mk-cert-expiration-452691
	I0920 18:03:35.053164   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .GetSSHKeyPath
	I0920 18:03:35.053341   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .GetSSHUsername
	I0920 18:03:35.053355   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .GetSSHPort
	I0920 18:03:35.053547   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .GetSSHKeyPath
	I0920 18:03:35.053540   58846 sshutil.go:53] new ssh client: &{IP:192.168.61.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/cert-expiration-452691/id_rsa Username:docker}
	I0920 18:03:35.053684   58846 main.go:141] libmachine: (cert-expiration-452691) Calling .GetSSHUsername
	I0920 18:03:35.053791   58846 sshutil.go:53] new ssh client: &{IP:192.168.61.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/cert-expiration-452691/id_rsa Username:docker}
	I0920 18:03:35.185207   58846 ssh_runner.go:195] Run: systemctl --version
	I0920 18:03:35.236966   58846 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 18:03:35.441793   58846 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 18:03:35.455299   58846 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 18:03:35.455360   58846 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 18:03:35.467336   58846 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0920 18:03:35.467354   58846 start.go:495] detecting cgroup driver to use...
	I0920 18:03:35.467424   58846 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 18:03:35.490111   58846 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 18:03:35.510612   58846 docker.go:217] disabling cri-docker service (if available) ...
	I0920 18:03:35.510671   58846 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 18:03:34.748314   58226 addons.go:510] duration metric: took 3.666918ms for enable addons: enabled=[]
	I0920 18:03:34.748395   58226 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:03:34.936425   58226 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:03:34.960297   58226 node_ready.go:35] waiting up to 6m0s for node "pause-421146" to be "Ready" ...
	I0920 18:03:34.964467   58226 node_ready.go:49] node "pause-421146" has status "Ready":"True"
	I0920 18:03:34.964497   58226 node_ready.go:38] duration metric: took 4.158459ms for node "pause-421146" to be "Ready" ...
	I0920 18:03:34.964507   58226 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:03:34.973021   58226 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-kzpzk" in "kube-system" namespace to be "Ready" ...
	I0920 18:03:34.979836   58226 pod_ready.go:93] pod "coredns-7c65d6cfc9-kzpzk" in "kube-system" namespace has status "Ready":"True"
	I0920 18:03:34.979868   58226 pod_ready.go:82] duration metric: took 6.81667ms for pod "coredns-7c65d6cfc9-kzpzk" in "kube-system" namespace to be "Ready" ...
	I0920 18:03:34.979880   58226 pod_ready.go:79] waiting up to 6m0s for pod "etcd-pause-421146" in "kube-system" namespace to be "Ready" ...
	I0920 18:03:35.111773   58226 pod_ready.go:93] pod "etcd-pause-421146" in "kube-system" namespace has status "Ready":"True"
	I0920 18:03:35.111803   58226 pod_ready.go:82] duration metric: took 131.914672ms for pod "etcd-pause-421146" in "kube-system" namespace to be "Ready" ...
	I0920 18:03:35.111819   58226 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-pause-421146" in "kube-system" namespace to be "Ready" ...
	I0920 18:03:35.515214   58226 pod_ready.go:93] pod "kube-apiserver-pause-421146" in "kube-system" namespace has status "Ready":"True"
	I0920 18:03:35.515253   58226 pod_ready.go:82] duration metric: took 403.425436ms for pod "kube-apiserver-pause-421146" in "kube-system" namespace to be "Ready" ...
	I0920 18:03:35.515267   58226 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-pause-421146" in "kube-system" namespace to be "Ready" ...
	I0920 18:03:35.913540   58226 pod_ready.go:93] pod "kube-controller-manager-pause-421146" in "kube-system" namespace has status "Ready":"True"
	I0920 18:03:35.913569   58226 pod_ready.go:82] duration metric: took 398.292185ms for pod "kube-controller-manager-pause-421146" in "kube-system" namespace to be "Ready" ...
	I0920 18:03:35.913583   58226 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gcp8x" in "kube-system" namespace to be "Ready" ...
	I0920 18:03:36.312895   58226 pod_ready.go:93] pod "kube-proxy-gcp8x" in "kube-system" namespace has status "Ready":"True"
	I0920 18:03:36.312927   58226 pod_ready.go:82] duration metric: took 399.334933ms for pod "kube-proxy-gcp8x" in "kube-system" namespace to be "Ready" ...
	I0920 18:03:36.312941   58226 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-pause-421146" in "kube-system" namespace to be "Ready" ...
	I0920 18:03:35.167499   59003 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0920 18:03:35.167786   59003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:03:35.167850   59003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:03:35.184819   59003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40859
	I0920 18:03:35.185274   59003 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:03:35.186045   59003 main.go:141] libmachine: Using API Version  1
	I0920 18:03:35.186072   59003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:03:35.186550   59003 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:03:35.186798   59003 main.go:141] libmachine: (auto-833505) Calling .GetMachineName
	I0920 18:03:35.186950   59003 main.go:141] libmachine: (auto-833505) Calling .DriverName
	I0920 18:03:35.187165   59003 start.go:159] libmachine.API.Create for "auto-833505" (driver="kvm2")
	I0920 18:03:35.187203   59003 client.go:168] LocalClient.Create starting
	I0920 18:03:35.187238   59003 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem
	I0920 18:03:35.187281   59003 main.go:141] libmachine: Decoding PEM data...
	I0920 18:03:35.187300   59003 main.go:141] libmachine: Parsing certificate...
	I0920 18:03:35.187376   59003 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem
	I0920 18:03:35.187405   59003 main.go:141] libmachine: Decoding PEM data...
	I0920 18:03:35.187432   59003 main.go:141] libmachine: Parsing certificate...
	I0920 18:03:35.187467   59003 main.go:141] libmachine: Running pre-create checks...
	I0920 18:03:35.187486   59003 main.go:141] libmachine: (auto-833505) Calling .PreCreateCheck
	I0920 18:03:35.188057   59003 main.go:141] libmachine: (auto-833505) Calling .GetConfigRaw
	I0920 18:03:35.188572   59003 main.go:141] libmachine: Creating machine...
	I0920 18:03:35.188590   59003 main.go:141] libmachine: (auto-833505) Calling .Create
	I0920 18:03:35.188857   59003 main.go:141] libmachine: (auto-833505) Creating KVM machine...
	I0920 18:03:35.190397   59003 main.go:141] libmachine: (auto-833505) DBG | found existing default KVM network
	I0920 18:03:35.192384   59003 main.go:141] libmachine: (auto-833505) DBG | I0920 18:03:35.192183   59113 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000204c20}
	I0920 18:03:35.192434   59003 main.go:141] libmachine: (auto-833505) DBG | created network xml: 
	I0920 18:03:35.192465   59003 main.go:141] libmachine: (auto-833505) DBG | <network>
	I0920 18:03:35.192479   59003 main.go:141] libmachine: (auto-833505) DBG |   <name>mk-auto-833505</name>
	I0920 18:03:35.192497   59003 main.go:141] libmachine: (auto-833505) DBG |   <dns enable='no'/>
	I0920 18:03:35.192508   59003 main.go:141] libmachine: (auto-833505) DBG |   
	I0920 18:03:35.192517   59003 main.go:141] libmachine: (auto-833505) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0920 18:03:35.192527   59003 main.go:141] libmachine: (auto-833505) DBG |     <dhcp>
	I0920 18:03:35.192535   59003 main.go:141] libmachine: (auto-833505) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0920 18:03:35.192556   59003 main.go:141] libmachine: (auto-833505) DBG |     </dhcp>
	I0920 18:03:35.192583   59003 main.go:141] libmachine: (auto-833505) DBG |   </ip>
	I0920 18:03:35.192596   59003 main.go:141] libmachine: (auto-833505) DBG |   
	I0920 18:03:35.192609   59003 main.go:141] libmachine: (auto-833505) DBG | </network>
	I0920 18:03:35.192619   59003 main.go:141] libmachine: (auto-833505) DBG | 
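The DBG lines above show the network XML the kvm2 driver generates for the private cluster network before handing it to libvirt. A minimal sketch of defining and starting an equivalent network through the libvirt Go bindings (assumes libvirt.org/go/libvirt is installed and a local qemu:///system daemon is reachable; the XML literal is copied from the log and is illustrative only):

    package main

    import (
        "log"

        libvirt "libvirt.org/go/libvirt" // assumption: libvirt Go bindings available
    )

    const networkXML = `<network>
      <name>mk-auto-833505</name>
      <dns enable='no'/>
      <ip address='192.168.39.1' netmask='255.255.255.0'>
        <dhcp>
          <range start='192.168.39.2' end='192.168.39.253'/>
        </dhcp>
      </ip>
    </network>`

    func main() {
        // Connect to the local system libvirt daemon, as the kvm2 driver does.
        conn, err := libvirt.NewConnect("qemu:///system")
        if err != nil {
            log.Fatalf("connect: %v", err)
        }
        defer conn.Close()

        // Define the persistent network from the XML shown in the log, then start it.
        net, err := conn.NetworkDefineXML(networkXML)
        if err != nil {
            log.Fatalf("define network: %v", err)
        }
        defer net.Free()

        if err := net.Create(); err != nil {
            log.Fatalf("start network: %v", err)
        }
        log.Println("network mk-auto-833505 defined and started")
    }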
	I0920 18:03:35.407676   59003 main.go:141] libmachine: (auto-833505) DBG | trying to create private KVM network mk-auto-833505 192.168.39.0/24...
	I0920 18:03:35.513944   59003 main.go:141] libmachine: (auto-833505) Setting up store path in /home/jenkins/minikube-integration/19672-8777/.minikube/machines/auto-833505 ...
	I0920 18:03:35.513991   59003 main.go:141] libmachine: (auto-833505) DBG | private KVM network mk-auto-833505 192.168.39.0/24 created
	I0920 18:03:35.514006   59003 main.go:141] libmachine: (auto-833505) Building disk image from file:///home/jenkins/minikube-integration/19672-8777/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso
	I0920 18:03:35.514053   59003 main.go:141] libmachine: (auto-833505) Downloading /home/jenkins/minikube-integration/19672-8777/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19672-8777/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso...
	I0920 18:03:35.514077   59003 main.go:141] libmachine: (auto-833505) DBG | I0920 18:03:35.506879   59113 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19672-8777/.minikube
	I0920 18:03:35.778642   59003 main.go:141] libmachine: (auto-833505) DBG | I0920 18:03:35.778486   59113 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/auto-833505/id_rsa...
	I0920 18:03:35.955367   59003 main.go:141] libmachine: (auto-833505) DBG | I0920 18:03:35.955200   59113 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/auto-833505/auto-833505.rawdisk...
	I0920 18:03:35.955403   59003 main.go:141] libmachine: (auto-833505) DBG | Writing magic tar header
	I0920 18:03:35.955418   59003 main.go:141] libmachine: (auto-833505) DBG | Writing SSH key tar header
	I0920 18:03:35.955431   59003 main.go:141] libmachine: (auto-833505) DBG | I0920 18:03:35.955340   59113 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19672-8777/.minikube/machines/auto-833505 ...
	I0920 18:03:35.955460   59003 main.go:141] libmachine: (auto-833505) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/auto-833505
	I0920 18:03:35.955530   59003 main.go:141] libmachine: (auto-833505) Setting executable bit set on /home/jenkins/minikube-integration/19672-8777/.minikube/machines/auto-833505 (perms=drwx------)
	I0920 18:03:35.955559   59003 main.go:141] libmachine: (auto-833505) Setting executable bit set on /home/jenkins/minikube-integration/19672-8777/.minikube/machines (perms=drwxr-xr-x)
	I0920 18:03:35.955582   59003 main.go:141] libmachine: (auto-833505) Setting executable bit set on /home/jenkins/minikube-integration/19672-8777/.minikube (perms=drwxr-xr-x)
	I0920 18:03:35.955604   59003 main.go:141] libmachine: (auto-833505) Setting executable bit set on /home/jenkins/minikube-integration/19672-8777 (perms=drwxrwxr-x)
	I0920 18:03:35.955635   59003 main.go:141] libmachine: (auto-833505) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-8777/.minikube/machines
	I0920 18:03:35.955649   59003 main.go:141] libmachine: (auto-833505) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0920 18:03:35.955662   59003 main.go:141] libmachine: (auto-833505) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0920 18:03:35.955673   59003 main.go:141] libmachine: (auto-833505) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-8777/.minikube
	I0920 18:03:35.955685   59003 main.go:141] libmachine: (auto-833505) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-8777
	I0920 18:03:35.955696   59003 main.go:141] libmachine: (auto-833505) Creating domain...
	I0920 18:03:35.955764   59003 main.go:141] libmachine: (auto-833505) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0920 18:03:35.955791   59003 main.go:141] libmachine: (auto-833505) DBG | Checking permissions on dir: /home/jenkins
	I0920 18:03:35.955806   59003 main.go:141] libmachine: (auto-833505) DBG | Checking permissions on dir: /home
	I0920 18:03:35.955821   59003 main.go:141] libmachine: (auto-833505) DBG | Skipping /home - not owner
	I0920 18:03:35.956940   59003 main.go:141] libmachine: (auto-833505) define libvirt domain using xml: 
	I0920 18:03:35.956959   59003 main.go:141] libmachine: (auto-833505) <domain type='kvm'>
	I0920 18:03:35.956982   59003 main.go:141] libmachine: (auto-833505)   <name>auto-833505</name>
	I0920 18:03:35.956989   59003 main.go:141] libmachine: (auto-833505)   <memory unit='MiB'>3072</memory>
	I0920 18:03:35.956997   59003 main.go:141] libmachine: (auto-833505)   <vcpu>2</vcpu>
	I0920 18:03:35.957003   59003 main.go:141] libmachine: (auto-833505)   <features>
	I0920 18:03:35.957018   59003 main.go:141] libmachine: (auto-833505)     <acpi/>
	I0920 18:03:35.957024   59003 main.go:141] libmachine: (auto-833505)     <apic/>
	I0920 18:03:35.957031   59003 main.go:141] libmachine: (auto-833505)     <pae/>
	I0920 18:03:35.957039   59003 main.go:141] libmachine: (auto-833505)     
	I0920 18:03:35.957046   59003 main.go:141] libmachine: (auto-833505)   </features>
	I0920 18:03:35.957053   59003 main.go:141] libmachine: (auto-833505)   <cpu mode='host-passthrough'>
	I0920 18:03:35.957061   59003 main.go:141] libmachine: (auto-833505)   
	I0920 18:03:35.957068   59003 main.go:141] libmachine: (auto-833505)   </cpu>
	I0920 18:03:35.957075   59003 main.go:141] libmachine: (auto-833505)   <os>
	I0920 18:03:35.957092   59003 main.go:141] libmachine: (auto-833505)     <type>hvm</type>
	I0920 18:03:35.957100   59003 main.go:141] libmachine: (auto-833505)     <boot dev='cdrom'/>
	I0920 18:03:35.957106   59003 main.go:141] libmachine: (auto-833505)     <boot dev='hd'/>
	I0920 18:03:35.957118   59003 main.go:141] libmachine: (auto-833505)     <bootmenu enable='no'/>
	I0920 18:03:35.957126   59003 main.go:141] libmachine: (auto-833505)   </os>
	I0920 18:03:35.957136   59003 main.go:141] libmachine: (auto-833505)   <devices>
	I0920 18:03:35.957147   59003 main.go:141] libmachine: (auto-833505)     <disk type='file' device='cdrom'>
	I0920 18:03:35.957161   59003 main.go:141] libmachine: (auto-833505)       <source file='/home/jenkins/minikube-integration/19672-8777/.minikube/machines/auto-833505/boot2docker.iso'/>
	I0920 18:03:35.957175   59003 main.go:141] libmachine: (auto-833505)       <target dev='hdc' bus='scsi'/>
	I0920 18:03:35.957190   59003 main.go:141] libmachine: (auto-833505)       <readonly/>
	I0920 18:03:35.957199   59003 main.go:141] libmachine: (auto-833505)     </disk>
	I0920 18:03:35.957208   59003 main.go:141] libmachine: (auto-833505)     <disk type='file' device='disk'>
	I0920 18:03:35.957228   59003 main.go:141] libmachine: (auto-833505)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0920 18:03:35.957245   59003 main.go:141] libmachine: (auto-833505)       <source file='/home/jenkins/minikube-integration/19672-8777/.minikube/machines/auto-833505/auto-833505.rawdisk'/>
	I0920 18:03:35.957256   59003 main.go:141] libmachine: (auto-833505)       <target dev='hda' bus='virtio'/>
	I0920 18:03:35.957263   59003 main.go:141] libmachine: (auto-833505)     </disk>
	I0920 18:03:35.957277   59003 main.go:141] libmachine: (auto-833505)     <interface type='network'>
	I0920 18:03:35.957289   59003 main.go:141] libmachine: (auto-833505)       <source network='mk-auto-833505'/>
	I0920 18:03:35.957296   59003 main.go:141] libmachine: (auto-833505)       <model type='virtio'/>
	I0920 18:03:35.957306   59003 main.go:141] libmachine: (auto-833505)     </interface>
	I0920 18:03:35.957313   59003 main.go:141] libmachine: (auto-833505)     <interface type='network'>
	I0920 18:03:35.957324   59003 main.go:141] libmachine: (auto-833505)       <source network='default'/>
	I0920 18:03:35.957333   59003 main.go:141] libmachine: (auto-833505)       <model type='virtio'/>
	I0920 18:03:35.957341   59003 main.go:141] libmachine: (auto-833505)     </interface>
	I0920 18:03:35.957354   59003 main.go:141] libmachine: (auto-833505)     <serial type='pty'>
	I0920 18:03:35.957365   59003 main.go:141] libmachine: (auto-833505)       <target port='0'/>
	I0920 18:03:35.957374   59003 main.go:141] libmachine: (auto-833505)     </serial>
	I0920 18:03:35.957382   59003 main.go:141] libmachine: (auto-833505)     <console type='pty'>
	I0920 18:03:35.957391   59003 main.go:141] libmachine: (auto-833505)       <target type='serial' port='0'/>
	I0920 18:03:35.957399   59003 main.go:141] libmachine: (auto-833505)     </console>
	I0920 18:03:35.957409   59003 main.go:141] libmachine: (auto-833505)     <rng model='virtio'>
	I0920 18:03:35.957418   59003 main.go:141] libmachine: (auto-833505)       <backend model='random'>/dev/random</backend>
	I0920 18:03:35.957430   59003 main.go:141] libmachine: (auto-833505)     </rng>
	I0920 18:03:35.957440   59003 main.go:141] libmachine: (auto-833505)     
	I0920 18:03:35.957445   59003 main.go:141] libmachine: (auto-833505)     
	I0920 18:03:35.957453   59003 main.go:141] libmachine: (auto-833505)   </devices>
	I0920 18:03:35.957462   59003 main.go:141] libmachine: (auto-833505) </domain>
	I0920 18:03:35.957471   59003 main.go:141] libmachine: (auto-833505) 
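The domain XML printed above is what actually gets defined and booted. Continuing the previous sketch (same conn; domainXML is assumed to hold the <domain> document from the log):

    // Continuation of the previous sketch: conn is the *libvirt.Connect from
    // above, and domainXML is assumed to hold the <domain ...> document that
    // the log prints (an assumption for illustration).
    dom, err := conn.DomainDefineXML(domainXML)
    if err != nil {
        log.Fatalf("define domain: %v", err)
    }
    defer dom.Free()

    // Create() boots the persistent domain; the kvm2 driver then polls the
    // network's DHCP leases until the guest reports an IP ("Waiting to get IP").
    if err := dom.Create(); err != nil {
        log.Fatalf("start domain: %v", err)
    }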
	I0920 18:03:35.962905   59003 main.go:141] libmachine: (auto-833505) DBG | domain auto-833505 has defined MAC address 52:54:00:16:9e:25 in network default
	I0920 18:03:35.963648   59003 main.go:141] libmachine: (auto-833505) Ensuring networks are active...
	I0920 18:03:35.963676   59003 main.go:141] libmachine: (auto-833505) DBG | domain auto-833505 has defined MAC address 52:54:00:76:01:26 in network mk-auto-833505
	I0920 18:03:35.964436   59003 main.go:141] libmachine: (auto-833505) Ensuring network default is active
	I0920 18:03:35.964808   59003 main.go:141] libmachine: (auto-833505) Ensuring network mk-auto-833505 is active
	I0920 18:03:35.965525   59003 main.go:141] libmachine: (auto-833505) Getting domain xml...
	I0920 18:03:35.966351   59003 main.go:141] libmachine: (auto-833505) Creating domain...
	I0920 18:03:37.339090   59003 main.go:141] libmachine: (auto-833505) Waiting to get IP...
	I0920 18:03:37.340023   59003 main.go:141] libmachine: (auto-833505) DBG | domain auto-833505 has defined MAC address 52:54:00:76:01:26 in network mk-auto-833505
	I0920 18:03:37.340457   59003 main.go:141] libmachine: (auto-833505) DBG | unable to find current IP address of domain auto-833505 in network mk-auto-833505
	I0920 18:03:37.340519   59003 main.go:141] libmachine: (auto-833505) DBG | I0920 18:03:37.340450   59113 retry.go:31] will retry after 197.031197ms: waiting for machine to come up
	I0920 18:03:36.712671   58226 pod_ready.go:93] pod "kube-scheduler-pause-421146" in "kube-system" namespace has status "Ready":"True"
	I0920 18:03:36.712700   58226 pod_ready.go:82] duration metric: took 399.750704ms for pod "kube-scheduler-pause-421146" in "kube-system" namespace to be "Ready" ...
	I0920 18:03:36.712710   58226 pod_ready.go:39] duration metric: took 1.748192922s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
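The pod_ready lines above poll each control-plane pod until its Ready condition reports True. A minimal client-go sketch of that check for a single pod (the kubeconfig path and pod name are illustrative assumptions):

    package main

    import (
        "context"
        "fmt"
        "log"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's PodReady condition is True,
    // which is what the "Ready":"True" log lines above are checking.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        // Assumption: kubeconfig path and pod name are placeholders.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
        if err != nil {
            log.Fatal(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-pause-421146", metav1.GetOptions{})
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("%s ready: %v\n", pod.Name, isPodReady(pod))
    }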
	I0920 18:03:36.712727   58226 api_server.go:52] waiting for apiserver process to appear ...
	I0920 18:03:36.712787   58226 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:03:36.733911   58226 api_server.go:72] duration metric: took 1.989345988s to wait for apiserver process to appear ...
	I0920 18:03:36.733940   58226 api_server.go:88] waiting for apiserver healthz status ...
	I0920 18:03:36.733965   58226 api_server.go:253] Checking apiserver healthz at https://192.168.50.200:8443/healthz ...
	I0920 18:03:36.741503   58226 api_server.go:279] https://192.168.50.200:8443/healthz returned 200:
	ok
	I0920 18:03:36.742848   58226 api_server.go:141] control plane version: v1.31.1
	I0920 18:03:36.742881   58226 api_server.go:131] duration metric: took 8.933025ms to wait for apiserver health ...
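The healthz probe above is a plain HTTPS GET against the apiserver; a 200 response with body "ok" is treated as healthy. A small sketch of the same request (certificate verification is skipped here only to keep the example short; the real client trusts the cluster CA):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "log"
        "net/http"
        "time"
    )

    func main() {
        // Assumption: endpoint copied from the log; InsecureSkipVerify is for the sketch only.
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.168.50.200:8443/healthz")
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("healthz: %d %s\n", resp.StatusCode, body)
    }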
	I0920 18:03:36.742893   58226 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 18:03:36.915989   58226 system_pods.go:59] 6 kube-system pods found
	I0920 18:03:36.916038   58226 system_pods.go:61] "coredns-7c65d6cfc9-kzpzk" [31a69166-0bab-4048-ab8b-39726652c348] Running
	I0920 18:03:36.916046   58226 system_pods.go:61] "etcd-pause-421146" [d68394c2-e3fa-4e8b-adee-def6caa2d4f9] Running
	I0920 18:03:36.916053   58226 system_pods.go:61] "kube-apiserver-pause-421146" [0b11bd21-711e-4979-a665-a74c28acf52a] Running
	I0920 18:03:36.916058   58226 system_pods.go:61] "kube-controller-manager-pause-421146" [58041d9e-5b1f-4660-8cf5-d9a801a56b06] Running
	I0920 18:03:36.916064   58226 system_pods.go:61] "kube-proxy-gcp8x" [e95cbd03-c3ac-4381-ba7e-dab67b046217] Running
	I0920 18:03:36.916069   58226 system_pods.go:61] "kube-scheduler-pause-421146" [8c87d091-7f76-432c-b14e-38167da26d6a] Running
	I0920 18:03:36.916077   58226 system_pods.go:74] duration metric: took 173.17525ms to wait for pod list to return data ...
	I0920 18:03:36.916101   58226 default_sa.go:34] waiting for default service account to be created ...
	I0920 18:03:37.112686   58226 default_sa.go:45] found service account: "default"
	I0920 18:03:37.112719   58226 default_sa.go:55] duration metric: took 196.608635ms for default service account to be created ...
	I0920 18:03:37.112732   58226 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 18:03:37.314309   58226 system_pods.go:86] 6 kube-system pods found
	I0920 18:03:37.314347   58226 system_pods.go:89] "coredns-7c65d6cfc9-kzpzk" [31a69166-0bab-4048-ab8b-39726652c348] Running
	I0920 18:03:37.314356   58226 system_pods.go:89] "etcd-pause-421146" [d68394c2-e3fa-4e8b-adee-def6caa2d4f9] Running
	I0920 18:03:37.314362   58226 system_pods.go:89] "kube-apiserver-pause-421146" [0b11bd21-711e-4979-a665-a74c28acf52a] Running
	I0920 18:03:37.314368   58226 system_pods.go:89] "kube-controller-manager-pause-421146" [58041d9e-5b1f-4660-8cf5-d9a801a56b06] Running
	I0920 18:03:37.314373   58226 system_pods.go:89] "kube-proxy-gcp8x" [e95cbd03-c3ac-4381-ba7e-dab67b046217] Running
	I0920 18:03:37.314378   58226 system_pods.go:89] "kube-scheduler-pause-421146" [8c87d091-7f76-432c-b14e-38167da26d6a] Running
	I0920 18:03:37.314387   58226 system_pods.go:126] duration metric: took 201.647874ms to wait for k8s-apps to be running ...
	I0920 18:03:37.314396   58226 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 18:03:37.314451   58226 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:03:37.337255   58226 system_svc.go:56] duration metric: took 22.815149ms WaitForService to wait for kubelet
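The WaitForService step simply asks systemd whether the kubelet unit is active and reads the answer from the exit code. A local illustration with os/exec (the test harness runs the same command over SSH inside the guest):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // `systemctl is-active --quiet kubelet` exits 0 when the unit is active.
        err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
        fmt.Println("kubelet active:", err == nil)
    }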
	I0920 18:03:37.337302   58226 kubeadm.go:582] duration metric: took 2.592741549s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 18:03:37.337336   58226 node_conditions.go:102] verifying NodePressure condition ...
	I0920 18:03:37.513451   58226 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 18:03:37.513486   58226 node_conditions.go:123] node cpu capacity is 2
	I0920 18:03:37.513500   58226 node_conditions.go:105] duration metric: took 176.150709ms to run NodePressure ...
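The node_conditions lines read the node's capacity fields (ephemeral storage and CPU) from the API object. Continuing the client-go sketch above, the same values can be pulled like this:

    // Continuing inside the main of the pod-readiness sketch above, where
    // client is the *kubernetes.Clientset already built there.
    nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
    if err != nil {
        log.Fatal(err)
    }
    for _, n := range nodes.Items {
        cpu := n.Status.Capacity[corev1.ResourceCPU]
        storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
        fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
    }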
	I0920 18:03:37.513514   58226 start.go:241] waiting for startup goroutines ...
	I0920 18:03:37.513524   58226 start.go:246] waiting for cluster config update ...
	I0920 18:03:37.513536   58226 start.go:255] writing updated cluster config ...
	I0920 18:03:37.513937   58226 ssh_runner.go:195] Run: rm -f paused
	I0920 18:03:37.586056   58226 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 18:03:37.588399   58226 out.go:177] * Done! kubectl is now configured to use "pause-421146" cluster and "default" namespace by default
	I0920 18:03:33.814875   58507 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.471860078s)
	I0920 18:03:33.814895   58507 crio.go:469] duration metric: took 2.471942627s to extract the tarball
	I0920 18:03:33.814903   58507 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0920 18:03:33.856544   58507 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:03:33.904663   58507 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 18:03:33.904675   58507 cache_images.go:84] Images are preloaded, skipping loading
	I0920 18:03:33.904680   58507 kubeadm.go:934] updating node { 192.168.72.119 8443 v1.31.1 crio true true} ...
	I0920 18:03:33.904781   58507 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=NoKubernetes-246858 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.119
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:NoKubernetes-246858 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 18:03:33.904847   58507 ssh_runner.go:195] Run: crio config
	I0920 18:03:33.956951   58507 cni.go:84] Creating CNI manager for ""
	I0920 18:03:33.956964   58507 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 18:03:33.956972   58507 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 18:03:33.956996   58507 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.119 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:NoKubernetes-246858 NodeName:NoKubernetes-246858 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.119"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.119 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 18:03:33.957143   58507 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.119
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "NoKubernetes-246858"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.119
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.119"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
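The kubeadm config dumped above is rendered by minikube from its cluster settings and written to /var/tmp/minikube/kubeadm.yaml.new on the guest. A simplified text/template sketch of generating such a file (the struct, field names and template fragment are assumptions for illustration, not minikube's actual template):

    package main

    import (
        "os"
        "text/template"
    )

    // Illustrative parameters; minikube derives these from the cluster config
    // shown in the log (names and fields here are simplified assumptions).
    type kubeadmParams struct {
        AdvertiseAddress  string
        BindPort          int
        NodeName          string
        KubernetesVersion string
        PodSubnet         string
        ServiceSubnet     string
    }

    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.BindPort}}
    nodeRegistration:
      name: "{{.NodeName}}"
    ---
    apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    kubernetesVersion: {{.KubernetesVersion}}
    networking:
      podSubnet: "{{.PodSubnet}}"
      serviceSubnet: {{.ServiceSubnet}}
    `

    func main() {
        p := kubeadmParams{
            AdvertiseAddress:  "192.168.72.119",
            BindPort:          8443,
            NodeName:          "NoKubernetes-246858",
            KubernetesVersion: "v1.31.1",
            PodSubnet:         "10.244.0.0/16",
            ServiceSubnet:     "10.96.0.0/12",
        }
        // Render to stdout; the test harness instead writes the result to
        // /var/tmp/minikube/kubeadm.yaml.new on the guest.
        if err := template.Must(template.New("kubeadm").Parse(tmpl)).Execute(os.Stdout, p); err != nil {
            panic(err)
        }
    }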
	
	I0920 18:03:33.957203   58507 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 18:03:33.970217   58507 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 18:03:33.970276   58507 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 18:03:33.981020   58507 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I0920 18:03:33.999358   58507 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 18:03:34.016276   58507 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2163 bytes)
	I0920 18:03:34.032625   58507 ssh_runner.go:195] Run: grep 192.168.72.119	control-plane.minikube.internal$ /etc/hosts
	I0920 18:03:34.036626   58507 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.119	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
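The bash one-liner above strips any stale control-plane.minikube.internal entries from /etc/hosts and appends the current mapping. A rough Go equivalent of the same rewrite (needs root; shown only to make the shell pipeline easier to follow):

    package main

    import (
        "os"
        "strings"
    )

    func main() {
        const entry = "192.168.72.119\tcontrol-plane.minikube.internal"

        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            panic(err)
        }
        // Drop any existing control-plane.minikube.internal lines, mirroring the
        // `grep -v` in the one-liner above, then append the fresh mapping.
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
                kept = append(kept, line)
            }
        }
        kept = append(kept, entry)
        if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
            panic(err)
        }
    }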
	I0920 18:03:34.049269   58507 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:03:34.185689   58507 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:03:34.203713   58507 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/NoKubernetes-246858 for IP: 192.168.72.119
	I0920 18:03:34.203725   58507 certs.go:194] generating shared ca certs ...
	I0920 18:03:34.203739   58507 certs.go:226] acquiring lock for ca certs: {Name:mkc7ef6c737c6bdc3fdd9dcff8f57029c020d8f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:03:34.203927   58507 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key
	I0920 18:03:34.204024   58507 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key
	I0920 18:03:34.204033   58507 certs.go:256] generating profile certs ...
	I0920 18:03:34.204101   58507 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/NoKubernetes-246858/client.key
	I0920 18:03:34.204113   58507 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/NoKubernetes-246858/client.crt with IP's: []
	I0920 18:03:34.244808   58507 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/NoKubernetes-246858/client.crt ...
	I0920 18:03:34.244824   58507 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/NoKubernetes-246858/client.crt: {Name:mk8d3e6e85f2cc7b20afec328f8e417646ff89b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:03:34.245004   58507 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/NoKubernetes-246858/client.key ...
	I0920 18:03:34.245012   58507 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/NoKubernetes-246858/client.key: {Name:mk1efebd916439e6ef55e1a837c4bec7a4607c5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:03:34.245112   58507 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/NoKubernetes-246858/apiserver.key.275d290b
	I0920 18:03:34.245123   58507 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/NoKubernetes-246858/apiserver.crt.275d290b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.119]
	I0920 18:03:34.622657   58507 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/NoKubernetes-246858/apiserver.crt.275d290b ...
	I0920 18:03:34.622676   58507 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/NoKubernetes-246858/apiserver.crt.275d290b: {Name:mk47ce2312fedfaec37479dd188a673b67f11522 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:03:34.622850   58507 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/NoKubernetes-246858/apiserver.key.275d290b ...
	I0920 18:03:34.622858   58507 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/NoKubernetes-246858/apiserver.key.275d290b: {Name:mk14e076fccb6e0b4c9aa1a1231d6cb86512ca36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:03:34.622931   58507 certs.go:381] copying /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/NoKubernetes-246858/apiserver.crt.275d290b -> /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/NoKubernetes-246858/apiserver.crt
	I0920 18:03:34.623014   58507 certs.go:385] copying /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/NoKubernetes-246858/apiserver.key.275d290b -> /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/NoKubernetes-246858/apiserver.key
	I0920 18:03:34.623061   58507 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/NoKubernetes-246858/proxy-client.key
	I0920 18:03:34.623073   58507 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/NoKubernetes-246858/proxy-client.crt with IP's: []
	I0920 18:03:34.754948   58507 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/NoKubernetes-246858/proxy-client.crt ...
	I0920 18:03:34.754964   58507 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/NoKubernetes-246858/proxy-client.crt: {Name:mkf59a75ae39c43a5ec858e493e9a3cc68dd4cf6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:03:34.755143   58507 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/NoKubernetes-246858/proxy-client.key ...
	I0920 18:03:34.755151   58507 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/NoKubernetes-246858/proxy-client.key: {Name:mk0d81299f0da0268565738f09c11d60ee074b8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
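The certs.go steps above issue profile certificates signed by the shared minikubeCA, with the apiserver certificate carrying the service and node IPs as SANs. A condensed crypto/x509 sketch of issuing one such CA-signed certificate (the throwaway CA, key size and serial handling are simplified assumptions; error handling is elided for brevity):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Assumption: this throwaway CA stands in for the minikubeCA pair that
        // the real code loads from ~/.minikube/ca.crt and ca.key.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Leaf certificate for the apiserver, with the SAN IPs from the log.
        leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        leafTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"), net.ParseIP("192.168.72.119"),
            },
        }
        leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)

        // Write the leaf cert as PEM; the real code also writes the key and
        // copies both onto the guest under /var/lib/minikube/certs.
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
    }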
	I0920 18:03:34.755369   58507 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973.pem (1338 bytes)
	W0920 18:03:34.755404   58507 certs.go:480] ignoring /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973_empty.pem, impossibly tiny 0 bytes
	I0920 18:03:34.755410   58507 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 18:03:34.755430   58507 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem (1082 bytes)
	I0920 18:03:34.755449   58507 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem (1123 bytes)
	I0920 18:03:34.755467   58507 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem (1675 bytes)
	I0920 18:03:34.755502   58507 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem (1708 bytes)
	I0920 18:03:34.756074   58507 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 18:03:34.792254   58507 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 18:03:34.822738   58507 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 18:03:34.853058   58507 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0920 18:03:34.890430   58507 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/NoKubernetes-246858/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0920 18:03:34.933936   58507 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/NoKubernetes-246858/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0920 18:03:34.974818   58507 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/NoKubernetes-246858/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 18:03:35.003045   58507 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/NoKubernetes-246858/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0920 18:03:35.028136   58507 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem --> /usr/share/ca-certificates/159732.pem (1708 bytes)
	I0920 18:03:35.055898   58507 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 18:03:35.083917   58507 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973.pem --> /usr/share/ca-certificates/15973.pem (1338 bytes)
	I0920 18:03:35.110140   58507 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 18:03:35.128210   58507 ssh_runner.go:195] Run: openssl version
	I0920 18:03:35.136202   58507 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/159732.pem && ln -fs /usr/share/ca-certificates/159732.pem /etc/ssl/certs/159732.pem"
	I0920 18:03:35.149018   58507 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/159732.pem
	I0920 18:03:35.154448   58507 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 17:04 /usr/share/ca-certificates/159732.pem
	I0920 18:03:35.154496   58507 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/159732.pem
	I0920 18:03:35.160890   58507 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/159732.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 18:03:35.174560   58507 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 18:03:35.189237   58507 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:03:35.195075   58507 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:03:35.195129   58507 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:03:35.201512   58507 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 18:03:35.214910   58507 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15973.pem && ln -fs /usr/share/ca-certificates/15973.pem /etc/ssl/certs/15973.pem"
	I0920 18:03:35.228466   58507 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15973.pem
	I0920 18:03:35.233598   58507 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 17:04 /usr/share/ca-certificates/15973.pem
	I0920 18:03:35.233661   58507 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15973.pem
	I0920 18:03:35.240619   58507 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15973.pem /etc/ssl/certs/51391683.0"
	I0920 18:03:35.254948   58507 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 18:03:35.259985   58507 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0920 18:03:35.260044   58507 kubeadm.go:392] StartCluster: {Name:NoKubernetes-246858 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
31.1 ClusterName:NoKubernetes-246858 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.119 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false D
isableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:03:35.260131   58507 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 18:03:35.260182   58507 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 18:03:35.302069   58507 cri.go:89] found id: ""
	I0920 18:03:35.302148   58507 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 18:03:35.315180   58507 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 18:03:35.326151   58507 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 18:03:35.337303   58507 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 18:03:35.337315   58507 kubeadm.go:157] found existing configuration files:
	
	I0920 18:03:35.337364   58507 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 18:03:35.347524   58507 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 18:03:35.347577   58507 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 18:03:35.358168   58507 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 18:03:35.368714   58507 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 18:03:35.368761   58507 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 18:03:35.379705   58507 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 18:03:35.390812   58507 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 18:03:35.390864   58507 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 18:03:35.401806   58507 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 18:03:35.417034   58507 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 18:03:35.417100   58507 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 18:03:35.433783   58507 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 18:03:35.504080   58507 kubeadm.go:310] W0920 18:03:35.461750     832 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 18:03:35.505211   58507 kubeadm.go:310] W0920 18:03:35.462888     832 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 18:03:35.626749   58507 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 18:03:35.603491   58846 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 18:03:35.690240   58846 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 18:03:35.972198   58846 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 18:03:36.250103   58846 docker.go:233] disabling docker service ...
	I0920 18:03:36.250167   58846 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 18:03:36.290577   58846 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 18:03:36.324135   58846 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 18:03:36.559876   58846 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 18:03:36.816367   58846 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 18:03:36.836566   58846 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 18:03:36.914461   58846 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 18:03:36.914524   58846 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:03:36.974119   58846 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 18:03:36.974177   58846 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:03:37.009018   58846 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:03:37.024283   58846 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:03:37.038965   58846 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 18:03:37.056871   58846 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:03:37.079741   58846 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:03:37.098052   58846 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:03:37.115704   58846 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 18:03:37.131147   58846 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 18:03:37.142816   58846 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:03:37.375676   58846 ssh_runner.go:195] Run: sudo systemctl restart crio
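The block above rewrites /etc/crio/crio.conf.d/02-crio.conf in place with sed before restarting CRI-O. A rough Go rendering of the two simplest substitutions (pause image and cgroup manager), included only to make the sed expressions easier to read; the real flow keeps using shell over SSH:

    package main

    import (
        "os"
        "regexp"
    )

    func main() {
        const path = "/etc/crio/crio.conf.d/02-crio.conf"

        data, err := os.ReadFile(path)
        if err != nil {
            panic(err)
        }
        // Mirror: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
        data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
        // Mirror: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
        data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))

        if err := os.WriteFile(path, data, 0644); err != nil {
            panic(err)
        }
    }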
	
	
	==> CRI-O <==
	Sep 20 18:03:40 pause-421146 crio[2065]: time="2024-09-20 18:03:40.981811145Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855420981781898,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=58ca67b3-4e95-4ea0-bbbd-a9dfb15a9f94 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:03:40 pause-421146 crio[2065]: time="2024-09-20 18:03:40.982449033Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=981b46ae-6868-4ab5-bd7a-979606a5d427 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:03:40 pause-421146 crio[2065]: time="2024-09-20 18:03:40.982530887Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=981b46ae-6868-4ab5-bd7a-979606a5d427 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:03:40 pause-421146 crio[2065]: time="2024-09-20 18:03:40.982848282Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:52678e574e5cda3145f732e6a8b665d218fb10e16963e32d3dc9b209b7fccee5,PodSandboxId:a8b1775eebfd2b7231276d653a0caf54e71c433f59227ebc0d95ebf8fd842a7a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726855399523968093,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kzpzk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31a69166-0bab-4048-ab8b-39726652c348,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45767f66e4613897801d10ba7ef85f5203974b47f8a0f18a155269923c318b40,PodSandboxId:cd41238b0b4f051093b2f88ef744f1959b9365163c20500dd5856b10e4ec6879,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726855399478789134,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gcp8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: e95cbd03-c3ac-4381-ba7e-dab67b046217,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2b467124a385a8365850c1338b354a85bd06687fa90cfe3f841200fb6d9b74c,PodSandboxId:f6fd95e87f9f1f244c7052653c6e44dd79a1e9f974608f0b1f5ac0e9b7988f66,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726855395676136249,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-421146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65a92db064
d5149926f0585489dbd5f3,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2148bd1e1d3250b130059d85a48c37cb328f8ccf64f438e9dc63fdeee15ae56c,PodSandboxId:b0816ffaf87e51e03bb5d22fd2d157dbae85b9f04b5ae6d9ae11d54bf3e53ee3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726855395698234766,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-421146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32ebccbb68a7911977451c4f8550
1c3e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a6fc240ff380cc53597f9a78362928ffd5beb3133aef126be73281a5a058d71,PodSandboxId:18cee16364cf7a1b1d46012a7e095da7fef7fdd3e3b3a3ee1ba1b006ad44e673,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726855395662356389,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-421146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d667c1a7b1e7e272754e10db2315607d,},Annotations:map[string]string{io.kubernet
es.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d71881d204e3837d28568eb8b2c2abb6ea017b5b30910bd6ac080e4ccff72d72,PodSandboxId:a53ecbaa10b365d95631f0c9168517b23ccbfd706b42bab952158d51cfd81992,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726855395651764166,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-421146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57da3666fcd469516ca1d325c843ac23,},Annotations:map[string]string{io
.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75d3de29f781282cadcafa18080ca10c150d68ea2643c96305500091f11cd130,PodSandboxId:a8b1775eebfd2b7231276d653a0caf54e71c433f59227ebc0d95ebf8fd842a7a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726855372217768057,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kzpzk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31a69166-0bab-4048-ab8b-39726652c348,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a
204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e65068aabc970c4f3ded7fd5c3bbaef59385603701d9c00ed027e8385212b674,PodSandboxId:b0816ffaf87e51e03bb5d22fd2d157dbae85b9f04b5ae6d9ae11d54bf3e53ee3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726855370853142362,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kuber
netes.pod.name: kube-apiserver-pause-421146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32ebccbb68a7911977451c4f85501c3e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6caff0d1adf3a97c67a13c3e8be7618e77ea79c31f750fff59228a7be609fd4d,PodSandboxId:f6fd95e87f9f1f244c7052653c6e44dd79a1e9f974608f0b1f5ac0e9b7988f66,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726855370797197590,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kub
e-scheduler-pause-421146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65a92db064d5149926f0585489dbd5f3,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:998f1cab8849745ac4a858a8df2532e7b3bfc179f529b1172684f92fee861d0d,PodSandboxId:cd41238b0b4f051093b2f88ef744f1959b9365163c20500dd5856b10e4ec6879,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726855370693994465,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gcp8x,io.kubernetes
.pod.namespace: kube-system,io.kubernetes.pod.uid: e95cbd03-c3ac-4381-ba7e-dab67b046217,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acff960df015881f08db968c9ce85cea196679f2ba2dd86c8c0589365b50502c,PodSandboxId:18cee16364cf7a1b1d46012a7e095da7fef7fdd3e3b3a3ee1ba1b006ad44e673,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726855370718769024,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-421146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: d667c1a7b1e7e272754e10db2315607d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81a97c79e3ef43249f4d0285077861826dcf108662f81b26a4e9ea112ebc5f45,PodSandboxId:a53ecbaa10b365d95631f0c9168517b23ccbfd706b42bab952158d51cfd81992,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726855370542926466,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-421146,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 57da3666fcd469516ca1d325c843ac23,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=981b46ae-6868-4ab5-bd7a-979606a5d427 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:03:41 pause-421146 crio[2065]: time="2024-09-20 18:03:41.045936644Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=60167790-711c-478b-b30b-e0858529e959 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:03:41 pause-421146 crio[2065]: time="2024-09-20 18:03:41.047093206Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=60167790-711c-478b-b30b-e0858529e959 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:03:41 pause-421146 crio[2065]: time="2024-09-20 18:03:41.048638772Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7ec6304b-124a-4ed3-8513-15cb390c8595 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:03:41 pause-421146 crio[2065]: time="2024-09-20 18:03:41.049651401Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855421049600186,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7ec6304b-124a-4ed3-8513-15cb390c8595 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:03:41 pause-421146 crio[2065]: time="2024-09-20 18:03:41.051039769Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=182ec17d-13fe-4205-8424-317da5f419fb name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:03:41 pause-421146 crio[2065]: time="2024-09-20 18:03:41.051298153Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=182ec17d-13fe-4205-8424-317da5f419fb name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:03:41 pause-421146 crio[2065]: time="2024-09-20 18:03:41.051795553Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:52678e574e5cda3145f732e6a8b665d218fb10e16963e32d3dc9b209b7fccee5,PodSandboxId:a8b1775eebfd2b7231276d653a0caf54e71c433f59227ebc0d95ebf8fd842a7a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726855399523968093,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kzpzk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31a69166-0bab-4048-ab8b-39726652c348,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45767f66e4613897801d10ba7ef85f5203974b47f8a0f18a155269923c318b40,PodSandboxId:cd41238b0b4f051093b2f88ef744f1959b9365163c20500dd5856b10e4ec6879,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726855399478789134,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gcp8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: e95cbd03-c3ac-4381-ba7e-dab67b046217,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2b467124a385a8365850c1338b354a85bd06687fa90cfe3f841200fb6d9b74c,PodSandboxId:f6fd95e87f9f1f244c7052653c6e44dd79a1e9f974608f0b1f5ac0e9b7988f66,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726855395676136249,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-421146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65a92db064
d5149926f0585489dbd5f3,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2148bd1e1d3250b130059d85a48c37cb328f8ccf64f438e9dc63fdeee15ae56c,PodSandboxId:b0816ffaf87e51e03bb5d22fd2d157dbae85b9f04b5ae6d9ae11d54bf3e53ee3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726855395698234766,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-421146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32ebccbb68a7911977451c4f8550
1c3e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a6fc240ff380cc53597f9a78362928ffd5beb3133aef126be73281a5a058d71,PodSandboxId:18cee16364cf7a1b1d46012a7e095da7fef7fdd3e3b3a3ee1ba1b006ad44e673,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726855395662356389,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-421146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d667c1a7b1e7e272754e10db2315607d,},Annotations:map[string]string{io.kubernet
es.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d71881d204e3837d28568eb8b2c2abb6ea017b5b30910bd6ac080e4ccff72d72,PodSandboxId:a53ecbaa10b365d95631f0c9168517b23ccbfd706b42bab952158d51cfd81992,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726855395651764166,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-421146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57da3666fcd469516ca1d325c843ac23,},Annotations:map[string]string{io
.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75d3de29f781282cadcafa18080ca10c150d68ea2643c96305500091f11cd130,PodSandboxId:a8b1775eebfd2b7231276d653a0caf54e71c433f59227ebc0d95ebf8fd842a7a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726855372217768057,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kzpzk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31a69166-0bab-4048-ab8b-39726652c348,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a
204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e65068aabc970c4f3ded7fd5c3bbaef59385603701d9c00ed027e8385212b674,PodSandboxId:b0816ffaf87e51e03bb5d22fd2d157dbae85b9f04b5ae6d9ae11d54bf3e53ee3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726855370853142362,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kuber
netes.pod.name: kube-apiserver-pause-421146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32ebccbb68a7911977451c4f85501c3e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6caff0d1adf3a97c67a13c3e8be7618e77ea79c31f750fff59228a7be609fd4d,PodSandboxId:f6fd95e87f9f1f244c7052653c6e44dd79a1e9f974608f0b1f5ac0e9b7988f66,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726855370797197590,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kub
e-scheduler-pause-421146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65a92db064d5149926f0585489dbd5f3,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:998f1cab8849745ac4a858a8df2532e7b3bfc179f529b1172684f92fee861d0d,PodSandboxId:cd41238b0b4f051093b2f88ef744f1959b9365163c20500dd5856b10e4ec6879,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726855370693994465,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gcp8x,io.kubernetes
.pod.namespace: kube-system,io.kubernetes.pod.uid: e95cbd03-c3ac-4381-ba7e-dab67b046217,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acff960df015881f08db968c9ce85cea196679f2ba2dd86c8c0589365b50502c,PodSandboxId:18cee16364cf7a1b1d46012a7e095da7fef7fdd3e3b3a3ee1ba1b006ad44e673,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726855370718769024,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-421146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: d667c1a7b1e7e272754e10db2315607d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81a97c79e3ef43249f4d0285077861826dcf108662f81b26a4e9ea112ebc5f45,PodSandboxId:a53ecbaa10b365d95631f0c9168517b23ccbfd706b42bab952158d51cfd81992,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726855370542926466,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-421146,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 57da3666fcd469516ca1d325c843ac23,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=182ec17d-13fe-4205-8424-317da5f419fb name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:03:41 pause-421146 crio[2065]: time="2024-09-20 18:03:41.101019247Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=830343af-e59b-44ff-8043-456d6026ccdf name=/runtime.v1.RuntimeService/Version
	Sep 20 18:03:41 pause-421146 crio[2065]: time="2024-09-20 18:03:41.101341024Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=830343af-e59b-44ff-8043-456d6026ccdf name=/runtime.v1.RuntimeService/Version
	Sep 20 18:03:41 pause-421146 crio[2065]: time="2024-09-20 18:03:41.102536597Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=23f603a8-b867-4457-ac70-2ca67aedee40 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:03:41 pause-421146 crio[2065]: time="2024-09-20 18:03:41.103423824Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855421103389284,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=23f603a8-b867-4457-ac70-2ca67aedee40 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:03:41 pause-421146 crio[2065]: time="2024-09-20 18:03:41.103911281Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9004df9c-ea8a-4d69-b99e-c8f64caa3f07 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:03:41 pause-421146 crio[2065]: time="2024-09-20 18:03:41.103996374Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9004df9c-ea8a-4d69-b99e-c8f64caa3f07 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:03:41 pause-421146 crio[2065]: time="2024-09-20 18:03:41.104409589Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:52678e574e5cda3145f732e6a8b665d218fb10e16963e32d3dc9b209b7fccee5,PodSandboxId:a8b1775eebfd2b7231276d653a0caf54e71c433f59227ebc0d95ebf8fd842a7a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726855399523968093,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kzpzk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31a69166-0bab-4048-ab8b-39726652c348,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45767f66e4613897801d10ba7ef85f5203974b47f8a0f18a155269923c318b40,PodSandboxId:cd41238b0b4f051093b2f88ef744f1959b9365163c20500dd5856b10e4ec6879,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726855399478789134,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gcp8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: e95cbd03-c3ac-4381-ba7e-dab67b046217,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2b467124a385a8365850c1338b354a85bd06687fa90cfe3f841200fb6d9b74c,PodSandboxId:f6fd95e87f9f1f244c7052653c6e44dd79a1e9f974608f0b1f5ac0e9b7988f66,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726855395676136249,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-421146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65a92db064
d5149926f0585489dbd5f3,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2148bd1e1d3250b130059d85a48c37cb328f8ccf64f438e9dc63fdeee15ae56c,PodSandboxId:b0816ffaf87e51e03bb5d22fd2d157dbae85b9f04b5ae6d9ae11d54bf3e53ee3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726855395698234766,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-421146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32ebccbb68a7911977451c4f8550
1c3e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a6fc240ff380cc53597f9a78362928ffd5beb3133aef126be73281a5a058d71,PodSandboxId:18cee16364cf7a1b1d46012a7e095da7fef7fdd3e3b3a3ee1ba1b006ad44e673,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726855395662356389,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-421146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d667c1a7b1e7e272754e10db2315607d,},Annotations:map[string]string{io.kubernet
es.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d71881d204e3837d28568eb8b2c2abb6ea017b5b30910bd6ac080e4ccff72d72,PodSandboxId:a53ecbaa10b365d95631f0c9168517b23ccbfd706b42bab952158d51cfd81992,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726855395651764166,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-421146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57da3666fcd469516ca1d325c843ac23,},Annotations:map[string]string{io
.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75d3de29f781282cadcafa18080ca10c150d68ea2643c96305500091f11cd130,PodSandboxId:a8b1775eebfd2b7231276d653a0caf54e71c433f59227ebc0d95ebf8fd842a7a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726855372217768057,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kzpzk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31a69166-0bab-4048-ab8b-39726652c348,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a
204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e65068aabc970c4f3ded7fd5c3bbaef59385603701d9c00ed027e8385212b674,PodSandboxId:b0816ffaf87e51e03bb5d22fd2d157dbae85b9f04b5ae6d9ae11d54bf3e53ee3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726855370853142362,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kuber
netes.pod.name: kube-apiserver-pause-421146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32ebccbb68a7911977451c4f85501c3e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6caff0d1adf3a97c67a13c3e8be7618e77ea79c31f750fff59228a7be609fd4d,PodSandboxId:f6fd95e87f9f1f244c7052653c6e44dd79a1e9f974608f0b1f5ac0e9b7988f66,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726855370797197590,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kub
e-scheduler-pause-421146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65a92db064d5149926f0585489dbd5f3,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:998f1cab8849745ac4a858a8df2532e7b3bfc179f529b1172684f92fee861d0d,PodSandboxId:cd41238b0b4f051093b2f88ef744f1959b9365163c20500dd5856b10e4ec6879,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726855370693994465,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gcp8x,io.kubernetes
.pod.namespace: kube-system,io.kubernetes.pod.uid: e95cbd03-c3ac-4381-ba7e-dab67b046217,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acff960df015881f08db968c9ce85cea196679f2ba2dd86c8c0589365b50502c,PodSandboxId:18cee16364cf7a1b1d46012a7e095da7fef7fdd3e3b3a3ee1ba1b006ad44e673,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726855370718769024,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-421146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: d667c1a7b1e7e272754e10db2315607d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81a97c79e3ef43249f4d0285077861826dcf108662f81b26a4e9ea112ebc5f45,PodSandboxId:a53ecbaa10b365d95631f0c9168517b23ccbfd706b42bab952158d51cfd81992,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726855370542926466,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-421146,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 57da3666fcd469516ca1d325c843ac23,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9004df9c-ea8a-4d69-b99e-c8f64caa3f07 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:03:41 pause-421146 crio[2065]: time="2024-09-20 18:03:41.162128320Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=eda829c0-406b-427c-b955-248a4a187ee7 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:03:41 pause-421146 crio[2065]: time="2024-09-20 18:03:41.162243506Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=eda829c0-406b-427c-b955-248a4a187ee7 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:03:41 pause-421146 crio[2065]: time="2024-09-20 18:03:41.163538100Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=288ac689-33e7-441d-bd4c-ecdefdce73f5 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:03:41 pause-421146 crio[2065]: time="2024-09-20 18:03:41.164058432Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855421163929061,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=288ac689-33e7-441d-bd4c-ecdefdce73f5 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:03:41 pause-421146 crio[2065]: time="2024-09-20 18:03:41.164750348Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=18fa3eaa-dd62-4658-b411-ae4191abbed5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:03:41 pause-421146 crio[2065]: time="2024-09-20 18:03:41.164822633Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=18fa3eaa-dd62-4658-b411-ae4191abbed5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:03:41 pause-421146 crio[2065]: time="2024-09-20 18:03:41.165148423Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:52678e574e5cda3145f732e6a8b665d218fb10e16963e32d3dc9b209b7fccee5,PodSandboxId:a8b1775eebfd2b7231276d653a0caf54e71c433f59227ebc0d95ebf8fd842a7a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726855399523968093,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kzpzk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31a69166-0bab-4048-ab8b-39726652c348,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45767f66e4613897801d10ba7ef85f5203974b47f8a0f18a155269923c318b40,PodSandboxId:cd41238b0b4f051093b2f88ef744f1959b9365163c20500dd5856b10e4ec6879,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726855399478789134,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gcp8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: e95cbd03-c3ac-4381-ba7e-dab67b046217,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2b467124a385a8365850c1338b354a85bd06687fa90cfe3f841200fb6d9b74c,PodSandboxId:f6fd95e87f9f1f244c7052653c6e44dd79a1e9f974608f0b1f5ac0e9b7988f66,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726855395676136249,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-421146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65a92db064
d5149926f0585489dbd5f3,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2148bd1e1d3250b130059d85a48c37cb328f8ccf64f438e9dc63fdeee15ae56c,PodSandboxId:b0816ffaf87e51e03bb5d22fd2d157dbae85b9f04b5ae6d9ae11d54bf3e53ee3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726855395698234766,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-421146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32ebccbb68a7911977451c4f8550
1c3e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a6fc240ff380cc53597f9a78362928ffd5beb3133aef126be73281a5a058d71,PodSandboxId:18cee16364cf7a1b1d46012a7e095da7fef7fdd3e3b3a3ee1ba1b006ad44e673,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726855395662356389,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-421146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d667c1a7b1e7e272754e10db2315607d,},Annotations:map[string]string{io.kubernet
es.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d71881d204e3837d28568eb8b2c2abb6ea017b5b30910bd6ac080e4ccff72d72,PodSandboxId:a53ecbaa10b365d95631f0c9168517b23ccbfd706b42bab952158d51cfd81992,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726855395651764166,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-421146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57da3666fcd469516ca1d325c843ac23,},Annotations:map[string]string{io
.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75d3de29f781282cadcafa18080ca10c150d68ea2643c96305500091f11cd130,PodSandboxId:a8b1775eebfd2b7231276d653a0caf54e71c433f59227ebc0d95ebf8fd842a7a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726855372217768057,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kzpzk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31a69166-0bab-4048-ab8b-39726652c348,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a
204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e65068aabc970c4f3ded7fd5c3bbaef59385603701d9c00ed027e8385212b674,PodSandboxId:b0816ffaf87e51e03bb5d22fd2d157dbae85b9f04b5ae6d9ae11d54bf3e53ee3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726855370853142362,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kuber
netes.pod.name: kube-apiserver-pause-421146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32ebccbb68a7911977451c4f85501c3e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6caff0d1adf3a97c67a13c3e8be7618e77ea79c31f750fff59228a7be609fd4d,PodSandboxId:f6fd95e87f9f1f244c7052653c6e44dd79a1e9f974608f0b1f5ac0e9b7988f66,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726855370797197590,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kub
e-scheduler-pause-421146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65a92db064d5149926f0585489dbd5f3,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:998f1cab8849745ac4a858a8df2532e7b3bfc179f529b1172684f92fee861d0d,PodSandboxId:cd41238b0b4f051093b2f88ef744f1959b9365163c20500dd5856b10e4ec6879,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726855370693994465,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gcp8x,io.kubernetes
.pod.namespace: kube-system,io.kubernetes.pod.uid: e95cbd03-c3ac-4381-ba7e-dab67b046217,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acff960df015881f08db968c9ce85cea196679f2ba2dd86c8c0589365b50502c,PodSandboxId:18cee16364cf7a1b1d46012a7e095da7fef7fdd3e3b3a3ee1ba1b006ad44e673,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726855370718769024,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-421146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: d667c1a7b1e7e272754e10db2315607d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81a97c79e3ef43249f4d0285077861826dcf108662f81b26a4e9ea112ebc5f45,PodSandboxId:a53ecbaa10b365d95631f0c9168517b23ccbfd706b42bab952158d51cfd81992,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726855370542926466,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-421146,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 57da3666fcd469516ca1d325c843ac23,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=18fa3eaa-dd62-4658-b411-ae4191abbed5 name=/runtime.v1.RuntimeService/ListContainers
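The block above is CRI-O's debug log answering the kubelet's periodic CRI polls: each cycle is a Version, ImageFsInfo and ListContainers request, and every ListContainers response enumerates the same set of containers (the attempt-1 instances now CONTAINER_EXITED, the attempt-2 instances CONTAINER_RUNNING after the pause/unpause restart). As a rough illustration only, not something the test itself runs, a minimal Go sketch of the same ListContainers call against the CRI socket advertised in the node annotation (unix:///var/run/crio/crio.sock) could look like the following; it assumes direct access to that socket on the node and uses the upstream cri-api and grpc-go packages:

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// CRI-O socket path taken from the node's kubeadm cri-socket annotation.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Empty filter, i.e. the "No filters were applied" case in the log above.
	resp, err := runtimeapi.NewRuntimeServiceClient(conn).
		ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		// Mirrors the columns of the "container status" table that follows.
		fmt.Printf("%s  %-25s attempt=%d  %s\n",
			c.Id, c.Metadata.Name, c.Metadata.Attempt, c.State)
	}
}

The printed fields correspond to the "container status" table below, which condenses the same response to one line per container.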
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	52678e574e5cd       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   21 seconds ago      Running             coredns                   2                   a8b1775eebfd2       coredns-7c65d6cfc9-kzpzk
	45767f66e4613       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   21 seconds ago      Running             kube-proxy                2                   cd41238b0b4f0       kube-proxy-gcp8x
	2148bd1e1d325       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   25 seconds ago      Running             kube-apiserver            2                   b0816ffaf87e5       kube-apiserver-pause-421146
	c2b467124a385       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   25 seconds ago      Running             kube-scheduler            2                   f6fd95e87f9f1       kube-scheduler-pause-421146
	1a6fc240ff380       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   25 seconds ago      Running             etcd                      2                   18cee16364cf7       etcd-pause-421146
	d71881d204e38       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   25 seconds ago      Running             kube-controller-manager   2                   a53ecbaa10b36       kube-controller-manager-pause-421146
	75d3de29f7812       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   49 seconds ago      Exited              coredns                   1                   a8b1775eebfd2       coredns-7c65d6cfc9-kzpzk
	e65068aabc970       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   50 seconds ago      Exited              kube-apiserver            1                   b0816ffaf87e5       kube-apiserver-pause-421146
	6caff0d1adf3a       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   50 seconds ago      Exited              kube-scheduler            1                   f6fd95e87f9f1       kube-scheduler-pause-421146
	acff960df0158       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   50 seconds ago      Exited              etcd                      1                   18cee16364cf7       etcd-pause-421146
	998f1cab88497       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   50 seconds ago      Exited              kube-proxy                1                   cd41238b0b4f0       kube-proxy-gcp8x
	81a97c79e3ef4       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   50 seconds ago      Exited              kube-controller-manager   1                   a53ecbaa10b36       kube-controller-manager-pause-421146
	
	
	==> coredns [52678e574e5cda3145f732e6a8b665d218fb10e16963e32d3dc9b209b7fccee5] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:50811 - 33353 "HINFO IN 8666522759006030779.9000534369695820997. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020431597s
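These lines are the attempt-2 coredns instance starting while kube-apiserver was still coming back: its kubernetes plugin retries list/watch calls against the in-cluster API VIP (10.96.0.1:443), logs "connection refused" until the apiserver is reachable again, and only then finishes startup on :53. As a hedged, illustration-only sketch (the kubeconfig path is a placeholder, not taken from the report), the same kind of call can be reproduced with client-go and fails the same way while the apiserver is down:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder path: point this at the kubeconfig for the profile under test.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Namespace listing, one of the resources coredns's kubernetes plugin syncs
	// at startup; while the apiserver is restarting this returns "connection refused".
	nss, err := cs.CoreV1().Namespaces().List(context.Background(), metav1.ListOptions{Limit: 500})
	if err != nil {
		fmt.Println("list namespaces failed:", err)
		return
	}
	fmt.Println("namespaces visible:", len(nss.Items))
}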
	
	
	==> coredns [75d3de29f781282cadcafa18080ca10c150d68ea2643c96305500091f11cd130] <==
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:40169 - 12964 "HINFO IN 6180363540378383076.2113421048419907267. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.024052508s
	
	
	==> describe nodes <==
	Name:               pause-421146
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-421146
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0626f22cf0d915d75e291a5bce701f94395056e1
	                    minikube.k8s.io/name=pause-421146
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_20T18_01_59_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 18:01:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-421146
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 18:03:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 18:03:19 +0000   Fri, 20 Sep 2024 18:01:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 18:03:19 +0000   Fri, 20 Sep 2024 18:01:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 18:03:19 +0000   Fri, 20 Sep 2024 18:01:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 18:03:19 +0000   Fri, 20 Sep 2024 18:01:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.200
	  Hostname:    pause-421146
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 75eb7f358fe44d58b6e7a0bb3b5ede52
	  System UUID:                75eb7f35-8fe4-4d58-b6e7-a0bb3b5ede52
	  Boot ID:                    24144fb2-ed29-4a8e-b3b6-7385de781a0d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-kzpzk                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     99s
	  kube-system                 etcd-pause-421146                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         104s
	  kube-system                 kube-apiserver-pause-421146             250m (12%)    0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-controller-manager-pause-421146    200m (10%)    0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-proxy-gcp8x                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 kube-scheduler-pause-421146             100m (5%)     0 (0%)      0 (0%)           0 (0%)         104s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 98s                kube-proxy       
	  Normal  Starting                 21s                kube-proxy       
	  Normal  Starting                 46s                kube-proxy       
	  Normal  Starting                 104s               kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  104s               kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  104s               kubelet          Node pause-421146 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    104s               kubelet          Node pause-421146 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     104s               kubelet          Node pause-421146 status is now: NodeHasSufficientPID
	  Normal  NodeReady                102s               kubelet          Node pause-421146 status is now: NodeReady
	  Normal  RegisteredNode           100s               node-controller  Node pause-421146 event: Registered Node pause-421146 in Controller
	  Normal  RegisteredNode           43s                node-controller  Node pause-421146 event: Registered Node pause-421146 in Controller
	  Normal  Starting                 26s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  26s (x8 over 26s)  kubelet          Node pause-421146 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    26s (x8 over 26s)  kubelet          Node pause-421146 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     26s (x7 over 26s)  kubelet          Node pause-421146 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  26s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           19s                node-controller  Node pause-421146 event: Registered Node pause-421146 in Controller
	
	
	==> dmesg <==
	[  +0.061527] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063031] systemd-fstab-generator[601]: Ignoring "noauto" option for root device
	[  +0.215225] systemd-fstab-generator[615]: Ignoring "noauto" option for root device
	[  +0.135769] systemd-fstab-generator[627]: Ignoring "noauto" option for root device
	[  +0.304925] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[  +4.425917] systemd-fstab-generator[746]: Ignoring "noauto" option for root device
	[  +0.063054] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.640369] systemd-fstab-generator[882]: Ignoring "noauto" option for root device
	[  +0.556219] kauditd_printk_skb: 46 callbacks suppressed
	[  +6.904187] systemd-fstab-generator[1217]: Ignoring "noauto" option for root device
	[  +0.094341] kauditd_printk_skb: 41 callbacks suppressed
	[Sep20 18:02] systemd-fstab-generator[1332]: Ignoring "noauto" option for root device
	[  +0.856108] kauditd_printk_skb: 48 callbacks suppressed
	[ +37.578100] kauditd_printk_skb: 42 callbacks suppressed
	[  +8.666960] systemd-fstab-generator[1989]: Ignoring "noauto" option for root device
	[  +0.146137] systemd-fstab-generator[2001]: Ignoring "noauto" option for root device
	[  +0.193224] systemd-fstab-generator[2017]: Ignoring "noauto" option for root device
	[  +0.163445] systemd-fstab-generator[2029]: Ignoring "noauto" option for root device
	[  +0.327985] systemd-fstab-generator[2058]: Ignoring "noauto" option for root device
	[  +2.093217] systemd-fstab-generator[2616]: Ignoring "noauto" option for root device
	[  +3.506745] kauditd_printk_skb: 195 callbacks suppressed
	[Sep20 18:03] systemd-fstab-generator[3058]: Ignoring "noauto" option for root device
	[  +4.729432] kauditd_printk_skb: 43 callbacks suppressed
	[  +8.397007] kauditd_printk_skb: 4 callbacks suppressed
	[  +6.841746] systemd-fstab-generator[3533]: Ignoring "noauto" option for root device
	
	
	==> etcd [1a6fc240ff380cc53597f9a78362928ffd5beb3133aef126be73281a5a058d71] <==
	{"level":"info","ts":"2024-09-20T18:03:16.084173Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-20T18:03:16.084642Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"b6017a462bf4c740","initial-advertise-peer-urls":["https://192.168.50.200:2380"],"listen-peer-urls":["https://192.168.50.200:2380"],"advertise-client-urls":["https://192.168.50.200:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.200:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-20T18:03:16.084682Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-20T18:03:16.084714Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.50.200:2380"}
	{"level":"info","ts":"2024-09-20T18:03:16.084731Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.50.200:2380"}
	{"level":"info","ts":"2024-09-20T18:03:17.650928Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6017a462bf4c740 is starting a new election at term 3"}
	{"level":"info","ts":"2024-09-20T18:03:17.650994Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6017a462bf4c740 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-09-20T18:03:17.651041Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6017a462bf4c740 received MsgPreVoteResp from b6017a462bf4c740 at term 3"}
	{"level":"info","ts":"2024-09-20T18:03:17.651065Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6017a462bf4c740 became candidate at term 4"}
	{"level":"info","ts":"2024-09-20T18:03:17.651073Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6017a462bf4c740 received MsgVoteResp from b6017a462bf4c740 at term 4"}
	{"level":"info","ts":"2024-09-20T18:03:17.651093Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6017a462bf4c740 became leader at term 4"}
	{"level":"info","ts":"2024-09-20T18:03:17.651105Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b6017a462bf4c740 elected leader b6017a462bf4c740 at term 4"}
	{"level":"info","ts":"2024-09-20T18:03:17.657502Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T18:03:17.658348Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"b6017a462bf4c740","local-member-attributes":"{Name:pause-421146 ClientURLs:[https://192.168.50.200:2379]}","request-path":"/0/members/b6017a462bf4c740/attributes","cluster-id":"26c34286e2a4509a","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-20T18:03:17.658596Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T18:03:17.658781Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T18:03:17.658791Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-20T18:03:17.658811Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-20T18:03:17.659411Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T18:03:17.659798Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.200:2379"}
	{"level":"info","ts":"2024-09-20T18:03:17.660165Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-09-20T18:03:40.538240Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"136.164484ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T18:03:40.538390Z","caller":"traceutil/trace.go:171","msg":"trace[335198915] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:542; }","duration":"136.333471ms","start":"2024-09-20T18:03:40.402038Z","end":"2024-09-20T18:03:40.538371Z","steps":["trace[335198915] 'range keys from in-memory index tree'  (duration: 136.136495ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T18:03:40.538913Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"110.600825ms","expected-duration":"100ms","prefix":"","request":"header:<ID:14357636212061820028 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.50.200\" mod_revision:535 > success:<request_put:<key:\"/registry/masterleases/192.168.50.200\" value_size:67 lease:5134264175207044218 >> failure:<request_range:<key:\"/registry/masterleases/192.168.50.200\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-09-20T18:03:40.539053Z","caller":"traceutil/trace.go:171","msg":"trace[628819982] transaction","detail":"{read_only:false; response_revision:543; number_of_response:1; }","duration":"240.970729ms","start":"2024-09-20T18:03:40.298066Z","end":"2024-09-20T18:03:40.539037Z","steps":["trace[628819982] 'process raft request'  (duration: 129.855033ms)","trace[628819982] 'compare'  (duration: 110.217664ms)"],"step_count":2}
	
	
	==> etcd [acff960df015881f08db968c9ce85cea196679f2ba2dd86c8c0589365b50502c] <==
	{"level":"info","ts":"2024-09-20T18:02:52.968212Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6017a462bf4c740 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-20T18:02:52.968239Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6017a462bf4c740 received MsgPreVoteResp from b6017a462bf4c740 at term 2"}
	{"level":"info","ts":"2024-09-20T18:02:52.968311Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6017a462bf4c740 became candidate at term 3"}
	{"level":"info","ts":"2024-09-20T18:02:52.968320Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6017a462bf4c740 received MsgVoteResp from b6017a462bf4c740 at term 3"}
	{"level":"info","ts":"2024-09-20T18:02:52.968330Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6017a462bf4c740 became leader at term 3"}
	{"level":"info","ts":"2024-09-20T18:02:52.968339Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b6017a462bf4c740 elected leader b6017a462bf4c740 at term 3"}
	{"level":"info","ts":"2024-09-20T18:02:52.974415Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"b6017a462bf4c740","local-member-attributes":"{Name:pause-421146 ClientURLs:[https://192.168.50.200:2379]}","request-path":"/0/members/b6017a462bf4c740/attributes","cluster-id":"26c34286e2a4509a","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-20T18:02:52.974498Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T18:02:52.976644Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T18:02:52.977654Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T18:02:52.995178Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T18:02:52.999945Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-20T18:02:52.996432Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.200:2379"}
	{"level":"info","ts":"2024-09-20T18:02:53.007709Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-20T18:02:53.007841Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-20T18:03:13.078083Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-20T18:03:13.078201Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"pause-421146","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.200:2380"],"advertise-client-urls":["https://192.168.50.200:2379"]}
	{"level":"warn","ts":"2024-09-20T18:03:13.078375Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-20T18:03:13.078442Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-20T18:03:13.080175Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.50.200:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-20T18:03:13.080222Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.50.200:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-20T18:03:13.080330Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"b6017a462bf4c740","current-leader-member-id":"b6017a462bf4c740"}
	{"level":"info","ts":"2024-09-20T18:03:13.083875Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.50.200:2380"}
	{"level":"info","ts":"2024-09-20T18:03:13.084051Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.50.200:2380"}
	{"level":"info","ts":"2024-09-20T18:03:13.084082Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"pause-421146","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.200:2380"],"advertise-client-urls":["https://192.168.50.200:2379"]}
	
	
	==> kernel <==
	 18:03:41 up 2 min,  0 users,  load average: 1.17, 0.63, 0.24
	Linux pause-421146 5.10.207 #1 SMP Fri Sep 20 03:13:51 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [2148bd1e1d3250b130059d85a48c37cb328f8ccf64f438e9dc63fdeee15ae56c] <==
	I0920 18:03:19.123620       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0920 18:03:19.124552       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0920 18:03:19.123729       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0920 18:03:19.124484       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0920 18:03:19.128348       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0920 18:03:19.128413       1 policy_source.go:224] refreshing policies
	I0920 18:03:19.124504       1 shared_informer.go:320] Caches are synced for configmaps
	I0920 18:03:19.125229       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0920 18:03:19.125245       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0920 18:03:19.125988       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0920 18:03:19.131581       1 aggregator.go:171] initial CRD sync complete...
	I0920 18:03:19.131601       1 autoregister_controller.go:144] Starting autoregister controller
	I0920 18:03:19.131609       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0920 18:03:19.131617       1 cache.go:39] Caches are synced for autoregister controller
	I0920 18:03:19.138005       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0920 18:03:19.182780       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0920 18:03:20.029248       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0920 18:03:20.237214       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.50.200]
	I0920 18:03:20.239159       1 controller.go:615] quota admission added evaluator for: endpoints
	I0920 18:03:20.246571       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0920 18:03:20.499874       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0920 18:03:20.523017       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0920 18:03:20.585380       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0920 18:03:20.638177       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0920 18:03:20.649500       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-apiserver [e65068aabc970c4f3ded7fd5c3bbaef59385603701d9c00ed027e8385212b674] <==
	I0920 18:03:02.869033       1 system_namespaces_controller.go:76] Shutting down system namespaces controller
	I0920 18:03:02.869064       1 customresource_discovery_controller.go:328] Shutting down DiscoveryController
	I0920 18:03:02.869103       1 controller.go:132] Ending legacy_token_tracking_controller
	I0920 18:03:02.869130       1 controller.go:133] Shutting down legacy_token_tracking_controller
	I0920 18:03:02.869170       1 cluster_authentication_trust_controller.go:466] Shutting down cluster_authentication_trust_controller controller
	I0920 18:03:02.869212       1 gc_controller.go:91] Shutting down apiserver lease garbage collector
	I0920 18:03:02.869497       1 dynamic_cafile_content.go:174] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0920 18:03:02.869577       1 establishing_controller.go:92] Shutting down EstablishingController
	I0920 18:03:02.869603       1 naming_controller.go:305] Shutting down NamingConditionController
	I0920 18:03:02.869657       1 controller.go:170] Shutting down OpenAPI controller
	I0920 18:03:02.870078       1 autoregister_controller.go:168] Shutting down autoregister controller
	I0920 18:03:02.870172       1 crdregistration_controller.go:145] Shutting down crd-autoregister controller
	I0920 18:03:02.870219       1 apiapproval_controller.go:201] Shutting down KubernetesAPIApprovalPolicyConformantConditionController
	I0920 18:03:02.870337       1 crd_finalizer.go:281] Shutting down CRDFinalizer
	I0920 18:03:02.870379       1 controller.go:120] Shutting down OpenAPI V3 controller
	I0920 18:03:02.870526       1 dynamic_cafile_content.go:174] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0920 18:03:02.870683       1 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0920 18:03:02.870729       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I0920 18:03:02.870790       1 controller.go:84] Shutting down OpenAPI AggregationController
	I0920 18:03:02.870823       1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"
	I0920 18:03:02.871119       1 dynamic_serving_content.go:149] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0920 18:03:02.871397       1 secure_serving.go:258] Stopped listening on [::]:8443
	I0920 18:03:02.871474       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0920 18:03:02.871717       1 dynamic_serving_content.go:149] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0920 18:03:02.871986       1 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	
	
	==> kube-controller-manager [81a97c79e3ef43249f4d0285077861826dcf108662f81b26a4e9ea112ebc5f45] <==
	I0920 18:02:58.233350       1 shared_informer.go:320] Caches are synced for service account
	I0920 18:02:58.233434       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0920 18:02:58.234389       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0920 18:02:58.234466       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0920 18:02:58.234899       1 shared_informer.go:320] Caches are synced for disruption
	I0920 18:02:58.234948       1 shared_informer.go:320] Caches are synced for PVC protection
	I0920 18:02:58.235033       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0920 18:02:58.235069       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0920 18:02:58.235095       1 shared_informer.go:320] Caches are synced for daemon sets
	I0920 18:02:58.253375       1 shared_informer.go:320] Caches are synced for namespace
	I0920 18:02:58.256952       1 shared_informer.go:320] Caches are synced for ephemeral
	I0920 18:02:58.260463       1 shared_informer.go:320] Caches are synced for deployment
	I0920 18:02:58.264668       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0920 18:02:58.270339       1 shared_informer.go:320] Caches are synced for endpoint
	I0920 18:02:58.283695       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0920 18:02:58.332070       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0920 18:02:58.392317       1 shared_informer.go:320] Caches are synced for attach detach
	I0920 18:02:58.422393       1 shared_informer.go:320] Caches are synced for resource quota
	I0920 18:02:58.436333       1 shared_informer.go:320] Caches are synced for HPA
	I0920 18:02:58.447057       1 shared_informer.go:320] Caches are synced for resource quota
	I0920 18:02:58.500248       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="266.755611ms"
	I0920 18:02:58.501379       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="40.904µs"
	I0920 18:02:58.884964       1 shared_informer.go:320] Caches are synced for garbage collector
	I0920 18:02:58.885030       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0920 18:02:58.892531       1 shared_informer.go:320] Caches are synced for garbage collector
	
	
	==> kube-controller-manager [d71881d204e3837d28568eb8b2c2abb6ea017b5b30910bd6ac080e4ccff72d72] <==
	I0920 18:03:22.452334       1 shared_informer.go:320] Caches are synced for job
	I0920 18:03:22.452864       1 shared_informer.go:320] Caches are synced for expand
	I0920 18:03:22.454016       1 shared_informer.go:320] Caches are synced for TTL
	I0920 18:03:22.467744       1 shared_informer.go:320] Caches are synced for node
	I0920 18:03:22.467836       1 range_allocator.go:171] "Sending events to api server" logger="node-ipam-controller"
	I0920 18:03:22.467875       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0920 18:03:22.467880       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0920 18:03:22.467890       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0920 18:03:22.468007       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="pause-421146"
	I0920 18:03:22.470799       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0920 18:03:22.474426       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0920 18:03:22.501750       1 shared_informer.go:320] Caches are synced for attach detach
	I0920 18:03:22.522228       1 shared_informer.go:320] Caches are synced for cronjob
	I0920 18:03:22.552375       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0920 18:03:22.600804       1 shared_informer.go:320] Caches are synced for stateful set
	I0920 18:03:22.642324       1 shared_informer.go:320] Caches are synced for resource quota
	I0920 18:03:22.644639       1 shared_informer.go:320] Caches are synced for daemon sets
	I0920 18:03:22.658867       1 shared_informer.go:320] Caches are synced for resource quota
	I0920 18:03:22.681635       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0920 18:03:22.703008       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0920 18:03:23.095770       1 shared_informer.go:320] Caches are synced for garbage collector
	I0920 18:03:23.152058       1 shared_informer.go:320] Caches are synced for garbage collector
	I0920 18:03:23.152175       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0920 18:03:27.947442       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="22.107317ms"
	I0920 18:03:27.950956       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="119.783µs"
	
	
	==> kube-proxy [45767f66e4613897801d10ba7ef85f5203974b47f8a0f18a155269923c318b40] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0920 18:03:19.775617       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0920 18:03:19.789574       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.200"]
	E0920 18:03:19.789834       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 18:03:19.830365       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0920 18:03:19.830477       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0920 18:03:19.830523       1 server_linux.go:169] "Using iptables Proxier"
	I0920 18:03:19.833839       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 18:03:19.834382       1 server.go:483] "Version info" version="v1.31.1"
	I0920 18:03:19.834459       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 18:03:19.836571       1 config.go:199] "Starting service config controller"
	I0920 18:03:19.836632       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 18:03:19.836757       1 config.go:328] "Starting node config controller"
	I0920 18:03:19.836786       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 18:03:19.836889       1 config.go:105] "Starting endpoint slice config controller"
	I0920 18:03:19.836986       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 18:03:19.937792       1 shared_informer.go:320] Caches are synced for node config
	I0920 18:03:19.937897       1 shared_informer.go:320] Caches are synced for service config
	I0920 18:03:19.937819       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [998f1cab8849745ac4a858a8df2532e7b3bfc179f529b1172684f92fee861d0d] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0920 18:02:53.185410       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0920 18:02:54.935831       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.200"]
	E0920 18:02:54.936143       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 18:02:54.998740       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0920 18:02:54.998882       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0920 18:02:54.998922       1 server_linux.go:169] "Using iptables Proxier"
	I0920 18:02:55.010235       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 18:02:55.011070       1 server.go:483] "Version info" version="v1.31.1"
	I0920 18:02:55.011343       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 18:02:55.013333       1 config.go:199] "Starting service config controller"
	I0920 18:02:55.013392       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 18:02:55.013449       1 config.go:105] "Starting endpoint slice config controller"
	I0920 18:02:55.013467       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 18:02:55.014083       1 config.go:328] "Starting node config controller"
	I0920 18:02:55.014114       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 18:02:55.114171       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0920 18:02:55.114437       1 shared_informer.go:320] Caches are synced for node config
	I0920 18:02:55.114470       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [6caff0d1adf3a97c67a13c3e8be7618e77ea79c31f750fff59228a7be609fd4d] <==
	I0920 18:02:53.338698       1 serving.go:386] Generated self-signed cert in-memory
	W0920 18:02:54.899155       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0920 18:02:54.899206       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0920 18:02:54.899216       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0920 18:02:54.899228       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0920 18:02:54.940963       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0920 18:02:54.942769       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 18:02:54.945769       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0920 18:02:54.945890       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0920 18:02:54.946072       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0920 18:02:54.946182       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0920 18:02:55.046151       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0920 18:03:12.918961       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0920 18:03:12.919112       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E0920 18:03:12.919244       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [c2b467124a385a8365850c1338b354a85bd06687fa90cfe3f841200fb6d9b74c] <==
	I0920 18:03:16.569979       1 serving.go:386] Generated self-signed cert in-memory
	W0920 18:03:19.085791       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0920 18:03:19.085846       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0920 18:03:19.085856       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0920 18:03:19.085862       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0920 18:03:19.109535       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0920 18:03:19.109570       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 18:03:19.111650       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0920 18:03:19.111812       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0920 18:03:19.111846       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0920 18:03:19.111862       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0920 18:03:19.214550       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 20 18:03:15 pause-421146 kubelet[3065]: I0920 18:03:15.629254    3065 scope.go:117] "RemoveContainer" containerID="81a97c79e3ef43249f4d0285077861826dcf108662f81b26a4e9ea112ebc5f45"
	Sep 20 18:03:15 pause-421146 kubelet[3065]: I0920 18:03:15.631926    3065 scope.go:117] "RemoveContainer" containerID="6caff0d1adf3a97c67a13c3e8be7618e77ea79c31f750fff59228a7be609fd4d"
	Sep 20 18:03:15 pause-421146 kubelet[3065]: I0920 18:03:15.632689    3065 scope.go:117] "RemoveContainer" containerID="acff960df015881f08db968c9ce85cea196679f2ba2dd86c8c0589365b50502c"
	Sep 20 18:03:15 pause-421146 kubelet[3065]: E0920 18:03:15.770325    3065 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-421146?timeout=10s\": dial tcp 192.168.50.200:8443: connect: connection refused" interval="800ms"
	Sep 20 18:03:15 pause-421146 kubelet[3065]: I0920 18:03:15.961792    3065 kubelet_node_status.go:72] "Attempting to register node" node="pause-421146"
	Sep 20 18:03:15 pause-421146 kubelet[3065]: E0920 18:03:15.962879    3065 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.200:8443: connect: connection refused" node="pause-421146"
	Sep 20 18:03:15 pause-421146 kubelet[3065]: W0920 18:03:15.991072    3065 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dpause-421146&limit=500&resourceVersion=0": dial tcp 192.168.50.200:8443: connect: connection refused
	Sep 20 18:03:15 pause-421146 kubelet[3065]: E0920 18:03:15.991158    3065 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dpause-421146&limit=500&resourceVersion=0\": dial tcp 192.168.50.200:8443: connect: connection refused" logger="UnhandledError"
	Sep 20 18:03:16 pause-421146 kubelet[3065]: I0920 18:03:16.765100    3065 kubelet_node_status.go:72] "Attempting to register node" node="pause-421146"
	Sep 20 18:03:19 pause-421146 kubelet[3065]: I0920 18:03:19.135086    3065 apiserver.go:52] "Watching apiserver"
	Sep 20 18:03:19 pause-421146 kubelet[3065]: I0920 18:03:19.165715    3065 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 20 18:03:19 pause-421146 kubelet[3065]: I0920 18:03:19.174609    3065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e95cbd03-c3ac-4381-ba7e-dab67b046217-xtables-lock\") pod \"kube-proxy-gcp8x\" (UID: \"e95cbd03-c3ac-4381-ba7e-dab67b046217\") " pod="kube-system/kube-proxy-gcp8x"
	Sep 20 18:03:19 pause-421146 kubelet[3065]: I0920 18:03:19.174658    3065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e95cbd03-c3ac-4381-ba7e-dab67b046217-lib-modules\") pod \"kube-proxy-gcp8x\" (UID: \"e95cbd03-c3ac-4381-ba7e-dab67b046217\") " pod="kube-system/kube-proxy-gcp8x"
	Sep 20 18:03:19 pause-421146 kubelet[3065]: I0920 18:03:19.185898    3065 kubelet_node_status.go:111] "Node was previously registered" node="pause-421146"
	Sep 20 18:03:19 pause-421146 kubelet[3065]: I0920 18:03:19.186030    3065 kubelet_node_status.go:75] "Successfully registered node" node="pause-421146"
	Sep 20 18:03:19 pause-421146 kubelet[3065]: I0920 18:03:19.186131    3065 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 20 18:03:19 pause-421146 kubelet[3065]: I0920 18:03:19.187196    3065 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 20 18:03:19 pause-421146 kubelet[3065]: I0920 18:03:19.461317    3065 scope.go:117] "RemoveContainer" containerID="998f1cab8849745ac4a858a8df2532e7b3bfc179f529b1172684f92fee861d0d"
	Sep 20 18:03:19 pause-421146 kubelet[3065]: I0920 18:03:19.461536    3065 scope.go:117] "RemoveContainer" containerID="75d3de29f781282cadcafa18080ca10c150d68ea2643c96305500091f11cd130"
	Sep 20 18:03:21 pause-421146 kubelet[3065]: I0920 18:03:21.358446    3065 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Sep 20 18:03:25 pause-421146 kubelet[3065]: E0920 18:03:25.259614    3065 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855405258911971,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:03:25 pause-421146 kubelet[3065]: E0920 18:03:25.259674    3065 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855405258911971,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:03:27 pause-421146 kubelet[3065]: I0920 18:03:27.901365    3065 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Sep 20 18:03:35 pause-421146 kubelet[3065]: E0920 18:03:35.263436    3065 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855415261465865,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:03:35 pause-421146 kubelet[3065]: E0920 18:03:35.263484    3065 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855415261465865,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-421146 -n pause-421146
helpers_test.go:261: (dbg) Run:  kubectl --context pause-421146 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (60.94s)
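For reference, the post-mortem above was collected with the minikube/kubectl queries recorded in the helpers_test.go lines; a minimal sketch of running the same checks by hand against a local profile (assuming the profile name pause-421146 and the minikube and kubectl binaries on PATH, rather than the out/minikube-linux-amd64 build used by the harness) would be:

	# report the API server state for the profile
	minikube status --format='{{.APIServer}}' -p pause-421146

	# list any pods in the profile's context that are not in phase Running
	kubectl --context pause-421146 get po -A -o=jsonpath='{.items[*].metadata.name}' --field-selector=status.phase!=Running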

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (282.73s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-744025 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-744025 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m42.441387191s)

                                                
                                                
-- stdout --
	* [old-k8s-version-744025] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19672
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19672-8777/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-8777/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-744025" primary control-plane node in "old-k8s-version-744025" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 18:07:52.227294   68107 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:07:52.227588   68107 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:07:52.227600   68107 out.go:358] Setting ErrFile to fd 2...
	I0920 18:07:52.227607   68107 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:07:52.227860   68107 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-8777/.minikube/bin
	I0920 18:07:52.229003   68107 out.go:352] Setting JSON to false
	I0920 18:07:52.230113   68107 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6615,"bootTime":1726849057,"procs":302,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 18:07:52.230207   68107 start.go:139] virtualization: kvm guest
	I0920 18:07:52.231862   68107 out.go:177] * [old-k8s-version-744025] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 18:07:52.233390   68107 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 18:07:52.233389   68107 notify.go:220] Checking for updates...
	I0920 18:07:52.235710   68107 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 18:07:52.236978   68107 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19672-8777/kubeconfig
	I0920 18:07:52.238105   68107 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-8777/.minikube
	I0920 18:07:52.239178   68107 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 18:07:52.240301   68107 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 18:07:52.242005   68107 config.go:182] Loaded profile config "bridge-833505": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:07:52.242163   68107 config.go:182] Loaded profile config "enable-default-cni-833505": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:07:52.242299   68107 config.go:182] Loaded profile config "flannel-833505": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:07:52.242427   68107 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 18:07:52.288110   68107 out.go:177] * Using the kvm2 driver based on user configuration
	I0920 18:07:52.289389   68107 start.go:297] selected driver: kvm2
	I0920 18:07:52.289419   68107 start.go:901] validating driver "kvm2" against <nil>
	I0920 18:07:52.289437   68107 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 18:07:52.290301   68107 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 18:07:52.290407   68107 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19672-8777/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 18:07:52.307807   68107 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 18:07:52.307856   68107 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 18:07:52.308105   68107 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 18:07:52.308146   68107 cni.go:84] Creating CNI manager for ""
	I0920 18:07:52.308194   68107 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 18:07:52.308204   68107 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0920 18:07:52.308281   68107 start.go:340] cluster config:
	{Name:old-k8s-version-744025 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-744025 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:07:52.308418   68107 iso.go:125] acquiring lock: {Name:mkba95ef0488e46f622333e9f317f43def93040b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 18:07:52.310154   68107 out.go:177] * Starting "old-k8s-version-744025" primary control-plane node in "old-k8s-version-744025" cluster
	I0920 18:07:52.311420   68107 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0920 18:07:52.311463   68107 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0920 18:07:52.311472   68107 cache.go:56] Caching tarball of preloaded images
	I0920 18:07:52.311587   68107 preload.go:172] Found /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 18:07:52.311602   68107 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0920 18:07:52.311704   68107 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/old-k8s-version-744025/config.json ...
	I0920 18:07:52.311724   68107 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/old-k8s-version-744025/config.json: {Name:mk5459942e9de021fc633369bb14dee06e01c8fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:07:52.311891   68107 start.go:360] acquireMachinesLock for old-k8s-version-744025: {Name:mkfeedb385cf08b5d2aa00913e85815d02a180c2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 18:08:00.714614   68107 start.go:364] duration metric: took 8.402696233s to acquireMachinesLock for "old-k8s-version-744025"
	I0920 18:08:00.714707   68107 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-744025 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-744025 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 18:08:00.714826   68107 start.go:125] createHost starting for "" (driver="kvm2")
	I0920 18:08:00.717251   68107 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0920 18:08:00.717436   68107 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:08:00.717477   68107 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:08:00.737875   68107 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36133
	I0920 18:08:00.738462   68107 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:08:00.739070   68107 main.go:141] libmachine: Using API Version  1
	I0920 18:08:00.739096   68107 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:08:00.739522   68107 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:08:00.739707   68107 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetMachineName
	I0920 18:08:00.739868   68107 main.go:141] libmachine: (old-k8s-version-744025) Calling .DriverName
	I0920 18:08:00.740006   68107 start.go:159] libmachine.API.Create for "old-k8s-version-744025" (driver="kvm2")
	I0920 18:08:00.740044   68107 client.go:168] LocalClient.Create starting
	I0920 18:08:00.740077   68107 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem
	I0920 18:08:00.740143   68107 main.go:141] libmachine: Decoding PEM data...
	I0920 18:08:00.740165   68107 main.go:141] libmachine: Parsing certificate...
	I0920 18:08:00.740238   68107 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem
	I0920 18:08:00.740271   68107 main.go:141] libmachine: Decoding PEM data...
	I0920 18:08:00.740292   68107 main.go:141] libmachine: Parsing certificate...
	I0920 18:08:00.740320   68107 main.go:141] libmachine: Running pre-create checks...
	I0920 18:08:00.740332   68107 main.go:141] libmachine: (old-k8s-version-744025) Calling .PreCreateCheck
	I0920 18:08:00.740667   68107 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetConfigRaw
	I0920 18:08:00.741102   68107 main.go:141] libmachine: Creating machine...
	I0920 18:08:00.741119   68107 main.go:141] libmachine: (old-k8s-version-744025) Calling .Create
	I0920 18:08:00.741288   68107 main.go:141] libmachine: (old-k8s-version-744025) Creating KVM machine...
	I0920 18:08:00.742606   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | found existing default KVM network
	I0920 18:08:00.744798   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:08:00.744605   68219 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002097f0}
	I0920 18:08:00.744826   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | created network xml: 
	I0920 18:08:00.744838   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | <network>
	I0920 18:08:00.744849   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG |   <name>mk-old-k8s-version-744025</name>
	I0920 18:08:00.744857   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG |   <dns enable='no'/>
	I0920 18:08:00.744863   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG |   
	I0920 18:08:00.744877   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0920 18:08:00.744887   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG |     <dhcp>
	I0920 18:08:00.744895   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0920 18:08:00.744908   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG |     </dhcp>
	I0920 18:08:00.744924   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG |   </ip>
	I0920 18:08:00.744937   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG |   
	I0920 18:08:00.744945   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | </network>
	I0920 18:08:00.744960   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | 
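
The <network> definition logged above is the libvirt network XML that the kvm2 driver generates before creating the private "mk-old-k8s-version-744025" network. As a rough sketch only (not the driver's actual code), a similar document can be rendered with Go's standard text/template package; the netParams struct and template string below are hypothetical:

    package main

    import (
        "os"
        "text/template"
    )

    // netParams is a hypothetical parameter struct; minikube's real KVM driver
    // uses its own types to carry these values.
    type netParams struct {
        Name      string
        Gateway   string
        Netmask   string
        DHCPStart string
        DHCPEnd   string
    }

    const networkTmpl = `<network>
      <name>{{.Name}}</name>
      <dns enable='no'/>
      <ip address='{{.Gateway}}' netmask='{{.Netmask}}'>
        <dhcp>
          <range start='{{.DHCPStart}}' end='{{.DHCPEnd}}'/>
        </dhcp>
      </ip>
    </network>`

    func main() {
        // Values copied from the log lines above.
        p := netParams{
            Name:      "mk-old-k8s-version-744025",
            Gateway:   "192.168.39.1",
            Netmask:   "255.255.255.0",
            DHCPStart: "192.168.39.2",
            DHCPEnd:   "192.168.39.253",
        }
        // Render the network XML to stdout; a real driver would hand the
        // rendered document to libvirt to define and start the network.
        t := template.Must(template.New("net").Parse(networkTmpl))
        if err := t.Execute(os.Stdout, p); err != nil {
            panic(err)
        }
    }
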
	I0920 18:08:00.750996   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | trying to create private KVM network mk-old-k8s-version-744025 192.168.39.0/24...
	I0920 18:08:00.835045   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | private KVM network mk-old-k8s-version-744025 192.168.39.0/24 created
	I0920 18:08:00.835076   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:08:00.834983   68219 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19672-8777/.minikube
	I0920 18:08:00.835100   68107 main.go:141] libmachine: (old-k8s-version-744025) Setting up store path in /home/jenkins/minikube-integration/19672-8777/.minikube/machines/old-k8s-version-744025 ...
	I0920 18:08:00.835111   68107 main.go:141] libmachine: (old-k8s-version-744025) Building disk image from file:///home/jenkins/minikube-integration/19672-8777/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso
	I0920 18:08:00.835128   68107 main.go:141] libmachine: (old-k8s-version-744025) Downloading /home/jenkins/minikube-integration/19672-8777/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19672-8777/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso...
	I0920 18:08:01.143582   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:08:01.143415   68219 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/old-k8s-version-744025/id_rsa...
	I0920 18:08:01.246082   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:08:01.245937   68219 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/old-k8s-version-744025/old-k8s-version-744025.rawdisk...
	I0920 18:08:01.246124   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | Writing magic tar header
	I0920 18:08:01.246152   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | Writing SSH key tar header
	I0920 18:08:01.246186   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:08:01.246108   68219 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19672-8777/.minikube/machines/old-k8s-version-744025 ...
	I0920 18:08:01.246265   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/old-k8s-version-744025
	I0920 18:08:01.246295   68107 main.go:141] libmachine: (old-k8s-version-744025) Setting executable bit set on /home/jenkins/minikube-integration/19672-8777/.minikube/machines/old-k8s-version-744025 (perms=drwx------)
	I0920 18:08:01.246313   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-8777/.minikube/machines
	I0920 18:08:01.246329   68107 main.go:141] libmachine: (old-k8s-version-744025) Setting executable bit set on /home/jenkins/minikube-integration/19672-8777/.minikube/machines (perms=drwxr-xr-x)
	I0920 18:08:01.246338   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-8777/.minikube
	I0920 18:08:01.246352   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-8777
	I0920 18:08:01.246364   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0920 18:08:01.246375   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | Checking permissions on dir: /home/jenkins
	I0920 18:08:01.246386   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | Checking permissions on dir: /home
	I0920 18:08:01.246397   68107 main.go:141] libmachine: (old-k8s-version-744025) Setting executable bit set on /home/jenkins/minikube-integration/19672-8777/.minikube (perms=drwxr-xr-x)
	I0920 18:08:01.246413   68107 main.go:141] libmachine: (old-k8s-version-744025) Setting executable bit set on /home/jenkins/minikube-integration/19672-8777 (perms=drwxrwxr-x)
	I0920 18:08:01.246425   68107 main.go:141] libmachine: (old-k8s-version-744025) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0920 18:08:01.246435   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | Skipping /home - not owner
	I0920 18:08:01.246451   68107 main.go:141] libmachine: (old-k8s-version-744025) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0920 18:08:01.246469   68107 main.go:141] libmachine: (old-k8s-version-744025) Creating domain...
	I0920 18:08:01.247794   68107 main.go:141] libmachine: (old-k8s-version-744025) define libvirt domain using xml: 
	I0920 18:08:01.247815   68107 main.go:141] libmachine: (old-k8s-version-744025) <domain type='kvm'>
	I0920 18:08:01.247822   68107 main.go:141] libmachine: (old-k8s-version-744025)   <name>old-k8s-version-744025</name>
	I0920 18:08:01.247829   68107 main.go:141] libmachine: (old-k8s-version-744025)   <memory unit='MiB'>2200</memory>
	I0920 18:08:01.247837   68107 main.go:141] libmachine: (old-k8s-version-744025)   <vcpu>2</vcpu>
	I0920 18:08:01.247844   68107 main.go:141] libmachine: (old-k8s-version-744025)   <features>
	I0920 18:08:01.247851   68107 main.go:141] libmachine: (old-k8s-version-744025)     <acpi/>
	I0920 18:08:01.247858   68107 main.go:141] libmachine: (old-k8s-version-744025)     <apic/>
	I0920 18:08:01.247871   68107 main.go:141] libmachine: (old-k8s-version-744025)     <pae/>
	I0920 18:08:01.247880   68107 main.go:141] libmachine: (old-k8s-version-744025)     
	I0920 18:08:01.247891   68107 main.go:141] libmachine: (old-k8s-version-744025)   </features>
	I0920 18:08:01.247898   68107 main.go:141] libmachine: (old-k8s-version-744025)   <cpu mode='host-passthrough'>
	I0920 18:08:01.247927   68107 main.go:141] libmachine: (old-k8s-version-744025)   
	I0920 18:08:01.247953   68107 main.go:141] libmachine: (old-k8s-version-744025)   </cpu>
	I0920 18:08:01.247965   68107 main.go:141] libmachine: (old-k8s-version-744025)   <os>
	I0920 18:08:01.247980   68107 main.go:141] libmachine: (old-k8s-version-744025)     <type>hvm</type>
	I0920 18:08:01.247994   68107 main.go:141] libmachine: (old-k8s-version-744025)     <boot dev='cdrom'/>
	I0920 18:08:01.248004   68107 main.go:141] libmachine: (old-k8s-version-744025)     <boot dev='hd'/>
	I0920 18:08:01.248015   68107 main.go:141] libmachine: (old-k8s-version-744025)     <bootmenu enable='no'/>
	I0920 18:08:01.248025   68107 main.go:141] libmachine: (old-k8s-version-744025)   </os>
	I0920 18:08:01.248036   68107 main.go:141] libmachine: (old-k8s-version-744025)   <devices>
	I0920 18:08:01.248047   68107 main.go:141] libmachine: (old-k8s-version-744025)     <disk type='file' device='cdrom'>
	I0920 18:08:01.248068   68107 main.go:141] libmachine: (old-k8s-version-744025)       <source file='/home/jenkins/minikube-integration/19672-8777/.minikube/machines/old-k8s-version-744025/boot2docker.iso'/>
	I0920 18:08:01.248085   68107 main.go:141] libmachine: (old-k8s-version-744025)       <target dev='hdc' bus='scsi'/>
	I0920 18:08:01.248097   68107 main.go:141] libmachine: (old-k8s-version-744025)       <readonly/>
	I0920 18:08:01.248104   68107 main.go:141] libmachine: (old-k8s-version-744025)     </disk>
	I0920 18:08:01.248117   68107 main.go:141] libmachine: (old-k8s-version-744025)     <disk type='file' device='disk'>
	I0920 18:08:01.248134   68107 main.go:141] libmachine: (old-k8s-version-744025)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0920 18:08:01.248254   68107 main.go:141] libmachine: (old-k8s-version-744025)       <source file='/home/jenkins/minikube-integration/19672-8777/.minikube/machines/old-k8s-version-744025/old-k8s-version-744025.rawdisk'/>
	I0920 18:08:01.248313   68107 main.go:141] libmachine: (old-k8s-version-744025)       <target dev='hda' bus='virtio'/>
	I0920 18:08:01.248327   68107 main.go:141] libmachine: (old-k8s-version-744025)     </disk>
	I0920 18:08:01.248341   68107 main.go:141] libmachine: (old-k8s-version-744025)     <interface type='network'>
	I0920 18:08:01.248364   68107 main.go:141] libmachine: (old-k8s-version-744025)       <source network='mk-old-k8s-version-744025'/>
	I0920 18:08:01.248385   68107 main.go:141] libmachine: (old-k8s-version-744025)       <model type='virtio'/>
	I0920 18:08:01.248398   68107 main.go:141] libmachine: (old-k8s-version-744025)     </interface>
	I0920 18:08:01.248408   68107 main.go:141] libmachine: (old-k8s-version-744025)     <interface type='network'>
	I0920 18:08:01.248418   68107 main.go:141] libmachine: (old-k8s-version-744025)       <source network='default'/>
	I0920 18:08:01.248430   68107 main.go:141] libmachine: (old-k8s-version-744025)       <model type='virtio'/>
	I0920 18:08:01.248443   68107 main.go:141] libmachine: (old-k8s-version-744025)     </interface>
	I0920 18:08:01.248454   68107 main.go:141] libmachine: (old-k8s-version-744025)     <serial type='pty'>
	I0920 18:08:01.248466   68107 main.go:141] libmachine: (old-k8s-version-744025)       <target port='0'/>
	I0920 18:08:01.248478   68107 main.go:141] libmachine: (old-k8s-version-744025)     </serial>
	I0920 18:08:01.248487   68107 main.go:141] libmachine: (old-k8s-version-744025)     <console type='pty'>
	I0920 18:08:01.248499   68107 main.go:141] libmachine: (old-k8s-version-744025)       <target type='serial' port='0'/>
	I0920 18:08:01.248525   68107 main.go:141] libmachine: (old-k8s-version-744025)     </console>
	I0920 18:08:01.248543   68107 main.go:141] libmachine: (old-k8s-version-744025)     <rng model='virtio'>
	I0920 18:08:01.248560   68107 main.go:141] libmachine: (old-k8s-version-744025)       <backend model='random'>/dev/random</backend>
	I0920 18:08:01.248570   68107 main.go:141] libmachine: (old-k8s-version-744025)     </rng>
	I0920 18:08:01.248579   68107 main.go:141] libmachine: (old-k8s-version-744025)     
	I0920 18:08:01.248588   68107 main.go:141] libmachine: (old-k8s-version-744025)     
	I0920 18:08:01.248596   68107 main.go:141] libmachine: (old-k8s-version-744025)   </devices>
	I0920 18:08:01.248605   68107 main.go:141] libmachine: (old-k8s-version-744025) </domain>
	I0920 18:08:01.248622   68107 main.go:141] libmachine: (old-k8s-version-744025) 
	I0920 18:08:01.253530   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:3c:24:35 in network default
	I0920 18:08:01.254366   68107 main.go:141] libmachine: (old-k8s-version-744025) Ensuring networks are active...
	I0920 18:08:01.254388   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:08:01.255166   68107 main.go:141] libmachine: (old-k8s-version-744025) Ensuring network default is active
	I0920 18:08:01.255531   68107 main.go:141] libmachine: (old-k8s-version-744025) Ensuring network mk-old-k8s-version-744025 is active
	I0920 18:08:01.256185   68107 main.go:141] libmachine: (old-k8s-version-744025) Getting domain xml...
	I0920 18:08:01.257017   68107 main.go:141] libmachine: (old-k8s-version-744025) Creating domain...
	I0920 18:08:02.743516   68107 main.go:141] libmachine: (old-k8s-version-744025) Waiting to get IP...
	I0920 18:08:02.744371   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:08:02.744952   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:08:02.744977   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:08:02.744926   68219 retry.go:31] will retry after 205.228708ms: waiting for machine to come up
	I0920 18:08:02.951576   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:08:02.952300   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:08:02.952335   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:08:02.952260   68219 retry.go:31] will retry after 328.061361ms: waiting for machine to come up
	I0920 18:08:03.282860   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:08:03.283436   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:08:03.283487   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:08:03.283419   68219 retry.go:31] will retry after 473.577756ms: waiting for machine to come up
	I0920 18:08:03.759694   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:08:03.760432   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:08:03.760462   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:08:03.760410   68219 retry.go:31] will retry after 424.087579ms: waiting for machine to come up
	I0920 18:08:04.185726   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:08:04.186272   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:08:04.186292   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:08:04.186222   68219 retry.go:31] will retry after 494.865814ms: waiting for machine to come up
	I0920 18:08:04.683040   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:08:04.683584   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:08:04.683613   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:08:04.683534   68219 retry.go:31] will retry after 902.50779ms: waiting for machine to come up
	I0920 18:08:05.588408   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:08:05.589078   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:08:05.589127   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:08:05.589010   68219 retry.go:31] will retry after 808.61449ms: waiting for machine to come up
	I0920 18:08:06.399828   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:08:06.400455   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:08:06.400481   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:08:06.400408   68219 retry.go:31] will retry after 985.683822ms: waiting for machine to come up
	I0920 18:08:07.387295   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:08:07.387746   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:08:07.387775   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:08:07.387699   68219 retry.go:31] will retry after 1.188066061s: waiting for machine to come up
	I0920 18:08:08.577356   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:08:08.577875   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:08:08.577902   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:08:08.577807   68219 retry.go:31] will retry after 2.088949257s: waiting for machine to come up
	I0920 18:08:10.668368   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:08:10.668859   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:08:10.668892   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:08:10.668800   68219 retry.go:31] will retry after 1.909463117s: waiting for machine to come up
	I0920 18:08:12.580366   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:08:12.580958   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:08:12.580979   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:08:12.580933   68219 retry.go:31] will retry after 3.239902762s: waiting for machine to come up
	I0920 18:08:15.822485   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:08:15.823020   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:08:15.823040   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:08:15.822982   68219 retry.go:31] will retry after 4.304304382s: waiting for machine to come up
	I0920 18:08:20.131666   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:08:20.132160   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:08:20.132187   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:08:20.132111   68219 retry.go:31] will retry after 4.313878888s: waiting for machine to come up
	I0920 18:08:24.447393   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:08:24.448145   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has current primary IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:08:24.448175   68107 main.go:141] libmachine: (old-k8s-version-744025) Found IP for machine: 192.168.39.207
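
The "will retry after …" lines above come from a poll loop that repeatedly checks the libvirt DHCP leases for the new domain's MAC address, waiting a little longer each round until an IP appears. A minimal, self-contained sketch of that pattern follows; the lookupIP helper and the delay growth are invented for illustration and are not minikube's retry.go:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    var errNoIP = errors.New("no IP address yet")

    // lookupIP is a stand-in for querying the libvirt DHCP leases; it is
    // hypothetical and simply "finds" an address after a few attempts.
    func lookupIP(attempt int) (string, error) {
        if attempt < 4 {
            return "", errNoIP
        }
        return "192.168.39.207", nil
    }

    func main() {
        delay := 200 * time.Millisecond
        for attempt := 0; ; attempt++ {
            ip, err := lookupIP(attempt)
            if err == nil {
                fmt.Println("found IP for machine:", ip)
                return
            }
            fmt.Printf("will retry after %v: %v\n", delay, err)
            time.Sleep(delay)
            delay += delay / 2 // wait a little longer each round, as the log shows
        }
    }

In the log, the successful lookup finally reports 192.168.39.207 after roughly 22 seconds of polling.
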
	I0920 18:08:24.448190   68107 main.go:141] libmachine: (old-k8s-version-744025) Reserving static IP address...
	I0920 18:08:24.448555   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-744025", mac: "52:54:00:e5:57:41", ip: "192.168.39.207"} in network mk-old-k8s-version-744025
	I0920 18:08:24.542400   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | Getting to WaitForSSH function...
	I0920 18:08:24.542432   68107 main.go:141] libmachine: (old-k8s-version-744025) Reserved static IP address: 192.168.39.207
	I0920 18:08:24.542446   68107 main.go:141] libmachine: (old-k8s-version-744025) Waiting for SSH to be available...
	I0920 18:08:24.546074   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:08:24.546393   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025
	I0920 18:08:24.546417   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find defined IP address of network mk-old-k8s-version-744025 interface with MAC address 52:54:00:e5:57:41
	I0920 18:08:24.546585   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | Using SSH client type: external
	I0920 18:08:24.546610   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | Using SSH private key: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/old-k8s-version-744025/id_rsa (-rw-------)
	I0920 18:08:24.546679   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19672-8777/.minikube/machines/old-k8s-version-744025/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 18:08:24.546694   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | About to run SSH command:
	I0920 18:08:24.546716   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | exit 0
	I0920 18:08:24.551728   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | SSH cmd err, output: exit status 255: 
	I0920 18:08:24.551758   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0920 18:08:24.551768   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | command : exit 0
	I0920 18:08:24.551776   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | err     : exit status 255
	I0920 18:08:24.551786   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | output  : 
	I0920 18:08:27.552598   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | Getting to WaitForSSH function...
	I0920 18:08:27.555491   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:08:27.555924   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:08:17 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:08:27.555947   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:08:27.556084   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | Using SSH client type: external
	I0920 18:08:27.556126   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | Using SSH private key: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/old-k8s-version-744025/id_rsa (-rw-------)
	I0920 18:08:27.556174   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.207 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19672-8777/.minikube/machines/old-k8s-version-744025/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 18:08:27.556189   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | About to run SSH command:
	I0920 18:08:27.556202   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | exit 0
	I0920 18:08:27.685987   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | SSH cmd err, output: <nil>: 
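
The WaitForSSH step above shells out to the system ssh binary with a fixed set of -o options and runs "exit 0" against the guest until the command returns cleanly (the first attempt fails with exit status 255 because the interface has no DHCP lease yet). Below is a stripped-down sketch of that probe using os/exec; the probeSSH function is illustrative, with the host and key path simply copied from the log:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // probeSSH runs "exit 0" on the remote host and reports whether it succeeded.
    func probeSSH(host, keyPath string) bool {
        args := []string{
            "-F", "/dev/null",
            "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "IdentitiesOnly=yes",
            "-i", keyPath,
            "-p", "22",
            "docker@" + host,
            "exit 0",
        }
        return exec.Command("/usr/bin/ssh", args...).Run() == nil
    }

    func main() {
        host := "192.168.39.207"
        key := "/home/jenkins/minikube-integration/19672-8777/.minikube/machines/old-k8s-version-744025/id_rsa"
        for !probeSSH(host, key) {
            fmt.Println("ssh not ready, retrying in 3s")
            time.Sleep(3 * time.Second)
        }
        fmt.Println("SSH is available")
    }
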
	I0920 18:08:27.686194   68107 main.go:141] libmachine: (old-k8s-version-744025) KVM machine creation complete!
	I0920 18:08:27.686525   68107 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetConfigRaw
	I0920 18:08:27.687088   68107 main.go:141] libmachine: (old-k8s-version-744025) Calling .DriverName
	I0920 18:08:27.687317   68107 main.go:141] libmachine: (old-k8s-version-744025) Calling .DriverName
	I0920 18:08:27.687446   68107 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0920 18:08:27.687471   68107 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetState
	I0920 18:08:27.688851   68107 main.go:141] libmachine: Detecting operating system of created instance...
	I0920 18:08:27.688863   68107 main.go:141] libmachine: Waiting for SSH to be available...
	I0920 18:08:27.688868   68107 main.go:141] libmachine: Getting to WaitForSSH function...
	I0920 18:08:27.688873   68107 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHHostname
	I0920 18:08:27.691501   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:08:27.691950   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:08:17 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:08:27.691980   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:08:27.692099   68107 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHPort
	I0920 18:08:27.692301   68107 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:08:27.692510   68107 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:08:27.692627   68107 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHUsername
	I0920 18:08:27.692786   68107 main.go:141] libmachine: Using SSH client type: native
	I0920 18:08:27.693017   68107 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.207 22 <nil> <nil>}
	I0920 18:08:27.693036   68107 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0920 18:08:27.797463   68107 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 18:08:27.797487   68107 main.go:141] libmachine: Detecting the provisioner...
	I0920 18:08:27.797499   68107 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHHostname
	I0920 18:08:27.800583   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:08:27.800981   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:08:17 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:08:27.801020   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:08:27.801195   68107 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHPort
	I0920 18:08:27.801383   68107 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:08:27.801578   68107 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:08:27.801737   68107 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHUsername
	I0920 18:08:27.801911   68107 main.go:141] libmachine: Using SSH client type: native
	I0920 18:08:27.802141   68107 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.207 22 <nil> <nil>}
	I0920 18:08:27.802158   68107 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0920 18:08:27.915046   68107 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0920 18:08:27.915130   68107 main.go:141] libmachine: found compatible host: buildroot
	I0920 18:08:27.915140   68107 main.go:141] libmachine: Provisioning with buildroot...
	I0920 18:08:27.915148   68107 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetMachineName
	I0920 18:08:27.915372   68107 buildroot.go:166] provisioning hostname "old-k8s-version-744025"
	I0920 18:08:27.915396   68107 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetMachineName
	I0920 18:08:27.915597   68107 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHHostname
	I0920 18:08:27.918657   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:08:27.919253   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:08:17 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:08:27.919282   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:08:27.919442   68107 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHPort
	I0920 18:08:27.919655   68107 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:08:27.919833   68107 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:08:27.919981   68107 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHUsername
	I0920 18:08:27.920175   68107 main.go:141] libmachine: Using SSH client type: native
	I0920 18:08:27.920381   68107 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.207 22 <nil> <nil>}
	I0920 18:08:27.920398   68107 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-744025 && echo "old-k8s-version-744025" | sudo tee /etc/hostname
	I0920 18:08:28.045120   68107 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-744025
	
	I0920 18:08:28.045156   68107 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHHostname
	I0920 18:08:28.048091   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:08:28.048445   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:08:17 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:08:28.048466   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:08:28.048700   68107 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHPort
	I0920 18:08:28.048889   68107 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:08:28.049058   68107 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:08:28.049213   68107 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHUsername
	I0920 18:08:28.049395   68107 main.go:141] libmachine: Using SSH client type: native
	I0920 18:08:28.049574   68107 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.207 22 <nil> <nil>}
	I0920 18:08:28.049596   68107 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-744025' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-744025/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-744025' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 18:08:28.172049   68107 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 18:08:28.172082   68107 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19672-8777/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-8777/.minikube}
	I0920 18:08:28.172142   68107 buildroot.go:174] setting up certificates
	I0920 18:08:28.172156   68107 provision.go:84] configureAuth start
	I0920 18:08:28.172167   68107 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetMachineName
	I0920 18:08:28.172417   68107 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetIP
	I0920 18:08:28.175092   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:08:28.175514   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:08:17 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:08:28.175541   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:08:28.175665   68107 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHHostname
	I0920 18:08:28.178389   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:08:28.178731   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:08:17 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:08:28.178756   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:08:28.178968   68107 provision.go:143] copyHostCerts
	I0920 18:08:28.179043   68107 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem, removing ...
	I0920 18:08:28.179062   68107 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem
	I0920 18:08:28.179135   68107 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem (1123 bytes)
	I0920 18:08:28.179276   68107 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem, removing ...
	I0920 18:08:28.179285   68107 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem
	I0920 18:08:28.179315   68107 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem (1675 bytes)
	I0920 18:08:28.179420   68107 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem, removing ...
	I0920 18:08:28.179430   68107 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem
	I0920 18:08:28.179463   68107 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem (1082 bytes)
	I0920 18:08:28.179542   68107 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-744025 san=[127.0.0.1 192.168.39.207 localhost minikube old-k8s-version-744025]
	I0920 18:08:28.512200   68107 provision.go:177] copyRemoteCerts
	I0920 18:08:28.512258   68107 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 18:08:28.512280   68107 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHHostname
	I0920 18:08:28.515009   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:08:28.515415   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:08:17 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:08:28.515444   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:08:28.515639   68107 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHPort
	I0920 18:08:28.515865   68107 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:08:28.516141   68107 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHUsername
	I0920 18:08:28.516315   68107 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/old-k8s-version-744025/id_rsa Username:docker}
	I0920 18:08:28.600101   68107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0920 18:08:28.628168   68107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0920 18:08:28.654554   68107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0920 18:08:28.681672   68107 provision.go:87] duration metric: took 509.504039ms to configureAuth
	I0920 18:08:28.681705   68107 buildroot.go:189] setting minikube options for container-runtime
	I0920 18:08:28.681918   68107 config.go:182] Loaded profile config "old-k8s-version-744025": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0920 18:08:28.681989   68107 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHHostname
	I0920 18:08:28.684885   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:08:28.685377   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:08:17 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:08:28.685410   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:08:28.685595   68107 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHPort
	I0920 18:08:28.685792   68107 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:08:28.685956   68107 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:08:28.686106   68107 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHUsername
	I0920 18:08:28.686268   68107 main.go:141] libmachine: Using SSH client type: native
	I0920 18:08:28.686455   68107 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.207 22 <nil> <nil>}
	I0920 18:08:28.686477   68107 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 18:08:28.945781   68107 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 18:08:28.945807   68107 main.go:141] libmachine: Checking connection to Docker...
	I0920 18:08:28.945816   68107 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetURL
	I0920 18:08:28.947093   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | Using libvirt version 6000000
	I0920 18:08:28.949507   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:08:28.949941   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:08:17 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:08:28.949969   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:08:28.950131   68107 main.go:141] libmachine: Docker is up and running!
	I0920 18:08:28.950146   68107 main.go:141] libmachine: Reticulating splines...
	I0920 18:08:28.950154   68107 client.go:171] duration metric: took 28.210101936s to LocalClient.Create
	I0920 18:08:28.950182   68107 start.go:167] duration metric: took 28.210177602s to libmachine.API.Create "old-k8s-version-744025"
	I0920 18:08:28.950190   68107 start.go:293] postStartSetup for "old-k8s-version-744025" (driver="kvm2")
	I0920 18:08:28.950199   68107 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 18:08:28.950225   68107 main.go:141] libmachine: (old-k8s-version-744025) Calling .DriverName
	I0920 18:08:28.950440   68107 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 18:08:28.950467   68107 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHHostname
	I0920 18:08:28.952521   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:08:28.952894   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:08:17 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:08:28.952928   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:08:28.953042   68107 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHPort
	I0920 18:08:28.953188   68107 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:08:28.953362   68107 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHUsername
	I0920 18:08:28.953498   68107 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/old-k8s-version-744025/id_rsa Username:docker}
	I0920 18:08:29.042049   68107 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 18:08:29.046985   68107 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 18:08:29.047012   68107 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8777/.minikube/addons for local assets ...
	I0920 18:08:29.047067   68107 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8777/.minikube/files for local assets ...
	I0920 18:08:29.047153   68107 filesync.go:149] local asset: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem -> 159732.pem in /etc/ssl/certs
	I0920 18:08:29.047247   68107 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 18:08:29.057309   68107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem --> /etc/ssl/certs/159732.pem (1708 bytes)
	I0920 18:08:29.084017   68107 start.go:296] duration metric: took 133.813095ms for postStartSetup
	I0920 18:08:29.084078   68107 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetConfigRaw
	I0920 18:08:29.084688   68107 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetIP
	I0920 18:08:29.087621   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:08:29.088043   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:08:17 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:08:29.088066   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:08:29.088315   68107 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/old-k8s-version-744025/config.json ...
	I0920 18:08:29.088548   68107 start.go:128] duration metric: took 28.373697144s to createHost
	I0920 18:08:29.088579   68107 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHHostname
	I0920 18:08:29.090868   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:08:29.091152   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:08:17 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:08:29.091180   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:08:29.091318   68107 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHPort
	I0920 18:08:29.091481   68107 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:08:29.091607   68107 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:08:29.091784   68107 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHUsername
	I0920 18:08:29.091913   68107 main.go:141] libmachine: Using SSH client type: native
	I0920 18:08:29.092065   68107 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.207 22 <nil> <nil>}
	I0920 18:08:29.092070   68107 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 18:08:29.202587   68107 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726855709.188676025
	
	I0920 18:08:29.202614   68107 fix.go:216] guest clock: 1726855709.188676025
	I0920 18:08:29.202624   68107 fix.go:229] Guest: 2024-09-20 18:08:29.188676025 +0000 UTC Remote: 2024-09-20 18:08:29.088568596 +0000 UTC m=+36.902191149 (delta=100.107429ms)
	I0920 18:08:29.202661   68107 fix.go:200] guest clock delta is within tolerance: 100.107429ms
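
The clock check above runs date +%s.%N on the guest over SSH and compares the result with the host's wall clock, accepting a small skew (here about 100ms) before deciding whether the guest clock needs resetting. A tiny illustrative comparison follows; the one-second tolerance is an assumption for the sketch, not necessarily minikube's exact threshold:

    package main

    import (
        "fmt"
        "math"
        "strconv"
        "time"
    )

    func main() {
        // Output of `date +%s.%N` on the guest, taken from the log above.
        guestRaw := "1726855709.188676025"
        secs, err := strconv.ParseFloat(guestRaw, 64)
        if err != nil {
            panic(err)
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))

        host := time.Now()
        delta := host.Sub(guest)

        // Accept up to one second of skew (an arbitrary tolerance for this sketch).
        if math.Abs(delta.Seconds()) <= 1.0 {
            fmt.Printf("guest clock delta %v is within tolerance\n", delta)
        } else {
            fmt.Printf("guest clock delta %v is too large, would resync\n", delta)
        }
    }
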
	I0920 18:08:29.202669   68107 start.go:83] releasing machines lock for "old-k8s-version-744025", held for 28.488006746s
	I0920 18:08:29.202694   68107 main.go:141] libmachine: (old-k8s-version-744025) Calling .DriverName
	I0920 18:08:29.203036   68107 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetIP
	I0920 18:08:29.206028   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:08:29.206419   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:08:17 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:08:29.206468   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:08:29.206585   68107 main.go:141] libmachine: (old-k8s-version-744025) Calling .DriverName
	I0920 18:08:29.207102   68107 main.go:141] libmachine: (old-k8s-version-744025) Calling .DriverName
	I0920 18:08:29.207287   68107 main.go:141] libmachine: (old-k8s-version-744025) Calling .DriverName
	I0920 18:08:29.207374   68107 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 18:08:29.207420   68107 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHHostname
	I0920 18:08:29.207571   68107 ssh_runner.go:195] Run: cat /version.json
	I0920 18:08:29.207599   68107 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHHostname
	I0920 18:08:29.210232   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:08:29.210732   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:08:17 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:08:29.210762   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:08:29.214114   68107 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHPort
	I0920 18:08:29.214308   68107 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:08:29.214469   68107 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHUsername
	I0920 18:08:29.214640   68107 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/old-k8s-version-744025/id_rsa Username:docker}
	I0920 18:08:29.220135   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:08:29.220552   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:08:17 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:08:29.220581   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:08:29.220796   68107 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHPort
	I0920 18:08:29.220984   68107 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:08:29.221139   68107 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHUsername
	I0920 18:08:29.221284   68107 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/old-k8s-version-744025/id_rsa Username:docker}
	I0920 18:08:29.332787   68107 ssh_runner.go:195] Run: systemctl --version
	I0920 18:08:29.340942   68107 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 18:08:29.501080   68107 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 18:08:29.509173   68107 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 18:08:29.509246   68107 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 18:08:29.527367   68107 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 18:08:29.527387   68107 start.go:495] detecting cgroup driver to use...
	I0920 18:08:29.527438   68107 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 18:08:29.546450   68107 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 18:08:29.562332   68107 docker.go:217] disabling cri-docker service (if available) ...
	I0920 18:08:29.562396   68107 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 18:08:29.577973   68107 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 18:08:29.593106   68107 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 18:08:29.749167   68107 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 18:08:29.953364   68107 docker.go:233] disabling docker service ...
	I0920 18:08:29.953442   68107 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 18:08:29.971446   68107 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 18:08:29.991048   68107 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 18:08:30.165349   68107 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 18:08:30.324854   68107 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 18:08:30.340622   68107 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 18:08:30.361143   68107 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0920 18:08:30.361216   68107 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:08:30.373611   68107 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 18:08:30.373694   68107 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:08:30.385148   68107 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:08:30.396278   68107 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:08:30.408942   68107 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 18:08:30.422493   68107 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 18:08:30.433938   68107 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 18:08:30.433998   68107 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 18:08:30.456521   68107 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 18:08:30.475039   68107 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:08:30.604060   68107 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 18:08:30.723220   68107 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 18:08:30.723276   68107 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 18:08:30.728409   68107 start.go:563] Will wait 60s for crictl version
	I0920 18:08:30.728475   68107 ssh_runner.go:195] Run: which crictl
	I0920 18:08:30.732890   68107 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 18:08:30.771272   68107 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 18:08:30.771368   68107 ssh_runner.go:195] Run: crio --version
	I0920 18:08:30.802380   68107 ssh_runner.go:195] Run: crio --version
	I0920 18:08:30.833266   68107 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0920 18:08:30.834310   68107 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetIP
	I0920 18:08:30.837039   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:08:30.837399   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:08:17 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:08:30.837438   68107 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:08:30.837675   68107 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0920 18:08:30.841962   68107 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 18:08:30.855121   68107 kubeadm.go:883] updating cluster {Name:old-k8s-version-744025 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-744025 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.207 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 18:08:30.855233   68107 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0920 18:08:30.855306   68107 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:08:30.892870   68107 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0920 18:08:30.892969   68107 ssh_runner.go:195] Run: which lz4
	I0920 18:08:30.897177   68107 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 18:08:30.902074   68107 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 18:08:30.902104   68107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0920 18:08:32.684139   68107 crio.go:462] duration metric: took 1.786999535s to copy over tarball
	I0920 18:08:32.684228   68107 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0920 18:08:35.912536   68107 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.228274171s)
	I0920 18:08:35.912559   68107 crio.go:469] duration metric: took 3.228392097s to extract the tarball
	I0920 18:08:35.912571   68107 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0920 18:08:35.959829   68107 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:08:36.117687   68107 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0920 18:08:36.117714   68107 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0920 18:08:36.117804   68107 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:08:36.117830   68107 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0920 18:08:36.117860   68107 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0920 18:08:36.117854   68107 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0920 18:08:36.117878   68107 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0920 18:08:36.117814   68107 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 18:08:36.117805   68107 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0920 18:08:36.117805   68107 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0920 18:08:36.119241   68107 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0920 18:08:36.119399   68107 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0920 18:08:36.119602   68107 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:08:36.119966   68107 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0920 18:08:36.119983   68107 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0920 18:08:36.120006   68107 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 18:08:36.120044   68107 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0920 18:08:36.119966   68107 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0920 18:08:36.366624   68107 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0920 18:08:36.412948   68107 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0920 18:08:36.413005   68107 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0920 18:08:36.413054   68107 ssh_runner.go:195] Run: which crictl
	I0920 18:08:36.417322   68107 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0920 18:08:36.441425   68107 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 18:08:36.444029   68107 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0920 18:08:36.444868   68107 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0920 18:08:36.455229   68107 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0920 18:08:36.462739   68107 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0920 18:08:36.482385   68107 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0920 18:08:36.510692   68107 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0920 18:08:36.636058   68107 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0920 18:08:36.636104   68107 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 18:08:36.636132   68107 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0920 18:08:36.636167   68107 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0920 18:08:36.636183   68107 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0920 18:08:36.636196   68107 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0920 18:08:36.636218   68107 ssh_runner.go:195] Run: which crictl
	I0920 18:08:36.636232   68107 ssh_runner.go:195] Run: which crictl
	I0920 18:08:36.636257   68107 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0920 18:08:36.636150   68107 ssh_runner.go:195] Run: which crictl
	I0920 18:08:36.636288   68107 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0920 18:08:36.636308   68107 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0920 18:08:36.636332   68107 ssh_runner.go:195] Run: which crictl
	I0920 18:08:36.660197   68107 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0920 18:08:36.660254   68107 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0920 18:08:36.660263   68107 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0920 18:08:36.660295   68107 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0920 18:08:36.660317   68107 ssh_runner.go:195] Run: which crictl
	I0920 18:08:36.660342   68107 ssh_runner.go:195] Run: which crictl
	I0920 18:08:36.686764   68107 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0920 18:08:36.686821   68107 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0920 18:08:36.686847   68107 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0920 18:08:36.686935   68107 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 18:08:36.686949   68107 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0920 18:08:36.686986   68107 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0920 18:08:36.687002   68107 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0920 18:08:36.825614   68107 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0920 18:08:36.825681   68107 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 18:08:36.825705   68107 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0920 18:08:36.825804   68107 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0920 18:08:36.825963   68107 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0920 18:08:36.830070   68107 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0920 18:08:36.972774   68107 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0920 18:08:37.008642   68107 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0920 18:08:37.008746   68107 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0920 18:08:37.008644   68107 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 18:08:37.008697   68107 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0920 18:08:37.008908   68107 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0920 18:08:37.039202   68107 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0920 18:08:37.114037   68107 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0920 18:08:37.129973   68107 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0920 18:08:37.137502   68107 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0920 18:08:37.137769   68107 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0920 18:08:37.142681   68107 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0920 18:08:37.376052   68107 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:08:37.522406   68107 cache_images.go:92] duration metric: took 1.404671469s to LoadCachedImages
	W0920 18:08:37.522492   68107 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0920 18:08:37.522511   68107 kubeadm.go:934] updating node { 192.168.39.207 8443 v1.20.0 crio true true} ...
	I0920 18:08:37.522626   68107 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-744025 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.207
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-744025 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 18:08:37.522687   68107 ssh_runner.go:195] Run: crio config
	I0920 18:08:37.576585   68107 cni.go:84] Creating CNI manager for ""
	I0920 18:08:37.576613   68107 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 18:08:37.576625   68107 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 18:08:37.576648   68107 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.207 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-744025 NodeName:old-k8s-version-744025 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.207"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.207 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0920 18:08:37.576823   68107 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.207
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-744025"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.207
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.207"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 18:08:37.576908   68107 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0920 18:08:37.588487   68107 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 18:08:37.588580   68107 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 18:08:37.599040   68107 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0920 18:08:37.618476   68107 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 18:08:37.639363   68107 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0920 18:08:37.667266   68107 ssh_runner.go:195] Run: grep 192.168.39.207	control-plane.minikube.internal$ /etc/hosts
	I0920 18:08:37.675181   68107 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.207	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 18:08:37.694240   68107 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:08:37.867643   68107 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:08:37.892246   68107 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/old-k8s-version-744025 for IP: 192.168.39.207
	I0920 18:08:37.892265   68107 certs.go:194] generating shared ca certs ...
	I0920 18:08:37.892280   68107 certs.go:226] acquiring lock for ca certs: {Name:mkc7ef6c737c6bdc3fdd9dcff8f57029c020d8f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:08:37.892433   68107 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key
	I0920 18:08:37.892504   68107 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key
	I0920 18:08:37.892516   68107 certs.go:256] generating profile certs ...
	I0920 18:08:37.892588   68107 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/old-k8s-version-744025/client.key
	I0920 18:08:37.892606   68107 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/old-k8s-version-744025/client.crt with IP's: []
	I0920 18:08:38.124952   68107 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/old-k8s-version-744025/client.crt ...
	I0920 18:08:38.124994   68107 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/old-k8s-version-744025/client.crt: {Name:mked2213d0697b94174256667a4e94da5a2c8a92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:08:38.125339   68107 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/old-k8s-version-744025/client.key ...
	I0920 18:08:38.125376   68107 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/old-k8s-version-744025/client.key: {Name:mk65eb0c240ebd1aec10f5dc62413d0b006298d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:08:38.125523   68107 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/old-k8s-version-744025/apiserver.key.3105b99d
	I0920 18:08:38.125549   68107 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/old-k8s-version-744025/apiserver.crt.3105b99d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.207]
	I0920 18:08:38.242420   68107 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/old-k8s-version-744025/apiserver.crt.3105b99d ...
	I0920 18:08:38.242470   68107 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/old-k8s-version-744025/apiserver.crt.3105b99d: {Name:mkf44f7d93f09e9108cf48b884f9bf2605366c21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:08:38.242743   68107 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/old-k8s-version-744025/apiserver.key.3105b99d ...
	I0920 18:08:38.242765   68107 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/old-k8s-version-744025/apiserver.key.3105b99d: {Name:mkbd9ff14fe5e7fde002b982965fd0dc1df71095 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:08:38.242865   68107 certs.go:381] copying /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/old-k8s-version-744025/apiserver.crt.3105b99d -> /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/old-k8s-version-744025/apiserver.crt
	I0920 18:08:38.242932   68107 certs.go:385] copying /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/old-k8s-version-744025/apiserver.key.3105b99d -> /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/old-k8s-version-744025/apiserver.key
	I0920 18:08:38.242986   68107 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/old-k8s-version-744025/proxy-client.key
	I0920 18:08:38.243004   68107 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/old-k8s-version-744025/proxy-client.crt with IP's: []
	I0920 18:08:38.357765   68107 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/old-k8s-version-744025/proxy-client.crt ...
	I0920 18:08:38.357811   68107 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/old-k8s-version-744025/proxy-client.crt: {Name:mk2d0f00e42d451c568da7c65958f4b4da51d102 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:08:38.358099   68107 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/old-k8s-version-744025/proxy-client.key ...
	I0920 18:08:38.358122   68107 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/old-k8s-version-744025/proxy-client.key: {Name:mkd23143bd5d9aef58152b199a90f4853a04f875 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:08:38.358418   68107 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973.pem (1338 bytes)
	W0920 18:08:38.358471   68107 certs.go:480] ignoring /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973_empty.pem, impossibly tiny 0 bytes
	I0920 18:08:38.358486   68107 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 18:08:38.358517   68107 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem (1082 bytes)
	I0920 18:08:38.358549   68107 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem (1123 bytes)
	I0920 18:08:38.358579   68107 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem (1675 bytes)
	I0920 18:08:38.358632   68107 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem (1708 bytes)
	I0920 18:08:38.359573   68107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 18:08:38.390142   68107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 18:08:38.415440   68107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 18:08:38.444294   68107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0920 18:08:38.475543   68107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/old-k8s-version-744025/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0920 18:08:38.503851   68107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/old-k8s-version-744025/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0920 18:08:38.533592   68107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/old-k8s-version-744025/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 18:08:38.562021   68107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/old-k8s-version-744025/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 18:08:38.589737   68107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 18:08:38.617322   68107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973.pem --> /usr/share/ca-certificates/15973.pem (1338 bytes)
	I0920 18:08:38.643804   68107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem --> /usr/share/ca-certificates/159732.pem (1708 bytes)
	I0920 18:08:38.668703   68107 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 18:08:38.686615   68107 ssh_runner.go:195] Run: openssl version
	I0920 18:08:38.692555   68107 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 18:08:38.702863   68107 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:08:38.707534   68107 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:08:38.707583   68107 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:08:38.713323   68107 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 18:08:38.725327   68107 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15973.pem && ln -fs /usr/share/ca-certificates/15973.pem /etc/ssl/certs/15973.pem"
	I0920 18:08:38.737273   68107 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15973.pem
	I0920 18:08:38.742658   68107 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 17:04 /usr/share/ca-certificates/15973.pem
	I0920 18:08:38.742722   68107 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15973.pem
	I0920 18:08:38.748745   68107 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15973.pem /etc/ssl/certs/51391683.0"
	I0920 18:08:38.762654   68107 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/159732.pem && ln -fs /usr/share/ca-certificates/159732.pem /etc/ssl/certs/159732.pem"
	I0920 18:08:38.774745   68107 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/159732.pem
	I0920 18:08:38.779404   68107 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 17:04 /usr/share/ca-certificates/159732.pem
	I0920 18:08:38.779461   68107 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/159732.pem
	I0920 18:08:38.785425   68107 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/159732.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 18:08:38.796660   68107 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 18:08:38.800751   68107 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0920 18:08:38.800812   68107 kubeadm.go:392] StartCluster: {Name:old-k8s-version-744025 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-744025 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.207 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:08:38.800920   68107 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 18:08:38.800971   68107 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 18:08:38.848846   68107 cri.go:89] found id: ""
	I0920 18:08:38.848922   68107 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 18:08:38.860694   68107 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 18:08:38.871719   68107 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 18:08:38.882959   68107 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 18:08:38.882979   68107 kubeadm.go:157] found existing configuration files:
	
	I0920 18:08:38.883033   68107 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 18:08:38.893714   68107 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 18:08:38.893789   68107 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 18:08:38.904055   68107 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 18:08:38.915573   68107 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 18:08:38.915651   68107 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 18:08:38.926241   68107 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 18:08:38.940262   68107 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 18:08:38.940334   68107 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 18:08:38.951999   68107 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 18:08:38.963597   68107 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 18:08:38.963670   68107 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 18:08:38.977754   68107 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 18:08:39.116196   68107 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0920 18:08:39.116272   68107 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 18:08:39.273231   68107 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 18:08:39.273387   68107 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 18:08:39.273531   68107 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0920 18:08:39.475010   68107 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 18:08:39.477944   68107 out.go:235]   - Generating certificates and keys ...
	I0920 18:08:39.478045   68107 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 18:08:39.478126   68107 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 18:08:39.570238   68107 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0920 18:08:39.731375   68107 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0920 18:08:39.925125   68107 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0920 18:08:40.081817   68107 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0920 18:08:40.292253   68107 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0920 18:08:40.292503   68107 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-744025] and IPs [192.168.39.207 127.0.0.1 ::1]
	I0920 18:08:40.417567   68107 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0920 18:08:40.417748   68107 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-744025] and IPs [192.168.39.207 127.0.0.1 ::1]
	I0920 18:08:40.600720   68107 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0920 18:08:40.683240   68107 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0920 18:08:41.029081   68107 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0920 18:08:41.030728   68107 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 18:08:41.110986   68107 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 18:08:41.328529   68107 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 18:08:41.490462   68107 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 18:08:41.709825   68107 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 18:08:41.730234   68107 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 18:08:41.731749   68107 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 18:08:41.731811   68107 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 18:08:41.871000   68107 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 18:08:41.872894   68107 out.go:235]   - Booting up control plane ...
	I0920 18:08:41.873018   68107 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 18:08:41.883333   68107 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 18:08:41.884549   68107 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 18:08:41.885395   68107 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 18:08:41.890353   68107 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0920 18:09:21.884708   68107 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0920 18:09:21.885162   68107 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:09:21.885425   68107 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:09:26.885869   68107 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:09:26.886070   68107 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:09:36.886548   68107 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:09:36.886801   68107 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:09:56.887660   68107 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:09:56.887890   68107 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:10:36.891262   68107 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:10:36.891564   68107 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:10:36.891591   68107 kubeadm.go:310] 
	I0920 18:10:36.891659   68107 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0920 18:10:36.891714   68107 kubeadm.go:310] 		timed out waiting for the condition
	I0920 18:10:36.891725   68107 kubeadm.go:310] 
	I0920 18:10:36.891775   68107 kubeadm.go:310] 	This error is likely caused by:
	I0920 18:10:36.891833   68107 kubeadm.go:310] 		- The kubelet is not running
	I0920 18:10:36.891981   68107 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0920 18:10:36.891992   68107 kubeadm.go:310] 
	I0920 18:10:36.892128   68107 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0920 18:10:36.892177   68107 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0920 18:10:36.892419   68107 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0920 18:10:36.892434   68107 kubeadm.go:310] 
	I0920 18:10:36.892579   68107 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0920 18:10:36.892698   68107 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0920 18:10:36.892711   68107 kubeadm.go:310] 
	I0920 18:10:36.892959   68107 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0920 18:10:36.893079   68107 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0920 18:10:36.893187   68107 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0920 18:10:36.893277   68107 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0920 18:10:36.893289   68107 kubeadm.go:310] 
	I0920 18:10:36.893565   68107 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 18:10:36.893695   68107 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0920 18:10:36.893807   68107 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0920 18:10:36.893963   68107 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-744025] and IPs [192.168.39.207 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-744025] and IPs [192.168.39.207 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-744025] and IPs [192.168.39.207 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-744025] and IPs [192.168.39.207 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0920 18:10:36.894021   68107 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0920 18:10:37.588688   68107 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:10:37.604600   68107 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 18:10:37.615869   68107 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 18:10:37.615897   68107 kubeadm.go:157] found existing configuration files:
	
	I0920 18:10:37.615964   68107 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 18:10:37.626773   68107 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 18:10:37.626867   68107 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 18:10:37.637097   68107 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 18:10:37.647090   68107 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 18:10:37.647149   68107 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 18:10:37.658217   68107 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 18:10:37.669249   68107 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 18:10:37.669320   68107 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 18:10:37.680303   68107 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 18:10:37.690079   68107 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 18:10:37.690151   68107 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 18:10:37.700409   68107 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 18:10:37.777186   68107 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0920 18:10:37.777305   68107 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 18:10:37.930742   68107 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 18:10:37.930901   68107 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 18:10:37.931016   68107 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0920 18:10:38.128283   68107 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 18:10:38.130192   68107 out.go:235]   - Generating certificates and keys ...
	I0920 18:10:38.130309   68107 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 18:10:38.130402   68107 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 18:10:38.130540   68107 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0920 18:10:38.130638   68107 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0920 18:10:38.130731   68107 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0920 18:10:38.130807   68107 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0920 18:10:38.130898   68107 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0920 18:10:38.131280   68107 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0920 18:10:38.131772   68107 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0920 18:10:38.132096   68107 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0920 18:10:38.132167   68107 kubeadm.go:310] [certs] Using the existing "sa" key
	I0920 18:10:38.132259   68107 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 18:10:38.306799   68107 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 18:10:38.483123   68107 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 18:10:38.576487   68107 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 18:10:38.754374   68107 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 18:10:38.775109   68107 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 18:10:38.776208   68107 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 18:10:38.776297   68107 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 18:10:38.939188   68107 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 18:10:38.940965   68107 out.go:235]   - Booting up control plane ...
	I0920 18:10:38.941091   68107 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 18:10:38.946126   68107 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 18:10:38.947314   68107 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 18:10:38.948252   68107 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 18:10:38.964237   68107 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0920 18:11:18.967209   68107 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0920 18:11:18.967621   68107 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:11:18.967905   68107 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:11:23.968401   68107 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:11:23.968633   68107 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:11:33.969437   68107 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:11:33.969635   68107 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:11:53.970787   68107 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:11:53.971018   68107 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:12:33.970548   68107 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:12:33.970780   68107 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:12:33.970790   68107 kubeadm.go:310] 
	I0920 18:12:33.970826   68107 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0920 18:12:33.970893   68107 kubeadm.go:310] 		timed out waiting for the condition
	I0920 18:12:33.970915   68107 kubeadm.go:310] 
	I0920 18:12:33.970964   68107 kubeadm.go:310] 	This error is likely caused by:
	I0920 18:12:33.971015   68107 kubeadm.go:310] 		- The kubelet is not running
	I0920 18:12:33.971164   68107 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0920 18:12:33.971175   68107 kubeadm.go:310] 
	I0920 18:12:33.971261   68107 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0920 18:12:33.971296   68107 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0920 18:12:33.971325   68107 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0920 18:12:33.971332   68107 kubeadm.go:310] 
	I0920 18:12:33.971419   68107 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0920 18:12:33.971492   68107 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0920 18:12:33.971499   68107 kubeadm.go:310] 
	I0920 18:12:33.971610   68107 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0920 18:12:33.971686   68107 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0920 18:12:33.971810   68107 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0920 18:12:33.971966   68107 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0920 18:12:33.971990   68107 kubeadm.go:310] 
	I0920 18:12:33.972984   68107 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 18:12:33.973065   68107 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0920 18:12:33.973122   68107 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0920 18:12:33.973199   68107 kubeadm.go:394] duration metric: took 3m55.172393049s to StartCluster
	I0920 18:12:33.973260   68107 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:12:33.973317   68107 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:12:34.017922   68107 cri.go:89] found id: ""
	I0920 18:12:34.017949   68107 logs.go:276] 0 containers: []
	W0920 18:12:34.017961   68107 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:12:34.017971   68107 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:12:34.018029   68107 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:12:34.051472   68107 cri.go:89] found id: ""
	I0920 18:12:34.051499   68107 logs.go:276] 0 containers: []
	W0920 18:12:34.051507   68107 logs.go:278] No container was found matching "etcd"
	I0920 18:12:34.051514   68107 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:12:34.051571   68107 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:12:34.084877   68107 cri.go:89] found id: ""
	I0920 18:12:34.084902   68107 logs.go:276] 0 containers: []
	W0920 18:12:34.084912   68107 logs.go:278] No container was found matching "coredns"
	I0920 18:12:34.084919   68107 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:12:34.084979   68107 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:12:34.118862   68107 cri.go:89] found id: ""
	I0920 18:12:34.118889   68107 logs.go:276] 0 containers: []
	W0920 18:12:34.118898   68107 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:12:34.118904   68107 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:12:34.118963   68107 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:12:34.152506   68107 cri.go:89] found id: ""
	I0920 18:12:34.152534   68107 logs.go:276] 0 containers: []
	W0920 18:12:34.152542   68107 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:12:34.152547   68107 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:12:34.152606   68107 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:12:34.186897   68107 cri.go:89] found id: ""
	I0920 18:12:34.186926   68107 logs.go:276] 0 containers: []
	W0920 18:12:34.186934   68107 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:12:34.186941   68107 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:12:34.186988   68107 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:12:34.235985   68107 cri.go:89] found id: ""
	I0920 18:12:34.236016   68107 logs.go:276] 0 containers: []
	W0920 18:12:34.236027   68107 logs.go:278] No container was found matching "kindnet"
	I0920 18:12:34.236038   68107 logs.go:123] Gathering logs for kubelet ...
	I0920 18:12:34.236054   68107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:12:34.302847   68107 logs.go:123] Gathering logs for dmesg ...
	I0920 18:12:34.302884   68107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:12:34.319841   68107 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:12:34.319865   68107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:12:34.449400   68107 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:12:34.449428   68107 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:12:34.449448   68107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:12:34.565713   68107 logs.go:123] Gathering logs for container status ...
	I0920 18:12:34.565761   68107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0920 18:12:34.611138   68107 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0920 18:12:34.611185   68107 out.go:270] * 
	* 
	W0920 18:12:34.611229   68107 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0920 18:12:34.611245   68107 out.go:270] * 
	* 
	W0920 18:12:34.612024   68107 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 18:12:34.615031   68107 out.go:201] 
	W0920 18:12:34.616446   68107 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0920 18:12:34.616497   68107 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0920 18:12:34.616522   68107 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0920 18:12:34.618381   68107 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-744025 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-744025 -n old-k8s-version-744025
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-744025 -n old-k8s-version-744025: exit status 6 (226.792257ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0920 18:12:34.895514   74621 status.go:451] kubeconfig endpoint: get endpoint: "old-k8s-version-744025" does not appear in /home/jenkins/minikube-integration/19672-8777/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-744025" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (282.73s)
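The repeated [kubelet-check] failures above are a plain HTTP probe of the kubelet's /healthz endpoint on its default health port 10248; kubeadm keeps retrying it until the wait-control-plane phase times out. Below is a minimal, self-contained sketch of that kind of probe (illustrative only, not kubeadm's or minikube's actual code; the 2s per-request timeout and 5s retry interval are assumptions, while the URL and the overall give-up behaviour come from the log above):

	// healthz_probe.go - illustrative only; mirrors the kind of check the
	// [kubelet-check] lines describe: poll http://localhost:10248/healthz
	// until it answers 200 OK or a deadline expires.
	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	func waitForKubelet(url string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		client := &http.Client{Timeout: 2 * time.Second}
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // kubelet is serving /healthz
				}
			}
			// "connection refused" lands here while the kubelet is down.
			time.Sleep(5 * time.Second)
		}
		return fmt.Errorf("timed out waiting for %s", url)
	}

	func main() {
		// 10248 is the kubelet's default healthz port, as seen in the log above.
		if err := waitForKubelet("http://localhost:10248/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}

In the failing run every attempt returned "connection refused", so the loop exhausted its budget and kubeadm reported "timed out waiting for the condition".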

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (138.96s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-956403 --alsologtostderr -v=3
E0920 18:10:07.511283   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/auto-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:10:07.911113   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/kindnet-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:10:10.073524   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/auto-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:10:13.033176   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/kindnet-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:10:15.195211   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/auto-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:10:23.274848   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/kindnet-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:10:25.436973   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/auto-833505/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-956403 --alsologtostderr -v=3: exit status 82 (2m0.50718541s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-956403"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 18:10:06.302231   73761 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:10:06.302461   73761 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:10:06.302470   73761 out.go:358] Setting ErrFile to fd 2...
	I0920 18:10:06.302474   73761 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:10:06.302645   73761 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-8777/.minikube/bin
	I0920 18:10:06.302862   73761 out.go:352] Setting JSON to false
	I0920 18:10:06.302938   73761 mustload.go:65] Loading cluster: no-preload-956403
	I0920 18:10:06.303297   73761 config.go:182] Loaded profile config "no-preload-956403": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:10:06.303363   73761 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/no-preload-956403/config.json ...
	I0920 18:10:06.303526   73761 mustload.go:65] Loading cluster: no-preload-956403
	I0920 18:10:06.303619   73761 config.go:182] Loaded profile config "no-preload-956403": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:10:06.303647   73761 stop.go:39] StopHost: no-preload-956403
	I0920 18:10:06.303997   73761 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:10:06.304033   73761 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:10:06.318394   73761 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36877
	I0920 18:10:06.318915   73761 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:10:06.319604   73761 main.go:141] libmachine: Using API Version  1
	I0920 18:10:06.319634   73761 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:10:06.320003   73761 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:10:06.322330   73761 out.go:177] * Stopping node "no-preload-956403"  ...
	I0920 18:10:06.323561   73761 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0920 18:10:06.323590   73761 main.go:141] libmachine: (no-preload-956403) Calling .DriverName
	I0920 18:10:06.323814   73761 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0920 18:10:06.323850   73761 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHHostname
	I0920 18:10:06.327015   73761 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:10:06.327490   73761 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:08:52 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:10:06.327527   73761 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:10:06.327679   73761 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHPort
	I0920 18:10:06.327879   73761 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHKeyPath
	I0920 18:10:06.328044   73761 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHUsername
	I0920 18:10:06.328200   73761 sshutil.go:53] new ssh client: &{IP:192.168.50.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/no-preload-956403/id_rsa Username:docker}
	I0920 18:10:06.422079   73761 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0920 18:10:06.481153   73761 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0920 18:10:06.557335   73761 main.go:141] libmachine: Stopping "no-preload-956403"...
	I0920 18:10:06.557382   73761 main.go:141] libmachine: (no-preload-956403) Calling .GetState
	I0920 18:10:06.559115   73761 main.go:141] libmachine: (no-preload-956403) Calling .Stop
	I0920 18:10:06.563337   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 0/120
	I0920 18:10:07.564679   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 1/120
	I0920 18:10:08.566658   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 2/120
	I0920 18:10:09.568473   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 3/120
	I0920 18:10:10.570149   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 4/120
	I0920 18:10:11.572444   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 5/120
	I0920 18:10:12.574230   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 6/120
	I0920 18:10:13.576414   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 7/120
	I0920 18:10:14.578164   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 8/120
	I0920 18:10:15.579618   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 9/120
	I0920 18:10:16.580841   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 10/120
	I0920 18:10:17.582349   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 11/120
	I0920 18:10:18.584281   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 12/120
	I0920 18:10:19.586788   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 13/120
	I0920 18:10:20.588087   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 14/120
	I0920 18:10:21.590173   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 15/120
	I0920 18:10:22.591522   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 16/120
	I0920 18:10:23.592962   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 17/120
	I0920 18:10:24.594286   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 18/120
	I0920 18:10:25.596555   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 19/120
	I0920 18:10:26.598775   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 20/120
	I0920 18:10:27.599924   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 21/120
	I0920 18:10:28.601438   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 22/120
	I0920 18:10:29.603007   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 23/120
	I0920 18:10:30.604288   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 24/120
	I0920 18:10:31.606275   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 25/120
	I0920 18:10:32.607849   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 26/120
	I0920 18:10:33.609257   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 27/120
	I0920 18:10:34.610893   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 28/120
	I0920 18:10:35.612648   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 29/120
	I0920 18:10:36.614970   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 30/120
	I0920 18:10:37.616824   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 31/120
	I0920 18:10:38.618153   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 32/120
	I0920 18:10:39.620157   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 33/120
	I0920 18:10:40.621399   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 34/120
	I0920 18:10:41.623361   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 35/120
	I0920 18:10:42.624721   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 36/120
	I0920 18:10:43.626113   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 37/120
	I0920 18:10:44.627474   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 38/120
	I0920 18:10:45.629046   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 39/120
	I0920 18:10:46.631408   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 40/120
	I0920 18:10:47.633003   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 41/120
	I0920 18:10:48.634633   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 42/120
	I0920 18:10:49.636046   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 43/120
	I0920 18:10:50.637629   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 44/120
	I0920 18:10:51.639718   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 45/120
	I0920 18:10:52.641304   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 46/120
	I0920 18:10:53.642682   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 47/120
	I0920 18:10:54.644085   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 48/120
	I0920 18:10:55.645523   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 49/120
	I0920 18:10:56.648031   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 50/120
	I0920 18:10:57.649398   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 51/120
	I0920 18:10:58.650694   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 52/120
	I0920 18:10:59.652677   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 53/120
	I0920 18:11:00.654037   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 54/120
	I0920 18:11:01.656110   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 55/120
	I0920 18:11:02.657565   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 56/120
	I0920 18:11:03.659064   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 57/120
	I0920 18:11:04.660241   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 58/120
	I0920 18:11:05.661873   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 59/120
	I0920 18:11:06.663587   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 60/120
	I0920 18:11:07.665170   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 61/120
	I0920 18:11:08.667727   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 62/120
	I0920 18:11:09.669322   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 63/120
	I0920 18:11:10.670881   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 64/120
	I0920 18:11:11.672847   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 65/120
	I0920 18:11:12.674387   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 66/120
	I0920 18:11:13.676424   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 67/120
	I0920 18:11:14.678055   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 68/120
	I0920 18:11:15.680285   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 69/120
	I0920 18:11:16.682885   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 70/120
	I0920 18:11:17.684476   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 71/120
	I0920 18:11:18.685990   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 72/120
	I0920 18:11:19.687474   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 73/120
	I0920 18:11:20.688824   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 74/120
	I0920 18:11:21.690750   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 75/120
	I0920 18:11:22.692243   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 76/120
	I0920 18:11:23.693673   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 77/120
	I0920 18:11:24.695294   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 78/120
	I0920 18:11:25.696760   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 79/120
	I0920 18:11:26.698410   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 80/120
	I0920 18:11:27.699928   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 81/120
	I0920 18:11:28.701360   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 82/120
	I0920 18:11:29.702734   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 83/120
	I0920 18:11:30.704138   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 84/120
	I0920 18:11:31.706317   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 85/120
	I0920 18:11:32.707732   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 86/120
	I0920 18:11:33.709361   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 87/120
	I0920 18:11:34.710826   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 88/120
	I0920 18:11:35.712416   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 89/120
	I0920 18:11:36.714781   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 90/120
	I0920 18:11:37.716116   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 91/120
	I0920 18:11:38.717315   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 92/120
	I0920 18:11:39.718782   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 93/120
	I0920 18:11:40.720129   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 94/120
	I0920 18:11:41.722023   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 95/120
	I0920 18:11:42.723419   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 96/120
	I0920 18:11:43.724899   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 97/120
	I0920 18:11:44.726307   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 98/120
	I0920 18:11:45.727717   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 99/120
	I0920 18:11:46.729733   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 100/120
	I0920 18:11:47.731042   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 101/120
	I0920 18:11:48.732358   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 102/120
	I0920 18:11:49.733758   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 103/120
	I0920 18:11:50.735065   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 104/120
	I0920 18:11:51.737173   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 105/120
	I0920 18:11:52.738495   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 106/120
	I0920 18:11:53.739953   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 107/120
	I0920 18:11:54.741669   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 108/120
	I0920 18:11:55.742946   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 109/120
	I0920 18:11:56.744256   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 110/120
	I0920 18:11:57.745650   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 111/120
	I0920 18:11:58.746938   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 112/120
	I0920 18:11:59.748447   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 113/120
	I0920 18:12:00.749916   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 114/120
	I0920 18:12:01.751876   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 115/120
	I0920 18:12:02.753293   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 116/120
	I0920 18:12:03.754692   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 117/120
	I0920 18:12:04.756248   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 118/120
	I0920 18:12:05.757739   73761 main.go:141] libmachine: (no-preload-956403) Waiting for machine to stop 119/120
	I0920 18:12:06.758523   73761 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0920 18:12:06.758581   73761 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0920 18:12:06.760206   73761 out.go:201] 
	W0920 18:12:06.761369   73761 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0920 18:12:06.761397   73761 out.go:270] * 
	* 
	W0920 18:12:06.764253   73761 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 18:12:06.765510   73761 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-956403 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-956403 -n no-preload-956403
E0920 18:12:23.518108   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/custom-flannel-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:12:23.524533   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/custom-flannel-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:12:23.535914   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/custom-flannel-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:12:23.557278   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/custom-flannel-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:12:23.598712   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/custom-flannel-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:12:23.680417   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/custom-flannel-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:12:23.841953   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/custom-flannel-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:12:24.164084   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/custom-flannel-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:12:24.806193   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/custom-flannel-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:12:25.131904   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/calico-833505/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-956403 -n no-preload-956403: exit status 3 (18.447390928s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0920 18:12:25.214131   74408 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.47:22: connect: no route to host
	E0920 18:12:25.214151   74408 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.50.47:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-956403" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (138.96s)
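The two-minute stop failure is visible in the "Waiting for machine to stop N/120" lines: the stop path backs up /etc/cni and /etc/kubernetes to /var/lib/minikube/backup, asks the driver to stop the VM, then polls the machine state roughly once a second for 120 attempts before giving up with GUEST_STOP_TIMEOUT (exit status 82). Below is a rough sketch of that poll-with-deadline pattern (illustrative only, not minikube's libmachine code; the checkState type and the small demo values in main are assumptions, while the 120 x ~1s budget and the final error text come from the log):

	// stop_poll.go - illustrative sketch of a poll-until-stopped loop like the
	// one logged above: up to 120 one-second checks, then a timeout error.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// checkState is a stand-in for asking the hypervisor for the VM state.
	type checkState func() (string, error)

	func waitForStop(getState checkState, attempts int, interval time.Duration) error {
		for i := 0; i < attempts; i++ {
			state, err := getState()
			if err == nil && state == "Stopped" {
				return nil
			}
			fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
			time.Sleep(interval)
		}
		return errors.New(`unable to stop vm, current state "Running"`)
	}

	func main() {
		// Simulate a VM that never leaves the Running state, as in the failed test.
		// (The real run uses 120 attempts at ~1s; shortened here so the demo finishes quickly.)
		stuck := func() (string, error) { return "Running", nil }
		if err := waitForStop(stuck, 3, 10*time.Millisecond); err != nil {
			fmt.Println("stop err:", err) // the caller surfaces this as GUEST_STOP_TIMEOUT
		}
	}

The embed-certs and default-k8s-diff-port Stop failures below follow the same pattern: the VM never leaves the "Running" state, the 120 polls are exhausted, and the stop command exits with status 82.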

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (138.96s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-768431 --alsologtostderr -v=3
E0920 18:10:43.756793   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/kindnet-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:10:45.918614   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/auto-833505/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-768431 --alsologtostderr -v=3: exit status 82 (2m0.496077271s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-768431"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 18:10:42.653541   74002 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:10:42.653706   74002 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:10:42.653718   74002 out.go:358] Setting ErrFile to fd 2...
	I0920 18:10:42.653725   74002 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:10:42.654011   74002 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-8777/.minikube/bin
	I0920 18:10:42.654345   74002 out.go:352] Setting JSON to false
	I0920 18:10:42.654451   74002 mustload.go:65] Loading cluster: embed-certs-768431
	I0920 18:10:42.655039   74002 config.go:182] Loaded profile config "embed-certs-768431": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:10:42.655135   74002 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/embed-certs-768431/config.json ...
	I0920 18:10:42.655364   74002 mustload.go:65] Loading cluster: embed-certs-768431
	I0920 18:10:42.655529   74002 config.go:182] Loaded profile config "embed-certs-768431": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:10:42.655577   74002 stop.go:39] StopHost: embed-certs-768431
	I0920 18:10:42.656114   74002 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:10:42.656169   74002 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:10:42.671678   74002 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46151
	I0920 18:10:42.672225   74002 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:10:42.672853   74002 main.go:141] libmachine: Using API Version  1
	I0920 18:10:42.672877   74002 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:10:42.673230   74002 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:10:42.675419   74002 out.go:177] * Stopping node "embed-certs-768431"  ...
	I0920 18:10:42.676574   74002 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0920 18:10:42.676604   74002 main.go:141] libmachine: (embed-certs-768431) Calling .DriverName
	I0920 18:10:42.676844   74002 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0920 18:10:42.676876   74002 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHHostname
	I0920 18:10:42.679872   74002 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:10:42.680436   74002 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:09:19 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:10:42.680465   74002 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:10:42.680579   74002 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHPort
	I0920 18:10:42.680772   74002 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHKeyPath
	I0920 18:10:42.680920   74002 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHUsername
	I0920 18:10:42.681089   74002 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/embed-certs-768431/id_rsa Username:docker}
	I0920 18:10:42.786759   74002 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0920 18:10:42.843360   74002 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0920 18:10:42.902207   74002 main.go:141] libmachine: Stopping "embed-certs-768431"...
	I0920 18:10:42.902253   74002 main.go:141] libmachine: (embed-certs-768431) Calling .GetState
	I0920 18:10:42.903765   74002 main.go:141] libmachine: (embed-certs-768431) Calling .Stop
	I0920 18:10:42.907892   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 0/120
	I0920 18:10:43.909540   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 1/120
	I0920 18:10:44.910942   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 2/120
	I0920 18:10:45.912260   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 3/120
	I0920 18:10:46.913720   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 4/120
	I0920 18:10:47.915821   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 5/120
	I0920 18:10:48.917102   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 6/120
	I0920 18:10:49.918707   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 7/120
	I0920 18:10:50.920356   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 8/120
	I0920 18:10:51.921683   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 9/120
	I0920 18:10:52.923448   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 10/120
	I0920 18:10:53.925201   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 11/120
	I0920 18:10:54.926705   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 12/120
	I0920 18:10:55.928288   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 13/120
	I0920 18:10:56.929809   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 14/120
	I0920 18:10:57.931831   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 15/120
	I0920 18:10:58.933486   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 16/120
	I0920 18:10:59.934896   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 17/120
	I0920 18:11:00.936462   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 18/120
	I0920 18:11:01.937742   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 19/120
	I0920 18:11:02.940102   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 20/120
	I0920 18:11:03.941596   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 21/120
	I0920 18:11:04.943241   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 22/120
	I0920 18:11:05.944789   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 23/120
	I0920 18:11:06.946086   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 24/120
	I0920 18:11:07.948176   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 25/120
	I0920 18:11:08.949503   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 26/120
	I0920 18:11:09.951035   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 27/120
	I0920 18:11:10.952625   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 28/120
	I0920 18:11:11.954227   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 29/120
	I0920 18:11:12.956398   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 30/120
	I0920 18:11:13.958054   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 31/120
	I0920 18:11:14.960309   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 32/120
	I0920 18:11:15.962117   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 33/120
	I0920 18:11:16.963455   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 34/120
	I0920 18:11:17.965723   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 35/120
	I0920 18:11:18.967985   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 36/120
	I0920 18:11:19.969381   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 37/120
	I0920 18:11:20.970622   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 38/120
	I0920 18:11:21.972096   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 39/120
	I0920 18:11:22.974392   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 40/120
	I0920 18:11:23.976321   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 41/120
	I0920 18:11:24.977569   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 42/120
	I0920 18:11:25.979041   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 43/120
	I0920 18:11:26.980376   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 44/120
	I0920 18:11:27.982730   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 45/120
	I0920 18:11:28.984352   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 46/120
	I0920 18:11:29.985633   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 47/120
	I0920 18:11:30.987710   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 48/120
	I0920 18:11:31.989171   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 49/120
	I0920 18:11:32.990430   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 50/120
	I0920 18:11:33.992592   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 51/120
	I0920 18:11:34.993991   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 52/120
	I0920 18:11:35.995301   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 53/120
	I0920 18:11:36.996772   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 54/120
	I0920 18:11:37.998915   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 55/120
	I0920 18:11:39.000185   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 56/120
	I0920 18:11:40.001346   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 57/120
	I0920 18:11:41.002687   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 58/120
	I0920 18:11:42.004002   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 59/120
	I0920 18:11:43.006643   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 60/120
	I0920 18:11:44.008398   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 61/120
	I0920 18:11:45.009810   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 62/120
	I0920 18:11:46.011382   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 63/120
	I0920 18:11:47.012719   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 64/120
	I0920 18:11:48.014694   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 65/120
	I0920 18:11:49.016526   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 66/120
	I0920 18:11:50.017895   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 67/120
	I0920 18:11:51.019212   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 68/120
	I0920 18:11:52.020440   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 69/120
	I0920 18:11:53.022605   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 70/120
	I0920 18:11:54.024142   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 71/120
	I0920 18:11:55.025244   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 72/120
	I0920 18:11:56.026468   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 73/120
	I0920 18:11:57.028276   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 74/120
	I0920 18:11:58.030163   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 75/120
	I0920 18:11:59.032455   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 76/120
	I0920 18:12:00.033762   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 77/120
	I0920 18:12:01.034984   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 78/120
	I0920 18:12:02.036231   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 79/120
	I0920 18:12:03.038475   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 80/120
	I0920 18:12:04.039802   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 81/120
	I0920 18:12:05.041129   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 82/120
	I0920 18:12:06.042309   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 83/120
	I0920 18:12:07.044080   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 84/120
	I0920 18:12:08.045875   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 85/120
	I0920 18:12:09.047193   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 86/120
	I0920 18:12:10.048364   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 87/120
	I0920 18:12:11.049803   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 88/120
	I0920 18:12:12.051186   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 89/120
	I0920 18:12:13.053291   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 90/120
	I0920 18:12:14.054528   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 91/120
	I0920 18:12:15.056192   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 92/120
	I0920 18:12:16.057531   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 93/120
	I0920 18:12:17.059032   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 94/120
	I0920 18:12:18.060872   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 95/120
	I0920 18:12:19.062154   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 96/120
	I0920 18:12:20.063295   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 97/120
	I0920 18:12:21.064689   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 98/120
	I0920 18:12:22.066087   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 99/120
	I0920 18:12:23.068079   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 100/120
	I0920 18:12:24.069247   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 101/120
	I0920 18:12:25.070534   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 102/120
	I0920 18:12:26.072134   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 103/120
	I0920 18:12:27.073233   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 104/120
	I0920 18:12:28.075393   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 105/120
	I0920 18:12:29.076742   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 106/120
	I0920 18:12:30.078370   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 107/120
	I0920 18:12:31.080297   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 108/120
	I0920 18:12:32.081859   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 109/120
	I0920 18:12:33.084000   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 110/120
	I0920 18:12:34.085359   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 111/120
	I0920 18:12:35.086790   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 112/120
	I0920 18:12:36.088561   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 113/120
	I0920 18:12:37.089997   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 114/120
	I0920 18:12:38.091585   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 115/120
	I0920 18:12:39.092679   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 116/120
	I0920 18:12:40.093791   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 117/120
	I0920 18:12:41.094892   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 118/120
	I0920 18:12:42.095987   74002 main.go:141] libmachine: (embed-certs-768431) Waiting for machine to stop 119/120
	I0920 18:12:43.097177   74002 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0920 18:12:43.097233   74002 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0920 18:12:43.098888   74002 out.go:201] 
	W0920 18:12:43.099936   74002 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0920 18:12:43.099949   74002 out.go:270] * 
	* 
	W0920 18:12:43.102551   74002 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 18:12:43.103650   74002 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-768431 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-768431 -n embed-certs-768431
E0920 18:12:43.196803   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:12:44.014719   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/custom-flannel-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:12:46.640523   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/kindnet-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:12:48.802642   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/auto-833505/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-768431 -n embed-certs-768431: exit status 3 (18.461592245s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0920 18:13:01.566186   74810 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.202:22: connect: no route to host
	E0920 18:13:01.566210   74810 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.61.202:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-768431" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (138.96s)
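Each failed Stop is followed by the same post-mortem: the helper runs out/minikube-linux-amd64 status --format={{.Host}} -p for the profile, where --format takes a Go template and {{.Host}} selects just the host state, and it tolerates certain non-zero exits ("status error: exit status 3 (may be ok)"). Below is a rough sketch of driving that check from Go (the binary path and profile name are copied from the log; returning the state string alongside a non-zero exit, and the final "not running" interpretation, are assumptions for illustration, not the helper's actual logic):

	// status_check.go - illustrative: run `minikube status --format={{.Host}}`
	// for a profile and report whether the host looks runnable.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func hostState(minikube, profile string) (string, error) {
		// --format uses a Go template; {{.Host}} extracts only the Host field,
		// e.g. "Running", "Stopped" or "Error".
		cmd := exec.Command(minikube, "status", "--format={{.Host}}", "-p", profile)
		out, err := cmd.CombinedOutput()
		state := strings.TrimSpace(string(out))
		// A non-zero exit can still carry a usable state string ("may be ok"),
		// so return both and let the caller decide.
		return state, err
	}

	func main() {
		state, err := hostState("out/minikube-linux-amd64", "embed-certs-768431")
		fmt.Printf("host state: %q (err: %v)\n", state, err)
		if state != "Running" {
			fmt.Println("host is not running, skipping log retrieval")
		}
	}

In the runs above the status command itself failed over SSH ("no route to host"), so the state came back as "Error" and the harness skipped log retrieval.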

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (138.99s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-553719 --alsologtostderr -v=3
E0920 18:11:24.718662   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/kindnet-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:11:26.880925   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/auto-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:11:39.932325   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/functional-945494/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:11:44.155129   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/calico-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:11:44.161515   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/calico-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:11:44.173042   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/calico-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:11:44.194546   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/calico-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:11:44.235935   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/calico-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:11:44.317463   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/calico-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:11:44.479065   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/calico-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:11:44.800713   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/calico-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:11:45.442520   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/calico-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:11:46.724355   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/calico-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:11:49.286055   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/calico-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:11:54.408199   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/calico-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:12:04.650168   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/calico-833505/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-553719 --alsologtostderr -v=3: exit status 82 (2m0.524669298s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-553719"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 18:11:05.657400   74185 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:11:05.657524   74185 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:11:05.657535   74185 out.go:358] Setting ErrFile to fd 2...
	I0920 18:11:05.657542   74185 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:11:05.657751   74185 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-8777/.minikube/bin
	I0920 18:11:05.658089   74185 out.go:352] Setting JSON to false
	I0920 18:11:05.658194   74185 mustload.go:65] Loading cluster: default-k8s-diff-port-553719
	I0920 18:11:05.658647   74185 config.go:182] Loaded profile config "default-k8s-diff-port-553719": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:11:05.658736   74185 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/default-k8s-diff-port-553719/config.json ...
	I0920 18:11:05.658943   74185 mustload.go:65] Loading cluster: default-k8s-diff-port-553719
	I0920 18:11:05.659076   74185 config.go:182] Loaded profile config "default-k8s-diff-port-553719": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:11:05.659116   74185 stop.go:39] StopHost: default-k8s-diff-port-553719
	I0920 18:11:05.659548   74185 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:11:05.659597   74185 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:11:05.674720   74185 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41153
	I0920 18:11:05.675219   74185 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:11:05.675789   74185 main.go:141] libmachine: Using API Version  1
	I0920 18:11:05.675814   74185 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:11:05.676176   74185 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:11:05.678582   74185 out.go:177] * Stopping node "default-k8s-diff-port-553719"  ...
	I0920 18:11:05.679890   74185 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0920 18:11:05.679930   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .DriverName
	I0920 18:11:05.680211   74185 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0920 18:11:05.680236   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHHostname
	I0920 18:11:05.683368   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:11:05.683889   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:09:45 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:11:05.683917   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:11:05.684085   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHPort
	I0920 18:11:05.684284   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHKeyPath
	I0920 18:11:05.684451   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHUsername
	I0920 18:11:05.684649   74185 sshutil.go:53] new ssh client: &{IP:192.168.72.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/default-k8s-diff-port-553719/id_rsa Username:docker}
	I0920 18:11:05.789011   74185 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0920 18:11:05.859804   74185 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0920 18:11:05.922853   74185 main.go:141] libmachine: Stopping "default-k8s-diff-port-553719"...
	I0920 18:11:05.922881   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetState
	I0920 18:11:05.924795   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .Stop
	I0920 18:11:05.929106   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 0/120
	I0920 18:11:06.931040   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 1/120
	I0920 18:11:07.932746   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 2/120
	I0920 18:11:08.935147   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 3/120
	I0920 18:11:09.936630   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 4/120
	I0920 18:11:10.939473   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 5/120
	I0920 18:11:11.941315   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 6/120
	I0920 18:11:12.942748   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 7/120
	I0920 18:11:13.944664   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 8/120
	I0920 18:11:14.946249   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 9/120
	I0920 18:11:15.948099   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 10/120
	I0920 18:11:16.949760   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 11/120
	I0920 18:11:17.951494   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 12/120
	I0920 18:11:18.953412   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 13/120
	I0920 18:11:19.955043   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 14/120
	I0920 18:11:20.957293   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 15/120
	I0920 18:11:21.958797   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 16/120
	I0920 18:11:22.960205   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 17/120
	I0920 18:11:23.961903   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 18/120
	I0920 18:11:24.963332   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 19/120
	I0920 18:11:25.965906   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 20/120
	I0920 18:11:26.967405   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 21/120
	I0920 18:11:27.969185   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 22/120
	I0920 18:11:28.970774   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 23/120
	I0920 18:11:29.972164   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 24/120
	I0920 18:11:30.974465   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 25/120
	I0920 18:11:31.975976   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 26/120
	I0920 18:11:32.977491   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 27/120
	I0920 18:11:33.978978   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 28/120
	I0920 18:11:34.980513   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 29/120
	I0920 18:11:35.982987   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 30/120
	I0920 18:11:36.984749   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 31/120
	I0920 18:11:37.986248   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 32/120
	I0920 18:11:38.987881   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 33/120
	I0920 18:11:39.989247   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 34/120
	I0920 18:11:40.991191   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 35/120
	I0920 18:11:41.992979   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 36/120
	I0920 18:11:42.994340   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 37/120
	I0920 18:11:43.996007   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 38/120
	I0920 18:11:44.997414   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 39/120
	I0920 18:11:45.999631   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 40/120
	I0920 18:11:47.001146   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 41/120
	I0920 18:11:48.002647   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 42/120
	I0920 18:11:49.004491   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 43/120
	I0920 18:11:50.005923   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 44/120
	I0920 18:11:51.007700   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 45/120
	I0920 18:11:52.009421   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 46/120
	I0920 18:11:53.010750   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 47/120
	I0920 18:11:54.012203   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 48/120
	I0920 18:11:55.013528   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 49/120
	I0920 18:11:56.015582   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 50/120
	I0920 18:11:57.016904   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 51/120
	I0920 18:11:58.018503   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 52/120
	I0920 18:11:59.020126   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 53/120
	I0920 18:12:00.021409   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 54/120
	I0920 18:12:01.023169   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 55/120
	I0920 18:12:02.024606   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 56/120
	I0920 18:12:03.026022   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 57/120
	I0920 18:12:04.027372   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 58/120
	I0920 18:12:05.028671   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 59/120
	I0920 18:12:06.030284   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 60/120
	I0920 18:12:07.031800   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 61/120
	I0920 18:12:08.033249   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 62/120
	I0920 18:12:09.035398   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 63/120
	I0920 18:12:10.036770   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 64/120
	I0920 18:12:11.038683   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 65/120
	I0920 18:12:12.040214   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 66/120
	I0920 18:12:13.041504   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 67/120
	I0920 18:12:14.043097   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 68/120
	I0920 18:12:15.044599   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 69/120
	I0920 18:12:16.046760   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 70/120
	I0920 18:12:17.048971   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 71/120
	I0920 18:12:18.050422   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 72/120
	I0920 18:12:19.051949   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 73/120
	I0920 18:12:20.053448   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 74/120
	I0920 18:12:21.055567   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 75/120
	I0920 18:12:22.057122   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 76/120
	I0920 18:12:23.058884   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 77/120
	I0920 18:12:24.060268   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 78/120
	I0920 18:12:25.062068   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 79/120
	I0920 18:12:26.064647   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 80/120
	I0920 18:12:27.066115   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 81/120
	I0920 18:12:28.067592   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 82/120
	I0920 18:12:29.069068   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 83/120
	I0920 18:12:30.070820   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 84/120
	I0920 18:12:31.072880   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 85/120
	I0920 18:12:32.074455   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 86/120
	I0920 18:12:33.076072   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 87/120
	I0920 18:12:34.077686   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 88/120
	I0920 18:12:35.079175   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 89/120
	I0920 18:12:36.080868   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 90/120
	I0920 18:12:37.082246   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 91/120
	I0920 18:12:38.084436   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 92/120
	I0920 18:12:39.085780   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 93/120
	I0920 18:12:40.087294   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 94/120
	I0920 18:12:41.089545   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 95/120
	I0920 18:12:42.090871   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 96/120
	I0920 18:12:43.092752   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 97/120
	I0920 18:12:44.094191   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 98/120
	I0920 18:12:45.095666   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 99/120
	I0920 18:12:46.097700   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 100/120
	I0920 18:12:47.099026   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 101/120
	I0920 18:12:48.100555   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 102/120
	I0920 18:12:49.102026   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 103/120
	I0920 18:12:50.103475   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 104/120
	I0920 18:12:51.105777   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 105/120
	I0920 18:12:52.107099   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 106/120
	I0920 18:12:53.108784   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 107/120
	I0920 18:12:54.110180   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 108/120
	I0920 18:12:55.111626   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 109/120
	I0920 18:12:56.113628   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 110/120
	I0920 18:12:57.115347   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 111/120
	I0920 18:12:58.116709   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 112/120
	I0920 18:12:59.118223   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 113/120
	I0920 18:13:00.120244   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 114/120
	I0920 18:13:01.122450   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 115/120
	I0920 18:13:02.123657   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 116/120
	I0920 18:13:03.125273   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 117/120
	I0920 18:13:04.127360   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 118/120
	I0920 18:13:05.128902   74185 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for machine to stop 119/120
	I0920 18:13:06.129918   74185 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0920 18:13:06.129995   74185 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0920 18:13:06.131973   74185 out.go:201] 
	W0920 18:13:06.133030   74185 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0920 18:13:06.133044   74185 out.go:270] * 
	* 
	W0920 18:13:06.135985   74185 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 18:13:06.137282   74185 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-553719 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-553719 -n default-k8s-diff-port-553719
E0920 18:13:06.695279   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/enable-default-cni-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:13:09.256998   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/enable-default-cni-833505/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-553719 -n default-k8s-diff-port-553719: exit status 3 (18.466445776s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0920 18:13:24.606165   74975 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.190:22: connect: no route to host
	E0920 18:13:24.606187   74975 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.72.190:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-553719" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (138.99s)
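The failure above is a stop timeout: minikube polled the KVM guest for all 120 attempts, the machine never left the "Running" state, and the command exited with GUEST_STOP_TIMEOUT (exit status 82); the follow-up status check then failed because SSH to 192.168.72.190 was unreachable. A minimal sketch for reproducing the same sequence by hand, assuming the same profile name and that the kvm2 driver's usual qemu:///system libvirt connection (and a domain named after the profile) is available on the CI host:

    # Re-run the stop that timed out, with the same verbosity the test used.
    out/minikube-linux-amd64 stop -p default-k8s-diff-port-553719 --alsologtostderr -v=3
    echo "stop exit status: $?"   # the test observed 82 here

    # What minikube itself reports for the host state afterwards.
    out/minikube-linux-amd64 status --format='{{.Host}}' -p default-k8s-diff-port-553719

    # Ask libvirt directly whether the guest is still running
    # (assumption: the domain name matches the profile name, as kvm2 normally does).
    sudo virsh --connect qemu:///system domstate default-k8s-diff-port-553719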

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-956403 -n no-preload-956403
E0920 18:12:26.088438   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/custom-flannel-833505/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-956403 -n no-preload-956403: exit status 3 (3.16773846s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0920 18:12:28.382177   74508 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.47:22: connect: no route to host
	E0920 18:12:28.382200   74508 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.50.47:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-956403 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0920 18:12:28.650459   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/custom-flannel-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:12:33.772614   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/custom-flannel-833505/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-956403 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.15277337s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.47:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-956403 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-956403 -n no-preload-956403
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-956403 -n no-preload-956403: exit status 3 (3.06306163s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0920 18:12:37.598236   74591 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.47:22: connect: no route to host
	E0920 18:12:37.598258   74591 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.50.47:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-956403" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)
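In this section the addon enable fails for the same underlying reason as the status check: the node's SSH endpoint (192.168.50.47:22) is unreachable, so every step that needs an SSH session (the host status, the "check paused" crictl call) errors with "no route to host". A hedged sketch for separating a networking problem from a guest that is actually down, assuming nc and the libvirt CLI are installed on the CI host and that the driver used its usual mk-<profile> network name:

    # Is the SSH port reachable at all?
    nc -vz -w 3 192.168.50.47 22

    # Compare with what minikube reports for the profile.
    out/minikube-linux-amd64 status -p no-preload-956403 --alsologtostderr

    # If the guest is up but unreachable, the DHCP lease may have changed;
    # list the leases libvirt currently knows for the profile's network
    # (network name "mk-no-preload-956403" is an assumption based on the driver's naming).
    sudo virsh --connect qemu:///system net-dhcp-leases mk-no-preload-956403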

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (0.5s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-744025 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-744025 create -f testdata/busybox.yaml: exit status 1 (46.269617ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-744025" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-744025 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-744025 -n old-k8s-version-744025
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-744025 -n old-k8s-version-744025: exit status 6 (225.745821ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0920 18:12:35.170017   74661 status.go:451] kubeconfig endpoint: get endpoint: "old-k8s-version-744025" does not appear in /home/jenkins/minikube-integration/19672-8777/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-744025" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-744025 -n old-k8s-version-744025
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-744025 -n old-k8s-version-744025: exit status 6 (223.431493ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0920 18:12:35.391789   74691 status.go:451] kubeconfig endpoint: get endpoint: "old-k8s-version-744025" does not appear in /home/jenkins/minikube-integration/19672-8777/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-744025" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.50s)
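This DeployApp failure is secondary: the old-k8s-version-744025 context is missing from the kubeconfig (the status output even warns that kubectl is pointing at a stale minikube VM), so kubectl create cannot resolve the context. A short sketch of the repair the warning itself suggests, using the kubeconfig path and profile name taken from the log above:

    # The kubeconfig path comes from the error above.
    export KUBECONFIG=/home/jenkins/minikube-integration/19672-8777/kubeconfig

    # See which contexts actually exist.
    kubectl config get-contexts

    # Rewrite the context for this profile so kubectl points at the current VM endpoint.
    out/minikube-linux-amd64 update-context -p old-k8s-version-744025

    # Retry the step that failed.
    kubectl --context old-k8s-version-744025 create -f testdata/busybox.yaml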

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (95.8s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-744025 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-744025 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m35.529678212s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-744025 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-744025 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-744025 describe deploy/metrics-server -n kube-system: exit status 1 (46.438827ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-744025" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-744025 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-744025 -n old-k8s-version-744025
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-744025 -n old-k8s-version-744025: exit status 6 (219.763017ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0920 18:14:11.189357   75460 status.go:451] kubeconfig endpoint: get endpoint: "old-k8s-version-744025" does not appear in /home/jenkins/minikube-integration/19672-8777/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-744025" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (95.80s)
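The EnableAddonWhileActive failure is of a different kind: the metrics-server manifests are applied from inside the node with the bundled v1.20.0 kubectl, and that apply is refused at localhost:8443, which suggests the apiserver is not (or is no longer) listening inside old-k8s-version-744025. A hedged sketch for confirming that from the node itself, assuming SSH into the profile still works:

    # Look at the control-plane containers from inside the node (grep runs locally on the ssh output).
    out/minikube-linux-amd64 ssh -p old-k8s-version-744025 -- sudo crictl ps -a | grep -E 'kube-apiserver|etcd'

    # Check whether anything is listening on the apiserver port.
    out/minikube-linux-amd64 ssh -p old-k8s-version-744025 -- sudo ss -ltnp | grep 8443

    # Kubelet logs usually show why a static apiserver pod is not coming up.
    out/minikube-linux-amd64 ssh -p old-k8s-version-744025 -- sudo journalctl -u kubelet --no-pager | tail -n 50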

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-768431 -n embed-certs-768431
E0920 18:13:04.124378   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/enable-default-cni-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:13:04.131665   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/enable-default-cni-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:13:04.143110   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/enable-default-cni-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:13:04.164885   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/enable-default-cni-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:13:04.206368   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/enable-default-cni-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:13:04.287862   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/enable-default-cni-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:13:04.449394   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/enable-default-cni-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:13:04.496834   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/custom-flannel-833505/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-768431 -n embed-certs-768431: exit status 3 (3.16778547s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0920 18:13:04.734293   74911 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.202:22: connect: no route to host
	E0920 18:13:04.734313   74911 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.61.202:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-768431 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0920 18:13:04.771392   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/enable-default-cni-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:13:05.413500   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/enable-default-cni-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:13:06.093592   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/calico-833505/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-768431 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153423082s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.202:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-768431 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-768431 -n embed-certs-768431
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-768431 -n embed-certs-768431: exit status 3 (3.062033353s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0920 18:13:13.950201   75040 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.202:22: connect: no route to host
	E0920 18:13:13.950227   75040 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.61.202:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-768431" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-553719 -n default-k8s-diff-port-553719
E0920 18:13:24.620432   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/enable-default-cni-833505/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-553719 -n default-k8s-diff-port-553719: exit status 3 (3.167659554s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0920 18:13:27.774158   75153 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.190:22: connect: no route to host
	E0920 18:13:27.774179   75153 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.72.190:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-553719 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0920 18:13:30.234638   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/bridge-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:13:30.241024   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/bridge-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:13:30.252479   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/bridge-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:13:30.273865   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/bridge-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:13:30.315259   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/bridge-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:13:30.396695   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/bridge-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:13:30.558287   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/bridge-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:13:30.879980   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/bridge-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:13:31.521944   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/bridge-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:13:32.803905   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/bridge-833505/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-553719 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152157419s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.190:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-553719 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-553719 -n default-k8s-diff-port-553719
E0920 18:13:35.365550   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/bridge-833505/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-553719 -n default-k8s-diff-port-553719: exit status 3 (3.063782208s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0920 18:13:36.990323   75233 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.190:22: connect: no route to host
	E0920 18:13:36.990356   75233 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.72.190:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-553719" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (732.79s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-744025 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0920 18:14:20.682932   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/flannel-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:14:26.064153   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/enable-default-cni-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:14:28.014921   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/calico-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:14:52.173827   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/bridge-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:15:01.645212   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/flannel-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:15:02.779729   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/kindnet-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:15:04.941969   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/auto-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:15:07.381768   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/custom-flannel-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:15:30.482035   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/kindnet-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:15:32.644411   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/auto-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:15:47.986641   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/enable-default-cni-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:16:14.095256   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/bridge-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:16:23.567240   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/flannel-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:16:39.932216   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/functional-945494/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:16:44.155252   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/calico-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:17:11.857668   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/calico-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:17:23.518263   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/custom-flannel-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:17:43.197284   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:17:51.223755   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/custom-flannel-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:18:03.005202   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/functional-945494/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:18:04.125188   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/enable-default-cni-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:18:30.233793   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/bridge-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:18:31.828345   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/enable-default-cni-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:18:39.707099   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/flannel-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:18:57.936690   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/bridge-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:19:07.409569   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/flannel-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:20:02.780328   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/kindnet-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:20:04.942938   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/auto-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:21:39.931429   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/functional-945494/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:21:44.154748   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/calico-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:22:23.518004   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/custom-flannel-833505/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-744025 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (12m9.128078979s)

                                                
                                                
-- stdout --
	* [old-k8s-version-744025] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19672
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19672-8777/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-8777/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-744025" primary control-plane node in "old-k8s-version-744025" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-744025" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 18:14:12.735020   75577 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:14:12.735385   75577 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:14:12.735400   75577 out.go:358] Setting ErrFile to fd 2...
	I0920 18:14:12.735407   75577 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:14:12.735610   75577 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-8777/.minikube/bin
	I0920 18:14:12.736161   75577 out.go:352] Setting JSON to false
	I0920 18:14:12.737033   75577 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6996,"bootTime":1726849057,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 18:14:12.737131   75577 start.go:139] virtualization: kvm guest
	I0920 18:14:12.739127   75577 out.go:177] * [old-k8s-version-744025] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 18:14:12.740288   75577 notify.go:220] Checking for updates...
	I0920 18:14:12.740310   75577 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 18:14:12.741542   75577 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 18:14:12.742727   75577 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19672-8777/kubeconfig
	I0920 18:14:12.743806   75577 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-8777/.minikube
	I0920 18:14:12.744863   75577 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 18:14:12.745877   75577 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 18:14:12.747201   75577 config.go:182] Loaded profile config "old-k8s-version-744025": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0920 18:14:12.747636   75577 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:14:12.747678   75577 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:14:12.762352   75577 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42889
	I0920 18:14:12.762734   75577 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:14:12.763321   75577 main.go:141] libmachine: Using API Version  1
	I0920 18:14:12.763348   75577 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:14:12.763742   75577 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:14:12.763968   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .DriverName
	I0920 18:14:12.765767   75577 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0920 18:14:12.767036   75577 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 18:14:12.767377   75577 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:14:12.767449   75577 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:14:12.782473   75577 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40777
	I0920 18:14:12.782971   75577 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:14:12.783455   75577 main.go:141] libmachine: Using API Version  1
	I0920 18:14:12.783477   75577 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:14:12.783807   75577 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:14:12.784035   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .DriverName
	I0920 18:14:12.820265   75577 out.go:177] * Using the kvm2 driver based on existing profile
	I0920 18:14:12.821397   75577 start.go:297] selected driver: kvm2
	I0920 18:14:12.821409   75577 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-744025 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-744025 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.207 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:14:12.821519   75577 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 18:14:12.822217   75577 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 18:14:12.822309   75577 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19672-8777/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 18:14:12.837641   75577 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 18:14:12.838107   75577 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 18:14:12.838143   75577 cni.go:84] Creating CNI manager for ""
	I0920 18:14:12.838193   75577 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 18:14:12.838231   75577 start.go:340] cluster config:
	{Name:old-k8s-version-744025 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-744025 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.207 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:14:12.838329   75577 iso.go:125] acquiring lock: {Name:mkba95ef0488e46f622333e9f317f43def93040b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 18:14:12.840059   75577 out.go:177] * Starting "old-k8s-version-744025" primary control-plane node in "old-k8s-version-744025" cluster
	I0920 18:14:12.841339   75577 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0920 18:14:12.841384   75577 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0920 18:14:12.841394   75577 cache.go:56] Caching tarball of preloaded images
	I0920 18:14:12.841473   75577 preload.go:172] Found /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 18:14:12.841482   75577 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0920 18:14:12.841594   75577 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/old-k8s-version-744025/config.json ...
	I0920 18:14:12.841781   75577 start.go:360] acquireMachinesLock for old-k8s-version-744025: {Name:mkfeedb385cf08b5d2aa00913e85815d02a180c2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 18:17:55.087078   75577 start.go:364] duration metric: took 3m42.245259516s to acquireMachinesLock for "old-k8s-version-744025"
	I0920 18:17:55.087155   75577 start.go:96] Skipping create...Using existing machine configuration
	I0920 18:17:55.087166   75577 fix.go:54] fixHost starting: 
	I0920 18:17:55.087618   75577 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:17:55.087671   75577 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:17:55.107336   75577 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34989
	I0920 18:17:55.107839   75577 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:17:55.108462   75577 main.go:141] libmachine: Using API Version  1
	I0920 18:17:55.108496   75577 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:17:55.108855   75577 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:17:55.109072   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .DriverName
	I0920 18:17:55.109222   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetState
	I0920 18:17:55.110831   75577 fix.go:112] recreateIfNeeded on old-k8s-version-744025: state=Stopped err=<nil>
	I0920 18:17:55.110857   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .DriverName
	W0920 18:17:55.110973   75577 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 18:17:55.113408   75577 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-744025" ...
	I0920 18:17:55.114625   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .Start
	I0920 18:17:55.114814   75577 main.go:141] libmachine: (old-k8s-version-744025) Ensuring networks are active...
	I0920 18:17:55.115808   75577 main.go:141] libmachine: (old-k8s-version-744025) Ensuring network default is active
	I0920 18:17:55.116209   75577 main.go:141] libmachine: (old-k8s-version-744025) Ensuring network mk-old-k8s-version-744025 is active
	I0920 18:17:55.116615   75577 main.go:141] libmachine: (old-k8s-version-744025) Getting domain xml...
	I0920 18:17:55.117403   75577 main.go:141] libmachine: (old-k8s-version-744025) Creating domain...
	I0920 18:17:56.411359   75577 main.go:141] libmachine: (old-k8s-version-744025) Waiting to get IP...
	I0920 18:17:56.412507   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:17:56.413010   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:17:56.413094   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:17:56.412996   76990 retry.go:31] will retry after 300.110477ms: waiting for machine to come up
	I0920 18:17:56.714648   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:17:56.715163   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:17:56.715183   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:17:56.715114   76990 retry.go:31] will retry after 352.948637ms: waiting for machine to come up
	I0920 18:17:57.069760   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:17:57.070449   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:17:57.070474   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:17:57.070404   76990 retry.go:31] will retry after 335.58281ms: waiting for machine to come up
	I0920 18:17:57.408023   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:17:57.408521   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:17:57.408546   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:17:57.408469   76990 retry.go:31] will retry after 498.040148ms: waiting for machine to come up
	I0920 18:17:57.908137   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:17:57.908561   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:17:57.908589   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:17:57.908522   76990 retry.go:31] will retry after 749.044696ms: waiting for machine to come up
	I0920 18:17:58.658869   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:17:58.659393   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:17:58.659424   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:17:58.659342   76990 retry.go:31] will retry after 949.936088ms: waiting for machine to come up
	I0920 18:17:59.610936   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:17:59.611467   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:17:59.611513   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:17:59.611430   76990 retry.go:31] will retry after 762.437104ms: waiting for machine to come up
	I0920 18:18:00.375768   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:00.376207   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:18:00.376235   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:18:00.376152   76990 retry.go:31] will retry after 1.228102027s: waiting for machine to come up
	I0920 18:18:01.606490   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:01.606958   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:18:01.606982   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:18:01.606907   76990 retry.go:31] will retry after 1.383186524s: waiting for machine to come up
	I0920 18:18:02.992412   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:02.992887   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:18:02.992919   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:18:02.992822   76990 retry.go:31] will retry after 2.100326569s: waiting for machine to come up
	I0920 18:18:05.095088   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:05.095546   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:18:05.095569   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:18:05.095507   76990 retry.go:31] will retry after 1.758181729s: waiting for machine to come up
	I0920 18:18:06.855172   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:06.855654   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:18:06.855678   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:18:06.855606   76990 retry.go:31] will retry after 2.478116743s: waiting for machine to come up
	I0920 18:18:09.334928   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:09.335367   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:18:09.335402   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:18:09.335314   76990 retry.go:31] will retry after 4.194120768s: waiting for machine to come up
	I0920 18:18:13.533702   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:13.534281   75577 main.go:141] libmachine: (old-k8s-version-744025) Found IP for machine: 192.168.39.207
	I0920 18:18:13.534309   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has current primary IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:13.534314   75577 main.go:141] libmachine: (old-k8s-version-744025) Reserving static IP address...
	I0920 18:18:13.534725   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "old-k8s-version-744025", mac: "52:54:00:e5:57:41", ip: "192.168.39.207"} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:13.534761   75577 main.go:141] libmachine: (old-k8s-version-744025) Reserved static IP address: 192.168.39.207
	I0920 18:18:13.534783   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | skip adding static IP to network mk-old-k8s-version-744025 - found existing host DHCP lease matching {name: "old-k8s-version-744025", mac: "52:54:00:e5:57:41", ip: "192.168.39.207"}
	I0920 18:18:13.534800   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | Getting to WaitForSSH function...
	I0920 18:18:13.534816   75577 main.go:141] libmachine: (old-k8s-version-744025) Waiting for SSH to be available...
	I0920 18:18:13.536879   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:13.537271   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:13.537301   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:13.537391   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | Using SSH client type: external
	I0920 18:18:13.537439   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | Using SSH private key: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/old-k8s-version-744025/id_rsa (-rw-------)
	I0920 18:18:13.537476   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.207 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19672-8777/.minikube/machines/old-k8s-version-744025/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 18:18:13.537486   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | About to run SSH command:
	I0920 18:18:13.537498   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | exit 0
	I0920 18:18:13.662214   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | SSH cmd err, output: <nil>: 
	I0920 18:18:13.662565   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetConfigRaw
	I0920 18:18:13.663269   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetIP
	I0920 18:18:13.666068   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:13.666530   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:13.666561   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:13.666899   75577 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/old-k8s-version-744025/config.json ...
	I0920 18:18:13.667111   75577 machine.go:93] provisionDockerMachine start ...
	I0920 18:18:13.667129   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .DriverName
	I0920 18:18:13.667331   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHHostname
	I0920 18:18:13.669347   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:13.669717   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:13.669743   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:13.669908   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHPort
	I0920 18:18:13.670167   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:18:13.670334   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:18:13.670453   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHUsername
	I0920 18:18:13.670583   75577 main.go:141] libmachine: Using SSH client type: native
	I0920 18:18:13.670759   75577 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.207 22 <nil> <nil>}
	I0920 18:18:13.670770   75577 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 18:18:13.774059   75577 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0920 18:18:13.774093   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetMachineName
	I0920 18:18:13.774406   75577 buildroot.go:166] provisioning hostname "old-k8s-version-744025"
	I0920 18:18:13.774434   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetMachineName
	I0920 18:18:13.774618   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHHostname
	I0920 18:18:13.777175   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:13.777482   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:13.777513   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:13.777633   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHPort
	I0920 18:18:13.777803   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:18:13.777966   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:18:13.778082   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHUsername
	I0920 18:18:13.778235   75577 main.go:141] libmachine: Using SSH client type: native
	I0920 18:18:13.778404   75577 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.207 22 <nil> <nil>}
	I0920 18:18:13.778417   75577 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-744025 && echo "old-k8s-version-744025" | sudo tee /etc/hostname
	I0920 18:18:13.900180   75577 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-744025
	
	I0920 18:18:13.900224   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHHostname
	I0920 18:18:13.902958   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:13.903309   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:13.903340   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:13.903543   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHPort
	I0920 18:18:13.903762   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:18:13.903931   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:18:13.904051   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHUsername
	I0920 18:18:13.904214   75577 main.go:141] libmachine: Using SSH client type: native
	I0920 18:18:13.904393   75577 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.207 22 <nil> <nil>}
	I0920 18:18:13.904409   75577 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-744025' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-744025/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-744025' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 18:18:14.023748   75577 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 18:18:14.023781   75577 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19672-8777/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-8777/.minikube}
	I0920 18:18:14.023834   75577 buildroot.go:174] setting up certificates
	I0920 18:18:14.023851   75577 provision.go:84] configureAuth start
	I0920 18:18:14.023866   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetMachineName
	I0920 18:18:14.024154   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetIP
	I0920 18:18:14.027240   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.027640   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:14.027778   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.027867   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHHostname
	I0920 18:18:14.030383   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.030741   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:14.030765   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.030915   75577 provision.go:143] copyHostCerts
	I0920 18:18:14.030979   75577 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem, removing ...
	I0920 18:18:14.030999   75577 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem
	I0920 18:18:14.031072   75577 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem (1082 bytes)
	I0920 18:18:14.031188   75577 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem, removing ...
	I0920 18:18:14.031205   75577 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem
	I0920 18:18:14.031240   75577 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem (1123 bytes)
	I0920 18:18:14.031340   75577 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem, removing ...
	I0920 18:18:14.031351   75577 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem
	I0920 18:18:14.031378   75577 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem (1675 bytes)
	I0920 18:18:14.031455   75577 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-744025 san=[127.0.0.1 192.168.39.207 localhost minikube old-k8s-version-744025]
	I0920 18:18:14.140775   75577 provision.go:177] copyRemoteCerts
	I0920 18:18:14.140847   75577 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 18:18:14.140883   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHHostname
	I0920 18:18:14.143599   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.144016   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:14.144062   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.144199   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHPort
	I0920 18:18:14.144377   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:18:14.144526   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHUsername
	I0920 18:18:14.144656   75577 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/old-k8s-version-744025/id_rsa Username:docker}
	I0920 18:18:14.228286   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 18:18:14.257293   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0920 18:18:14.281335   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0920 18:18:14.305736   75577 provision.go:87] duration metric: took 281.853458ms to configureAuth
	I0920 18:18:14.305762   75577 buildroot.go:189] setting minikube options for container-runtime
	I0920 18:18:14.305983   75577 config.go:182] Loaded profile config "old-k8s-version-744025": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0920 18:18:14.306076   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHHostname
	I0920 18:18:14.308833   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.309220   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:14.309244   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.309535   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHPort
	I0920 18:18:14.309779   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:18:14.309974   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:18:14.310124   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHUsername
	I0920 18:18:14.310316   75577 main.go:141] libmachine: Using SSH client type: native
	I0920 18:18:14.310535   75577 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.207 22 <nil> <nil>}
	I0920 18:18:14.310551   75577 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 18:18:14.536416   75577 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 18:18:14.536442   75577 machine.go:96] duration metric: took 869.318772ms to provisionDockerMachine
	I0920 18:18:14.536456   75577 start.go:293] postStartSetup for "old-k8s-version-744025" (driver="kvm2")
	I0920 18:18:14.536468   75577 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 18:18:14.536510   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .DriverName
	I0920 18:18:14.536803   75577 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 18:18:14.536831   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHHostname
	I0920 18:18:14.539503   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.539923   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:14.539951   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.540126   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHPort
	I0920 18:18:14.540303   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:18:14.540463   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHUsername
	I0920 18:18:14.540649   75577 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/old-k8s-version-744025/id_rsa Username:docker}
	I0920 18:18:14.625866   75577 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 18:18:14.630527   75577 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 18:18:14.630551   75577 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8777/.minikube/addons for local assets ...
	I0920 18:18:14.630637   75577 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8777/.minikube/files for local assets ...
	I0920 18:18:14.630727   75577 filesync.go:149] local asset: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem -> 159732.pem in /etc/ssl/certs
	I0920 18:18:14.630840   75577 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 18:18:14.640965   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem --> /etc/ssl/certs/159732.pem (1708 bytes)
	I0920 18:18:14.668227   75577 start.go:296] duration metric: took 131.756559ms for postStartSetup
	I0920 18:18:14.668268   75577 fix.go:56] duration metric: took 19.581104117s for fixHost
	I0920 18:18:14.668295   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHHostname
	I0920 18:18:14.671138   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.671520   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:14.671549   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.671777   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHPort
	I0920 18:18:14.671981   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:18:14.672141   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:18:14.672280   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHUsername
	I0920 18:18:14.672436   75577 main.go:141] libmachine: Using SSH client type: native
	I0920 18:18:14.672606   75577 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.207 22 <nil> <nil>}
	I0920 18:18:14.672616   75577 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 18:18:14.778752   75577 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726856294.753230182
	
	I0920 18:18:14.778785   75577 fix.go:216] guest clock: 1726856294.753230182
	I0920 18:18:14.778796   75577 fix.go:229] Guest: 2024-09-20 18:18:14.753230182 +0000 UTC Remote: 2024-09-20 18:18:14.668273351 +0000 UTC m=+241.967312055 (delta=84.956831ms)
	I0920 18:18:14.778827   75577 fix.go:200] guest clock delta is within tolerance: 84.956831ms
	I0920 18:18:14.778836   75577 start.go:83] releasing machines lock for "old-k8s-version-744025", held for 19.691716285s
	I0920 18:18:14.778874   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .DriverName
	I0920 18:18:14.779135   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetIP
	I0920 18:18:14.781932   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.782357   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:14.782386   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.782572   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .DriverName
	I0920 18:18:14.783124   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .DriverName
	I0920 18:18:14.783328   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .DriverName
	I0920 18:18:14.783401   75577 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 18:18:14.783465   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHHostname
	I0920 18:18:14.783519   75577 ssh_runner.go:195] Run: cat /version.json
	I0920 18:18:14.783552   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHHostname
	I0920 18:18:14.786645   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.786817   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.786994   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:14.787023   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.787207   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:14.787259   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.787264   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHPort
	I0920 18:18:14.787404   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHPort
	I0920 18:18:14.787469   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:18:14.787561   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:18:14.787612   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHUsername
	I0920 18:18:14.787711   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHUsername
	I0920 18:18:14.787845   75577 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/old-k8s-version-744025/id_rsa Username:docker}
	I0920 18:18:14.787845   75577 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/old-k8s-version-744025/id_rsa Username:docker}
	I0920 18:18:14.872183   75577 ssh_runner.go:195] Run: systemctl --version
	I0920 18:18:14.913275   75577 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 18:18:15.071299   75577 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 18:18:15.078725   75577 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 18:18:15.078806   75577 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 18:18:15.101497   75577 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 18:18:15.101522   75577 start.go:495] detecting cgroup driver to use...
	I0920 18:18:15.101579   75577 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 18:18:15.120626   75577 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 18:18:15.137297   75577 docker.go:217] disabling cri-docker service (if available) ...
	I0920 18:18:15.137401   75577 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 18:18:15.152152   75577 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 18:18:15.167359   75577 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 18:18:15.288763   75577 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 18:18:15.475429   75577 docker.go:233] disabling docker service ...
	I0920 18:18:15.475512   75577 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 18:18:15.493331   75577 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 18:18:15.510326   75577 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 18:18:15.629119   75577 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 18:18:15.758923   75577 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 18:18:15.776778   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 18:18:15.798980   75577 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0920 18:18:15.799042   75577 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:18:15.810264   75577 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 18:18:15.810322   75577 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:18:15.821060   75577 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:18:15.832685   75577 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
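The three sed invocations above boil down to the following settings in /etc/crio/crio.conf.d/02-crio.conf (a sketch reconstructed from the logged commands; any other keys in that drop-in are left as-is):

    pause_image = "registry.k8s.io/pause:3.2"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"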
	I0920 18:18:15.843026   75577 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 18:18:15.853996   75577 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 18:18:15.864126   75577 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 18:18:15.864183   75577 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 18:18:15.878216   75577 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
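The sysctl failure above is expected at this point: /proc/sys/net/bridge/ only exists once the br_netfilter module is loaded, which is why the probe is immediately followed by a modprobe and the ip_forward toggle. A minimal sketch of the same recovery sequence (assuming a root shell in the guest):

    modprobe br_netfilter                       # exposes /proc/sys/net/bridge/bridge-nf-call-iptables
    sysctl net.bridge.bridge-nf-call-iptables   # the earlier probe should now resolve
    echo 1 > /proc/sys/net/ipv4/ip_forward      # needed so the node can forward pod traffic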
	I0920 18:18:15.889820   75577 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:18:16.015555   75577 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 18:18:16.118797   75577 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 18:18:16.118880   75577 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 18:18:16.124430   75577 start.go:563] Will wait 60s for crictl version
	I0920 18:18:16.124485   75577 ssh_runner.go:195] Run: which crictl
	I0920 18:18:16.128632   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 18:18:16.165596   75577 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 18:18:16.165707   75577 ssh_runner.go:195] Run: crio --version
	I0920 18:18:16.198772   75577 ssh_runner.go:195] Run: crio --version
	I0920 18:18:16.238511   75577 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0920 18:18:16.239946   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetIP
	I0920 18:18:16.242727   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:16.243147   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:16.243184   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:16.243561   75577 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0920 18:18:16.247928   75577 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 18:18:16.262165   75577 kubeadm.go:883] updating cluster {Name:old-k8s-version-744025 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-744025 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.207 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 18:18:16.262310   75577 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0920 18:18:16.262358   75577 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:18:16.313771   75577 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0920 18:18:16.313854   75577 ssh_runner.go:195] Run: which lz4
	I0920 18:18:16.318361   75577 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 18:18:16.322529   75577 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 18:18:16.322570   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0920 18:18:17.953626   75577 crio.go:462] duration metric: took 1.635321078s to copy over tarball
	I0920 18:18:17.953717   75577 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0920 18:18:21.049561   75577 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.095809102s)
	I0920 18:18:21.049588   75577 crio.go:469] duration metric: took 3.095926273s to extract the tarball
	I0920 18:18:21.049596   75577 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0920 18:18:21.093521   75577 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:18:21.132318   75577 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0920 18:18:21.132361   75577 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0920 18:18:21.132426   75577 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:18:21.132449   75577 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0920 18:18:21.132432   75577 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0920 18:18:21.132551   75577 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 18:18:21.132587   75577 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0920 18:18:21.132708   75577 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0920 18:18:21.132819   75577 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0920 18:18:21.132557   75577 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0920 18:18:21.134217   75577 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0920 18:18:21.134256   75577 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0920 18:18:21.134277   75577 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0920 18:18:21.134313   75577 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0920 18:18:21.134327   75577 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 18:18:21.134341   75577 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0920 18:18:21.134266   75577 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0920 18:18:21.134356   75577 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:18:21.421546   75577 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0920 18:18:21.467311   75577 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0920 18:18:21.467349   75577 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0920 18:18:21.467400   75577 ssh_runner.go:195] Run: which crictl
	I0920 18:18:21.471158   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0920 18:18:21.474543   75577 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0920 18:18:21.480501   75577 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 18:18:21.499826   75577 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0920 18:18:21.499835   75577 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0920 18:18:21.505712   75577 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0920 18:18:21.514377   75577 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0920 18:18:21.514897   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0920 18:18:21.601531   75577 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0920 18:18:21.601582   75577 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0920 18:18:21.601635   75577 ssh_runner.go:195] Run: which crictl
	I0920 18:18:21.660417   75577 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0920 18:18:21.660457   75577 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 18:18:21.660504   75577 ssh_runner.go:195] Run: which crictl
	I0920 18:18:21.660528   75577 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0920 18:18:21.660571   75577 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0920 18:18:21.660625   75577 ssh_runner.go:195] Run: which crictl
	I0920 18:18:21.667538   75577 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0920 18:18:21.667558   75577 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0920 18:18:21.667583   75577 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0920 18:18:21.667595   75577 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0920 18:18:21.667633   75577 ssh_runner.go:195] Run: which crictl
	I0920 18:18:21.667652   75577 ssh_runner.go:195] Run: which crictl
	I0920 18:18:21.676529   75577 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0920 18:18:21.676580   75577 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0920 18:18:21.676602   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0920 18:18:21.676633   75577 ssh_runner.go:195] Run: which crictl
	I0920 18:18:21.676643   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0920 18:18:21.680041   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 18:18:21.681078   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0920 18:18:21.681180   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0920 18:18:21.683409   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0920 18:18:21.804439   75577 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0920 18:18:21.804492   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0920 18:18:21.804530   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 18:18:21.804585   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0920 18:18:21.823434   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0920 18:18:21.823497   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0920 18:18:21.823539   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0920 18:18:21.932568   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 18:18:21.932619   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0920 18:18:21.932639   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0920 18:18:21.958001   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0920 18:18:21.971089   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0920 18:18:21.975643   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0920 18:18:22.100118   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0920 18:18:22.100173   75577 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0920 18:18:22.100197   75577 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0920 18:18:22.100276   75577 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0920 18:18:22.100333   75577 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0920 18:18:22.109895   75577 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0920 18:18:22.137798   75577 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0920 18:18:22.331884   75577 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:18:22.476470   75577 cache_images.go:92] duration metric: took 1.344086626s to LoadCachedImages
	W0920 18:18:22.476575   75577 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0920 18:18:22.476595   75577 kubeadm.go:934] updating node { 192.168.39.207 8443 v1.20.0 crio true true} ...
	I0920 18:18:22.476742   75577 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-744025 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.207
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-744025 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 18:18:22.476833   75577 ssh_runner.go:195] Run: crio config
	I0920 18:18:22.528964   75577 cni.go:84] Creating CNI manager for ""
	I0920 18:18:22.528990   75577 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 18:18:22.528999   75577 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 18:18:22.529016   75577 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.207 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-744025 NodeName:old-k8s-version-744025 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.207"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.207 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0920 18:18:22.529173   75577 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.207
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-744025"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.207
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.207"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 18:18:22.529233   75577 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0920 18:18:22.540849   75577 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 18:18:22.540935   75577 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 18:18:22.551148   75577 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0920 18:18:22.569199   75577 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 18:18:22.585909   75577 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0920 18:18:22.604987   75577 ssh_runner.go:195] Run: grep 192.168.39.207	control-plane.minikube.internal$ /etc/hosts
	I0920 18:18:22.609152   75577 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.207	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 18:18:22.628042   75577 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:18:22.796651   75577 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:18:22.813789   75577 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/old-k8s-version-744025 for IP: 192.168.39.207
	I0920 18:18:22.813817   75577 certs.go:194] generating shared ca certs ...
	I0920 18:18:22.813848   75577 certs.go:226] acquiring lock for ca certs: {Name:mkc7ef6c737c6bdc3fdd9dcff8f57029c020d8f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:18:22.814045   75577 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key
	I0920 18:18:22.814101   75577 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key
	I0920 18:18:22.814118   75577 certs.go:256] generating profile certs ...
	I0920 18:18:22.814290   75577 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/old-k8s-version-744025/client.key
	I0920 18:18:22.814383   75577 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/old-k8s-version-744025/apiserver.key.3105b99d
	I0920 18:18:22.814445   75577 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/old-k8s-version-744025/proxy-client.key
	I0920 18:18:22.814626   75577 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973.pem (1338 bytes)
	W0920 18:18:22.814660   75577 certs.go:480] ignoring /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973_empty.pem, impossibly tiny 0 bytes
	I0920 18:18:22.814666   75577 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 18:18:22.814691   75577 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem (1082 bytes)
	I0920 18:18:22.814717   75577 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem (1123 bytes)
	I0920 18:18:22.814749   75577 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem (1675 bytes)
	I0920 18:18:22.814813   75577 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem (1708 bytes)
	I0920 18:18:22.815736   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 18:18:22.863832   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 18:18:22.921747   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 18:18:22.959556   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0920 18:18:22.992097   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/old-k8s-version-744025/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0920 18:18:23.027565   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/old-k8s-version-744025/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0920 18:18:23.057374   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/old-k8s-version-744025/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 18:18:23.094290   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/old-k8s-version-744025/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 18:18:23.120095   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973.pem --> /usr/share/ca-certificates/15973.pem (1338 bytes)
	I0920 18:18:23.144374   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem --> /usr/share/ca-certificates/159732.pem (1708 bytes)
	I0920 18:18:23.169431   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 18:18:23.195779   75577 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 18:18:23.212820   75577 ssh_runner.go:195] Run: openssl version
	I0920 18:18:23.218876   75577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 18:18:23.229684   75577 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:18:23.234533   75577 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:18:23.234603   75577 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:18:23.240460   75577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 18:18:23.251940   75577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15973.pem && ln -fs /usr/share/ca-certificates/15973.pem /etc/ssl/certs/15973.pem"
	I0920 18:18:23.263308   75577 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15973.pem
	I0920 18:18:23.268059   75577 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 17:04 /usr/share/ca-certificates/15973.pem
	I0920 18:18:23.268128   75577 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15973.pem
	I0920 18:18:23.274199   75577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15973.pem /etc/ssl/certs/51391683.0"
	I0920 18:18:23.286362   75577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/159732.pem && ln -fs /usr/share/ca-certificates/159732.pem /etc/ssl/certs/159732.pem"
	I0920 18:18:23.303962   75577 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/159732.pem
	I0920 18:18:23.310907   75577 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 17:04 /usr/share/ca-certificates/159732.pem
	I0920 18:18:23.310981   75577 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/159732.pem
	I0920 18:18:23.317881   75577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/159732.pem /etc/ssl/certs/3ec20f2e.0"
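The openssl/ln pairs above follow the standard OpenSSL hashed-directory convention: each CA certificate is symlinked under /etc/ssl/certs by its subject hash (with a .0 suffix) so TLS clients can locate it. A minimal sketch for the minikubeCA case shown above:

    hash=$(openssl x509 -hash -noout -in /etc/ssl/certs/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"   # here: b5213941.0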
	I0920 18:18:23.329247   75577 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 18:18:23.334223   75577 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 18:18:23.340565   75577 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 18:18:23.346929   75577 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 18:18:23.353681   75577 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 18:18:23.359699   75577 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 18:18:23.365749   75577 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0920 18:18:23.371899   75577 kubeadm.go:392] StartCluster: {Name:old-k8s-version-744025 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-744025 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.207 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:18:23.371981   75577 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 18:18:23.372027   75577 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 18:18:23.419904   75577 cri.go:89] found id: ""
	I0920 18:18:23.419982   75577 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 18:18:23.431761   75577 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0920 18:18:23.431782   75577 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0920 18:18:23.431833   75577 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0920 18:18:23.444358   75577 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0920 18:18:23.445545   75577 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-744025" does not appear in /home/jenkins/minikube-integration/19672-8777/kubeconfig
	I0920 18:18:23.446531   75577 kubeconfig.go:62] /home/jenkins/minikube-integration/19672-8777/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-744025" cluster setting kubeconfig missing "old-k8s-version-744025" context setting]
	I0920 18:18:23.447808   75577 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/kubeconfig: {Name:mkf32a4c736808e023459b2f0e40188618a38db1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:18:23.568927   75577 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0920 18:18:23.579991   75577 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.207
	I0920 18:18:23.580025   75577 kubeadm.go:1160] stopping kube-system containers ...
	I0920 18:18:23.580038   75577 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0920 18:18:23.580097   75577 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 18:18:23.625568   75577 cri.go:89] found id: ""
	I0920 18:18:23.625648   75577 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0920 18:18:23.643938   75577 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 18:18:23.654375   75577 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 18:18:23.654398   75577 kubeadm.go:157] found existing configuration files:
	
	I0920 18:18:23.654453   75577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 18:18:23.664335   75577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 18:18:23.664409   75577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 18:18:23.674996   75577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 18:18:23.685310   75577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 18:18:23.685401   75577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 18:18:23.696241   75577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 18:18:23.706386   75577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 18:18:23.706465   75577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 18:18:23.716491   75577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 18:18:23.726566   75577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 18:18:23.726626   75577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 18:18:23.738576   75577 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 18:18:23.749510   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:18:23.877503   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:18:24.789322   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:18:25.054969   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:18:25.152117   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:18:25.249140   75577 api_server.go:52] waiting for apiserver process to appear ...
	I0920 18:18:25.249245   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:25.749427   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:26.249360   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:26.749895   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:27.249636   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:27.749629   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:28.250139   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:28.749691   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:29.249709   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:29.749550   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:30.250186   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:30.749562   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:31.249622   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:31.749925   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:32.250324   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:32.749877   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:33.249391   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:33.749510   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:34.250003   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:34.749967   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:35.249953   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:35.749992   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:36.250161   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:36.750339   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:37.249600   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:37.750269   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:38.249855   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:38.749845   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:39.249342   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:39.750306   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:40.250103   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:40.750346   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:41.250134   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:41.750136   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:42.249945   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:42.749980   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:43.249416   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:43.750087   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:44.249972   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:44.749649   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:45.250128   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:45.750346   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:46.249611   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:46.749566   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:47.249814   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:47.749487   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:48.249612   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:48.750324   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:49.250006   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:49.749667   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:50.249802   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:50.749597   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:51.250236   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:51.750203   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:52.250132   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:52.749468   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:53.249519   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:53.750056   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:54.250286   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:54.750240   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:55.249980   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:55.750192   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:56.249940   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:56.749289   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:57.249615   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:57.750113   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:58.250100   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:58.750133   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:59.249427   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:59.749350   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:00.250267   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:00.749723   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:01.249549   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:01.749698   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:02.250043   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:02.750082   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:03.249861   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:03.749400   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:04.250213   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:04.749806   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:05.249569   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:05.750115   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:06.250182   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:06.750188   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:07.250041   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:07.749479   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:08.249641   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:08.749637   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:09.249803   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:09.749534   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:10.249365   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:10.749366   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:11.250045   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:11.750298   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:12.249766   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:12.749447   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:13.249356   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:13.749514   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:14.249942   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:14.749988   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:15.249405   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:15.749620   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:16.249962   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:16.750325   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:17.250198   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:17.749509   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:18.250076   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:18.749518   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:19.249440   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:19.750156   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:20.249686   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:20.750315   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:21.249755   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:21.749370   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:22.249767   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:22.749289   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:23.249386   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:23.749870   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:24.249424   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:24.750349   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
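(The run above shows the test polling roughly every 500ms for a kube-apiserver process with `sudo pgrep -xnf kube-apiserver.*minikube.*` and never finding one, after which it falls back to the diagnostics gathering below. A minimal sketch of that poll-until-deadline pattern follows; the helper names and timeout are assumptions for illustration, not the actual minikube wait code.)

// Sketch of the poll-until-deadline pattern visible in the log: run a check
// about every 500ms until it succeeds or a timeout expires. Illustrative only;
// not the actual minikube apiserver wait implementation.
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning is a hypothetical check mirroring the logged command.
func apiserverRunning() bool {
	// pgrep exits non-zero when no matching process exists.
	err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
	return err == nil
}

func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if apiserverRunning() {
			return nil
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
	}
	return errors.New("timed out waiting for kube-apiserver process")
}

func main() {
	if err := waitForAPIServer(2 * time.Minute); err != nil {
		fmt.Println(err) // this run hit the timeout path and started gathering logs
	}
}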
	I0920 18:19:25.249582   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:19:25.249657   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:19:25.287604   75577 cri.go:89] found id: ""
	I0920 18:19:25.287633   75577 logs.go:276] 0 containers: []
	W0920 18:19:25.287641   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:19:25.287647   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:19:25.287692   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:19:25.322093   75577 cri.go:89] found id: ""
	I0920 18:19:25.322127   75577 logs.go:276] 0 containers: []
	W0920 18:19:25.322137   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:19:25.322144   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:19:25.322194   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:19:25.354812   75577 cri.go:89] found id: ""
	I0920 18:19:25.354840   75577 logs.go:276] 0 containers: []
	W0920 18:19:25.354851   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:19:25.354863   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:19:25.354928   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:19:25.387777   75577 cri.go:89] found id: ""
	I0920 18:19:25.387804   75577 logs.go:276] 0 containers: []
	W0920 18:19:25.387813   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:19:25.387819   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:19:25.387867   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:19:25.420325   75577 cri.go:89] found id: ""
	I0920 18:19:25.420354   75577 logs.go:276] 0 containers: []
	W0920 18:19:25.420364   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:19:25.420372   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:19:25.420437   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:19:25.454824   75577 cri.go:89] found id: ""
	I0920 18:19:25.454853   75577 logs.go:276] 0 containers: []
	W0920 18:19:25.454864   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:19:25.454871   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:19:25.454933   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:19:25.493520   75577 cri.go:89] found id: ""
	I0920 18:19:25.493548   75577 logs.go:276] 0 containers: []
	W0920 18:19:25.493559   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:19:25.493566   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:19:25.493639   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:19:25.528795   75577 cri.go:89] found id: ""
	I0920 18:19:25.528820   75577 logs.go:276] 0 containers: []
	W0920 18:19:25.528828   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:19:25.528836   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:19:25.528847   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:19:25.571411   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:19:25.571442   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:19:25.625648   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:19:25.625693   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:19:25.639891   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:19:25.639920   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:19:25.769263   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:19:25.769289   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:19:25.769303   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
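(Each diagnostic cycle above checks every control-plane component with `sudo crictl ps -a --quiet --name=<component>`, finds no containers, then gathers kubelet, dmesg, CRI-O, and container-status logs and attempts `kubectl describe nodes`, which fails because nothing is listening on localhost:8443. A short sketch of that per-component check follows; the component list and helper names are assumptions, not the actual minikube logs.go code.)

// Illustrative sketch of the per-component container check repeated in the
// cycles above: list containers for each control-plane component with crictl
// and report which ones are missing. Not the minikube implementation.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs returns the container IDs crictl reports for the given name,
// or nil if the command fails or nothing matches.
func containerIDs(name string) []string {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil
	}
	return strings.Fields(string(out))
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, c := range components {
		if len(containerIDs(c)) == 0 {
			fmt.Printf("no container found matching %q\n", c)
		}
	}
}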
	I0920 18:19:28.346894   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:28.359891   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:19:28.359967   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:19:28.396117   75577 cri.go:89] found id: ""
	I0920 18:19:28.396144   75577 logs.go:276] 0 containers: []
	W0920 18:19:28.396152   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:19:28.396158   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:19:28.396206   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:19:28.447888   75577 cri.go:89] found id: ""
	I0920 18:19:28.447923   75577 logs.go:276] 0 containers: []
	W0920 18:19:28.447933   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:19:28.447941   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:19:28.447998   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:19:28.485018   75577 cri.go:89] found id: ""
	I0920 18:19:28.485046   75577 logs.go:276] 0 containers: []
	W0920 18:19:28.485054   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:19:28.485060   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:19:28.485108   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:19:28.520904   75577 cri.go:89] found id: ""
	I0920 18:19:28.520934   75577 logs.go:276] 0 containers: []
	W0920 18:19:28.520942   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:19:28.520948   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:19:28.520996   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:19:28.561375   75577 cri.go:89] found id: ""
	I0920 18:19:28.561407   75577 logs.go:276] 0 containers: []
	W0920 18:19:28.561415   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:19:28.561421   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:19:28.561467   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:19:28.597643   75577 cri.go:89] found id: ""
	I0920 18:19:28.597671   75577 logs.go:276] 0 containers: []
	W0920 18:19:28.597679   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:19:28.597685   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:19:28.597747   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:19:28.632885   75577 cri.go:89] found id: ""
	I0920 18:19:28.632912   75577 logs.go:276] 0 containers: []
	W0920 18:19:28.632921   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:19:28.632926   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:19:28.632973   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:19:28.670639   75577 cri.go:89] found id: ""
	I0920 18:19:28.670665   75577 logs.go:276] 0 containers: []
	W0920 18:19:28.670675   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:19:28.670699   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:19:28.670714   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:19:28.749623   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:19:28.749659   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:19:28.790808   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:19:28.790831   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:19:28.842027   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:19:28.842070   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:19:28.855927   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:19:28.855954   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:19:28.935807   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:19:31.436321   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:31.450159   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:19:31.450221   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:19:31.485444   75577 cri.go:89] found id: ""
	I0920 18:19:31.485483   75577 logs.go:276] 0 containers: []
	W0920 18:19:31.485494   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:19:31.485502   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:19:31.485562   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:19:31.520888   75577 cri.go:89] found id: ""
	I0920 18:19:31.520918   75577 logs.go:276] 0 containers: []
	W0920 18:19:31.520927   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:19:31.520941   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:19:31.521007   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:19:31.555001   75577 cri.go:89] found id: ""
	I0920 18:19:31.555029   75577 logs.go:276] 0 containers: []
	W0920 18:19:31.555040   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:19:31.555047   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:19:31.555111   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:19:31.588766   75577 cri.go:89] found id: ""
	I0920 18:19:31.588794   75577 logs.go:276] 0 containers: []
	W0920 18:19:31.588802   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:19:31.588808   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:19:31.588872   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:19:31.622006   75577 cri.go:89] found id: ""
	I0920 18:19:31.622037   75577 logs.go:276] 0 containers: []
	W0920 18:19:31.622048   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:19:31.622056   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:19:31.622110   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:19:31.659475   75577 cri.go:89] found id: ""
	I0920 18:19:31.659508   75577 logs.go:276] 0 containers: []
	W0920 18:19:31.659519   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:19:31.659527   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:19:31.659589   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:19:31.695380   75577 cri.go:89] found id: ""
	I0920 18:19:31.695415   75577 logs.go:276] 0 containers: []
	W0920 18:19:31.695426   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:19:31.695436   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:19:31.695521   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:19:31.729507   75577 cri.go:89] found id: ""
	I0920 18:19:31.729540   75577 logs.go:276] 0 containers: []
	W0920 18:19:31.729550   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:19:31.729561   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:19:31.729574   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:19:31.781857   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:19:31.781908   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:19:31.795325   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:19:31.795356   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:19:31.868684   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:19:31.868708   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:19:31.868722   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:19:31.945334   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:19:31.945371   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:19:34.485723   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:34.499039   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:19:34.499106   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:19:34.534664   75577 cri.go:89] found id: ""
	I0920 18:19:34.534697   75577 logs.go:276] 0 containers: []
	W0920 18:19:34.534709   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:19:34.534717   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:19:34.534777   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:19:34.567515   75577 cri.go:89] found id: ""
	I0920 18:19:34.567545   75577 logs.go:276] 0 containers: []
	W0920 18:19:34.567556   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:19:34.567564   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:19:34.567624   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:19:34.601653   75577 cri.go:89] found id: ""
	I0920 18:19:34.601693   75577 logs.go:276] 0 containers: []
	W0920 18:19:34.601704   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:19:34.601712   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:19:34.601775   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:19:34.635238   75577 cri.go:89] found id: ""
	I0920 18:19:34.635271   75577 logs.go:276] 0 containers: []
	W0920 18:19:34.635282   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:19:34.635291   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:19:34.635361   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:19:34.669665   75577 cri.go:89] found id: ""
	I0920 18:19:34.669689   75577 logs.go:276] 0 containers: []
	W0920 18:19:34.669697   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:19:34.669703   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:19:34.669751   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:19:34.705984   75577 cri.go:89] found id: ""
	I0920 18:19:34.706012   75577 logs.go:276] 0 containers: []
	W0920 18:19:34.706022   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:19:34.706032   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:19:34.706110   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:19:34.740052   75577 cri.go:89] found id: ""
	I0920 18:19:34.740079   75577 logs.go:276] 0 containers: []
	W0920 18:19:34.740087   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:19:34.740092   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:19:34.740139   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:19:34.774930   75577 cri.go:89] found id: ""
	I0920 18:19:34.774962   75577 logs.go:276] 0 containers: []
	W0920 18:19:34.774973   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:19:34.774984   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:19:34.775081   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:19:34.791674   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:19:34.791714   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:19:34.894988   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:19:34.895019   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:19:34.895035   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:19:34.973785   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:19:34.973823   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:19:35.010928   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:19:35.010964   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:19:37.561543   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:37.574865   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:19:37.574925   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:19:37.609145   75577 cri.go:89] found id: ""
	I0920 18:19:37.609170   75577 logs.go:276] 0 containers: []
	W0920 18:19:37.609178   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:19:37.609183   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:19:37.609247   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:19:37.645080   75577 cri.go:89] found id: ""
	I0920 18:19:37.645107   75577 logs.go:276] 0 containers: []
	W0920 18:19:37.645115   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:19:37.645121   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:19:37.645168   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:19:37.680060   75577 cri.go:89] found id: ""
	I0920 18:19:37.680094   75577 logs.go:276] 0 containers: []
	W0920 18:19:37.680105   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:19:37.680113   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:19:37.680173   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:19:37.713586   75577 cri.go:89] found id: ""
	I0920 18:19:37.713618   75577 logs.go:276] 0 containers: []
	W0920 18:19:37.713629   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:19:37.713636   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:19:37.713694   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:19:37.750249   75577 cri.go:89] found id: ""
	I0920 18:19:37.750274   75577 logs.go:276] 0 containers: []
	W0920 18:19:37.750282   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:19:37.750289   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:19:37.750351   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:19:37.785613   75577 cri.go:89] found id: ""
	I0920 18:19:37.785642   75577 logs.go:276] 0 containers: []
	W0920 18:19:37.785650   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:19:37.785656   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:19:37.785705   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:19:37.824240   75577 cri.go:89] found id: ""
	I0920 18:19:37.824267   75577 logs.go:276] 0 containers: []
	W0920 18:19:37.824278   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:19:37.824286   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:19:37.824348   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:19:37.858522   75577 cri.go:89] found id: ""
	I0920 18:19:37.858547   75577 logs.go:276] 0 containers: []
	W0920 18:19:37.858556   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:19:37.858564   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:19:37.858575   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:19:37.939852   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:19:37.939891   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:19:37.981029   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:19:37.981062   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:19:38.030606   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:19:38.030633   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:19:38.043914   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:19:38.043953   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:19:38.124846   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:19:40.625196   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:40.640638   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:19:40.640708   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:19:40.687107   75577 cri.go:89] found id: ""
	I0920 18:19:40.687131   75577 logs.go:276] 0 containers: []
	W0920 18:19:40.687140   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:19:40.687148   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:19:40.687206   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:19:40.725727   75577 cri.go:89] found id: ""
	I0920 18:19:40.725858   75577 logs.go:276] 0 containers: []
	W0920 18:19:40.725875   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:19:40.725889   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:19:40.725967   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:19:40.762965   75577 cri.go:89] found id: ""
	I0920 18:19:40.762988   75577 logs.go:276] 0 containers: []
	W0920 18:19:40.762996   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:19:40.763002   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:19:40.763049   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:19:40.797387   75577 cri.go:89] found id: ""
	I0920 18:19:40.797415   75577 logs.go:276] 0 containers: []
	W0920 18:19:40.797427   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:19:40.797439   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:19:40.797498   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:19:40.834482   75577 cri.go:89] found id: ""
	I0920 18:19:40.834513   75577 logs.go:276] 0 containers: []
	W0920 18:19:40.834521   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:19:40.834528   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:19:40.834578   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:19:40.870062   75577 cri.go:89] found id: ""
	I0920 18:19:40.870090   75577 logs.go:276] 0 containers: []
	W0920 18:19:40.870099   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:19:40.870106   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:19:40.870164   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:19:40.906613   75577 cri.go:89] found id: ""
	I0920 18:19:40.906642   75577 logs.go:276] 0 containers: []
	W0920 18:19:40.906653   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:19:40.906662   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:19:40.906721   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:19:40.953033   75577 cri.go:89] found id: ""
	I0920 18:19:40.953056   75577 logs.go:276] 0 containers: []
	W0920 18:19:40.953065   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:19:40.953073   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:19:40.953083   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:19:40.998490   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:19:40.998523   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:19:41.051488   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:19:41.051525   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:19:41.067908   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:19:41.067937   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:19:41.144854   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:19:41.144878   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:19:41.144890   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:19:43.723613   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:43.736857   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:19:43.736924   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:19:43.772588   75577 cri.go:89] found id: ""
	I0920 18:19:43.772624   75577 logs.go:276] 0 containers: []
	W0920 18:19:43.772635   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:19:43.772643   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:19:43.772712   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:19:43.808580   75577 cri.go:89] found id: ""
	I0920 18:19:43.808611   75577 logs.go:276] 0 containers: []
	W0920 18:19:43.808622   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:19:43.808629   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:19:43.808695   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:19:43.847167   75577 cri.go:89] found id: ""
	I0920 18:19:43.847207   75577 logs.go:276] 0 containers: []
	W0920 18:19:43.847218   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:19:43.847243   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:19:43.847302   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:19:43.883614   75577 cri.go:89] found id: ""
	I0920 18:19:43.883646   75577 logs.go:276] 0 containers: []
	W0920 18:19:43.883658   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:19:43.883667   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:19:43.883738   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:19:43.917602   75577 cri.go:89] found id: ""
	I0920 18:19:43.917627   75577 logs.go:276] 0 containers: []
	W0920 18:19:43.917635   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:19:43.917641   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:19:43.917694   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:19:43.953232   75577 cri.go:89] found id: ""
	I0920 18:19:43.953259   75577 logs.go:276] 0 containers: []
	W0920 18:19:43.953268   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:19:43.953273   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:19:43.953325   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:19:43.991204   75577 cri.go:89] found id: ""
	I0920 18:19:43.991234   75577 logs.go:276] 0 containers: []
	W0920 18:19:43.991246   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:19:43.991253   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:19:43.991334   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:19:44.028117   75577 cri.go:89] found id: ""
	I0920 18:19:44.028140   75577 logs.go:276] 0 containers: []
	W0920 18:19:44.028147   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:19:44.028154   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:19:44.028164   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:19:44.068175   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:19:44.068203   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:19:44.119953   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:19:44.119993   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:19:44.134127   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:19:44.134154   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:19:44.211563   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:19:44.211592   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:19:44.211604   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:19:46.787328   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:46.803429   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:19:46.803516   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:19:46.844813   75577 cri.go:89] found id: ""
	I0920 18:19:46.844839   75577 logs.go:276] 0 containers: []
	W0920 18:19:46.844850   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:19:46.844856   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:19:46.844914   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:19:46.878441   75577 cri.go:89] found id: ""
	I0920 18:19:46.878483   75577 logs.go:276] 0 containers: []
	W0920 18:19:46.878497   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:19:46.878506   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:19:46.878580   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:19:46.913933   75577 cri.go:89] found id: ""
	I0920 18:19:46.913976   75577 logs.go:276] 0 containers: []
	W0920 18:19:46.913986   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:19:46.913993   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:19:46.914066   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:19:46.948577   75577 cri.go:89] found id: ""
	I0920 18:19:46.948609   75577 logs.go:276] 0 containers: []
	W0920 18:19:46.948618   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:19:46.948625   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:19:46.948689   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:19:46.982742   75577 cri.go:89] found id: ""
	I0920 18:19:46.982770   75577 logs.go:276] 0 containers: []
	W0920 18:19:46.982778   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:19:46.982785   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:19:46.982836   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:19:47.017075   75577 cri.go:89] found id: ""
	I0920 18:19:47.017107   75577 logs.go:276] 0 containers: []
	W0920 18:19:47.017120   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:19:47.017128   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:19:47.017190   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:19:47.055494   75577 cri.go:89] found id: ""
	I0920 18:19:47.055520   75577 logs.go:276] 0 containers: []
	W0920 18:19:47.055528   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:19:47.055534   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:19:47.055586   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:19:47.091966   75577 cri.go:89] found id: ""
	I0920 18:19:47.091998   75577 logs.go:276] 0 containers: []
	W0920 18:19:47.092006   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:19:47.092019   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:19:47.092035   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:19:47.142916   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:19:47.142955   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:19:47.158471   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:19:47.158502   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:19:47.241530   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:19:47.241553   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:19:47.241567   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:19:47.316958   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:19:47.316992   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:19:49.854403   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:49.867317   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:19:49.867431   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:19:49.903627   75577 cri.go:89] found id: ""
	I0920 18:19:49.903661   75577 logs.go:276] 0 containers: []
	W0920 18:19:49.903674   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:19:49.903682   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:19:49.903745   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:19:49.936857   75577 cri.go:89] found id: ""
	I0920 18:19:49.936892   75577 logs.go:276] 0 containers: []
	W0920 18:19:49.936902   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:19:49.936909   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:19:49.936975   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:19:49.974403   75577 cri.go:89] found id: ""
	I0920 18:19:49.974433   75577 logs.go:276] 0 containers: []
	W0920 18:19:49.974441   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:19:49.974447   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:19:49.974505   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:19:50.011810   75577 cri.go:89] found id: ""
	I0920 18:19:50.011839   75577 logs.go:276] 0 containers: []
	W0920 18:19:50.011850   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:19:50.011857   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:19:50.011921   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:19:50.050491   75577 cri.go:89] found id: ""
	I0920 18:19:50.050527   75577 logs.go:276] 0 containers: []
	W0920 18:19:50.050538   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:19:50.050546   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:19:50.050610   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:19:50.088270   75577 cri.go:89] found id: ""
	I0920 18:19:50.088299   75577 logs.go:276] 0 containers: []
	W0920 18:19:50.088308   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:19:50.088314   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:19:50.088375   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:19:50.126355   75577 cri.go:89] found id: ""
	I0920 18:19:50.126382   75577 logs.go:276] 0 containers: []
	W0920 18:19:50.126392   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:19:50.126399   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:19:50.126460   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:19:50.160770   75577 cri.go:89] found id: ""
	I0920 18:19:50.160798   75577 logs.go:276] 0 containers: []
	W0920 18:19:50.160808   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:19:50.160819   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:19:50.160834   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:19:50.212866   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:19:50.212914   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:19:50.227269   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:19:50.227295   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:19:50.301363   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:19:50.301392   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:19:50.301406   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:19:50.376293   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:19:50.376330   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:19:52.916567   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:52.930445   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:19:52.930525   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:19:52.965360   75577 cri.go:89] found id: ""
	I0920 18:19:52.965392   75577 logs.go:276] 0 containers: []
	W0920 18:19:52.965403   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:19:52.965411   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:19:52.965468   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:19:53.000439   75577 cri.go:89] found id: ""
	I0920 18:19:53.000480   75577 logs.go:276] 0 containers: []
	W0920 18:19:53.000495   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:19:53.000503   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:19:53.000577   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:19:53.034598   75577 cri.go:89] found id: ""
	I0920 18:19:53.034630   75577 logs.go:276] 0 containers: []
	W0920 18:19:53.034640   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:19:53.034647   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:19:53.034744   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:19:53.068636   75577 cri.go:89] found id: ""
	I0920 18:19:53.068664   75577 logs.go:276] 0 containers: []
	W0920 18:19:53.068673   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:19:53.068678   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:19:53.068750   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:19:53.104381   75577 cri.go:89] found id: ""
	I0920 18:19:53.104408   75577 logs.go:276] 0 containers: []
	W0920 18:19:53.104416   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:19:53.104421   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:19:53.104474   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:19:53.137865   75577 cri.go:89] found id: ""
	I0920 18:19:53.137897   75577 logs.go:276] 0 containers: []
	W0920 18:19:53.137909   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:19:53.137922   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:19:53.137983   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:19:53.171825   75577 cri.go:89] found id: ""
	I0920 18:19:53.171861   75577 logs.go:276] 0 containers: []
	W0920 18:19:53.171874   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:19:53.171883   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:19:53.171952   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:19:53.210742   75577 cri.go:89] found id: ""
	I0920 18:19:53.210774   75577 logs.go:276] 0 containers: []
	W0920 18:19:53.210784   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:19:53.210796   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:19:53.210811   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:19:53.285597   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:19:53.285637   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:19:53.329821   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:19:53.329871   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:19:53.381362   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:19:53.381415   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:19:53.396044   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:19:53.396074   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:19:53.471582   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:19:55.972369   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:55.987205   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:19:55.987281   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:19:56.031322   75577 cri.go:89] found id: ""
	I0920 18:19:56.031355   75577 logs.go:276] 0 containers: []
	W0920 18:19:56.031363   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:19:56.031368   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:19:56.031420   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:19:56.066442   75577 cri.go:89] found id: ""
	I0920 18:19:56.066487   75577 logs.go:276] 0 containers: []
	W0920 18:19:56.066576   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:19:56.066617   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:19:56.066695   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:19:56.101818   75577 cri.go:89] found id: ""
	I0920 18:19:56.101864   75577 logs.go:276] 0 containers: []
	W0920 18:19:56.101876   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:19:56.101883   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:19:56.101947   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:19:56.139513   75577 cri.go:89] found id: ""
	I0920 18:19:56.139545   75577 logs.go:276] 0 containers: []
	W0920 18:19:56.139557   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:19:56.139565   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:19:56.139618   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:19:56.173638   75577 cri.go:89] found id: ""
	I0920 18:19:56.173669   75577 logs.go:276] 0 containers: []
	W0920 18:19:56.173680   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:19:56.173688   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:19:56.173752   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:19:56.210657   75577 cri.go:89] found id: ""
	I0920 18:19:56.210689   75577 logs.go:276] 0 containers: []
	W0920 18:19:56.210700   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:19:56.210709   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:19:56.210768   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:19:56.246035   75577 cri.go:89] found id: ""
	I0920 18:19:56.246063   75577 logs.go:276] 0 containers: []
	W0920 18:19:56.246071   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:19:56.246077   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:19:56.246123   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:19:56.280766   75577 cri.go:89] found id: ""
	I0920 18:19:56.280796   75577 logs.go:276] 0 containers: []
	W0920 18:19:56.280807   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:19:56.280818   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:19:56.280834   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:19:56.320511   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:19:56.320540   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:19:56.373746   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:19:56.373785   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:19:56.389294   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:19:56.389322   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:19:56.460079   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:19:56.460100   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:19:56.460112   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:19:59.044541   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:59.058395   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:19:59.058464   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:19:59.097442   75577 cri.go:89] found id: ""
	I0920 18:19:59.097482   75577 logs.go:276] 0 containers: []
	W0920 18:19:59.097495   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:19:59.097512   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:19:59.097593   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:19:59.133091   75577 cri.go:89] found id: ""
	I0920 18:19:59.133116   75577 logs.go:276] 0 containers: []
	W0920 18:19:59.133128   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:19:59.133135   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:19:59.133264   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:19:59.166902   75577 cri.go:89] found id: ""
	I0920 18:19:59.166927   75577 logs.go:276] 0 containers: []
	W0920 18:19:59.166938   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:19:59.166945   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:19:59.167008   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:19:59.202551   75577 cri.go:89] found id: ""
	I0920 18:19:59.202573   75577 logs.go:276] 0 containers: []
	W0920 18:19:59.202581   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:19:59.202586   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:19:59.202633   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:19:59.235197   75577 cri.go:89] found id: ""
	I0920 18:19:59.235222   75577 logs.go:276] 0 containers: []
	W0920 18:19:59.235230   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:19:59.235236   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:19:59.235286   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:19:59.269148   75577 cri.go:89] found id: ""
	I0920 18:19:59.269176   75577 logs.go:276] 0 containers: []
	W0920 18:19:59.269187   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:19:59.269196   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:19:59.269262   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:19:59.307670   75577 cri.go:89] found id: ""
	I0920 18:19:59.307692   75577 logs.go:276] 0 containers: []
	W0920 18:19:59.307699   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:19:59.307705   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:19:59.307753   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:19:59.340554   75577 cri.go:89] found id: ""
	I0920 18:19:59.340589   75577 logs.go:276] 0 containers: []
	W0920 18:19:59.340599   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:19:59.340610   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:19:59.340623   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:19:59.382339   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:19:59.382470   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:19:59.432176   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:19:59.432211   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:19:59.445889   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:19:59.445922   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:19:59.516564   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:19:59.516590   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:19:59.516609   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:02.101918   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:02.115064   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:02.115128   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:02.148718   75577 cri.go:89] found id: ""
	I0920 18:20:02.148749   75577 logs.go:276] 0 containers: []
	W0920 18:20:02.148757   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:02.148764   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:02.148822   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:02.182673   75577 cri.go:89] found id: ""
	I0920 18:20:02.182711   75577 logs.go:276] 0 containers: []
	W0920 18:20:02.182722   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:02.182729   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:02.182797   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:02.218734   75577 cri.go:89] found id: ""
	I0920 18:20:02.218771   75577 logs.go:276] 0 containers: []
	W0920 18:20:02.218782   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:02.218791   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:02.218856   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:02.258956   75577 cri.go:89] found id: ""
	I0920 18:20:02.258981   75577 logs.go:276] 0 containers: []
	W0920 18:20:02.258989   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:02.258995   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:02.259066   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:02.294879   75577 cri.go:89] found id: ""
	I0920 18:20:02.294910   75577 logs.go:276] 0 containers: []
	W0920 18:20:02.294919   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:02.294925   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:02.294971   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:02.331010   75577 cri.go:89] found id: ""
	I0920 18:20:02.331039   75577 logs.go:276] 0 containers: []
	W0920 18:20:02.331049   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:02.331057   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:02.331122   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:02.365023   75577 cri.go:89] found id: ""
	I0920 18:20:02.365056   75577 logs.go:276] 0 containers: []
	W0920 18:20:02.365066   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:02.365072   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:02.365122   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:02.400501   75577 cri.go:89] found id: ""
	I0920 18:20:02.400528   75577 logs.go:276] 0 containers: []
	W0920 18:20:02.400537   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:02.400545   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:02.400556   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:02.450597   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:02.450636   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:02.467637   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:02.467671   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:02.540668   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:02.540690   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:02.540706   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:02.629706   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:02.629752   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:05.168119   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:05.182954   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:05.183043   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:05.221938   75577 cri.go:89] found id: ""
	I0920 18:20:05.221971   75577 logs.go:276] 0 containers: []
	W0920 18:20:05.221980   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:05.221990   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:05.222051   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:05.258454   75577 cri.go:89] found id: ""
	I0920 18:20:05.258479   75577 logs.go:276] 0 containers: []
	W0920 18:20:05.258487   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:05.258492   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:05.258540   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:05.293091   75577 cri.go:89] found id: ""
	I0920 18:20:05.293125   75577 logs.go:276] 0 containers: []
	W0920 18:20:05.293138   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:05.293146   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:05.293196   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:05.332992   75577 cri.go:89] found id: ""
	I0920 18:20:05.333025   75577 logs.go:276] 0 containers: []
	W0920 18:20:05.333034   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:05.333040   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:05.333088   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:05.368743   75577 cri.go:89] found id: ""
	I0920 18:20:05.368778   75577 logs.go:276] 0 containers: []
	W0920 18:20:05.368790   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:05.368798   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:05.368859   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:05.404913   75577 cri.go:89] found id: ""
	I0920 18:20:05.404941   75577 logs.go:276] 0 containers: []
	W0920 18:20:05.404948   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:05.404954   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:05.405003   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:05.442111   75577 cri.go:89] found id: ""
	I0920 18:20:05.442143   75577 logs.go:276] 0 containers: []
	W0920 18:20:05.442154   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:05.442163   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:05.442228   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:05.478808   75577 cri.go:89] found id: ""
	I0920 18:20:05.478842   75577 logs.go:276] 0 containers: []
	W0920 18:20:05.478853   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:05.478865   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:05.478879   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:05.531653   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:05.531691   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:05.545181   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:05.545210   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:05.615009   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:05.615041   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:05.615059   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:05.690842   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:05.690871   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:08.230851   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:08.244539   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:08.244609   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:08.281123   75577 cri.go:89] found id: ""
	I0920 18:20:08.281155   75577 logs.go:276] 0 containers: []
	W0920 18:20:08.281167   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:08.281174   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:08.281226   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:08.319704   75577 cri.go:89] found id: ""
	I0920 18:20:08.319740   75577 logs.go:276] 0 containers: []
	W0920 18:20:08.319754   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:08.319763   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:08.319828   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:08.354589   75577 cri.go:89] found id: ""
	I0920 18:20:08.354619   75577 logs.go:276] 0 containers: []
	W0920 18:20:08.354631   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:08.354638   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:08.354703   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:08.391580   75577 cri.go:89] found id: ""
	I0920 18:20:08.391603   75577 logs.go:276] 0 containers: []
	W0920 18:20:08.391612   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:08.391617   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:08.391666   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:08.425596   75577 cri.go:89] found id: ""
	I0920 18:20:08.425622   75577 logs.go:276] 0 containers: []
	W0920 18:20:08.425630   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:08.425638   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:08.425704   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:08.458720   75577 cri.go:89] found id: ""
	I0920 18:20:08.458747   75577 logs.go:276] 0 containers: []
	W0920 18:20:08.458758   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:08.458764   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:08.458812   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:08.496104   75577 cri.go:89] found id: ""
	I0920 18:20:08.496137   75577 logs.go:276] 0 containers: []
	W0920 18:20:08.496148   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:08.496155   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:08.496210   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:08.530961   75577 cri.go:89] found id: ""
	I0920 18:20:08.530989   75577 logs.go:276] 0 containers: []
	W0920 18:20:08.531000   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:08.531010   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:08.531023   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:08.568512   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:08.568541   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:08.619716   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:08.619754   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:08.634358   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:08.634390   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:08.721465   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:08.721488   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:08.721501   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:11.303942   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:11.316686   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:11.316759   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:11.353137   75577 cri.go:89] found id: ""
	I0920 18:20:11.353161   75577 logs.go:276] 0 containers: []
	W0920 18:20:11.353169   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:11.353176   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:11.353229   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:11.388271   75577 cri.go:89] found id: ""
	I0920 18:20:11.388298   75577 logs.go:276] 0 containers: []
	W0920 18:20:11.388315   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:11.388322   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:11.388388   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:11.422673   75577 cri.go:89] found id: ""
	I0920 18:20:11.422700   75577 logs.go:276] 0 containers: []
	W0920 18:20:11.422708   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:11.422714   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:11.422768   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:11.458869   75577 cri.go:89] found id: ""
	I0920 18:20:11.458900   75577 logs.go:276] 0 containers: []
	W0920 18:20:11.458910   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:11.458917   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:11.459068   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:11.494128   75577 cri.go:89] found id: ""
	I0920 18:20:11.494161   75577 logs.go:276] 0 containers: []
	W0920 18:20:11.494172   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:11.494180   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:11.494246   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:11.529111   75577 cri.go:89] found id: ""
	I0920 18:20:11.529135   75577 logs.go:276] 0 containers: []
	W0920 18:20:11.529150   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:11.529157   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:11.529223   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:11.562287   75577 cri.go:89] found id: ""
	I0920 18:20:11.562314   75577 logs.go:276] 0 containers: []
	W0920 18:20:11.562323   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:11.562329   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:11.562381   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:11.600066   75577 cri.go:89] found id: ""
	I0920 18:20:11.600106   75577 logs.go:276] 0 containers: []
	W0920 18:20:11.600117   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:11.600128   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:11.600143   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:11.681628   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:11.681665   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:11.722173   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:11.722205   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:11.773132   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:11.773171   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:11.787183   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:11.787215   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:11.856304   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:14.356881   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:14.371658   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:14.371729   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:14.406983   75577 cri.go:89] found id: ""
	I0920 18:20:14.407009   75577 logs.go:276] 0 containers: []
	W0920 18:20:14.407017   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:14.407022   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:14.407075   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:14.481048   75577 cri.go:89] found id: ""
	I0920 18:20:14.481075   75577 logs.go:276] 0 containers: []
	W0920 18:20:14.481086   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:14.481094   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:14.481156   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:14.513683   75577 cri.go:89] found id: ""
	I0920 18:20:14.513711   75577 logs.go:276] 0 containers: []
	W0920 18:20:14.513719   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:14.513725   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:14.513797   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:14.553330   75577 cri.go:89] found id: ""
	I0920 18:20:14.553363   75577 logs.go:276] 0 containers: []
	W0920 18:20:14.553375   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:14.553381   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:14.553446   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:14.590803   75577 cri.go:89] found id: ""
	I0920 18:20:14.590837   75577 logs.go:276] 0 containers: []
	W0920 18:20:14.590848   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:14.590856   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:14.590927   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:14.625100   75577 cri.go:89] found id: ""
	I0920 18:20:14.625130   75577 logs.go:276] 0 containers: []
	W0920 18:20:14.625141   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:14.625151   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:14.625219   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:14.659313   75577 cri.go:89] found id: ""
	I0920 18:20:14.659342   75577 logs.go:276] 0 containers: []
	W0920 18:20:14.659351   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:14.659357   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:14.659418   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:14.694901   75577 cri.go:89] found id: ""
	I0920 18:20:14.694931   75577 logs.go:276] 0 containers: []
	W0920 18:20:14.694939   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:14.694951   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:14.694966   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:14.708406   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:14.708437   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:14.785174   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:14.785200   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:14.785214   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:14.873622   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:14.873666   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:14.919130   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:14.919166   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:17.468595   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:17.483397   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:17.483473   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:17.522415   75577 cri.go:89] found id: ""
	I0920 18:20:17.522445   75577 logs.go:276] 0 containers: []
	W0920 18:20:17.522455   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:17.522463   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:17.522523   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:17.558957   75577 cri.go:89] found id: ""
	I0920 18:20:17.558991   75577 logs.go:276] 0 containers: []
	W0920 18:20:17.559002   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:17.559010   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:17.559174   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:17.595183   75577 cri.go:89] found id: ""
	I0920 18:20:17.595217   75577 logs.go:276] 0 containers: []
	W0920 18:20:17.595229   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:17.595237   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:17.595302   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:17.631727   75577 cri.go:89] found id: ""
	I0920 18:20:17.631757   75577 logs.go:276] 0 containers: []
	W0920 18:20:17.631768   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:17.631775   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:17.631894   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:17.666374   75577 cri.go:89] found id: ""
	I0920 18:20:17.666409   75577 logs.go:276] 0 containers: []
	W0920 18:20:17.666420   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:17.666427   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:17.666488   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:17.702256   75577 cri.go:89] found id: ""
	I0920 18:20:17.702281   75577 logs.go:276] 0 containers: []
	W0920 18:20:17.702291   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:17.702299   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:17.702370   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:17.742133   75577 cri.go:89] found id: ""
	I0920 18:20:17.742161   75577 logs.go:276] 0 containers: []
	W0920 18:20:17.742172   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:17.742179   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:17.742249   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:17.781269   75577 cri.go:89] found id: ""
	I0920 18:20:17.781300   75577 logs.go:276] 0 containers: []
	W0920 18:20:17.781311   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:17.781321   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:17.781336   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:17.835886   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:17.835923   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:17.851365   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:17.851396   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:17.932682   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:17.932711   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:17.932726   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:18.013680   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:18.013731   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:20.555074   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:20.568007   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:20.568071   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:20.602250   75577 cri.go:89] found id: ""
	I0920 18:20:20.602278   75577 logs.go:276] 0 containers: []
	W0920 18:20:20.602287   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:20.602293   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:20.602353   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:20.636711   75577 cri.go:89] found id: ""
	I0920 18:20:20.636739   75577 logs.go:276] 0 containers: []
	W0920 18:20:20.636748   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:20.636753   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:20.636807   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:20.669529   75577 cri.go:89] found id: ""
	I0920 18:20:20.669570   75577 logs.go:276] 0 containers: []
	W0920 18:20:20.669583   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:20.669593   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:20.669651   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:20.710330   75577 cri.go:89] found id: ""
	I0920 18:20:20.710364   75577 logs.go:276] 0 containers: []
	W0920 18:20:20.710372   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:20.710378   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:20.710427   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:20.747664   75577 cri.go:89] found id: ""
	I0920 18:20:20.747690   75577 logs.go:276] 0 containers: []
	W0920 18:20:20.747699   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:20.747705   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:20.747760   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:20.785680   75577 cri.go:89] found id: ""
	I0920 18:20:20.785717   75577 logs.go:276] 0 containers: []
	W0920 18:20:20.785726   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:20.785732   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:20.785794   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:20.822570   75577 cri.go:89] found id: ""
	I0920 18:20:20.822599   75577 logs.go:276] 0 containers: []
	W0920 18:20:20.822608   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:20.822613   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:20.822659   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:20.856739   75577 cri.go:89] found id: ""
	I0920 18:20:20.856772   75577 logs.go:276] 0 containers: []
	W0920 18:20:20.856786   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:20.856806   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:20.856822   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:20.896606   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:20.896643   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:20.948275   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:20.948313   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:20.962576   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:20.962606   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:21.033511   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:21.033534   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:21.033547   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:23.615731   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:23.628619   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:23.628698   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:23.664344   75577 cri.go:89] found id: ""
	I0920 18:20:23.664367   75577 logs.go:276] 0 containers: []
	W0920 18:20:23.664375   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:23.664381   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:23.664431   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:23.698791   75577 cri.go:89] found id: ""
	I0920 18:20:23.698823   75577 logs.go:276] 0 containers: []
	W0920 18:20:23.698832   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:23.698839   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:23.698889   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:23.732552   75577 cri.go:89] found id: ""
	I0920 18:20:23.732590   75577 logs.go:276] 0 containers: []
	W0920 18:20:23.732602   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:23.732610   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:23.732676   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:23.769463   75577 cri.go:89] found id: ""
	I0920 18:20:23.769490   75577 logs.go:276] 0 containers: []
	W0920 18:20:23.769501   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:23.769508   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:23.769567   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:23.807329   75577 cri.go:89] found id: ""
	I0920 18:20:23.807361   75577 logs.go:276] 0 containers: []
	W0920 18:20:23.807374   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:23.807382   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:23.807442   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:23.847889   75577 cri.go:89] found id: ""
	I0920 18:20:23.847913   75577 logs.go:276] 0 containers: []
	W0920 18:20:23.847920   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:23.847927   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:23.847985   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:23.883287   75577 cri.go:89] found id: ""
	I0920 18:20:23.883314   75577 logs.go:276] 0 containers: []
	W0920 18:20:23.883322   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:23.883335   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:23.883389   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:23.920006   75577 cri.go:89] found id: ""
	I0920 18:20:23.920034   75577 logs.go:276] 0 containers: []
	W0920 18:20:23.920045   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:23.920057   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:23.920070   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:23.995572   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:23.995608   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:24.035953   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:24.035983   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:24.085803   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:24.085860   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:24.100226   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:24.100255   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:24.173555   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:26.673726   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:26.687372   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:26.687449   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:26.726544   75577 cri.go:89] found id: ""
	I0920 18:20:26.726574   75577 logs.go:276] 0 containers: []
	W0920 18:20:26.726583   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:26.726590   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:26.726651   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:26.761542   75577 cri.go:89] found id: ""
	I0920 18:20:26.761571   75577 logs.go:276] 0 containers: []
	W0920 18:20:26.761580   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:26.761587   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:26.761639   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:26.803869   75577 cri.go:89] found id: ""
	I0920 18:20:26.803896   75577 logs.go:276] 0 containers: []
	W0920 18:20:26.803904   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:26.803910   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:26.803970   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:26.854835   75577 cri.go:89] found id: ""
	I0920 18:20:26.854876   75577 logs.go:276] 0 containers: []
	W0920 18:20:26.854888   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:26.854896   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:26.854958   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:26.896113   75577 cri.go:89] found id: ""
	I0920 18:20:26.896151   75577 logs.go:276] 0 containers: []
	W0920 18:20:26.896162   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:26.896170   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:26.896255   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:26.942828   75577 cri.go:89] found id: ""
	I0920 18:20:26.942853   75577 logs.go:276] 0 containers: []
	W0920 18:20:26.942863   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:26.942930   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:26.943007   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:26.977246   75577 cri.go:89] found id: ""
	I0920 18:20:26.977278   75577 logs.go:276] 0 containers: []
	W0920 18:20:26.977289   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:26.977296   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:26.977367   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:27.012408   75577 cri.go:89] found id: ""
	I0920 18:20:27.012440   75577 logs.go:276] 0 containers: []
	W0920 18:20:27.012451   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:27.012462   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:27.012477   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:27.063970   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:27.064017   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:27.078082   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:27.078119   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:27.148050   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:27.148079   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:27.148094   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:27.230836   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:27.230880   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:29.771845   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:29.785479   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:29.785553   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:29.823101   75577 cri.go:89] found id: ""
	I0920 18:20:29.823132   75577 logs.go:276] 0 containers: []
	W0920 18:20:29.823143   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:29.823150   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:29.823228   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:29.858679   75577 cri.go:89] found id: ""
	I0920 18:20:29.858713   75577 logs.go:276] 0 containers: []
	W0920 18:20:29.858724   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:29.858732   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:29.858796   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:29.901044   75577 cri.go:89] found id: ""
	I0920 18:20:29.901073   75577 logs.go:276] 0 containers: []
	W0920 18:20:29.901083   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:29.901091   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:29.901160   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:29.937743   75577 cri.go:89] found id: ""
	I0920 18:20:29.937775   75577 logs.go:276] 0 containers: []
	W0920 18:20:29.937792   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:29.937800   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:29.937884   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:29.972813   75577 cri.go:89] found id: ""
	I0920 18:20:29.972838   75577 logs.go:276] 0 containers: []
	W0920 18:20:29.972846   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:29.972852   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:29.972901   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:30.008024   75577 cri.go:89] found id: ""
	I0920 18:20:30.008053   75577 logs.go:276] 0 containers: []
	W0920 18:20:30.008062   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:30.008068   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:30.008117   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:30.048544   75577 cri.go:89] found id: ""
	I0920 18:20:30.048577   75577 logs.go:276] 0 containers: []
	W0920 18:20:30.048585   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:30.048591   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:30.048643   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:30.084501   75577 cri.go:89] found id: ""
	I0920 18:20:30.084525   75577 logs.go:276] 0 containers: []
	W0920 18:20:30.084534   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:30.084545   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:30.084559   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:30.136234   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:30.136279   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:30.149557   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:30.149591   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:30.229765   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:30.229792   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:30.229806   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:30.307786   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:30.307825   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:32.845220   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:32.859734   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:32.859810   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:32.896390   75577 cri.go:89] found id: ""
	I0920 18:20:32.896434   75577 logs.go:276] 0 containers: []
	W0920 18:20:32.896446   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:32.896464   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:32.896538   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:32.934548   75577 cri.go:89] found id: ""
	I0920 18:20:32.934572   75577 logs.go:276] 0 containers: []
	W0920 18:20:32.934580   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:32.934587   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:32.934640   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:32.982987   75577 cri.go:89] found id: ""
	I0920 18:20:32.983013   75577 logs.go:276] 0 containers: []
	W0920 18:20:32.983020   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:32.983026   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:32.983079   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:33.019059   75577 cri.go:89] found id: ""
	I0920 18:20:33.019085   75577 logs.go:276] 0 containers: []
	W0920 18:20:33.019093   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:33.019099   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:33.019148   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:33.057704   75577 cri.go:89] found id: ""
	I0920 18:20:33.057738   75577 logs.go:276] 0 containers: []
	W0920 18:20:33.057750   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:33.057759   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:33.057821   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:33.093702   75577 cri.go:89] found id: ""
	I0920 18:20:33.093732   75577 logs.go:276] 0 containers: []
	W0920 18:20:33.093743   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:33.093751   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:33.093809   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:33.128476   75577 cri.go:89] found id: ""
	I0920 18:20:33.128504   75577 logs.go:276] 0 containers: []
	W0920 18:20:33.128516   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:33.128523   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:33.128591   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:33.165513   75577 cri.go:89] found id: ""
	I0920 18:20:33.165542   75577 logs.go:276] 0 containers: []
	W0920 18:20:33.165550   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:33.165559   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:33.165569   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:33.219613   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:33.219650   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:33.232291   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:33.232317   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:33.304172   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:33.304197   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:33.304212   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:33.380057   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:33.380094   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:35.920842   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:35.935500   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:35.935593   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:35.974445   75577 cri.go:89] found id: ""
	I0920 18:20:35.974471   75577 logs.go:276] 0 containers: []
	W0920 18:20:35.974479   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:35.974485   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:35.974548   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:36.009487   75577 cri.go:89] found id: ""
	I0920 18:20:36.009518   75577 logs.go:276] 0 containers: []
	W0920 18:20:36.009530   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:36.009538   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:36.009706   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:36.049428   75577 cri.go:89] found id: ""
	I0920 18:20:36.049451   75577 logs.go:276] 0 containers: []
	W0920 18:20:36.049463   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:36.049469   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:36.049518   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:36.083973   75577 cri.go:89] found id: ""
	I0920 18:20:36.084011   75577 logs.go:276] 0 containers: []
	W0920 18:20:36.084022   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:36.084030   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:36.084119   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:36.117952   75577 cri.go:89] found id: ""
	I0920 18:20:36.117985   75577 logs.go:276] 0 containers: []
	W0920 18:20:36.117993   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:36.117998   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:36.118052   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:36.151222   75577 cri.go:89] found id: ""
	I0920 18:20:36.151256   75577 logs.go:276] 0 containers: []
	W0920 18:20:36.151265   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:36.151271   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:36.151319   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:36.184665   75577 cri.go:89] found id: ""
	I0920 18:20:36.184697   75577 logs.go:276] 0 containers: []
	W0920 18:20:36.184708   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:36.184716   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:36.184771   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:36.222974   75577 cri.go:89] found id: ""
	I0920 18:20:36.223003   75577 logs.go:276] 0 containers: []
	W0920 18:20:36.223012   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:36.223021   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:36.223033   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:36.300403   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:36.300424   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:36.300437   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:36.381220   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:36.381260   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:36.419010   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:36.419042   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:36.471758   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:36.471799   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
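	Every "describe nodes" attempt in these cycles fails the same way: the bundled kubectl at /var/lib/minikube/binaries/v1.20.0/kubectl cannot reach localhost:8443 because nothing is accepting connections there yet. The same condition can be confirmed without kubectl by dialing the port; the sketch below is an illustrative check, not part of minikube.

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// probeAPIServer dials the apiserver's secure port to see whether anything is
// listening. While the control plane is down this fails with "connection
// refused", matching the kubectl error repeated in the log above.
func probeAPIServer(addr string) error {
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		return err
	}
	return conn.Close()
}

func main() {
	if err := probeAPIServer("localhost:8443"); err != nil {
		fmt.Println("apiserver not reachable:", err)
	}
}
```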
	I0920 18:20:38.985359   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:38.998817   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:38.998898   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:39.036705   75577 cri.go:89] found id: ""
	I0920 18:20:39.036737   75577 logs.go:276] 0 containers: []
	W0920 18:20:39.036747   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:39.036755   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:39.036815   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:39.074254   75577 cri.go:89] found id: ""
	I0920 18:20:39.074285   75577 logs.go:276] 0 containers: []
	W0920 18:20:39.074294   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:39.074300   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:39.074367   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:39.108385   75577 cri.go:89] found id: ""
	I0920 18:20:39.108420   75577 logs.go:276] 0 containers: []
	W0920 18:20:39.108493   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:39.108506   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:39.108561   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:39.142283   75577 cri.go:89] found id: ""
	I0920 18:20:39.142313   75577 logs.go:276] 0 containers: []
	W0920 18:20:39.142325   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:39.142332   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:39.142396   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:39.176881   75577 cri.go:89] found id: ""
	I0920 18:20:39.176918   75577 logs.go:276] 0 containers: []
	W0920 18:20:39.176933   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:39.176941   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:39.177002   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:39.217734   75577 cri.go:89] found id: ""
	I0920 18:20:39.217759   75577 logs.go:276] 0 containers: []
	W0920 18:20:39.217767   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:39.217773   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:39.217852   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:39.250823   75577 cri.go:89] found id: ""
	I0920 18:20:39.250852   75577 logs.go:276] 0 containers: []
	W0920 18:20:39.250860   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:39.250868   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:39.250936   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:39.287494   75577 cri.go:89] found id: ""
	I0920 18:20:39.287519   75577 logs.go:276] 0 containers: []
	W0920 18:20:39.287528   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:39.287540   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:39.287552   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:39.343091   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:39.343126   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:39.359028   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:39.359063   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:39.425293   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:39.425321   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:39.425336   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:39.503303   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:39.503350   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:42.044784   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:42.057764   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:42.057876   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:42.091798   75577 cri.go:89] found id: ""
	I0920 18:20:42.091825   75577 logs.go:276] 0 containers: []
	W0920 18:20:42.091833   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:42.091839   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:42.091888   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:42.125325   75577 cri.go:89] found id: ""
	I0920 18:20:42.125352   75577 logs.go:276] 0 containers: []
	W0920 18:20:42.125362   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:42.125370   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:42.125423   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:42.158641   75577 cri.go:89] found id: ""
	I0920 18:20:42.158680   75577 logs.go:276] 0 containers: []
	W0920 18:20:42.158692   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:42.158701   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:42.158771   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:42.194065   75577 cri.go:89] found id: ""
	I0920 18:20:42.194090   75577 logs.go:276] 0 containers: []
	W0920 18:20:42.194101   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:42.194109   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:42.194174   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:42.227237   75577 cri.go:89] found id: ""
	I0920 18:20:42.227262   75577 logs.go:276] 0 containers: []
	W0920 18:20:42.227270   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:42.227275   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:42.227324   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:42.260205   75577 cri.go:89] found id: ""
	I0920 18:20:42.260240   75577 logs.go:276] 0 containers: []
	W0920 18:20:42.260251   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:42.260258   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:42.260325   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:42.300124   75577 cri.go:89] found id: ""
	I0920 18:20:42.300159   75577 logs.go:276] 0 containers: []
	W0920 18:20:42.300169   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:42.300175   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:42.300273   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:42.335638   75577 cri.go:89] found id: ""
	I0920 18:20:42.335674   75577 logs.go:276] 0 containers: []
	W0920 18:20:42.335685   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:42.335695   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:42.335710   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:42.386176   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:42.386214   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:42.400113   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:42.400147   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:42.479877   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:42.479900   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:42.479912   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:42.554654   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:42.554701   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:45.093962   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:45.107747   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:45.107811   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:45.147091   75577 cri.go:89] found id: ""
	I0920 18:20:45.147120   75577 logs.go:276] 0 containers: []
	W0920 18:20:45.147132   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:45.147140   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:45.147205   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:45.185332   75577 cri.go:89] found id: ""
	I0920 18:20:45.185396   75577 logs.go:276] 0 containers: []
	W0920 18:20:45.185431   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:45.185446   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:45.185523   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:45.224496   75577 cri.go:89] found id: ""
	I0920 18:20:45.224525   75577 logs.go:276] 0 containers: []
	W0920 18:20:45.224535   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:45.224542   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:45.224612   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:45.260686   75577 cri.go:89] found id: ""
	I0920 18:20:45.260718   75577 logs.go:276] 0 containers: []
	W0920 18:20:45.260729   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:45.260737   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:45.260801   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:45.301231   75577 cri.go:89] found id: ""
	I0920 18:20:45.301259   75577 logs.go:276] 0 containers: []
	W0920 18:20:45.301269   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:45.301277   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:45.301343   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:45.346441   75577 cri.go:89] found id: ""
	I0920 18:20:45.346468   75577 logs.go:276] 0 containers: []
	W0920 18:20:45.346476   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:45.346482   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:45.346537   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:45.385042   75577 cri.go:89] found id: ""
	I0920 18:20:45.385071   75577 logs.go:276] 0 containers: []
	W0920 18:20:45.385082   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:45.385090   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:45.385149   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:45.422208   75577 cri.go:89] found id: ""
	I0920 18:20:45.422238   75577 logs.go:276] 0 containers: []
	W0920 18:20:45.422249   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:45.422260   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:45.422275   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:45.472692   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:45.472742   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:45.487981   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:45.488011   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:45.563012   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:45.563035   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:45.563051   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:45.640750   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:45.640786   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:48.181093   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:48.194433   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:48.194516   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:48.234349   75577 cri.go:89] found id: ""
	I0920 18:20:48.234383   75577 logs.go:276] 0 containers: []
	W0920 18:20:48.234394   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:48.234403   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:48.234467   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:48.269424   75577 cri.go:89] found id: ""
	I0920 18:20:48.269450   75577 logs.go:276] 0 containers: []
	W0920 18:20:48.269457   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:48.269462   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:48.269514   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:48.303968   75577 cri.go:89] found id: ""
	I0920 18:20:48.303990   75577 logs.go:276] 0 containers: []
	W0920 18:20:48.303997   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:48.304002   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:48.304061   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:48.339149   75577 cri.go:89] found id: ""
	I0920 18:20:48.339180   75577 logs.go:276] 0 containers: []
	W0920 18:20:48.339191   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:48.339198   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:48.339259   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:48.374529   75577 cri.go:89] found id: ""
	I0920 18:20:48.374559   75577 logs.go:276] 0 containers: []
	W0920 18:20:48.374571   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:48.374578   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:48.374644   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:48.409170   75577 cri.go:89] found id: ""
	I0920 18:20:48.409203   75577 logs.go:276] 0 containers: []
	W0920 18:20:48.409211   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:48.409217   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:48.409292   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:48.443960   75577 cri.go:89] found id: ""
	I0920 18:20:48.443990   75577 logs.go:276] 0 containers: []
	W0920 18:20:48.444009   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:48.444017   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:48.444074   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:48.480934   75577 cri.go:89] found id: ""
	I0920 18:20:48.480965   75577 logs.go:276] 0 containers: []
	W0920 18:20:48.480978   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:48.480990   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:48.481006   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:48.533261   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:48.533295   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:48.546460   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:48.546488   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:48.615183   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:48.615206   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:48.615219   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:48.695299   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:48.695337   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
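	Between attempts the same four log sources are collected each time: kubelet and CRI-O from journalctl, dmesg at warning level and above, and a container listing that falls back from crictl to docker. A self-contained sketch of that gathering pass follows; the command strings are copied from the log, while the surrounding structure is an assumption for illustration.

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Commands as they appear in the log; iteration order of a Go map is not
	// fixed, which is fine for a sketch.
	cmds := map[string]string{
		"kubelet":          "sudo journalctl -u kubelet -n 400",
		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"CRI-O":            "sudo journalctl -u crio -n 400",
		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}
	for name, cmd := range cmds {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("%s: command failed: %v\n", name, err)
		}
		fmt.Printf("== %s ==\n%s\n", name, out)
	}
}
```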
	I0920 18:20:51.240343   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:51.255327   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:51.255411   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:51.294611   75577 cri.go:89] found id: ""
	I0920 18:20:51.294642   75577 logs.go:276] 0 containers: []
	W0920 18:20:51.294650   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:51.294656   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:51.294710   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:51.328569   75577 cri.go:89] found id: ""
	I0920 18:20:51.328598   75577 logs.go:276] 0 containers: []
	W0920 18:20:51.328613   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:51.328621   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:51.328675   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:51.364255   75577 cri.go:89] found id: ""
	I0920 18:20:51.364283   75577 logs.go:276] 0 containers: []
	W0920 18:20:51.364291   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:51.364297   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:51.364347   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:51.406178   75577 cri.go:89] found id: ""
	I0920 18:20:51.406204   75577 logs.go:276] 0 containers: []
	W0920 18:20:51.406215   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:51.406223   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:51.406284   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:51.449491   75577 cri.go:89] found id: ""
	I0920 18:20:51.449519   75577 logs.go:276] 0 containers: []
	W0920 18:20:51.449529   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:51.449536   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:51.449600   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:51.483243   75577 cri.go:89] found id: ""
	I0920 18:20:51.483269   75577 logs.go:276] 0 containers: []
	W0920 18:20:51.483278   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:51.483284   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:51.483334   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:51.517280   75577 cri.go:89] found id: ""
	I0920 18:20:51.517304   75577 logs.go:276] 0 containers: []
	W0920 18:20:51.517311   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:51.517316   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:51.517378   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:51.553517   75577 cri.go:89] found id: ""
	I0920 18:20:51.553545   75577 logs.go:276] 0 containers: []
	W0920 18:20:51.553556   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:51.553565   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:51.553576   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:51.607330   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:51.607369   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:51.620628   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:51.620662   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:51.687586   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:51.687618   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:51.687638   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:51.764882   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:51.764924   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:54.307276   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:54.321777   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:54.321860   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:54.355565   75577 cri.go:89] found id: ""
	I0920 18:20:54.355598   75577 logs.go:276] 0 containers: []
	W0920 18:20:54.355609   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:54.355618   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:54.355680   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:54.391184   75577 cri.go:89] found id: ""
	I0920 18:20:54.391216   75577 logs.go:276] 0 containers: []
	W0920 18:20:54.391227   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:54.391236   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:54.391301   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:54.425680   75577 cri.go:89] found id: ""
	I0920 18:20:54.425709   75577 logs.go:276] 0 containers: []
	W0920 18:20:54.425718   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:54.425725   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:54.425775   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:54.461514   75577 cri.go:89] found id: ""
	I0920 18:20:54.461541   75577 logs.go:276] 0 containers: []
	W0920 18:20:54.461552   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:54.461559   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:54.461625   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:54.495290   75577 cri.go:89] found id: ""
	I0920 18:20:54.495319   75577 logs.go:276] 0 containers: []
	W0920 18:20:54.495327   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:54.495333   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:54.495384   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:54.530014   75577 cri.go:89] found id: ""
	I0920 18:20:54.530038   75577 logs.go:276] 0 containers: []
	W0920 18:20:54.530046   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:54.530052   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:54.530103   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:54.562559   75577 cri.go:89] found id: ""
	I0920 18:20:54.562597   75577 logs.go:276] 0 containers: []
	W0920 18:20:54.562611   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:54.562621   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:54.562694   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:54.599894   75577 cri.go:89] found id: ""
	I0920 18:20:54.599920   75577 logs.go:276] 0 containers: []
	W0920 18:20:54.599928   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:54.599946   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:54.599982   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:54.636853   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:54.636880   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:54.687932   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:54.687978   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:54.701642   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:54.701673   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:54.769649   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:54.769678   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:54.769695   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:57.356015   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:57.368860   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:57.368923   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:57.409339   75577 cri.go:89] found id: ""
	I0920 18:20:57.409365   75577 logs.go:276] 0 containers: []
	W0920 18:20:57.409375   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:57.409382   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:57.409444   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:57.443055   75577 cri.go:89] found id: ""
	I0920 18:20:57.443085   75577 logs.go:276] 0 containers: []
	W0920 18:20:57.443095   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:57.443102   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:57.443158   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:57.481816   75577 cri.go:89] found id: ""
	I0920 18:20:57.481859   75577 logs.go:276] 0 containers: []
	W0920 18:20:57.481871   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:57.481879   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:57.481942   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:57.517327   75577 cri.go:89] found id: ""
	I0920 18:20:57.517361   75577 logs.go:276] 0 containers: []
	W0920 18:20:57.517372   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:57.517379   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:57.517442   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:57.555121   75577 cri.go:89] found id: ""
	I0920 18:20:57.555151   75577 logs.go:276] 0 containers: []
	W0920 18:20:57.555159   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:57.555164   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:57.555222   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:57.592632   75577 cri.go:89] found id: ""
	I0920 18:20:57.592666   75577 logs.go:276] 0 containers: []
	W0920 18:20:57.592679   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:57.592685   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:57.592734   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:57.627519   75577 cri.go:89] found id: ""
	I0920 18:20:57.627556   75577 logs.go:276] 0 containers: []
	W0920 18:20:57.627567   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:57.627574   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:57.627636   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:57.663817   75577 cri.go:89] found id: ""
	I0920 18:20:57.663844   75577 logs.go:276] 0 containers: []
	W0920 18:20:57.663853   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:57.663862   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:57.663877   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:57.704896   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:57.704924   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:57.757933   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:57.757972   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:57.772646   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:57.772673   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:57.850634   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:57.850665   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:57.850681   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:00.431504   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:00.445175   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:00.445241   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:00.482050   75577 cri.go:89] found id: ""
	I0920 18:21:00.482076   75577 logs.go:276] 0 containers: []
	W0920 18:21:00.482088   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:00.482095   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:00.482150   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:00.515793   75577 cri.go:89] found id: ""
	I0920 18:21:00.515822   75577 logs.go:276] 0 containers: []
	W0920 18:21:00.515833   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:00.515841   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:00.515903   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:00.550250   75577 cri.go:89] found id: ""
	I0920 18:21:00.550278   75577 logs.go:276] 0 containers: []
	W0920 18:21:00.550288   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:00.550296   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:00.550374   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:00.585977   75577 cri.go:89] found id: ""
	I0920 18:21:00.586011   75577 logs.go:276] 0 containers: []
	W0920 18:21:00.586034   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:00.586051   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:00.586118   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:00.621828   75577 cri.go:89] found id: ""
	I0920 18:21:00.621869   75577 logs.go:276] 0 containers: []
	W0920 18:21:00.621879   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:00.621886   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:00.621937   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:00.655492   75577 cri.go:89] found id: ""
	I0920 18:21:00.655528   75577 logs.go:276] 0 containers: []
	W0920 18:21:00.655540   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:00.655600   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:00.655673   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:00.709445   75577 cri.go:89] found id: ""
	I0920 18:21:00.709476   75577 logs.go:276] 0 containers: []
	W0920 18:21:00.709488   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:00.709496   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:00.709566   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:00.744098   75577 cri.go:89] found id: ""
	I0920 18:21:00.744123   75577 logs.go:276] 0 containers: []
	W0920 18:21:00.744134   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:00.744144   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:00.744164   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:00.792576   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:00.792612   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:00.807766   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:00.807794   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:00.883626   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:00.883649   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:00.883663   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:00.966233   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:00.966269   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:03.505502   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:03.520407   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:03.520534   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:03.557891   75577 cri.go:89] found id: ""
	I0920 18:21:03.557926   75577 logs.go:276] 0 containers: []
	W0920 18:21:03.557936   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:03.557944   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:03.558006   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:03.593869   75577 cri.go:89] found id: ""
	I0920 18:21:03.593898   75577 logs.go:276] 0 containers: []
	W0920 18:21:03.593908   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:03.593916   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:03.593982   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:03.629270   75577 cri.go:89] found id: ""
	I0920 18:21:03.629302   75577 logs.go:276] 0 containers: []
	W0920 18:21:03.629311   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:03.629317   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:03.629366   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:03.664658   75577 cri.go:89] found id: ""
	I0920 18:21:03.664688   75577 logs.go:276] 0 containers: []
	W0920 18:21:03.664699   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:03.664706   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:03.664769   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:03.700846   75577 cri.go:89] found id: ""
	I0920 18:21:03.700868   75577 logs.go:276] 0 containers: []
	W0920 18:21:03.700875   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:03.700882   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:03.700941   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:03.736311   75577 cri.go:89] found id: ""
	I0920 18:21:03.736345   75577 logs.go:276] 0 containers: []
	W0920 18:21:03.736355   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:03.736363   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:03.736421   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:03.770760   75577 cri.go:89] found id: ""
	I0920 18:21:03.770788   75577 logs.go:276] 0 containers: []
	W0920 18:21:03.770800   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:03.770808   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:03.770868   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:03.808724   75577 cri.go:89] found id: ""
	I0920 18:21:03.808749   75577 logs.go:276] 0 containers: []
	W0920 18:21:03.808756   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:03.808764   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:03.808775   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:03.851231   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:03.851265   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:03.899607   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:03.899641   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:03.915051   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:03.915079   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:03.984016   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:03.984038   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:03.984053   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:06.564776   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:06.578524   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:06.578604   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:06.614305   75577 cri.go:89] found id: ""
	I0920 18:21:06.614340   75577 logs.go:276] 0 containers: []
	W0920 18:21:06.614351   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:06.614365   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:06.614427   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:06.648929   75577 cri.go:89] found id: ""
	I0920 18:21:06.648958   75577 logs.go:276] 0 containers: []
	W0920 18:21:06.648968   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:06.648976   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:06.649036   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:06.689975   75577 cri.go:89] found id: ""
	I0920 18:21:06.690016   75577 logs.go:276] 0 containers: []
	W0920 18:21:06.690027   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:06.690034   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:06.690092   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:06.729721   75577 cri.go:89] found id: ""
	I0920 18:21:06.729747   75577 logs.go:276] 0 containers: []
	W0920 18:21:06.729755   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:06.729762   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:06.729808   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:06.766310   75577 cri.go:89] found id: ""
	I0920 18:21:06.766343   75577 logs.go:276] 0 containers: []
	W0920 18:21:06.766354   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:06.766361   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:06.766437   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:06.800801   75577 cri.go:89] found id: ""
	I0920 18:21:06.800829   75577 logs.go:276] 0 containers: []
	W0920 18:21:06.800839   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:06.800847   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:06.800909   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:06.836391   75577 cri.go:89] found id: ""
	I0920 18:21:06.836429   75577 logs.go:276] 0 containers: []
	W0920 18:21:06.836447   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:06.836455   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:06.836521   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:06.873014   75577 cri.go:89] found id: ""
	I0920 18:21:06.873041   75577 logs.go:276] 0 containers: []
	W0920 18:21:06.873049   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:06.873057   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:06.873070   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:06.953084   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:06.953110   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:06.953122   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:07.032398   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:07.032434   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:07.082196   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:07.082224   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:07.159604   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:07.159640   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:09.675022   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:09.687924   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:09.687999   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:09.722963   75577 cri.go:89] found id: ""
	I0920 18:21:09.722994   75577 logs.go:276] 0 containers: []
	W0920 18:21:09.723005   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:09.723017   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:09.723084   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:09.756433   75577 cri.go:89] found id: ""
	I0920 18:21:09.756458   75577 logs.go:276] 0 containers: []
	W0920 18:21:09.756472   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:09.756486   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:09.756549   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:09.792200   75577 cri.go:89] found id: ""
	I0920 18:21:09.792232   75577 logs.go:276] 0 containers: []
	W0920 18:21:09.792248   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:09.792256   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:09.792323   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:09.829058   75577 cri.go:89] found id: ""
	I0920 18:21:09.829081   75577 logs.go:276] 0 containers: []
	W0920 18:21:09.829098   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:09.829104   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:09.829150   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:09.863225   75577 cri.go:89] found id: ""
	I0920 18:21:09.863252   75577 logs.go:276] 0 containers: []
	W0920 18:21:09.863259   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:09.863265   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:09.863312   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:09.897682   75577 cri.go:89] found id: ""
	I0920 18:21:09.897708   75577 logs.go:276] 0 containers: []
	W0920 18:21:09.897725   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:09.897731   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:09.897789   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:09.936799   75577 cri.go:89] found id: ""
	I0920 18:21:09.936826   75577 logs.go:276] 0 containers: []
	W0920 18:21:09.936836   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:09.936843   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:09.936904   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:09.970502   75577 cri.go:89] found id: ""
	I0920 18:21:09.970539   75577 logs.go:276] 0 containers: []
	W0920 18:21:09.970550   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:09.970560   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:09.970574   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:09.983676   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:09.983703   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:10.058844   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:10.058865   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:10.058874   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:10.136998   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:10.137040   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:10.178902   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:10.178933   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:12.729619   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:12.743816   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:12.743876   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:12.779407   75577 cri.go:89] found id: ""
	I0920 18:21:12.779431   75577 logs.go:276] 0 containers: []
	W0920 18:21:12.779439   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:12.779446   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:12.779503   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:12.820700   75577 cri.go:89] found id: ""
	I0920 18:21:12.820730   75577 logs.go:276] 0 containers: []
	W0920 18:21:12.820740   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:12.820748   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:12.820812   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:12.862179   75577 cri.go:89] found id: ""
	I0920 18:21:12.862210   75577 logs.go:276] 0 containers: []
	W0920 18:21:12.862221   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:12.862228   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:12.862284   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:12.908976   75577 cri.go:89] found id: ""
	I0920 18:21:12.908999   75577 logs.go:276] 0 containers: []
	W0920 18:21:12.909007   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:12.909013   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:12.909072   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:12.953653   75577 cri.go:89] found id: ""
	I0920 18:21:12.953688   75577 logs.go:276] 0 containers: []
	W0920 18:21:12.953696   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:12.953702   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:12.953757   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:12.998519   75577 cri.go:89] found id: ""
	I0920 18:21:12.998546   75577 logs.go:276] 0 containers: []
	W0920 18:21:12.998557   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:12.998564   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:12.998640   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:13.035518   75577 cri.go:89] found id: ""
	I0920 18:21:13.035541   75577 logs.go:276] 0 containers: []
	W0920 18:21:13.035549   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:13.035555   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:13.035609   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:13.073551   75577 cri.go:89] found id: ""
	I0920 18:21:13.073591   75577 logs.go:276] 0 containers: []
	W0920 18:21:13.073605   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:13.073617   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:13.073638   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:13.125118   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:13.125155   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:13.139384   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:13.139415   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:13.204980   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:13.205006   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:13.205021   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:13.289400   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:13.289446   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:15.829405   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:15.842291   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:15.842354   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:15.875886   75577 cri.go:89] found id: ""
	I0920 18:21:15.875921   75577 logs.go:276] 0 containers: []
	W0920 18:21:15.875930   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:15.875937   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:15.875990   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:15.911706   75577 cri.go:89] found id: ""
	I0920 18:21:15.911746   75577 logs.go:276] 0 containers: []
	W0920 18:21:15.911760   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:15.911768   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:15.911831   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:15.950117   75577 cri.go:89] found id: ""
	I0920 18:21:15.950150   75577 logs.go:276] 0 containers: []
	W0920 18:21:15.950161   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:15.950170   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:15.950243   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:15.986029   75577 cri.go:89] found id: ""
	I0920 18:21:15.986066   75577 logs.go:276] 0 containers: []
	W0920 18:21:15.986083   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:15.986091   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:15.986159   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:16.020307   75577 cri.go:89] found id: ""
	I0920 18:21:16.020335   75577 logs.go:276] 0 containers: []
	W0920 18:21:16.020346   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:16.020354   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:16.020412   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:16.054744   75577 cri.go:89] found id: ""
	I0920 18:21:16.054769   75577 logs.go:276] 0 containers: []
	W0920 18:21:16.054777   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:16.054782   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:16.054831   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:16.091756   75577 cri.go:89] found id: ""
	I0920 18:21:16.091789   75577 logs.go:276] 0 containers: []
	W0920 18:21:16.091800   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:16.091807   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:16.091868   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:16.127818   75577 cri.go:89] found id: ""
	I0920 18:21:16.127843   75577 logs.go:276] 0 containers: []
	W0920 18:21:16.127851   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:16.127861   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:16.127877   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:16.200114   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:16.200138   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:16.200149   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:16.279473   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:16.279508   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:16.319139   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:16.319171   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:16.370721   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:16.370769   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:18.884966   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:18.900270   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:18.900355   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:18.938541   75577 cri.go:89] found id: ""
	I0920 18:21:18.938582   75577 logs.go:276] 0 containers: []
	W0920 18:21:18.938594   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:18.938602   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:18.938673   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:18.972343   75577 cri.go:89] found id: ""
	I0920 18:21:18.972380   75577 logs.go:276] 0 containers: []
	W0920 18:21:18.972391   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:18.972400   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:18.972458   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:19.011996   75577 cri.go:89] found id: ""
	I0920 18:21:19.012037   75577 logs.go:276] 0 containers: []
	W0920 18:21:19.012048   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:19.012054   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:19.012105   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:19.053789   75577 cri.go:89] found id: ""
	I0920 18:21:19.053818   75577 logs.go:276] 0 containers: []
	W0920 18:21:19.053839   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:19.053849   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:19.053907   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:19.094830   75577 cri.go:89] found id: ""
	I0920 18:21:19.094862   75577 logs.go:276] 0 containers: []
	W0920 18:21:19.094872   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:19.094881   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:19.094941   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:19.133894   75577 cri.go:89] found id: ""
	I0920 18:21:19.133923   75577 logs.go:276] 0 containers: []
	W0920 18:21:19.133934   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:19.133943   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:19.134001   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:19.171640   75577 cri.go:89] found id: ""
	I0920 18:21:19.171662   75577 logs.go:276] 0 containers: []
	W0920 18:21:19.171670   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:19.171676   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:19.171730   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:19.207821   75577 cri.go:89] found id: ""
	I0920 18:21:19.207852   75577 logs.go:276] 0 containers: []
	W0920 18:21:19.207861   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:19.207869   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:19.207880   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:19.292486   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:19.292530   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:19.332664   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:19.332695   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:19.383793   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:19.383828   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:19.399234   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:19.399269   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:19.470636   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:21.971772   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:21.985558   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:21.985630   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:22.018745   75577 cri.go:89] found id: ""
	I0920 18:21:22.018775   75577 logs.go:276] 0 containers: []
	W0920 18:21:22.018785   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:22.018795   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:22.018854   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:22.053586   75577 cri.go:89] found id: ""
	I0920 18:21:22.053617   75577 logs.go:276] 0 containers: []
	W0920 18:21:22.053627   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:22.053635   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:22.053697   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:22.088290   75577 cri.go:89] found id: ""
	I0920 18:21:22.088320   75577 logs.go:276] 0 containers: []
	W0920 18:21:22.088337   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:22.088344   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:22.088394   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:22.121026   75577 cri.go:89] found id: ""
	I0920 18:21:22.121050   75577 logs.go:276] 0 containers: []
	W0920 18:21:22.121057   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:22.121063   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:22.121117   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:22.153669   75577 cri.go:89] found id: ""
	I0920 18:21:22.153702   75577 logs.go:276] 0 containers: []
	W0920 18:21:22.153715   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:22.153725   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:22.153793   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:22.192179   75577 cri.go:89] found id: ""
	I0920 18:21:22.192208   75577 logs.go:276] 0 containers: []
	W0920 18:21:22.192218   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:22.192226   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:22.192294   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:22.226115   75577 cri.go:89] found id: ""
	I0920 18:21:22.226142   75577 logs.go:276] 0 containers: []
	W0920 18:21:22.226153   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:22.226161   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:22.226231   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:22.259032   75577 cri.go:89] found id: ""
	I0920 18:21:22.259059   75577 logs.go:276] 0 containers: []
	W0920 18:21:22.259070   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:22.259080   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:22.259094   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:22.308989   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:22.309020   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:22.322084   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:22.322113   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:22.397567   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:22.397587   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:22.397598   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:22.476551   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:22.476596   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:25.017659   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:25.032637   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:25.032698   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:25.067671   75577 cri.go:89] found id: ""
	I0920 18:21:25.067701   75577 logs.go:276] 0 containers: []
	W0920 18:21:25.067711   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:25.067718   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:25.067774   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:25.109354   75577 cri.go:89] found id: ""
	I0920 18:21:25.109385   75577 logs.go:276] 0 containers: []
	W0920 18:21:25.109396   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:25.109403   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:25.109463   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:25.142888   75577 cri.go:89] found id: ""
	I0920 18:21:25.142924   75577 logs.go:276] 0 containers: []
	W0920 18:21:25.142935   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:25.142942   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:25.143004   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:25.179005   75577 cri.go:89] found id: ""
	I0920 18:21:25.179032   75577 logs.go:276] 0 containers: []
	W0920 18:21:25.179043   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:25.179050   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:25.179114   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:25.211639   75577 cri.go:89] found id: ""
	I0920 18:21:25.211662   75577 logs.go:276] 0 containers: []
	W0920 18:21:25.211669   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:25.211676   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:25.211729   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:25.246678   75577 cri.go:89] found id: ""
	I0920 18:21:25.246709   75577 logs.go:276] 0 containers: []
	W0920 18:21:25.246718   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:25.246724   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:25.246780   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:25.284187   75577 cri.go:89] found id: ""
	I0920 18:21:25.284216   75577 logs.go:276] 0 containers: []
	W0920 18:21:25.284240   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:25.284247   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:25.284309   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:25.318882   75577 cri.go:89] found id: ""
	I0920 18:21:25.318917   75577 logs.go:276] 0 containers: []
	W0920 18:21:25.318929   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:25.318940   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:25.318954   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:25.357017   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:25.357046   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:25.407584   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:25.407627   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:25.421405   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:25.421437   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:25.492849   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:25.492870   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:25.492881   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:28.076101   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:28.088741   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:28.088805   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:28.124889   75577 cri.go:89] found id: ""
	I0920 18:21:28.124925   75577 logs.go:276] 0 containers: []
	W0920 18:21:28.124944   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:28.124952   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:28.125013   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:28.158590   75577 cri.go:89] found id: ""
	I0920 18:21:28.158615   75577 logs.go:276] 0 containers: []
	W0920 18:21:28.158623   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:28.158629   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:28.158677   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:28.195636   75577 cri.go:89] found id: ""
	I0920 18:21:28.195670   75577 logs.go:276] 0 containers: []
	W0920 18:21:28.195683   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:28.195692   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:28.195762   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:28.228714   75577 cri.go:89] found id: ""
	I0920 18:21:28.228759   75577 logs.go:276] 0 containers: []
	W0920 18:21:28.228771   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:28.228780   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:28.228840   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:28.261488   75577 cri.go:89] found id: ""
	I0920 18:21:28.261519   75577 logs.go:276] 0 containers: []
	W0920 18:21:28.261531   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:28.261538   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:28.261606   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:28.293534   75577 cri.go:89] found id: ""
	I0920 18:21:28.293560   75577 logs.go:276] 0 containers: []
	W0920 18:21:28.293568   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:28.293573   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:28.293629   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:28.330095   75577 cri.go:89] found id: ""
	I0920 18:21:28.330120   75577 logs.go:276] 0 containers: []
	W0920 18:21:28.330128   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:28.330134   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:28.330191   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:28.365643   75577 cri.go:89] found id: ""
	I0920 18:21:28.365675   75577 logs.go:276] 0 containers: []
	W0920 18:21:28.365684   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:28.365693   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:28.365712   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:28.379982   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:28.380007   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:28.456355   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:28.456386   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:28.456402   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:28.538725   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:28.538759   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:28.576067   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:28.576104   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:31.129705   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:31.145814   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:31.145919   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:31.181469   75577 cri.go:89] found id: ""
	I0920 18:21:31.181497   75577 logs.go:276] 0 containers: []
	W0920 18:21:31.181507   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:31.181514   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:31.181562   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:31.216391   75577 cri.go:89] found id: ""
	I0920 18:21:31.216416   75577 logs.go:276] 0 containers: []
	W0920 18:21:31.216426   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:31.216433   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:31.216492   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:31.250236   75577 cri.go:89] found id: ""
	I0920 18:21:31.250266   75577 logs.go:276] 0 containers: []
	W0920 18:21:31.250277   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:31.250285   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:31.250357   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:31.283406   75577 cri.go:89] found id: ""
	I0920 18:21:31.283430   75577 logs.go:276] 0 containers: []
	W0920 18:21:31.283439   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:31.283446   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:31.283519   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:31.317276   75577 cri.go:89] found id: ""
	I0920 18:21:31.317308   75577 logs.go:276] 0 containers: []
	W0920 18:21:31.317327   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:31.317335   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:31.317400   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:31.351563   75577 cri.go:89] found id: ""
	I0920 18:21:31.351588   75577 logs.go:276] 0 containers: []
	W0920 18:21:31.351595   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:31.351600   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:31.351652   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:31.385932   75577 cri.go:89] found id: ""
	I0920 18:21:31.385972   75577 logs.go:276] 0 containers: []
	W0920 18:21:31.385984   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:31.385992   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:31.386056   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:31.422679   75577 cri.go:89] found id: ""
	I0920 18:21:31.422718   75577 logs.go:276] 0 containers: []
	W0920 18:21:31.422729   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:31.422740   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:31.422755   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:31.475827   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:31.475867   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:31.489324   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:31.489354   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:31.560357   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:31.560377   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:31.560390   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:31.642137   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:31.642171   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:34.179063   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:34.193077   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:34.193157   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:34.227451   75577 cri.go:89] found id: ""
	I0920 18:21:34.227484   75577 logs.go:276] 0 containers: []
	W0920 18:21:34.227495   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:34.227503   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:34.227571   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:34.263740   75577 cri.go:89] found id: ""
	I0920 18:21:34.263766   75577 logs.go:276] 0 containers: []
	W0920 18:21:34.263774   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:34.263779   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:34.263838   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:34.298094   75577 cri.go:89] found id: ""
	I0920 18:21:34.298121   75577 logs.go:276] 0 containers: []
	W0920 18:21:34.298132   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:34.298139   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:34.298202   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:34.334253   75577 cri.go:89] found id: ""
	I0920 18:21:34.334281   75577 logs.go:276] 0 containers: []
	W0920 18:21:34.334290   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:34.334297   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:34.334361   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:34.370656   75577 cri.go:89] found id: ""
	I0920 18:21:34.370688   75577 logs.go:276] 0 containers: []
	W0920 18:21:34.370699   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:34.370706   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:34.370759   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:34.405658   75577 cri.go:89] found id: ""
	I0920 18:21:34.405681   75577 logs.go:276] 0 containers: []
	W0920 18:21:34.405689   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:34.405695   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:34.405748   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:34.442284   75577 cri.go:89] found id: ""
	I0920 18:21:34.442311   75577 logs.go:276] 0 containers: []
	W0920 18:21:34.442319   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:34.442325   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:34.442394   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:34.477895   75577 cri.go:89] found id: ""
	I0920 18:21:34.477927   75577 logs.go:276] 0 containers: []
	W0920 18:21:34.477939   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:34.477950   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:34.477964   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:34.531225   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:34.531256   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:34.544325   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:34.544359   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:34.615914   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:34.615933   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:34.615948   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:34.692119   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:34.692160   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:37.228930   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:37.242070   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:37.242151   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:37.276489   75577 cri.go:89] found id: ""
	I0920 18:21:37.276531   75577 logs.go:276] 0 containers: []
	W0920 18:21:37.276543   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:37.276551   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:37.276617   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:37.311106   75577 cri.go:89] found id: ""
	I0920 18:21:37.311140   75577 logs.go:276] 0 containers: []
	W0920 18:21:37.311148   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:37.311156   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:37.311217   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:37.343147   75577 cri.go:89] found id: ""
	I0920 18:21:37.343173   75577 logs.go:276] 0 containers: []
	W0920 18:21:37.343181   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:37.343188   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:37.343237   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:37.378858   75577 cri.go:89] found id: ""
	I0920 18:21:37.378934   75577 logs.go:276] 0 containers: []
	W0920 18:21:37.378947   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:37.378955   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:37.379005   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:37.412321   75577 cri.go:89] found id: ""
	I0920 18:21:37.412355   75577 logs.go:276] 0 containers: []
	W0920 18:21:37.412366   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:37.412374   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:37.412443   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:37.447478   75577 cri.go:89] found id: ""
	I0920 18:21:37.447510   75577 logs.go:276] 0 containers: []
	W0920 18:21:37.447520   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:37.447526   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:37.447580   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:37.481172   75577 cri.go:89] found id: ""
	I0920 18:21:37.481201   75577 logs.go:276] 0 containers: []
	W0920 18:21:37.481209   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:37.481216   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:37.481269   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:37.518001   75577 cri.go:89] found id: ""
	I0920 18:21:37.518032   75577 logs.go:276] 0 containers: []
	W0920 18:21:37.518041   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:37.518050   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:37.518062   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:37.567675   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:37.567707   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:37.582279   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:37.582308   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:37.654514   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:37.654546   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:37.654563   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:37.735895   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:37.735929   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:40.276416   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:40.291651   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:40.291713   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:40.328309   75577 cri.go:89] found id: ""
	I0920 18:21:40.328346   75577 logs.go:276] 0 containers: []
	W0920 18:21:40.328359   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:40.328378   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:40.328441   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:40.368126   75577 cri.go:89] found id: ""
	I0920 18:21:40.368154   75577 logs.go:276] 0 containers: []
	W0920 18:21:40.368162   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:40.368167   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:40.368217   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:40.408324   75577 cri.go:89] found id: ""
	I0920 18:21:40.408359   75577 logs.go:276] 0 containers: []
	W0920 18:21:40.408371   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:40.408380   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:40.408448   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:40.453867   75577 cri.go:89] found id: ""
	I0920 18:21:40.453892   75577 logs.go:276] 0 containers: []
	W0920 18:21:40.453900   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:40.453906   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:40.453969   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:40.500625   75577 cri.go:89] found id: ""
	I0920 18:21:40.500660   75577 logs.go:276] 0 containers: []
	W0920 18:21:40.500670   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:40.500678   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:40.500750   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:40.533998   75577 cri.go:89] found id: ""
	I0920 18:21:40.534028   75577 logs.go:276] 0 containers: []
	W0920 18:21:40.534039   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:40.534048   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:40.534111   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:40.566279   75577 cri.go:89] found id: ""
	I0920 18:21:40.566308   75577 logs.go:276] 0 containers: []
	W0920 18:21:40.566319   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:40.566326   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:40.566392   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:40.599154   75577 cri.go:89] found id: ""
	I0920 18:21:40.599179   75577 logs.go:276] 0 containers: []
	W0920 18:21:40.599186   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:40.599194   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:40.599210   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:40.668568   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:40.668596   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:40.668608   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:40.747895   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:40.747969   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:40.789568   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:40.789604   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:40.839816   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:40.839852   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:43.354847   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:43.369077   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:43.369167   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:43.404667   75577 cri.go:89] found id: ""
	I0920 18:21:43.404698   75577 logs.go:276] 0 containers: []
	W0920 18:21:43.404710   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:43.404718   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:43.404778   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:43.440968   75577 cri.go:89] found id: ""
	I0920 18:21:43.441001   75577 logs.go:276] 0 containers: []
	W0920 18:21:43.441012   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:43.441021   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:43.441088   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:43.474481   75577 cri.go:89] found id: ""
	I0920 18:21:43.474511   75577 logs.go:276] 0 containers: []
	W0920 18:21:43.474520   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:43.474529   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:43.474592   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:43.508196   75577 cri.go:89] found id: ""
	I0920 18:21:43.508228   75577 logs.go:276] 0 containers: []
	W0920 18:21:43.508241   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:43.508248   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:43.508307   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:43.543679   75577 cri.go:89] found id: ""
	I0920 18:21:43.543710   75577 logs.go:276] 0 containers: []
	W0920 18:21:43.543721   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:43.543728   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:43.543788   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:43.577115   75577 cri.go:89] found id: ""
	I0920 18:21:43.577138   75577 logs.go:276] 0 containers: []
	W0920 18:21:43.577145   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:43.577152   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:43.577198   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:43.611552   75577 cri.go:89] found id: ""
	I0920 18:21:43.611588   75577 logs.go:276] 0 containers: []
	W0920 18:21:43.611602   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:43.611616   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:43.611685   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:43.647913   75577 cri.go:89] found id: ""
	I0920 18:21:43.647946   75577 logs.go:276] 0 containers: []
	W0920 18:21:43.647957   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:43.647969   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:43.647983   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:43.661014   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:43.661043   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:43.736584   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:43.736607   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:43.736620   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:43.814340   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:43.814380   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:43.855968   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:43.855996   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:46.410577   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:46.423733   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:46.423800   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:46.456819   75577 cri.go:89] found id: ""
	I0920 18:21:46.456847   75577 logs.go:276] 0 containers: []
	W0920 18:21:46.456855   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:46.456861   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:46.456927   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:46.493259   75577 cri.go:89] found id: ""
	I0920 18:21:46.493291   75577 logs.go:276] 0 containers: []
	W0920 18:21:46.493301   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:46.493307   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:46.493372   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:46.531200   75577 cri.go:89] found id: ""
	I0920 18:21:46.531233   75577 logs.go:276] 0 containers: []
	W0920 18:21:46.531241   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:46.531247   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:46.531294   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:46.567508   75577 cri.go:89] found id: ""
	I0920 18:21:46.567530   75577 logs.go:276] 0 containers: []
	W0920 18:21:46.567538   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:46.567544   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:46.567601   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:46.600257   75577 cri.go:89] found id: ""
	I0920 18:21:46.600290   75577 logs.go:276] 0 containers: []
	W0920 18:21:46.600303   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:46.600311   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:46.600375   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:46.635568   75577 cri.go:89] found id: ""
	I0920 18:21:46.635598   75577 logs.go:276] 0 containers: []
	W0920 18:21:46.635606   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:46.635612   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:46.635668   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:46.668257   75577 cri.go:89] found id: ""
	I0920 18:21:46.668304   75577 logs.go:276] 0 containers: []
	W0920 18:21:46.668316   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:46.668324   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:46.668392   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:46.703631   75577 cri.go:89] found id: ""
	I0920 18:21:46.703654   75577 logs.go:276] 0 containers: []
	W0920 18:21:46.703662   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:46.703671   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:46.703680   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:46.780232   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:46.780295   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:46.816623   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:46.816647   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:46.868911   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:46.868954   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:46.882716   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:46.882743   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:46.965166   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:49.466238   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:49.480167   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:49.480228   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:49.513507   75577 cri.go:89] found id: ""
	I0920 18:21:49.513537   75577 logs.go:276] 0 containers: []
	W0920 18:21:49.513549   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:49.513558   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:49.513620   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:49.548708   75577 cri.go:89] found id: ""
	I0920 18:21:49.548739   75577 logs.go:276] 0 containers: []
	W0920 18:21:49.548750   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:49.548758   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:49.548810   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:49.583586   75577 cri.go:89] found id: ""
	I0920 18:21:49.583614   75577 logs.go:276] 0 containers: []
	W0920 18:21:49.583625   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:49.583633   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:49.583695   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:49.617574   75577 cri.go:89] found id: ""
	I0920 18:21:49.617605   75577 logs.go:276] 0 containers: []
	W0920 18:21:49.617616   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:49.617623   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:49.617684   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:49.656517   75577 cri.go:89] found id: ""
	I0920 18:21:49.656552   75577 logs.go:276] 0 containers: []
	W0920 18:21:49.656563   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:49.656571   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:49.656634   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:49.692848   75577 cri.go:89] found id: ""
	I0920 18:21:49.692870   75577 logs.go:276] 0 containers: []
	W0920 18:21:49.692877   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:49.692883   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:49.692950   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:49.728593   75577 cri.go:89] found id: ""
	I0920 18:21:49.728620   75577 logs.go:276] 0 containers: []
	W0920 18:21:49.728630   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:49.728643   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:49.728705   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:49.764187   75577 cri.go:89] found id: ""
	I0920 18:21:49.764218   75577 logs.go:276] 0 containers: []
	W0920 18:21:49.764229   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:49.764242   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:49.764260   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:49.837741   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:49.837764   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:49.837777   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:49.914941   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:49.914978   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:49.957609   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:49.957639   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:50.012075   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:50.012115   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:52.527722   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:52.542125   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:52.542206   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:52.579884   75577 cri.go:89] found id: ""
	I0920 18:21:52.579910   75577 logs.go:276] 0 containers: []
	W0920 18:21:52.579919   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:52.579924   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:52.579982   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:52.619138   75577 cri.go:89] found id: ""
	I0920 18:21:52.619167   75577 logs.go:276] 0 containers: []
	W0920 18:21:52.619180   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:52.619188   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:52.619246   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:52.654460   75577 cri.go:89] found id: ""
	I0920 18:21:52.654498   75577 logs.go:276] 0 containers: []
	W0920 18:21:52.654508   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:52.654515   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:52.654578   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:52.708015   75577 cri.go:89] found id: ""
	I0920 18:21:52.708042   75577 logs.go:276] 0 containers: []
	W0920 18:21:52.708051   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:52.708057   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:52.708127   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:52.743917   75577 cri.go:89] found id: ""
	I0920 18:21:52.743952   75577 logs.go:276] 0 containers: []
	W0920 18:21:52.743964   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:52.743972   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:52.744025   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:52.783458   75577 cri.go:89] found id: ""
	I0920 18:21:52.783481   75577 logs.go:276] 0 containers: []
	W0920 18:21:52.783488   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:52.783495   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:52.783552   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:52.817716   75577 cri.go:89] found id: ""
	I0920 18:21:52.817749   75577 logs.go:276] 0 containers: []
	W0920 18:21:52.817762   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:52.817771   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:52.817882   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:52.857141   75577 cri.go:89] found id: ""
	I0920 18:21:52.857169   75577 logs.go:276] 0 containers: []
	W0920 18:21:52.857180   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:52.857190   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:52.857204   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:52.910555   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:52.910597   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:52.923843   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:52.923873   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:52.994263   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:52.994296   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:52.994313   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:53.079782   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:53.079829   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:55.619418   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:55.633854   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:55.633922   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:55.669778   75577 cri.go:89] found id: ""
	I0920 18:21:55.669813   75577 logs.go:276] 0 containers: []
	W0920 18:21:55.669824   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:55.669853   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:55.669920   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:55.705687   75577 cri.go:89] found id: ""
	I0920 18:21:55.705724   75577 logs.go:276] 0 containers: []
	W0920 18:21:55.705736   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:55.705746   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:55.705811   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:55.739249   75577 cri.go:89] found id: ""
	I0920 18:21:55.739288   75577 logs.go:276] 0 containers: []
	W0920 18:21:55.739296   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:55.739302   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:55.739438   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:55.775077   75577 cri.go:89] found id: ""
	I0920 18:21:55.775109   75577 logs.go:276] 0 containers: []
	W0920 18:21:55.775121   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:55.775130   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:55.775181   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:55.810291   75577 cri.go:89] found id: ""
	I0920 18:21:55.810329   75577 logs.go:276] 0 containers: []
	W0920 18:21:55.810340   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:55.810349   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:55.810415   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:55.844440   75577 cri.go:89] found id: ""
	I0920 18:21:55.844468   75577 logs.go:276] 0 containers: []
	W0920 18:21:55.844476   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:55.844482   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:55.844551   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:55.878611   75577 cri.go:89] found id: ""
	I0920 18:21:55.878647   75577 logs.go:276] 0 containers: []
	W0920 18:21:55.878659   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:55.878667   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:55.878733   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:55.922064   75577 cri.go:89] found id: ""
	I0920 18:21:55.922101   75577 logs.go:276] 0 containers: []
	W0920 18:21:55.922114   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:55.922127   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:55.922147   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:55.979406   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:55.979445   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:55.993082   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:55.993111   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:56.065650   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:56.065701   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:56.065717   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:56.142640   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:56.142684   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:58.683524   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:58.698775   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:58.698833   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:58.737369   75577 cri.go:89] found id: ""
	I0920 18:21:58.737398   75577 logs.go:276] 0 containers: []
	W0920 18:21:58.737409   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:58.737417   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:58.737476   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:58.779504   75577 cri.go:89] found id: ""
	I0920 18:21:58.779533   75577 logs.go:276] 0 containers: []
	W0920 18:21:58.779544   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:58.779552   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:58.779610   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:58.814363   75577 cri.go:89] found id: ""
	I0920 18:21:58.814393   75577 logs.go:276] 0 containers: []
	W0920 18:21:58.814401   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:58.814407   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:58.814454   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:58.846219   75577 cri.go:89] found id: ""
	I0920 18:21:58.846242   75577 logs.go:276] 0 containers: []
	W0920 18:21:58.846251   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:58.846256   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:58.846307   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:58.882387   75577 cri.go:89] found id: ""
	I0920 18:21:58.882423   75577 logs.go:276] 0 containers: []
	W0920 18:21:58.882431   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:58.882437   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:58.882497   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:58.918632   75577 cri.go:89] found id: ""
	I0920 18:21:58.918668   75577 logs.go:276] 0 containers: []
	W0920 18:21:58.918679   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:58.918686   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:58.918751   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:58.953521   75577 cri.go:89] found id: ""
	I0920 18:21:58.953547   75577 logs.go:276] 0 containers: []
	W0920 18:21:58.953557   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:58.953564   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:58.953624   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:58.987412   75577 cri.go:89] found id: ""
	I0920 18:21:58.987439   75577 logs.go:276] 0 containers: []
	W0920 18:21:58.987447   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:58.987457   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:58.987471   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:59.077108   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:59.077169   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:59.141723   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:59.141758   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:59.208035   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:59.208081   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:59.221380   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:59.221404   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:59.297151   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:22:01.797684   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:22:01.812275   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:22:01.812338   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:22:01.846431   75577 cri.go:89] found id: ""
	I0920 18:22:01.846459   75577 logs.go:276] 0 containers: []
	W0920 18:22:01.846477   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:22:01.846483   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:22:01.846535   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:22:01.889016   75577 cri.go:89] found id: ""
	I0920 18:22:01.889049   75577 logs.go:276] 0 containers: []
	W0920 18:22:01.889061   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:22:01.889069   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:22:01.889144   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:22:01.926048   75577 cri.go:89] found id: ""
	I0920 18:22:01.926078   75577 logs.go:276] 0 containers: []
	W0920 18:22:01.926090   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:22:01.926108   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:22:01.926185   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:22:01.964510   75577 cri.go:89] found id: ""
	I0920 18:22:01.964536   75577 logs.go:276] 0 containers: []
	W0920 18:22:01.964547   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:22:01.964555   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:22:01.964624   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:22:02.005549   75577 cri.go:89] found id: ""
	I0920 18:22:02.005575   75577 logs.go:276] 0 containers: []
	W0920 18:22:02.005583   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:22:02.005588   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:22:02.005642   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:22:02.047359   75577 cri.go:89] found id: ""
	I0920 18:22:02.047385   75577 logs.go:276] 0 containers: []
	W0920 18:22:02.047393   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:22:02.047399   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:22:02.047455   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:22:02.082961   75577 cri.go:89] found id: ""
	I0920 18:22:02.082999   75577 logs.go:276] 0 containers: []
	W0920 18:22:02.083008   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:22:02.083015   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:22:02.083062   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:22:02.117696   75577 cri.go:89] found id: ""
	I0920 18:22:02.117732   75577 logs.go:276] 0 containers: []
	W0920 18:22:02.117741   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:22:02.117753   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:22:02.117769   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:22:02.131282   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:22:02.131311   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:22:02.200738   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:22:02.200760   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:22:02.200776   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:22:02.281162   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:22:02.281200   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:22:02.321854   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:22:02.321888   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:22:04.874353   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:22:04.889710   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:22:04.889773   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:22:04.945719   75577 cri.go:89] found id: ""
	I0920 18:22:04.945745   75577 logs.go:276] 0 containers: []
	W0920 18:22:04.945753   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:22:04.945759   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:22:04.945802   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:22:04.995492   75577 cri.go:89] found id: ""
	I0920 18:22:04.995535   75577 logs.go:276] 0 containers: []
	W0920 18:22:04.995547   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:22:04.995555   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:22:04.995615   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:22:05.037827   75577 cri.go:89] found id: ""
	I0920 18:22:05.037871   75577 logs.go:276] 0 containers: []
	W0920 18:22:05.037882   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:22:05.037888   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:22:05.037935   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:22:05.077652   75577 cri.go:89] found id: ""
	I0920 18:22:05.077691   75577 logs.go:276] 0 containers: []
	W0920 18:22:05.077704   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:22:05.077712   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:22:05.077772   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:22:05.114472   75577 cri.go:89] found id: ""
	I0920 18:22:05.114509   75577 logs.go:276] 0 containers: []
	W0920 18:22:05.114520   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:22:05.114527   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:22:05.114590   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:22:05.150809   75577 cri.go:89] found id: ""
	I0920 18:22:05.150841   75577 logs.go:276] 0 containers: []
	W0920 18:22:05.150853   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:22:05.150860   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:22:05.150908   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:22:05.188880   75577 cri.go:89] found id: ""
	I0920 18:22:05.188910   75577 logs.go:276] 0 containers: []
	W0920 18:22:05.188921   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:22:05.188929   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:22:05.188990   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:22:05.224990   75577 cri.go:89] found id: ""
	I0920 18:22:05.225015   75577 logs.go:276] 0 containers: []
	W0920 18:22:05.225023   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:22:05.225032   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:22:05.225043   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:22:05.298000   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:22:05.298023   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:22:05.298037   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:22:05.382969   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:22:05.383008   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:22:05.424911   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:22:05.424940   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:22:05.475988   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:22:05.476024   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:22:07.990660   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:22:08.005572   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:22:08.005637   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:22:08.044354   75577 cri.go:89] found id: ""
	I0920 18:22:08.044381   75577 logs.go:276] 0 containers: []
	W0920 18:22:08.044390   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:22:08.044401   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:22:08.044449   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:22:08.082899   75577 cri.go:89] found id: ""
	I0920 18:22:08.082928   75577 logs.go:276] 0 containers: []
	W0920 18:22:08.082939   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:22:08.082948   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:22:08.083009   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:22:08.129297   75577 cri.go:89] found id: ""
	I0920 18:22:08.129325   75577 logs.go:276] 0 containers: []
	W0920 18:22:08.129335   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:22:08.129343   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:22:08.129404   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:22:08.168744   75577 cri.go:89] found id: ""
	I0920 18:22:08.168774   75577 logs.go:276] 0 containers: []
	W0920 18:22:08.168787   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:22:08.168792   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:22:08.168849   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:22:08.203830   75577 cri.go:89] found id: ""
	I0920 18:22:08.203860   75577 logs.go:276] 0 containers: []
	W0920 18:22:08.203871   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:22:08.203879   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:22:08.203940   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:22:08.238226   75577 cri.go:89] found id: ""
	I0920 18:22:08.238250   75577 logs.go:276] 0 containers: []
	W0920 18:22:08.238258   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:22:08.238263   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:22:08.238331   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:22:08.271457   75577 cri.go:89] found id: ""
	I0920 18:22:08.271485   75577 logs.go:276] 0 containers: []
	W0920 18:22:08.271495   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:22:08.271502   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:22:08.271563   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:22:08.306661   75577 cri.go:89] found id: ""
	I0920 18:22:08.306692   75577 logs.go:276] 0 containers: []
	W0920 18:22:08.306703   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:22:08.306712   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:22:08.306730   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:22:08.357760   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:22:08.357793   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:22:08.372625   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:22:08.372660   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:22:08.442477   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:22:08.442501   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:22:08.442517   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:22:08.528381   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:22:08.528412   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:22:11.064842   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:22:11.078086   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:22:11.078158   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:22:11.113454   75577 cri.go:89] found id: ""
	I0920 18:22:11.113485   75577 logs.go:276] 0 containers: []
	W0920 18:22:11.113497   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:22:11.113511   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:22:11.113575   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:22:11.148039   75577 cri.go:89] found id: ""
	I0920 18:22:11.148064   75577 logs.go:276] 0 containers: []
	W0920 18:22:11.148072   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:22:11.148078   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:22:11.148127   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:22:11.182667   75577 cri.go:89] found id: ""
	I0920 18:22:11.182697   75577 logs.go:276] 0 containers: []
	W0920 18:22:11.182708   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:22:11.182715   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:22:11.182776   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:22:11.217022   75577 cri.go:89] found id: ""
	I0920 18:22:11.217055   75577 logs.go:276] 0 containers: []
	W0920 18:22:11.217067   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:22:11.217075   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:22:11.217141   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:22:11.253970   75577 cri.go:89] found id: ""
	I0920 18:22:11.254001   75577 logs.go:276] 0 containers: []
	W0920 18:22:11.254012   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:22:11.254019   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:22:11.254085   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:22:11.288070   75577 cri.go:89] found id: ""
	I0920 18:22:11.288103   75577 logs.go:276] 0 containers: []
	W0920 18:22:11.288114   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:22:11.288121   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:22:11.288189   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:22:11.324215   75577 cri.go:89] found id: ""
	I0920 18:22:11.324240   75577 logs.go:276] 0 containers: []
	W0920 18:22:11.324254   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:22:11.324261   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:22:11.324319   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:22:11.359377   75577 cri.go:89] found id: ""
	I0920 18:22:11.359406   75577 logs.go:276] 0 containers: []
	W0920 18:22:11.359414   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:22:11.359423   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:22:11.359433   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:22:11.416479   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:22:11.416520   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:22:11.430240   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:22:11.430269   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:22:11.501531   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:22:11.501553   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:22:11.501565   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:22:11.580748   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:22:11.580787   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:22:14.119084   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:22:14.132428   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:22:14.132505   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:22:14.167674   75577 cri.go:89] found id: ""
	I0920 18:22:14.167699   75577 logs.go:276] 0 containers: []
	W0920 18:22:14.167710   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:22:14.167717   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:22:14.167772   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:22:14.204570   75577 cri.go:89] found id: ""
	I0920 18:22:14.204595   75577 logs.go:276] 0 containers: []
	W0920 18:22:14.204603   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:22:14.204608   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:22:14.204655   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:22:14.241687   75577 cri.go:89] found id: ""
	I0920 18:22:14.241716   75577 logs.go:276] 0 containers: []
	W0920 18:22:14.241724   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:22:14.241731   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:22:14.241789   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:22:14.275776   75577 cri.go:89] found id: ""
	I0920 18:22:14.275802   75577 logs.go:276] 0 containers: []
	W0920 18:22:14.275810   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:22:14.275816   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:22:14.275872   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:22:14.309565   75577 cri.go:89] found id: ""
	I0920 18:22:14.309589   75577 logs.go:276] 0 containers: []
	W0920 18:22:14.309596   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:22:14.309602   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:22:14.309662   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:22:14.341858   75577 cri.go:89] found id: ""
	I0920 18:22:14.341884   75577 logs.go:276] 0 containers: []
	W0920 18:22:14.341892   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:22:14.341898   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:22:14.341963   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:22:14.377863   75577 cri.go:89] found id: ""
	I0920 18:22:14.377896   75577 logs.go:276] 0 containers: []
	W0920 18:22:14.377906   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:22:14.377912   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:22:14.377988   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:22:14.413182   75577 cri.go:89] found id: ""
	I0920 18:22:14.413207   75577 logs.go:276] 0 containers: []
	W0920 18:22:14.413214   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:22:14.413236   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:22:14.413253   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:22:14.466782   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:22:14.466820   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:22:14.480627   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:22:14.480668   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:22:14.552071   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:22:14.552104   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:22:14.552121   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:22:14.626481   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:22:14.626518   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:22:17.163264   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:22:17.181609   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:22:17.181684   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:22:17.219871   75577 cri.go:89] found id: ""
	I0920 18:22:17.219899   75577 logs.go:276] 0 containers: []
	W0920 18:22:17.219923   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:22:17.219932   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:22:17.220013   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:22:17.256315   75577 cri.go:89] found id: ""
	I0920 18:22:17.256346   75577 logs.go:276] 0 containers: []
	W0920 18:22:17.256356   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:22:17.256364   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:22:17.256434   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:22:17.294326   75577 cri.go:89] found id: ""
	I0920 18:22:17.294352   75577 logs.go:276] 0 containers: []
	W0920 18:22:17.294360   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:22:17.294366   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:22:17.294421   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:22:17.336461   75577 cri.go:89] found id: ""
	I0920 18:22:17.336491   75577 logs.go:276] 0 containers: []
	W0920 18:22:17.336500   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:22:17.336506   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:22:17.336568   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:22:17.376242   75577 cri.go:89] found id: ""
	I0920 18:22:17.376283   75577 logs.go:276] 0 containers: []
	W0920 18:22:17.376295   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:22:17.376302   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:22:17.376363   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:22:17.409147   75577 cri.go:89] found id: ""
	I0920 18:22:17.409180   75577 logs.go:276] 0 containers: []
	W0920 18:22:17.409190   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:22:17.409198   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:22:17.409269   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:22:17.444698   75577 cri.go:89] found id: ""
	I0920 18:22:17.444725   75577 logs.go:276] 0 containers: []
	W0920 18:22:17.444734   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:22:17.444738   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:22:17.444791   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:22:17.485914   75577 cri.go:89] found id: ""
	I0920 18:22:17.485948   75577 logs.go:276] 0 containers: []
	W0920 18:22:17.485959   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:22:17.485970   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:22:17.485983   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:22:17.523567   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:22:17.523601   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:22:17.588864   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:22:17.588905   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:22:17.605052   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:22:17.605092   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:22:17.677012   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:22:17.677039   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:22:17.677056   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:22:20.252258   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:22:20.267607   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:22:20.267698   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:22:20.302260   75577 cri.go:89] found id: ""
	I0920 18:22:20.302291   75577 logs.go:276] 0 containers: []
	W0920 18:22:20.302301   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:22:20.302309   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:22:20.302373   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:22:20.335343   75577 cri.go:89] found id: ""
	I0920 18:22:20.335377   75577 logs.go:276] 0 containers: []
	W0920 18:22:20.335389   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:22:20.335397   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:22:20.335460   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:22:20.369612   75577 cri.go:89] found id: ""
	I0920 18:22:20.369641   75577 logs.go:276] 0 containers: []
	W0920 18:22:20.369649   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:22:20.369655   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:22:20.369703   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:22:20.403703   75577 cri.go:89] found id: ""
	I0920 18:22:20.403732   75577 logs.go:276] 0 containers: []
	W0920 18:22:20.403740   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:22:20.403746   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:22:20.403804   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:22:20.436288   75577 cri.go:89] found id: ""
	I0920 18:22:20.436316   75577 logs.go:276] 0 containers: []
	W0920 18:22:20.436328   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:22:20.436336   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:22:20.436399   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:22:20.471536   75577 cri.go:89] found id: ""
	I0920 18:22:20.471572   75577 logs.go:276] 0 containers: []
	W0920 18:22:20.471584   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:22:20.471593   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:22:20.471657   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:22:20.506012   75577 cri.go:89] found id: ""
	I0920 18:22:20.506107   75577 logs.go:276] 0 containers: []
	W0920 18:22:20.506134   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:22:20.506143   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:22:20.506199   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:22:20.542614   75577 cri.go:89] found id: ""
	I0920 18:22:20.542650   75577 logs.go:276] 0 containers: []
	W0920 18:22:20.542660   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:22:20.542671   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:22:20.542687   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:22:20.596316   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:22:20.596357   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:22:20.610438   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:22:20.610465   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:22:20.688061   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:22:20.688079   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:22:20.688091   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:22:20.768249   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:22:20.768296   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:22:23.307149   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:22:23.319889   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:22:23.319972   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:22:23.356613   75577 cri.go:89] found id: ""
	I0920 18:22:23.356645   75577 logs.go:276] 0 containers: []
	W0920 18:22:23.356656   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:22:23.356663   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:22:23.356728   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:22:23.391632   75577 cri.go:89] found id: ""
	I0920 18:22:23.391663   75577 logs.go:276] 0 containers: []
	W0920 18:22:23.391675   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:22:23.391683   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:22:23.391748   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:22:23.426908   75577 cri.go:89] found id: ""
	I0920 18:22:23.426936   75577 logs.go:276] 0 containers: []
	W0920 18:22:23.426946   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:22:23.426952   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:22:23.427013   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:22:23.461890   75577 cri.go:89] found id: ""
	I0920 18:22:23.461925   75577 logs.go:276] 0 containers: []
	W0920 18:22:23.461938   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:22:23.461947   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:22:23.462014   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:22:23.503517   75577 cri.go:89] found id: ""
	I0920 18:22:23.503549   75577 logs.go:276] 0 containers: []
	W0920 18:22:23.503560   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:22:23.503566   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:22:23.503615   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:22:23.540699   75577 cri.go:89] found id: ""
	I0920 18:22:23.540722   75577 logs.go:276] 0 containers: []
	W0920 18:22:23.540731   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:22:23.540736   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:22:23.540783   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:22:23.575485   75577 cri.go:89] found id: ""
	I0920 18:22:23.575509   75577 logs.go:276] 0 containers: []
	W0920 18:22:23.575517   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:22:23.575523   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:22:23.575576   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:22:23.610998   75577 cri.go:89] found id: ""
	I0920 18:22:23.611028   75577 logs.go:276] 0 containers: []
	W0920 18:22:23.611039   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:22:23.611051   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:22:23.611065   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:22:23.687072   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:22:23.687109   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:22:23.724790   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:22:23.724824   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:22:23.778905   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:22:23.778945   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:22:23.794298   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:22:23.794329   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:22:23.873341   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
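	The block above is one pass of minikube's control-plane wait loop: roughly every three seconds it lists CRI containers for each expected component (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard), finds none, and falls back to collecting kubelet, dmesg, CRI-O and container-status logs, while the describe-nodes call keeps failing because nothing is serving on localhost:8443. A minimal manual equivalent of that check, assuming shell access to the node and crictl on the PATH (illustrative only, not commands captured in this run):

		# does the runtime know about any control-plane containers at all?
		for c in kube-apiserver etcd kube-scheduler kube-controller-manager; do
			echo "== $c =="
			sudo crictl ps -a --quiet --name="$c"    # empty output: the container was never created
		done
		# the endpoint the describe-nodes call is being refused on
		curl -sk https://localhost:8443/healthz || echo "apiserver not reachable"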
	I0920 18:22:26.374541   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:22:26.388151   75577 kubeadm.go:597] duration metric: took 4m2.956358154s to restartPrimaryControlPlane
	W0920 18:22:26.388229   75577 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0920 18:22:26.388260   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0920 18:22:27.454521   75577 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.066231187s)
	I0920 18:22:27.454602   75577 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:22:27.469409   75577 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 18:22:27.480314   75577 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 18:22:27.491389   75577 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 18:22:27.491416   75577 kubeadm.go:157] found existing configuration files:
	
	I0920 18:22:27.491468   75577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 18:22:27.501448   75577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 18:22:27.501534   75577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 18:22:27.511370   75577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 18:22:27.521767   75577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 18:22:27.521855   75577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 18:22:27.531717   75577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 18:22:27.540517   75577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 18:22:27.540575   75577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 18:22:27.549644   75577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 18:22:27.558412   75577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 18:22:27.558480   75577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
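	Before the re-init, minikube's stale-config check greps each file under /etc/kubernetes for the expected control-plane endpoint and deletes any file that does not contain it; here every grep exits with status 2 because kubeadm reset had already removed the files, so the rm calls are no-ops. A rough shell equivalent of that check, for illustration only (endpoint string taken from the log above):

		for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
			if ! sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f"; then
				sudo rm -f "/etc/kubernetes/$f"    # missing or stale config is removed before kubeadm init
			fi
		done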
	I0920 18:22:27.567702   75577 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 18:22:27.790386   75577 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 18:24:23.891635   75577 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0920 18:24:23.891735   75577 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0920 18:24:23.893591   75577 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0920 18:24:23.893681   75577 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 18:24:23.893785   75577 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 18:24:23.893913   75577 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 18:24:23.894056   75577 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0920 18:24:23.894134   75577 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 18:24:23.895929   75577 out.go:235]   - Generating certificates and keys ...
	I0920 18:24:23.896024   75577 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 18:24:23.896127   75577 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 18:24:23.896245   75577 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0920 18:24:23.896329   75577 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0920 18:24:23.896426   75577 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0920 18:24:23.896510   75577 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0920 18:24:23.896603   75577 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0920 18:24:23.896686   75577 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0920 18:24:23.896783   75577 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0920 18:24:23.896865   75577 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0920 18:24:23.896916   75577 kubeadm.go:310] [certs] Using the existing "sa" key
	I0920 18:24:23.896984   75577 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 18:24:23.897054   75577 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 18:24:23.897112   75577 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 18:24:23.897174   75577 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 18:24:23.897237   75577 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 18:24:23.897367   75577 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 18:24:23.897488   75577 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 18:24:23.897554   75577 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 18:24:23.897660   75577 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 18:24:23.899056   75577 out.go:235]   - Booting up control plane ...
	I0920 18:24:23.899173   75577 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 18:24:23.899287   75577 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 18:24:23.899371   75577 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 18:24:23.899460   75577 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 18:24:23.899639   75577 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0920 18:24:23.899719   75577 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0920 18:24:23.899823   75577 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:24:23.900047   75577 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:24:23.900124   75577 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:24:23.900322   75577 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:24:23.900392   75577 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:24:23.900556   75577 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:24:23.900634   75577 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:24:23.900826   75577 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:24:23.900884   75577 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:24:23.901044   75577 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:24:23.901052   75577 kubeadm.go:310] 
	I0920 18:24:23.901094   75577 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0920 18:24:23.901129   75577 kubeadm.go:310] 		timed out waiting for the condition
	I0920 18:24:23.901135   75577 kubeadm.go:310] 
	I0920 18:24:23.901179   75577 kubeadm.go:310] 	This error is likely caused by:
	I0920 18:24:23.901209   75577 kubeadm.go:310] 		- The kubelet is not running
	I0920 18:24:23.901314   75577 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0920 18:24:23.901335   75577 kubeadm.go:310] 
	I0920 18:24:23.901458   75577 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0920 18:24:23.901515   75577 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0920 18:24:23.901547   75577 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0920 18:24:23.901556   75577 kubeadm.go:310] 
	I0920 18:24:23.901719   75577 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0920 18:24:23.901859   75577 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0920 18:24:23.901870   75577 kubeadm.go:310] 
	I0920 18:24:23.902030   75577 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0920 18:24:23.902122   75577 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0920 18:24:23.902186   75577 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0920 18:24:23.902264   75577 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0920 18:24:23.902299   75577 kubeadm.go:310] 
	W0920 18:24:23.902388   75577 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
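	Both the klog entry and the echoed warning above record the same failure: kubeadm writes the certificates, kubeconfigs and static Pod manifests, but the kubelet never becomes healthy, so its probe endpoint on 127.0.0.1:10248 refuses connections and the wait-control-plane phase times out. The triage the error text itself recommends, written out as a hypothetical session on the VM (not commands captured in this run):

		sudo systemctl status kubelet                 # is the unit active, or crash-looping?
		sudo journalctl -xeu kubelet | tail -n 100    # why the kubelet exited
		curl -sSL http://localhost:10248/healthz      # the probe kubeadm's kubelet-check performs
		# any control-plane containers the runtime did manage to create
		sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause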
	
	I0920 18:24:23.902435   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0920 18:24:24.370490   75577 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:24:24.385383   75577 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 18:24:24.395443   75577 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 18:24:24.395463   75577 kubeadm.go:157] found existing configuration files:
	
	I0920 18:24:24.395505   75577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 18:24:24.404947   75577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 18:24:24.405021   75577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 18:24:24.415145   75577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 18:24:24.424199   75577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 18:24:24.424279   75577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 18:24:24.435108   75577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 18:24:24.446684   75577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 18:24:24.446742   75577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 18:24:24.457199   75577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 18:24:24.467240   75577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 18:24:24.467315   75577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 18:24:24.479293   75577 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 18:24:24.704601   75577 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 18:26:21.167677   75577 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0920 18:26:21.167760   75577 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0920 18:26:21.170176   75577 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0920 18:26:21.170249   75577 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 18:26:21.170353   75577 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 18:26:21.170485   75577 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 18:26:21.170653   75577 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0920 18:26:21.170778   75577 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 18:26:21.172479   75577 out.go:235]   - Generating certificates and keys ...
	I0920 18:26:21.172559   75577 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 18:26:21.172623   75577 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 18:26:21.172734   75577 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0920 18:26:21.172820   75577 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0920 18:26:21.172901   75577 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0920 18:26:21.172948   75577 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0920 18:26:21.173054   75577 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0920 18:26:21.173145   75577 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0920 18:26:21.173238   75577 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0920 18:26:21.173325   75577 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0920 18:26:21.173381   75577 kubeadm.go:310] [certs] Using the existing "sa" key
	I0920 18:26:21.173468   75577 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 18:26:21.173517   75577 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 18:26:21.173562   75577 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 18:26:21.173657   75577 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 18:26:21.173753   75577 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 18:26:21.173931   75577 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 18:26:21.174042   75577 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 18:26:21.174120   75577 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 18:26:21.174226   75577 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 18:26:21.175785   75577 out.go:235]   - Booting up control plane ...
	I0920 18:26:21.175897   75577 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 18:26:21.176062   75577 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 18:26:21.176179   75577 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 18:26:21.176283   75577 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 18:26:21.176482   75577 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0920 18:26:21.176567   75577 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0920 18:26:21.176654   75577 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:26:21.176850   75577 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:26:21.176952   75577 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:26:21.177146   75577 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:26:21.177255   75577 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:26:21.177460   75577 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:26:21.177527   75577 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:26:21.177681   75577 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:26:21.177758   75577 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:26:21.177952   75577 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:26:21.177966   75577 kubeadm.go:310] 
	I0920 18:26:21.178001   75577 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0920 18:26:21.178035   75577 kubeadm.go:310] 		timed out waiting for the condition
	I0920 18:26:21.178041   75577 kubeadm.go:310] 
	I0920 18:26:21.178070   75577 kubeadm.go:310] 	This error is likely caused by:
	I0920 18:26:21.178099   75577 kubeadm.go:310] 		- The kubelet is not running
	I0920 18:26:21.178239   75577 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0920 18:26:21.178251   75577 kubeadm.go:310] 
	I0920 18:26:21.178348   75577 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0920 18:26:21.178378   75577 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0920 18:26:21.178415   75577 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0920 18:26:21.178421   75577 kubeadm.go:310] 
	I0920 18:26:21.178505   75577 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0920 18:26:21.178581   75577 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0920 18:26:21.178590   75577 kubeadm.go:310] 
	I0920 18:26:21.178695   75577 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0920 18:26:21.178770   75577 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0920 18:26:21.178832   75577 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0920 18:26:21.178893   75577 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0920 18:26:21.178950   75577 kubeadm.go:394] duration metric: took 7m57.80706374s to StartCluster
	I0920 18:26:21.178961   75577 kubeadm.go:310] 
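	The second kubeadm init attempt fails the same way, and after 7m57.8s StartCluster gives up and falls into a final round of log collection below. The error text blames the kubelet not running or being misconfigured; with CRI-O one common mismatch worth ruling out is the cgroup driver, so a hypothetical check (not performed in this run; /var/lib/kubelet/config.yaml appears in the log, /etc/crio/crio.conf is the conventional CRI-O path) might be:

		grep -i cgroup_manager /etc/crio/crio.conf          # CRI-O's cgroup manager (systemd or cgroupfs)
		grep -i cgroupDriver /var/lib/kubelet/config.yaml   # kubelet's cgroup driver, written by kubeadm
		# the two should agree; a mismatch is a common reason the kubelet fails its health checks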
	I0920 18:26:21.178989   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:26:21.179047   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:26:21.229528   75577 cri.go:89] found id: ""
	I0920 18:26:21.229554   75577 logs.go:276] 0 containers: []
	W0920 18:26:21.229564   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:26:21.229572   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:26:21.229644   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:26:21.265997   75577 cri.go:89] found id: ""
	I0920 18:26:21.266021   75577 logs.go:276] 0 containers: []
	W0920 18:26:21.266029   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:26:21.266034   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:26:21.266081   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:26:21.309358   75577 cri.go:89] found id: ""
	I0920 18:26:21.309391   75577 logs.go:276] 0 containers: []
	W0920 18:26:21.309402   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:26:21.309409   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:26:21.309469   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:26:21.349418   75577 cri.go:89] found id: ""
	I0920 18:26:21.349442   75577 logs.go:276] 0 containers: []
	W0920 18:26:21.349451   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:26:21.349457   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:26:21.349537   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:26:21.388067   75577 cri.go:89] found id: ""
	I0920 18:26:21.388092   75577 logs.go:276] 0 containers: []
	W0920 18:26:21.388105   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:26:21.388111   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:26:21.388165   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:26:21.428474   75577 cri.go:89] found id: ""
	I0920 18:26:21.428501   75577 logs.go:276] 0 containers: []
	W0920 18:26:21.428512   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:26:21.428519   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:26:21.428580   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:26:21.466188   75577 cri.go:89] found id: ""
	I0920 18:26:21.466215   75577 logs.go:276] 0 containers: []
	W0920 18:26:21.466225   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:26:21.466232   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:26:21.466291   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:26:21.504396   75577 cri.go:89] found id: ""
	I0920 18:26:21.504422   75577 logs.go:276] 0 containers: []
	W0920 18:26:21.504432   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:26:21.504443   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:26:21.504457   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:26:21.608057   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:26:21.608098   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:26:21.672536   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:26:21.672564   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:26:21.723219   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:26:21.723251   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:26:21.736436   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:26:21.736463   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:26:21.809055   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0920 18:26:21.809079   75577 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0920 18:26:21.809128   75577 out.go:270] * 
	* 
	W0920 18:26:21.809197   75577 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0920 18:26:21.809217   75577 out.go:270] * 
	* 
	W0920 18:26:21.810193   75577 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 18:26:21.814385   75577 out.go:201] 
	W0920 18:26:21.816118   75577 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0920 18:26:21.816186   75577 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0920 18:26:21.816225   75577 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0920 18:26:21.818478   75577 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-744025 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-744025 -n old-k8s-version-744025
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-744025 -n old-k8s-version-744025: exit status 2 (242.065215ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-744025 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-744025 logs -n 25: (1.685879468s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p flannel-833505 sudo cat                             | flannel-833505               | jenkins | v1.34.0 | 20 Sep 24 18:09 UTC | 20 Sep 24 18:09 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p flannel-833505 sudo                                 | flannel-833505               | jenkins | v1.34.0 | 20 Sep 24 18:09 UTC | 20 Sep 24 18:09 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p flannel-833505 sudo                                 | flannel-833505               | jenkins | v1.34.0 | 20 Sep 24 18:09 UTC | 20 Sep 24 18:09 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p flannel-833505 sudo                                 | flannel-833505               | jenkins | v1.34.0 | 20 Sep 24 18:09 UTC | 20 Sep 24 18:09 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p flannel-833505 sudo find                            | flannel-833505               | jenkins | v1.34.0 | 20 Sep 24 18:09 UTC | 20 Sep 24 18:09 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p flannel-833505 sudo crio                            | flannel-833505               | jenkins | v1.34.0 | 20 Sep 24 18:09 UTC | 20 Sep 24 18:09 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p flannel-833505                                      | flannel-833505               | jenkins | v1.34.0 | 20 Sep 24 18:09 UTC | 20 Sep 24 18:09 UTC |
	| delete  | -p                                                     | disable-driver-mounts-739804 | jenkins | v1.34.0 | 20 Sep 24 18:09 UTC | 20 Sep 24 18:09 UTC |
	|         | disable-driver-mounts-739804                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-553719 | jenkins | v1.34.0 | 20 Sep 24 18:09 UTC | 20 Sep 24 18:10 UTC |
	|         | default-k8s-diff-port-553719                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-956403             | no-preload-956403            | jenkins | v1.34.0 | 20 Sep 24 18:10 UTC | 20 Sep 24 18:10 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-956403                                   | no-preload-956403            | jenkins | v1.34.0 | 20 Sep 24 18:10 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-768431            | embed-certs-768431           | jenkins | v1.34.0 | 20 Sep 24 18:10 UTC | 20 Sep 24 18:10 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-768431                                  | embed-certs-768431           | jenkins | v1.34.0 | 20 Sep 24 18:10 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-553719  | default-k8s-diff-port-553719 | jenkins | v1.34.0 | 20 Sep 24 18:11 UTC | 20 Sep 24 18:11 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-553719 | jenkins | v1.34.0 | 20 Sep 24 18:11 UTC |                     |
	|         | default-k8s-diff-port-553719                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-956403                  | no-preload-956403            | jenkins | v1.34.0 | 20 Sep 24 18:12 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-744025        | old-k8s-version-744025       | jenkins | v1.34.0 | 20 Sep 24 18:12 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p no-preload-956403                                   | no-preload-956403            | jenkins | v1.34.0 | 20 Sep 24 18:12 UTC | 20 Sep 24 18:23 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-768431                 | embed-certs-768431           | jenkins | v1.34.0 | 20 Sep 24 18:13 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-768431                                  | embed-certs-768431           | jenkins | v1.34.0 | 20 Sep 24 18:13 UTC | 20 Sep 24 18:22 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-553719       | default-k8s-diff-port-553719 | jenkins | v1.34.0 | 20 Sep 24 18:13 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-553719 | jenkins | v1.34.0 | 20 Sep 24 18:13 UTC | 20 Sep 24 18:22 UTC |
	|         | default-k8s-diff-port-553719                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-744025                              | old-k8s-version-744025       | jenkins | v1.34.0 | 20 Sep 24 18:14 UTC | 20 Sep 24 18:14 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-744025             | old-k8s-version-744025       | jenkins | v1.34.0 | 20 Sep 24 18:14 UTC | 20 Sep 24 18:14 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-744025                              | old-k8s-version-744025       | jenkins | v1.34.0 | 20 Sep 24 18:14 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 18:14:12
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 18:14:12.735020   75577 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:14:12.735385   75577 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:14:12.735400   75577 out.go:358] Setting ErrFile to fd 2...
	I0920 18:14:12.735407   75577 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:14:12.735610   75577 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-8777/.minikube/bin
	I0920 18:14:12.736161   75577 out.go:352] Setting JSON to false
	I0920 18:14:12.737033   75577 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6996,"bootTime":1726849057,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 18:14:12.737131   75577 start.go:139] virtualization: kvm guest
	I0920 18:14:12.739127   75577 out.go:177] * [old-k8s-version-744025] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 18:14:12.740288   75577 notify.go:220] Checking for updates...
	I0920 18:14:12.740310   75577 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 18:14:12.741542   75577 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 18:14:12.742727   75577 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19672-8777/kubeconfig
	I0920 18:14:12.743806   75577 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-8777/.minikube
	I0920 18:14:12.744863   75577 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 18:14:12.745877   75577 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 18:14:12.747201   75577 config.go:182] Loaded profile config "old-k8s-version-744025": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0920 18:14:12.747636   75577 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:14:12.747678   75577 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:14:12.762352   75577 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42889
	I0920 18:14:12.762734   75577 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:14:12.763321   75577 main.go:141] libmachine: Using API Version  1
	I0920 18:14:12.763348   75577 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:14:12.763742   75577 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:14:12.763968   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .DriverName
	I0920 18:14:12.765767   75577 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0920 18:14:12.767036   75577 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 18:14:12.767377   75577 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:14:12.767449   75577 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:14:12.782473   75577 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40777
	I0920 18:14:12.782971   75577 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:14:12.783455   75577 main.go:141] libmachine: Using API Version  1
	I0920 18:14:12.783477   75577 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:14:12.783807   75577 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:14:12.784035   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .DriverName
	I0920 18:14:12.820265   75577 out.go:177] * Using the kvm2 driver based on existing profile
	I0920 18:14:12.821397   75577 start.go:297] selected driver: kvm2
	I0920 18:14:12.821409   75577 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-744025 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-744025 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.207 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:14:12.821519   75577 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 18:14:12.822217   75577 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 18:14:12.822309   75577 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19672-8777/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 18:14:12.837641   75577 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 18:14:12.838107   75577 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 18:14:12.838143   75577 cni.go:84] Creating CNI manager for ""
	I0920 18:14:12.838193   75577 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 18:14:12.838231   75577 start.go:340] cluster config:
	{Name:old-k8s-version-744025 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-744025 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.207 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:14:12.838329   75577 iso.go:125] acquiring lock: {Name:mkba95ef0488e46f622333e9f317f43def93040b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 18:14:12.840059   75577 out.go:177] * Starting "old-k8s-version-744025" primary control-plane node in "old-k8s-version-744025" cluster
	I0920 18:14:15.230099   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:14:12.841339   75577 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0920 18:14:12.841384   75577 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0920 18:14:12.841394   75577 cache.go:56] Caching tarball of preloaded images
	I0920 18:14:12.841473   75577 preload.go:172] Found /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 18:14:12.841482   75577 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0920 18:14:12.841594   75577 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/old-k8s-version-744025/config.json ...
	I0920 18:14:12.841781   75577 start.go:360] acquireMachinesLock for old-k8s-version-744025: {Name:mkfeedb385cf08b5d2aa00913e85815d02a180c2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 18:14:18.302232   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:14:24.382087   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:14:27.454110   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:14:33.534076   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:14:36.606161   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:14:42.686057   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:14:45.758152   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:14:51.838087   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:14:54.910159   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:15:00.990183   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:15:04.062105   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:15:10.142153   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:15:13.218090   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:15:19.294132   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:15:22.366139   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:15:28.446103   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:15:31.518062   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:15:37.598126   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:15:40.670145   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:15:46.750116   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:15:49.822142   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:15:55.902120   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:15:58.974169   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:16:05.054139   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:16:08.126122   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:16:14.206089   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:16:17.278137   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:16:23.358156   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:16:26.430180   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:16:32.510115   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:16:35.582114   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:16:41.662126   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:16:44.734140   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:16:50.814123   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:16:53.886188   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:16:59.966139   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:17:03.038067   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:17:09.118169   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:17:12.190091   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:17:15.194165   75086 start.go:364] duration metric: took 4m1.091611985s to acquireMachinesLock for "embed-certs-768431"
	I0920 18:17:15.194223   75086 start.go:96] Skipping create...Using existing machine configuration
	I0920 18:17:15.194231   75086 fix.go:54] fixHost starting: 
	I0920 18:17:15.194737   75086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:17:15.194794   75086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:17:15.210558   75086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43191
	I0920 18:17:15.211084   75086 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:17:15.211527   75086 main.go:141] libmachine: Using API Version  1
	I0920 18:17:15.211550   75086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:17:15.211882   75086 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:17:15.212099   75086 main.go:141] libmachine: (embed-certs-768431) Calling .DriverName
	I0920 18:17:15.212217   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetState
	I0920 18:17:15.213955   75086 fix.go:112] recreateIfNeeded on embed-certs-768431: state=Stopped err=<nil>
	I0920 18:17:15.213995   75086 main.go:141] libmachine: (embed-certs-768431) Calling .DriverName
	W0920 18:17:15.214129   75086 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 18:17:15.215650   75086 out.go:177] * Restarting existing kvm2 VM for "embed-certs-768431" ...
	I0920 18:17:15.191760   74753 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 18:17:15.191794   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetMachineName
	I0920 18:17:15.192105   74753 buildroot.go:166] provisioning hostname "no-preload-956403"
	I0920 18:17:15.192133   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetMachineName
	I0920 18:17:15.192325   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHHostname
	I0920 18:17:15.194039   74753 machine.go:96] duration metric: took 4m37.421116123s to provisionDockerMachine
	I0920 18:17:15.194077   74753 fix.go:56] duration metric: took 4m37.444193238s for fixHost
	I0920 18:17:15.194087   74753 start.go:83] releasing machines lock for "no-preload-956403", held for 4m37.444234787s
	W0920 18:17:15.194109   74753 start.go:714] error starting host: provision: host is not running
	W0920 18:17:15.194214   74753 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0920 18:17:15.194222   74753 start.go:729] Will try again in 5 seconds ...
	I0920 18:17:15.216590   75086 main.go:141] libmachine: (embed-certs-768431) Calling .Start
	I0920 18:17:15.216773   75086 main.go:141] libmachine: (embed-certs-768431) Ensuring networks are active...
	I0920 18:17:15.217439   75086 main.go:141] libmachine: (embed-certs-768431) Ensuring network default is active
	I0920 18:17:15.217769   75086 main.go:141] libmachine: (embed-certs-768431) Ensuring network mk-embed-certs-768431 is active
	I0920 18:17:15.218058   75086 main.go:141] libmachine: (embed-certs-768431) Getting domain xml...
	I0920 18:17:15.218674   75086 main.go:141] libmachine: (embed-certs-768431) Creating domain...
	I0920 18:17:16.454922   75086 main.go:141] libmachine: (embed-certs-768431) Waiting to get IP...
	I0920 18:17:16.456011   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:16.456437   75086 main.go:141] libmachine: (embed-certs-768431) DBG | unable to find current IP address of domain embed-certs-768431 in network mk-embed-certs-768431
	I0920 18:17:16.456518   75086 main.go:141] libmachine: (embed-certs-768431) DBG | I0920 18:17:16.456416   76214 retry.go:31] will retry after 282.081004ms: waiting for machine to come up
	I0920 18:17:16.740062   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:16.740455   75086 main.go:141] libmachine: (embed-certs-768431) DBG | unable to find current IP address of domain embed-certs-768431 in network mk-embed-certs-768431
	I0920 18:17:16.740505   75086 main.go:141] libmachine: (embed-certs-768431) DBG | I0920 18:17:16.740420   76214 retry.go:31] will retry after 335.306847ms: waiting for machine to come up
	I0920 18:17:17.077169   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:17.077564   75086 main.go:141] libmachine: (embed-certs-768431) DBG | unable to find current IP address of domain embed-certs-768431 in network mk-embed-certs-768431
	I0920 18:17:17.077594   75086 main.go:141] libmachine: (embed-certs-768431) DBG | I0920 18:17:17.077513   76214 retry.go:31] will retry after 455.255315ms: waiting for machine to come up
	I0920 18:17:17.534166   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:17.534545   75086 main.go:141] libmachine: (embed-certs-768431) DBG | unable to find current IP address of domain embed-certs-768431 in network mk-embed-certs-768431
	I0920 18:17:17.534570   75086 main.go:141] libmachine: (embed-certs-768431) DBG | I0920 18:17:17.534511   76214 retry.go:31] will retry after 476.184378ms: waiting for machine to come up
	I0920 18:17:18.011940   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:18.012405   75086 main.go:141] libmachine: (embed-certs-768431) DBG | unable to find current IP address of domain embed-certs-768431 in network mk-embed-certs-768431
	I0920 18:17:18.012436   75086 main.go:141] libmachine: (embed-certs-768431) DBG | I0920 18:17:18.012359   76214 retry.go:31] will retry after 597.678215ms: waiting for machine to come up
	I0920 18:17:18.611179   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:18.611560   75086 main.go:141] libmachine: (embed-certs-768431) DBG | unable to find current IP address of domain embed-certs-768431 in network mk-embed-certs-768431
	I0920 18:17:18.611588   75086 main.go:141] libmachine: (embed-certs-768431) DBG | I0920 18:17:18.611521   76214 retry.go:31] will retry after 696.074491ms: waiting for machine to come up
	I0920 18:17:20.194460   74753 start.go:360] acquireMachinesLock for no-preload-956403: {Name:mkfeedb385cf08b5d2aa00913e85815d02a180c2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 18:17:19.309493   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:19.309940   75086 main.go:141] libmachine: (embed-certs-768431) DBG | unable to find current IP address of domain embed-certs-768431 in network mk-embed-certs-768431
	I0920 18:17:19.309958   75086 main.go:141] libmachine: (embed-certs-768431) DBG | I0920 18:17:19.309909   76214 retry.go:31] will retry after 825.638908ms: waiting for machine to come up
	I0920 18:17:20.137018   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:20.137380   75086 main.go:141] libmachine: (embed-certs-768431) DBG | unable to find current IP address of domain embed-certs-768431 in network mk-embed-certs-768431
	I0920 18:17:20.137407   75086 main.go:141] libmachine: (embed-certs-768431) DBG | I0920 18:17:20.137338   76214 retry.go:31] will retry after 997.909719ms: waiting for machine to come up
	I0920 18:17:21.137154   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:21.137608   75086 main.go:141] libmachine: (embed-certs-768431) DBG | unable to find current IP address of domain embed-certs-768431 in network mk-embed-certs-768431
	I0920 18:17:21.137630   75086 main.go:141] libmachine: (embed-certs-768431) DBG | I0920 18:17:21.137552   76214 retry.go:31] will retry after 1.368594293s: waiting for machine to come up
	I0920 18:17:22.507834   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:22.508202   75086 main.go:141] libmachine: (embed-certs-768431) DBG | unable to find current IP address of domain embed-certs-768431 in network mk-embed-certs-768431
	I0920 18:17:22.508228   75086 main.go:141] libmachine: (embed-certs-768431) DBG | I0920 18:17:22.508152   76214 retry.go:31] will retry after 1.922265011s: waiting for machine to come up
	I0920 18:17:24.431977   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:24.432422   75086 main.go:141] libmachine: (embed-certs-768431) DBG | unable to find current IP address of domain embed-certs-768431 in network mk-embed-certs-768431
	I0920 18:17:24.432452   75086 main.go:141] libmachine: (embed-certs-768431) DBG | I0920 18:17:24.432370   76214 retry.go:31] will retry after 2.875158038s: waiting for machine to come up
	I0920 18:17:27.309993   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:27.310512   75086 main.go:141] libmachine: (embed-certs-768431) DBG | unable to find current IP address of domain embed-certs-768431 in network mk-embed-certs-768431
	I0920 18:17:27.310539   75086 main.go:141] libmachine: (embed-certs-768431) DBG | I0920 18:17:27.310459   76214 retry.go:31] will retry after 3.089759463s: waiting for machine to come up
	I0920 18:17:30.402254   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:30.402718   75086 main.go:141] libmachine: (embed-certs-768431) DBG | unable to find current IP address of domain embed-certs-768431 in network mk-embed-certs-768431
	I0920 18:17:30.402757   75086 main.go:141] libmachine: (embed-certs-768431) DBG | I0920 18:17:30.402671   76214 retry.go:31] will retry after 3.42897838s: waiting for machine to come up
	I0920 18:17:33.835196   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:33.835576   75086 main.go:141] libmachine: (embed-certs-768431) Found IP for machine: 192.168.61.202
	I0920 18:17:33.835606   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has current primary IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:33.835615   75086 main.go:141] libmachine: (embed-certs-768431) Reserving static IP address...
	I0920 18:17:33.835974   75086 main.go:141] libmachine: (embed-certs-768431) Reserved static IP address: 192.168.61.202
	I0920 18:17:33.836010   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "embed-certs-768431", mac: "52:54:00:d2:f2:e2", ip: "192.168.61.202"} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:17:33.836024   75086 main.go:141] libmachine: (embed-certs-768431) Waiting for SSH to be available...
	I0920 18:17:33.836053   75086 main.go:141] libmachine: (embed-certs-768431) DBG | skip adding static IP to network mk-embed-certs-768431 - found existing host DHCP lease matching {name: "embed-certs-768431", mac: "52:54:00:d2:f2:e2", ip: "192.168.61.202"}
	I0920 18:17:33.836065   75086 main.go:141] libmachine: (embed-certs-768431) DBG | Getting to WaitForSSH function...
	I0920 18:17:33.838215   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:33.838561   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:17:33.838593   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:33.838735   75086 main.go:141] libmachine: (embed-certs-768431) DBG | Using SSH client type: external
	I0920 18:17:33.838768   75086 main.go:141] libmachine: (embed-certs-768431) DBG | Using SSH private key: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/embed-certs-768431/id_rsa (-rw-------)
	I0920 18:17:33.838809   75086 main.go:141] libmachine: (embed-certs-768431) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.202 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19672-8777/.minikube/machines/embed-certs-768431/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 18:17:33.838826   75086 main.go:141] libmachine: (embed-certs-768431) DBG | About to run SSH command:
	I0920 18:17:33.838838   75086 main.go:141] libmachine: (embed-certs-768431) DBG | exit 0
	I0920 18:17:33.962092   75086 main.go:141] libmachine: (embed-certs-768431) DBG | SSH cmd err, output: <nil>: 
	I0920 18:17:33.962513   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetConfigRaw
	I0920 18:17:33.963115   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetIP
	I0920 18:17:33.965714   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:33.966036   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:17:33.966056   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:33.966391   75086 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/embed-certs-768431/config.json ...
	I0920 18:17:33.966621   75086 machine.go:93] provisionDockerMachine start ...
	I0920 18:17:33.966641   75086 main.go:141] libmachine: (embed-certs-768431) Calling .DriverName
	I0920 18:17:33.966848   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHHostname
	I0920 18:17:33.968954   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:33.969235   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:17:33.969266   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:33.969358   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHPort
	I0920 18:17:33.969542   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHKeyPath
	I0920 18:17:33.969694   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHKeyPath
	I0920 18:17:33.969854   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHUsername
	I0920 18:17:33.970001   75086 main.go:141] libmachine: Using SSH client type: native
	I0920 18:17:33.970194   75086 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.202 22 <nil> <nil>}
	I0920 18:17:33.970204   75086 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 18:17:35.178887   75264 start.go:364] duration metric: took 3m58.041934266s to acquireMachinesLock for "default-k8s-diff-port-553719"
	I0920 18:17:35.178955   75264 start.go:96] Skipping create...Using existing machine configuration
	I0920 18:17:35.178968   75264 fix.go:54] fixHost starting: 
	I0920 18:17:35.179541   75264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:17:35.179604   75264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:17:35.199776   75264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39643
	I0920 18:17:35.200255   75264 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:17:35.200832   75264 main.go:141] libmachine: Using API Version  1
	I0920 18:17:35.200858   75264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:17:35.201213   75264 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:17:35.201427   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .DriverName
	I0920 18:17:35.201575   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetState
	I0920 18:17:35.203239   75264 fix.go:112] recreateIfNeeded on default-k8s-diff-port-553719: state=Stopped err=<nil>
	I0920 18:17:35.203279   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .DriverName
	W0920 18:17:35.203415   75264 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 18:17:35.205529   75264 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-553719" ...
	I0920 18:17:34.078412   75086 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0920 18:17:34.078447   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetMachineName
	I0920 18:17:34.078648   75086 buildroot.go:166] provisioning hostname "embed-certs-768431"
	I0920 18:17:34.078676   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetMachineName
	I0920 18:17:34.078854   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHHostname
	I0920 18:17:34.081703   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:34.082042   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:17:34.082079   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:34.082175   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHPort
	I0920 18:17:34.082371   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHKeyPath
	I0920 18:17:34.082525   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHKeyPath
	I0920 18:17:34.082656   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHUsername
	I0920 18:17:34.082778   75086 main.go:141] libmachine: Using SSH client type: native
	I0920 18:17:34.082937   75086 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.202 22 <nil> <nil>}
	I0920 18:17:34.082949   75086 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-768431 && echo "embed-certs-768431" | sudo tee /etc/hostname
	I0920 18:17:34.199661   75086 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-768431
	
	I0920 18:17:34.199726   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHHostname
	I0920 18:17:34.202468   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:34.202875   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:17:34.202903   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:34.203084   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHPort
	I0920 18:17:34.203328   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHKeyPath
	I0920 18:17:34.203477   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHKeyPath
	I0920 18:17:34.203630   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHUsername
	I0920 18:17:34.203769   75086 main.go:141] libmachine: Using SSH client type: native
	I0920 18:17:34.203936   75086 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.202 22 <nil> <nil>}
	I0920 18:17:34.203952   75086 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-768431' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-768431/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-768431' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 18:17:34.314604   75086 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 18:17:34.314632   75086 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19672-8777/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-8777/.minikube}
	I0920 18:17:34.314656   75086 buildroot.go:174] setting up certificates
	I0920 18:17:34.314666   75086 provision.go:84] configureAuth start
	I0920 18:17:34.314674   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetMachineName
	I0920 18:17:34.314940   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetIP
	I0920 18:17:34.317999   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:34.318306   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:17:34.318326   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:34.318538   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHHostname
	I0920 18:17:34.320715   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:34.321046   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:17:34.321073   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:34.321187   75086 provision.go:143] copyHostCerts
	I0920 18:17:34.321256   75086 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem, removing ...
	I0920 18:17:34.321276   75086 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem
	I0920 18:17:34.321359   75086 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem (1082 bytes)
	I0920 18:17:34.321452   75086 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem, removing ...
	I0920 18:17:34.321459   75086 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem
	I0920 18:17:34.321493   75086 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem (1123 bytes)
	I0920 18:17:34.321549   75086 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem, removing ...
	I0920 18:17:34.321558   75086 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem
	I0920 18:17:34.321580   75086 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem (1675 bytes)
	I0920 18:17:34.321692   75086 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem org=jenkins.embed-certs-768431 san=[127.0.0.1 192.168.61.202 embed-certs-768431 localhost minikube]
	I0920 18:17:34.547266   75086 provision.go:177] copyRemoteCerts
	I0920 18:17:34.547343   75086 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 18:17:34.547373   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHHostname
	I0920 18:17:34.550265   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:34.550648   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:17:34.550680   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:34.550895   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHPort
	I0920 18:17:34.551084   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHKeyPath
	I0920 18:17:34.551227   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHUsername
	I0920 18:17:34.551359   75086 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/embed-certs-768431/id_rsa Username:docker}
	I0920 18:17:34.632404   75086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0920 18:17:34.656468   75086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 18:17:34.681704   75086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0920 18:17:34.706486   75086 provision.go:87] duration metric: took 391.807931ms to configureAuth
	I0920 18:17:34.706514   75086 buildroot.go:189] setting minikube options for container-runtime
	I0920 18:17:34.706681   75086 config.go:182] Loaded profile config "embed-certs-768431": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:17:34.706750   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHHostname
	I0920 18:17:34.709500   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:34.709851   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:17:34.709915   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:34.709972   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHPort
	I0920 18:17:34.710199   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHKeyPath
	I0920 18:17:34.710337   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHKeyPath
	I0920 18:17:34.710511   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHUsername
	I0920 18:17:34.710669   75086 main.go:141] libmachine: Using SSH client type: native
	I0920 18:17:34.710854   75086 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.202 22 <nil> <nil>}
	I0920 18:17:34.710875   75086 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 18:17:34.941246   75086 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 18:17:34.941277   75086 machine.go:96] duration metric: took 974.641764ms to provisionDockerMachine
	I0920 18:17:34.941293   75086 start.go:293] postStartSetup for "embed-certs-768431" (driver="kvm2")
	I0920 18:17:34.941341   75086 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 18:17:34.941370   75086 main.go:141] libmachine: (embed-certs-768431) Calling .DriverName
	I0920 18:17:34.941757   75086 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 18:17:34.941792   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHHostname
	I0920 18:17:34.944366   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:34.944712   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:17:34.944749   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:34.944861   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHPort
	I0920 18:17:34.945051   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHKeyPath
	I0920 18:17:34.945198   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHUsername
	I0920 18:17:34.945320   75086 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/embed-certs-768431/id_rsa Username:docker}
	I0920 18:17:35.028570   75086 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 18:17:35.032711   75086 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 18:17:35.032751   75086 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8777/.minikube/addons for local assets ...
	I0920 18:17:35.032859   75086 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8777/.minikube/files for local assets ...
	I0920 18:17:35.032973   75086 filesync.go:149] local asset: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem -> 159732.pem in /etc/ssl/certs
	I0920 18:17:35.033127   75086 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 18:17:35.042212   75086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem --> /etc/ssl/certs/159732.pem (1708 bytes)
	I0920 18:17:35.066379   75086 start.go:296] duration metric: took 125.0653ms for postStartSetup
	I0920 18:17:35.066429   75086 fix.go:56] duration metric: took 19.872197784s for fixHost
	I0920 18:17:35.066456   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHHostname
	I0920 18:17:35.069888   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:35.070261   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:17:35.070286   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:35.070472   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHPort
	I0920 18:17:35.070693   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHKeyPath
	I0920 18:17:35.070836   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHKeyPath
	I0920 18:17:35.070989   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHUsername
	I0920 18:17:35.071204   75086 main.go:141] libmachine: Using SSH client type: native
	I0920 18:17:35.071396   75086 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.202 22 <nil> <nil>}
	I0920 18:17:35.071407   75086 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 18:17:35.178674   75086 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726856255.154624802
	
	I0920 18:17:35.178704   75086 fix.go:216] guest clock: 1726856255.154624802
	I0920 18:17:35.178712   75086 fix.go:229] Guest: 2024-09-20 18:17:35.154624802 +0000 UTC Remote: 2024-09-20 18:17:35.066435161 +0000 UTC m=+261.108982198 (delta=88.189641ms)
	I0920 18:17:35.178734   75086 fix.go:200] guest clock delta is within tolerance: 88.189641ms
	I0920 18:17:35.178760   75086 start.go:83] releasing machines lock for "embed-certs-768431", held for 19.984535459s
	I0920 18:17:35.178801   75086 main.go:141] libmachine: (embed-certs-768431) Calling .DriverName
	I0920 18:17:35.179101   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetIP
	I0920 18:17:35.181807   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:35.182179   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:17:35.182208   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:35.182353   75086 main.go:141] libmachine: (embed-certs-768431) Calling .DriverName
	I0920 18:17:35.182850   75086 main.go:141] libmachine: (embed-certs-768431) Calling .DriverName
	I0920 18:17:35.182993   75086 main.go:141] libmachine: (embed-certs-768431) Calling .DriverName
	I0920 18:17:35.183097   75086 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 18:17:35.183171   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHHostname
	I0920 18:17:35.183171   75086 ssh_runner.go:195] Run: cat /version.json
	I0920 18:17:35.183230   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHHostname
	I0920 18:17:35.185905   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:35.185942   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:35.186249   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:17:35.186279   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:35.186317   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:17:35.186331   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:35.186476   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHPort
	I0920 18:17:35.186597   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHPort
	I0920 18:17:35.186687   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHKeyPath
	I0920 18:17:35.186791   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHKeyPath
	I0920 18:17:35.186816   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHUsername
	I0920 18:17:35.186974   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHUsername
	I0920 18:17:35.186992   75086 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/embed-certs-768431/id_rsa Username:docker}
	I0920 18:17:35.187127   75086 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/embed-certs-768431/id_rsa Username:docker}
	I0920 18:17:35.311021   75086 ssh_runner.go:195] Run: systemctl --version
	I0920 18:17:35.317587   75086 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 18:17:35.462634   75086 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 18:17:35.469331   75086 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 18:17:35.469444   75086 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 18:17:35.485752   75086 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 18:17:35.485780   75086 start.go:495] detecting cgroup driver to use...
	I0920 18:17:35.485903   75086 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 18:17:35.507004   75086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 18:17:35.525868   75086 docker.go:217] disabling cri-docker service (if available) ...
	I0920 18:17:35.525934   75086 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 18:17:35.542533   75086 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 18:17:35.557622   75086 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 18:17:35.669845   75086 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 18:17:35.827784   75086 docker.go:233] disabling docker service ...
	I0920 18:17:35.827855   75086 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 18:17:35.842877   75086 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 18:17:35.859468   75086 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 18:17:35.994094   75086 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 18:17:36.123353   75086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 18:17:36.137941   75086 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 18:17:36.157781   75086 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 18:17:36.157871   75086 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:17:36.168857   75086 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 18:17:36.168929   75086 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:17:36.179807   75086 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:17:36.192260   75086 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:17:36.203220   75086 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 18:17:36.214388   75086 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:17:36.224686   75086 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:17:36.242143   75086 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:17:36.257047   75086 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 18:17:36.272014   75086 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 18:17:36.272130   75086 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 18:17:36.286083   75086 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 18:17:36.296624   75086 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:17:36.414196   75086 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 18:17:36.511654   75086 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 18:17:36.511720   75086 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 18:17:36.516593   75086 start.go:563] Will wait 60s for crictl version
	I0920 18:17:36.516640   75086 ssh_runner.go:195] Run: which crictl
	I0920 18:17:36.520360   75086 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 18:17:36.556846   75086 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 18:17:36.556949   75086 ssh_runner.go:195] Run: crio --version
	I0920 18:17:36.584699   75086 ssh_runner.go:195] Run: crio --version
	I0920 18:17:36.615179   75086 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 18:17:35.206839   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .Start
	I0920 18:17:35.207055   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Ensuring networks are active...
	I0920 18:17:35.207928   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Ensuring network default is active
	I0920 18:17:35.208260   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Ensuring network mk-default-k8s-diff-port-553719 is active
	I0920 18:17:35.208679   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Getting domain xml...
	I0920 18:17:35.209488   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Creating domain...
	I0920 18:17:36.500751   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting to get IP...
	I0920 18:17:36.501630   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:36.502033   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | unable to find current IP address of domain default-k8s-diff-port-553719 in network mk-default-k8s-diff-port-553719
	I0920 18:17:36.502137   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | I0920 18:17:36.502044   76346 retry.go:31] will retry after 216.050114ms: waiting for machine to come up
	I0920 18:17:36.719559   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:36.720002   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | unable to find current IP address of domain default-k8s-diff-port-553719 in network mk-default-k8s-diff-port-553719
	I0920 18:17:36.720026   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | I0920 18:17:36.719977   76346 retry.go:31] will retry after 303.832728ms: waiting for machine to come up
	I0920 18:17:37.025606   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:37.026096   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | unable to find current IP address of domain default-k8s-diff-port-553719 in network mk-default-k8s-diff-port-553719
	I0920 18:17:37.026137   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | I0920 18:17:37.026050   76346 retry.go:31] will retry after 316.827461ms: waiting for machine to come up
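The default-k8s-diff-port-553719 machine is still booting, so the driver keeps polling the libvirt network for a DHCP lease and retries with a growing delay. A generic sketch of such a wait loop, assuming a hypothetical getIP callback in place of the driver's real lease lookup:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitFor retries fn with an increasing, slightly jittered delay until it
// succeeds or the overall timeout expires.
func waitFor(timeout time.Duration, fn func() (string, error)) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if v, err := fn(); err == nil {
			return v, nil
		}
		// add a little jitter, roughly like the retry.go backoff in the log above
		time.Sleep(delay + time.Duration(rand.Int63n(int64(delay/2))))
		if delay < 5*time.Second {
			delay *= 2
		}
	}
	return "", errors.New("timed out waiting for condition")
}

func main() {
	// getIP is a placeholder; the real driver inspects the libvirt DHCP leases.
	getIP := func() (string, error) { return "", errors.New("no lease yet") }
	ip, err := waitFor(2*time.Minute, getIP)
	fmt.Println(ip, err)
}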
	I0920 18:17:36.616640   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetIP
	I0920 18:17:36.619927   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:36.620377   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:17:36.620407   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:36.620642   75086 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0920 18:17:36.627189   75086 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 18:17:36.639842   75086 kubeadm.go:883] updating cluster {Name:embed-certs-768431 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.1 ClusterName:embed-certs-768431 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.202 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 18:17:36.639953   75086 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 18:17:36.640019   75086 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:17:36.676519   75086 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0920 18:17:36.676599   75086 ssh_runner.go:195] Run: which lz4
	I0920 18:17:36.680472   75086 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 18:17:36.684650   75086 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 18:17:36.684683   75086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0920 18:17:38.090996   75086 crio.go:462] duration metric: took 1.410558742s to copy over tarball
	I0920 18:17:38.091068   75086 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
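Because /preloaded.tar.lz4 is not on the node yet, the preloaded image tarball is copied over and then unpacked into /var with lz4. A small sketch of the check-then-extract step, assuming the tarball has already been transferred and that sudo, tar and lz4 are available on the host:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const tarball = "/preloaded.tar.lz4"
	// If the tarball is missing it would have to be copied over first
	// (minikube does this via scp, as in the log above).
	if _, err := os.Stat(tarball); err != nil {
		fmt.Fprintln(os.Stderr, "tarball missing, copy it first:", err)
		os.Exit(1)
	}
	// Same extraction command as the log: preserve xattrs, decompress with lz4.
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Fprintf(os.Stderr, "extract failed: %v\n%s", err, out)
		os.Exit(1)
	}
	fmt.Println("preloaded images extracted under /var")
}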
	I0920 18:17:37.344880   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:37.345393   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | unable to find current IP address of domain default-k8s-diff-port-553719 in network mk-default-k8s-diff-port-553719
	I0920 18:17:37.345419   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | I0920 18:17:37.345323   76346 retry.go:31] will retry after 456.966571ms: waiting for machine to come up
	I0920 18:17:37.804436   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:37.804919   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | unable to find current IP address of domain default-k8s-diff-port-553719 in network mk-default-k8s-diff-port-553719
	I0920 18:17:37.804954   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | I0920 18:17:37.804891   76346 retry.go:31] will retry after 582.103589ms: waiting for machine to come up
	I0920 18:17:38.388738   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:38.389266   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | unable to find current IP address of domain default-k8s-diff-port-553719 in network mk-default-k8s-diff-port-553719
	I0920 18:17:38.389297   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | I0920 18:17:38.389217   76346 retry.go:31] will retry after 884.882678ms: waiting for machine to come up
	I0920 18:17:39.276048   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:39.276478   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | unable to find current IP address of domain default-k8s-diff-port-553719 in network mk-default-k8s-diff-port-553719
	I0920 18:17:39.276504   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | I0920 18:17:39.276453   76346 retry.go:31] will retry after 807.001285ms: waiting for machine to come up
	I0920 18:17:40.085749   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:40.086228   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | unable to find current IP address of domain default-k8s-diff-port-553719 in network mk-default-k8s-diff-port-553719
	I0920 18:17:40.086256   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | I0920 18:17:40.086190   76346 retry.go:31] will retry after 1.283354255s: waiting for machine to come up
	I0920 18:17:41.370861   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:41.371314   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | unable to find current IP address of domain default-k8s-diff-port-553719 in network mk-default-k8s-diff-port-553719
	I0920 18:17:41.371343   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | I0920 18:17:41.371265   76346 retry.go:31] will retry after 1.756084886s: waiting for machine to come up
	I0920 18:17:40.301535   75086 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.210437306s)
	I0920 18:17:40.301567   75086 crio.go:469] duration metric: took 2.210539553s to extract the tarball
	I0920 18:17:40.301578   75086 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0920 18:17:40.338638   75086 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:17:40.381753   75086 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 18:17:40.381780   75086 cache_images.go:84] Images are preloaded, skipping loading
	I0920 18:17:40.381788   75086 kubeadm.go:934] updating node { 192.168.61.202 8443 v1.31.1 crio true true} ...
	I0920 18:17:40.381914   75086 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-768431 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.202
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-768431 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 18:17:40.381998   75086 ssh_runner.go:195] Run: crio config
	I0920 18:17:40.428457   75086 cni.go:84] Creating CNI manager for ""
	I0920 18:17:40.428482   75086 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 18:17:40.428491   75086 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 18:17:40.428512   75086 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.202 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-768431 NodeName:embed-certs-768431 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.202"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.202 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 18:17:40.428681   75086 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.202
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-768431"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.202
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.202"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 18:17:40.428800   75086 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 18:17:40.439049   75086 binaries.go:44] Found k8s binaries, skipping transfer
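The kubeadm and kubelet configuration shown above is generated from per-node values (node name, IP, Kubernetes version) and written out as /var/tmp/minikube/kubeadm.yaml.new before the binaries check. As a rough illustration of how such a fragment can be rendered, here is a text/template sketch; the struct fields and the template text are assumptions for this example, not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

// Illustrative only: field names and template text are assumptions.
type nodeConfig struct {
	NodeName          string
	NodeIP            string
	APIServerPort     int
	KubernetesVersion string
}

const fragment = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
`

func main() {
	tmpl := template.Must(template.New("kubeadm").Parse(fragment))
	cfg := nodeConfig{
		NodeName:          "embed-certs-768431",
		NodeIP:            "192.168.61.202",
		APIServerPort:     8443,
		KubernetesVersion: "v1.31.1",
	}
	if err := tmpl.Execute(os.Stdout, cfg); err != nil {
		panic(err)
	}
}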
	I0920 18:17:40.439123   75086 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 18:17:40.449220   75086 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0920 18:17:40.466012   75086 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 18:17:40.482158   75086 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0920 18:17:40.499225   75086 ssh_runner.go:195] Run: grep 192.168.61.202	control-plane.minikube.internal$ /etc/hosts
	I0920 18:17:40.502817   75086 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.202	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 18:17:40.515241   75086 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:17:40.638615   75086 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:17:40.655790   75086 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/embed-certs-768431 for IP: 192.168.61.202
	I0920 18:17:40.655814   75086 certs.go:194] generating shared ca certs ...
	I0920 18:17:40.655834   75086 certs.go:226] acquiring lock for ca certs: {Name:mkc7ef6c737c6bdc3fdd9dcff8f57029c020d8f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:17:40.656006   75086 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key
	I0920 18:17:40.656070   75086 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key
	I0920 18:17:40.656084   75086 certs.go:256] generating profile certs ...
	I0920 18:17:40.656193   75086 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/embed-certs-768431/client.key
	I0920 18:17:40.656281   75086 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/embed-certs-768431/apiserver.key.a3dbf377
	I0920 18:17:40.656354   75086 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/embed-certs-768431/proxy-client.key
	I0920 18:17:40.656508   75086 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973.pem (1338 bytes)
	W0920 18:17:40.656560   75086 certs.go:480] ignoring /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973_empty.pem, impossibly tiny 0 bytes
	I0920 18:17:40.656573   75086 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 18:17:40.656620   75086 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem (1082 bytes)
	I0920 18:17:40.656654   75086 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem (1123 bytes)
	I0920 18:17:40.656679   75086 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem (1675 bytes)
	I0920 18:17:40.656733   75086 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem (1708 bytes)
	I0920 18:17:40.657567   75086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 18:17:40.693111   75086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 18:17:40.733664   75086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 18:17:40.774367   75086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0920 18:17:40.800930   75086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/embed-certs-768431/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0920 18:17:40.827793   75086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/embed-certs-768431/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 18:17:40.851189   75086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/embed-certs-768431/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 18:17:40.875092   75086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/embed-certs-768431/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0920 18:17:40.898639   75086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973.pem --> /usr/share/ca-certificates/15973.pem (1338 bytes)
	I0920 18:17:40.921569   75086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem --> /usr/share/ca-certificates/159732.pem (1708 bytes)
	I0920 18:17:40.945739   75086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 18:17:40.969942   75086 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 18:17:40.987563   75086 ssh_runner.go:195] Run: openssl version
	I0920 18:17:40.993363   75086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15973.pem && ln -fs /usr/share/ca-certificates/15973.pem /etc/ssl/certs/15973.pem"
	I0920 18:17:41.004673   75086 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15973.pem
	I0920 18:17:41.008985   75086 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 17:04 /usr/share/ca-certificates/15973.pem
	I0920 18:17:41.009032   75086 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15973.pem
	I0920 18:17:41.014950   75086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15973.pem /etc/ssl/certs/51391683.0"
	I0920 18:17:41.027944   75086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/159732.pem && ln -fs /usr/share/ca-certificates/159732.pem /etc/ssl/certs/159732.pem"
	I0920 18:17:41.040711   75086 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/159732.pem
	I0920 18:17:41.045304   75086 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 17:04 /usr/share/ca-certificates/159732.pem
	I0920 18:17:41.045361   75086 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/159732.pem
	I0920 18:17:41.050891   75086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/159732.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 18:17:41.061542   75086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 18:17:41.072622   75086 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:17:41.076916   75086 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:17:41.076965   75086 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:17:41.082644   75086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 18:17:41.093136   75086 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 18:17:41.097496   75086 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 18:17:41.103393   75086 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 18:17:41.109444   75086 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 18:17:41.115844   75086 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 18:17:41.121578   75086 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 18:17:41.127292   75086 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
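The series of openssl x509 -checkend 86400 calls above verifies that none of the control-plane certificates expires within the next 24 hours. The same check can be expressed with Go's crypto/x509, as in this sketch (the certificate path is just one of the files checked in the log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin mirrors `openssl x509 -noout -in <cert> -checkend <seconds>`:
// it reports whether the certificate's NotAfter falls inside the window.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}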
	I0920 18:17:41.133310   75086 kubeadm.go:392] StartCluster: {Name:embed-certs-768431 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.1 ClusterName:embed-certs-768431 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.202 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:17:41.133393   75086 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 18:17:41.133439   75086 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 18:17:41.177284   75086 cri.go:89] found id: ""
	I0920 18:17:41.177380   75086 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 18:17:41.187289   75086 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0920 18:17:41.187311   75086 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0920 18:17:41.187362   75086 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0920 18:17:41.196445   75086 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0920 18:17:41.197939   75086 kubeconfig.go:125] found "embed-certs-768431" server: "https://192.168.61.202:8443"
	I0920 18:17:41.201030   75086 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0920 18:17:41.210356   75086 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.202
	I0920 18:17:41.210394   75086 kubeadm.go:1160] stopping kube-system containers ...
	I0920 18:17:41.210422   75086 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0920 18:17:41.210499   75086 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 18:17:41.244659   75086 cri.go:89] found id: ""
	I0920 18:17:41.244721   75086 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0920 18:17:41.264341   75086 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 18:17:41.275229   75086 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 18:17:41.275252   75086 kubeadm.go:157] found existing configuration files:
	
	I0920 18:17:41.275314   75086 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 18:17:41.285420   75086 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 18:17:41.285502   75086 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 18:17:41.295902   75086 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 18:17:41.305951   75086 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 18:17:41.306015   75086 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 18:17:41.316567   75086 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 18:17:41.325623   75086 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 18:17:41.325691   75086 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 18:17:41.336405   75086 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 18:17:41.346501   75086 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 18:17:41.346574   75086 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
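Each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint; since none of the files exists here, every grep fails and the file is removed so kubeadm can regenerate it in the init phases that follow. A compact sketch of the same stale-config sweep for local files:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	confs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, c := range confs {
		data, err := os.ReadFile(c)
		// A missing file, or one that does not point at the expected endpoint,
		// is treated as stale and removed, mirroring the log above.
		if err != nil || !strings.Contains(string(data), endpoint) {
			if rmErr := os.Remove(c); rmErr != nil && !os.IsNotExist(rmErr) {
				fmt.Fprintln(os.Stderr, rmErr)
			}
			continue
		}
		fmt.Println(c, "already points at", endpoint)
	}
}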
	I0920 18:17:41.357340   75086 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 18:17:41.367956   75086 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:17:41.479022   75086 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:17:42.796122   75086 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.317061067s)
	I0920 18:17:42.796155   75086 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:17:43.053135   75086 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:17:43.139770   75086 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:17:43.244066   75086 api_server.go:52] waiting for apiserver process to appear ...
	I0920 18:17:43.244166   75086 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:17:43.744622   75086 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:17:43.129383   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:43.129827   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | unable to find current IP address of domain default-k8s-diff-port-553719 in network mk-default-k8s-diff-port-553719
	I0920 18:17:43.129864   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | I0920 18:17:43.129798   76346 retry.go:31] will retry after 2.259775905s: waiting for machine to come up
	I0920 18:17:45.391508   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:45.391915   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | unable to find current IP address of domain default-k8s-diff-port-553719 in network mk-default-k8s-diff-port-553719
	I0920 18:17:45.391941   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | I0920 18:17:45.391885   76346 retry.go:31] will retry after 1.767770692s: waiting for machine to come up
	I0920 18:17:44.244723   75086 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:17:44.265463   75086 api_server.go:72] duration metric: took 1.02139562s to wait for apiserver process to appear ...
	I0920 18:17:44.265500   75086 api_server.go:88] waiting for apiserver healthz status ...
	I0920 18:17:44.265528   75086 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0920 18:17:46.935510   75086 api_server.go:279] https://192.168.61.202:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 18:17:46.935571   75086 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 18:17:46.935589   75086 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0920 18:17:46.947553   75086 api_server.go:279] https://192.168.61.202:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 18:17:46.947586   75086 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 18:17:47.265919   75086 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0920 18:17:47.272986   75086 api_server.go:279] https://192.168.61.202:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 18:17:47.273020   75086 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 18:17:47.766662   75086 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0920 18:17:47.783700   75086 api_server.go:279] https://192.168.61.202:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 18:17:47.783749   75086 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 18:17:48.266012   75086 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0920 18:17:48.270573   75086 api_server.go:279] https://192.168.61.202:8443/healthz returned 200:
	ok
	I0920 18:17:48.277290   75086 api_server.go:141] control plane version: v1.31.1
	I0920 18:17:48.277331   75086 api_server.go:131] duration metric: took 4.011813655s to wait for apiserver health ...
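The restart loop above polls https://192.168.61.202:8443/healthz and tolerates the early 403 (anonymous user) and 500 (rbac and priority-class bootstrap hooks still running) responses until the endpoint returns 200 "ok". A minimal polling sketch; skipping TLS verification is an assumption made for this ad-hoc probe, the real client may instead trust the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.61.202:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			// 403 and 500 are expected while the apiserver finishes bootstrapping;
			// only a 200 "ok" counts as healthy.
			if resp.StatusCode == http.StatusOK {
				fmt.Println("healthz:", string(body))
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("apiserver never became healthy")
}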
	I0920 18:17:48.277342   75086 cni.go:84] Creating CNI manager for ""
	I0920 18:17:48.277351   75086 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 18:17:48.278863   75086 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 18:17:48.279934   75086 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 18:17:48.304037   75086 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0920 18:17:48.337082   75086 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 18:17:48.349485   75086 system_pods.go:59] 8 kube-system pods found
	I0920 18:17:48.349541   75086 system_pods.go:61] "coredns-7c65d6cfc9-cskt4" [2ce6fa43-bdf9-4625-a198-8f59ee9fcf0b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0920 18:17:48.349554   75086 system_pods.go:61] "etcd-embed-certs-768431" [656af9fd-b380-4934-a0b5-5d39a755de44] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0920 18:17:48.349565   75086 system_pods.go:61] "kube-apiserver-embed-certs-768431" [e128f4ff-3432-4d27-83de-ed76b686403d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0920 18:17:48.349573   75086 system_pods.go:61] "kube-controller-manager-embed-certs-768431" [a2c7e6d7-e351-4e6a-ab95-638aecdaab28] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0920 18:17:48.349579   75086 system_pods.go:61] "kube-proxy-cshjm" [86a82f1b-d8d5-4ab7-89fb-84b836dc0470] Running
	I0920 18:17:48.349586   75086 system_pods.go:61] "kube-scheduler-embed-certs-768431" [64bc50a6-2e99-4992-b249-9ffdf463539a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0920 18:17:48.349601   75086 system_pods.go:61] "metrics-server-6867b74b74-dwnt6" [bb0a120f-ca9e-4b54-826b-55aa20b2f6a4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 18:17:48.349606   75086 system_pods.go:61] "storage-provisioner" [083c6964-a774-4b85-8e6d-acd04ededb3c] Running
	I0920 18:17:48.349617   75086 system_pods.go:74] duration metric: took 12.507925ms to wait for pod list to return data ...
	I0920 18:17:48.349630   75086 node_conditions.go:102] verifying NodePressure condition ...
	I0920 18:17:48.353604   75086 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 18:17:48.353635   75086 node_conditions.go:123] node cpu capacity is 2
	I0920 18:17:48.353648   75086 node_conditions.go:105] duration metric: took 4.012436ms to run NodePressure ...
	I0920 18:17:48.353668   75086 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:17:48.623666   75086 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0920 18:17:48.634113   75086 kubeadm.go:739] kubelet initialised
	I0920 18:17:48.634184   75086 kubeadm.go:740] duration metric: took 10.458173ms waiting for restarted kubelet to initialise ...
	I0920 18:17:48.634205   75086 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:17:48.642334   75086 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-cskt4" in "kube-system" namespace to be "Ready" ...
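After the addon phase, the log waits for each system-critical pod in kube-system to report Ready. A sketch of the same check with client-go; the kubeconfig path is an assumption for this example:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		ready := false
		for _, c := range p.Status.Conditions {
			// A pod counts as Ready only when its PodReady condition is True,
			// which is what the pod_ready.go checks above are waiting for.
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		fmt.Printf("%-45s ready=%v phase=%s\n", p.Name, ready, p.Status.Phase)
	}
}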
	I0920 18:17:47.161670   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:47.162064   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | unable to find current IP address of domain default-k8s-diff-port-553719 in network mk-default-k8s-diff-port-553719
	I0920 18:17:47.162094   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | I0920 18:17:47.162020   76346 retry.go:31] will retry after 2.299554771s: waiting for machine to come up
	I0920 18:17:49.463022   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:49.463483   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | unable to find current IP address of domain default-k8s-diff-port-553719 in network mk-default-k8s-diff-port-553719
	I0920 18:17:49.463515   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | I0920 18:17:49.463442   76346 retry.go:31] will retry after 4.372569793s: waiting for machine to come up
	I0920 18:17:50.650781   75086 pod_ready.go:103] pod "coredns-7c65d6cfc9-cskt4" in "kube-system" namespace has status "Ready":"False"
	I0920 18:17:53.149156   75086 pod_ready.go:103] pod "coredns-7c65d6cfc9-cskt4" in "kube-system" namespace has status "Ready":"False"
	I0920 18:17:55.087078   75577 start.go:364] duration metric: took 3m42.245259516s to acquireMachinesLock for "old-k8s-version-744025"
	I0920 18:17:55.087155   75577 start.go:96] Skipping create...Using existing machine configuration
	I0920 18:17:55.087166   75577 fix.go:54] fixHost starting: 
	I0920 18:17:55.087618   75577 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:17:55.087671   75577 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:17:55.107336   75577 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34989
	I0920 18:17:55.107839   75577 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:17:55.108462   75577 main.go:141] libmachine: Using API Version  1
	I0920 18:17:55.108496   75577 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:17:55.108855   75577 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:17:55.109072   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .DriverName
	I0920 18:17:55.109222   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetState
	I0920 18:17:55.110831   75577 fix.go:112] recreateIfNeeded on old-k8s-version-744025: state=Stopped err=<nil>
	I0920 18:17:55.110857   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .DriverName
	W0920 18:17:55.110973   75577 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 18:17:55.113408   75577 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-744025" ...
	I0920 18:17:53.840215   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:53.840817   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has current primary IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:53.840844   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Found IP for machine: 192.168.72.190
	I0920 18:17:53.840859   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Reserving static IP address...
	I0920 18:17:53.841438   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Reserved static IP address: 192.168.72.190
	I0920 18:17:53.841460   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for SSH to be available...
	I0920 18:17:53.841475   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-553719", mac: "52:54:00:dd:93:60", ip: "192.168.72.190"} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:17:53.841503   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | skip adding static IP to network mk-default-k8s-diff-port-553719 - found existing host DHCP lease matching {name: "default-k8s-diff-port-553719", mac: "52:54:00:dd:93:60", ip: "192.168.72.190"}
	I0920 18:17:53.841583   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | Getting to WaitForSSH function...
	I0920 18:17:53.843772   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:53.844082   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:17:53.844115   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:53.844201   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | Using SSH client type: external
	I0920 18:17:53.844238   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | Using SSH private key: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/default-k8s-diff-port-553719/id_rsa (-rw-------)
	I0920 18:17:53.844282   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.190 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19672-8777/.minikube/machines/default-k8s-diff-port-553719/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 18:17:53.844296   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | About to run SSH command:
	I0920 18:17:53.844309   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | exit 0
	I0920 18:17:53.965938   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | SSH cmd err, output: <nil>: 
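WaitForSSH succeeds once a plain "exit 0" over SSH returns cleanly, using the machine's private key and the client options listed above. A sketch of the same reachability probe with golang.org/x/crypto/ssh, reusing the address, user and key path from the log:

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/19672-8777/.minikube/machines/default-k8s-diff-port-553719/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
		Timeout:         10 * time.Second,
	}
	client, err := ssh.Dial("tcp", "192.168.72.190:22", cfg)
	if err != nil {
		fmt.Println("not reachable yet:", err)
		return
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()
	// "exit 0" is the same no-op probe the driver runs to confirm SSH is up.
	if err := session.Run("exit 0"); err != nil {
		fmt.Println("probe failed:", err)
		return
	}
	fmt.Println("SSH is available")
}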
	I0920 18:17:53.966354   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetConfigRaw
	I0920 18:17:53.967105   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetIP
	I0920 18:17:53.969715   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:53.970112   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:17:53.970140   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:53.970429   75264 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/default-k8s-diff-port-553719/config.json ...
	I0920 18:17:53.970686   75264 machine.go:93] provisionDockerMachine start ...
	I0920 18:17:53.970707   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .DriverName
	I0920 18:17:53.970924   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHHostname
	I0920 18:17:53.973207   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:53.973541   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:17:53.973571   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:53.973716   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHPort
	I0920 18:17:53.973901   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHKeyPath
	I0920 18:17:53.974041   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHKeyPath
	I0920 18:17:53.974155   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHUsername
	I0920 18:17:53.974298   75264 main.go:141] libmachine: Using SSH client type: native
	I0920 18:17:53.974518   75264 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.190 22 <nil> <nil>}
	I0920 18:17:53.974533   75264 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 18:17:54.078095   75264 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0920 18:17:54.078121   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetMachineName
	I0920 18:17:54.078377   75264 buildroot.go:166] provisioning hostname "default-k8s-diff-port-553719"
	I0920 18:17:54.078397   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetMachineName
	I0920 18:17:54.078540   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHHostname
	I0920 18:17:54.081589   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:54.081998   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:17:54.082032   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:54.082179   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHPort
	I0920 18:17:54.082375   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHKeyPath
	I0920 18:17:54.082539   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHKeyPath
	I0920 18:17:54.082743   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHUsername
	I0920 18:17:54.082949   75264 main.go:141] libmachine: Using SSH client type: native
	I0920 18:17:54.083153   75264 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.190 22 <nil> <nil>}
	I0920 18:17:54.083173   75264 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-553719 && echo "default-k8s-diff-port-553719" | sudo tee /etc/hostname
	I0920 18:17:54.199155   75264 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-553719
	
	I0920 18:17:54.199192   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHHostname
	I0920 18:17:54.202231   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:54.202650   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:17:54.202685   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:54.202835   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHPort
	I0920 18:17:54.203112   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHKeyPath
	I0920 18:17:54.203289   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHKeyPath
	I0920 18:17:54.203458   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHUsername
	I0920 18:17:54.203696   75264 main.go:141] libmachine: Using SSH client type: native
	I0920 18:17:54.203944   75264 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.190 22 <nil> <nil>}
	I0920 18:17:54.203969   75264 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-553719' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-553719/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-553719' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 18:17:54.312356   75264 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 18:17:54.312386   75264 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19672-8777/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-8777/.minikube}
	I0920 18:17:54.312432   75264 buildroot.go:174] setting up certificates
	I0920 18:17:54.312450   75264 provision.go:84] configureAuth start
	I0920 18:17:54.312464   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetMachineName
	I0920 18:17:54.312758   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetIP
	I0920 18:17:54.315327   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:54.315697   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:17:54.315725   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:54.315870   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHHostname
	I0920 18:17:54.318203   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:54.318653   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:17:54.318679   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:54.318824   75264 provision.go:143] copyHostCerts
	I0920 18:17:54.318885   75264 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem, removing ...
	I0920 18:17:54.318906   75264 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem
	I0920 18:17:54.318978   75264 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem (1675 bytes)
	I0920 18:17:54.319096   75264 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem, removing ...
	I0920 18:17:54.319107   75264 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem
	I0920 18:17:54.319141   75264 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem (1082 bytes)
	I0920 18:17:54.319384   75264 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem, removing ...
	I0920 18:17:54.319401   75264 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem
	I0920 18:17:54.319465   75264 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem (1123 bytes)
	I0920 18:17:54.319551   75264 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-553719 san=[127.0.0.1 192.168.72.190 default-k8s-diff-port-553719 localhost minikube]
	I0920 18:17:54.453062   75264 provision.go:177] copyRemoteCerts
	I0920 18:17:54.453131   75264 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 18:17:54.453160   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHHostname
	I0920 18:17:54.456047   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:54.456402   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:17:54.456431   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:54.456598   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHPort
	I0920 18:17:54.456796   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHKeyPath
	I0920 18:17:54.456970   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHUsername
	I0920 18:17:54.457092   75264 sshutil.go:53] new ssh client: &{IP:192.168.72.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/default-k8s-diff-port-553719/id_rsa Username:docker}
	I0920 18:17:54.544567   75264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0920 18:17:54.570009   75264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0920 18:17:54.594249   75264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0920 18:17:54.617652   75264 provision.go:87] duration metric: took 305.186554ms to configureAuth
	I0920 18:17:54.617686   75264 buildroot.go:189] setting minikube options for container-runtime
	I0920 18:17:54.617971   75264 config.go:182] Loaded profile config "default-k8s-diff-port-553719": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:17:54.618064   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHHostname
	I0920 18:17:54.621408   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:54.621882   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:17:54.621915   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:54.622140   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHPort
	I0920 18:17:54.622368   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHKeyPath
	I0920 18:17:54.622592   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHKeyPath
	I0920 18:17:54.622833   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHUsername
	I0920 18:17:54.623047   75264 main.go:141] libmachine: Using SSH client type: native
	I0920 18:17:54.623287   75264 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.190 22 <nil> <nil>}
	I0920 18:17:54.623305   75264 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 18:17:54.859786   75264 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 18:17:54.859810   75264 machine.go:96] duration metric: took 889.110491ms to provisionDockerMachine
	I0920 18:17:54.859820   75264 start.go:293] postStartSetup for "default-k8s-diff-port-553719" (driver="kvm2")
	I0920 18:17:54.859831   75264 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 18:17:54.859850   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .DriverName
	I0920 18:17:54.860209   75264 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 18:17:54.860258   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHHostname
	I0920 18:17:54.863576   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:54.863933   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:17:54.863966   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:54.864168   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHPort
	I0920 18:17:54.864345   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHKeyPath
	I0920 18:17:54.864494   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHUsername
	I0920 18:17:54.864640   75264 sshutil.go:53] new ssh client: &{IP:192.168.72.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/default-k8s-diff-port-553719/id_rsa Username:docker}
	I0920 18:17:54.944717   75264 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 18:17:54.948836   75264 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 18:17:54.948870   75264 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8777/.minikube/addons for local assets ...
	I0920 18:17:54.948937   75264 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8777/.minikube/files for local assets ...
	I0920 18:17:54.949014   75264 filesync.go:149] local asset: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem -> 159732.pem in /etc/ssl/certs
	I0920 18:17:54.949116   75264 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 18:17:54.958726   75264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem --> /etc/ssl/certs/159732.pem (1708 bytes)
	I0920 18:17:54.982792   75264 start.go:296] duration metric: took 122.958621ms for postStartSetup
	I0920 18:17:54.982831   75264 fix.go:56] duration metric: took 19.803863913s for fixHost
	I0920 18:17:54.982856   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHHostname
	I0920 18:17:54.985588   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:54.985924   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:17:54.985966   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:54.986145   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHPort
	I0920 18:17:54.986407   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHKeyPath
	I0920 18:17:54.986586   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHKeyPath
	I0920 18:17:54.986784   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHUsername
	I0920 18:17:54.986982   75264 main.go:141] libmachine: Using SSH client type: native
	I0920 18:17:54.987233   75264 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.190 22 <nil> <nil>}
	I0920 18:17:54.987248   75264 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 18:17:55.086859   75264 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726856275.038902431
	
	I0920 18:17:55.086888   75264 fix.go:216] guest clock: 1726856275.038902431
	I0920 18:17:55.086900   75264 fix.go:229] Guest: 2024-09-20 18:17:55.038902431 +0000 UTC Remote: 2024-09-20 18:17:54.98283641 +0000 UTC m=+257.985357778 (delta=56.066021ms)
	I0920 18:17:55.086959   75264 fix.go:200] guest clock delta is within tolerance: 56.066021ms
	I0920 18:17:55.086968   75264 start.go:83] releasing machines lock for "default-k8s-diff-port-553719", held for 19.908037967s
	I0920 18:17:55.087009   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .DriverName
	I0920 18:17:55.087303   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetIP
	I0920 18:17:55.090396   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:55.090777   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:17:55.090805   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:55.090973   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .DriverName
	I0920 18:17:55.091481   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .DriverName
	I0920 18:17:55.091684   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .DriverName
	I0920 18:17:55.091772   75264 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 18:17:55.091827   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHHostname
	I0920 18:17:55.091951   75264 ssh_runner.go:195] Run: cat /version.json
	I0920 18:17:55.091976   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHHostname
	I0920 18:17:55.094742   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:55.094878   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:55.095102   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:17:55.095133   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:55.095265   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHPort
	I0920 18:17:55.095362   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:17:55.095400   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:55.095443   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHKeyPath
	I0920 18:17:55.095550   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHPort
	I0920 18:17:55.095619   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHUsername
	I0920 18:17:55.095689   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHKeyPath
	I0920 18:17:55.095763   75264 sshutil.go:53] new ssh client: &{IP:192.168.72.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/default-k8s-diff-port-553719/id_rsa Username:docker}
	I0920 18:17:55.095795   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHUsername
	I0920 18:17:55.095950   75264 sshutil.go:53] new ssh client: &{IP:192.168.72.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/default-k8s-diff-port-553719/id_rsa Username:docker}
	I0920 18:17:55.213223   75264 ssh_runner.go:195] Run: systemctl --version
	I0920 18:17:55.220952   75264 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 18:17:55.370747   75264 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 18:17:55.377509   75264 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 18:17:55.377595   75264 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 18:17:55.395830   75264 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 18:17:55.395854   75264 start.go:495] detecting cgroup driver to use...
	I0920 18:17:55.395920   75264 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 18:17:55.412885   75264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 18:17:55.428380   75264 docker.go:217] disabling cri-docker service (if available) ...
	I0920 18:17:55.428433   75264 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 18:17:55.444371   75264 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 18:17:55.459485   75264 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 18:17:55.583649   75264 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 18:17:55.768003   75264 docker.go:233] disabling docker service ...
	I0920 18:17:55.768065   75264 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 18:17:55.787062   75264 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 18:17:55.802662   75264 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 18:17:55.967892   75264 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 18:17:56.105744   75264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 18:17:56.120499   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 18:17:56.140527   75264 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 18:17:56.140613   75264 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:17:56.151282   75264 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 18:17:56.151355   75264 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:17:56.164680   75264 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:17:56.176142   75264 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:17:56.187120   75264 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 18:17:56.198384   75264 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:17:56.209298   75264 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:17:56.226714   75264 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:17:56.237886   75264 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 18:17:56.253664   75264 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 18:17:56.253778   75264 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 18:17:56.267429   75264 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 18:17:56.279118   75264 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:17:56.412692   75264 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 18:17:56.513349   75264 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 18:17:56.513438   75264 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 18:17:56.517974   75264 start.go:563] Will wait 60s for crictl version
	I0920 18:17:56.518042   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:17:56.521966   75264 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 18:17:56.561446   75264 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 18:17:56.561520   75264 ssh_runner.go:195] Run: crio --version
	I0920 18:17:56.592009   75264 ssh_runner.go:195] Run: crio --version
	I0920 18:17:56.626555   75264 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 18:17:56.627668   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetIP
	I0920 18:17:56.630995   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:56.631450   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:17:56.631480   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:56.631751   75264 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0920 18:17:56.636473   75264 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 18:17:56.648824   75264 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-553719 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-553719 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.190 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 18:17:56.648937   75264 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 18:17:56.648980   75264 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:17:56.691029   75264 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0920 18:17:56.691104   75264 ssh_runner.go:195] Run: which lz4
	I0920 18:17:56.695538   75264 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 18:17:56.699703   75264 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 18:17:56.699735   75264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0920 18:17:55.114625   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .Start
	I0920 18:17:55.114814   75577 main.go:141] libmachine: (old-k8s-version-744025) Ensuring networks are active...
	I0920 18:17:55.115808   75577 main.go:141] libmachine: (old-k8s-version-744025) Ensuring network default is active
	I0920 18:17:55.116209   75577 main.go:141] libmachine: (old-k8s-version-744025) Ensuring network mk-old-k8s-version-744025 is active
	I0920 18:17:55.116615   75577 main.go:141] libmachine: (old-k8s-version-744025) Getting domain xml...
	I0920 18:17:55.117403   75577 main.go:141] libmachine: (old-k8s-version-744025) Creating domain...
	I0920 18:17:56.411359   75577 main.go:141] libmachine: (old-k8s-version-744025) Waiting to get IP...
	I0920 18:17:56.412507   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:17:56.413010   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:17:56.413094   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:17:56.412996   76990 retry.go:31] will retry after 300.110477ms: waiting for machine to come up
	I0920 18:17:56.714648   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:17:56.715163   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:17:56.715183   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:17:56.715114   76990 retry.go:31] will retry after 352.948637ms: waiting for machine to come up
	I0920 18:17:57.069760   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:17:57.070449   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:17:57.070474   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:17:57.070404   76990 retry.go:31] will retry after 335.58281ms: waiting for machine to come up
	I0920 18:17:57.408023   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:17:57.408521   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:17:57.408546   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:17:57.408469   76990 retry.go:31] will retry after 498.040148ms: waiting for machine to come up
	I0920 18:17:54.653346   75086 pod_ready.go:93] pod "coredns-7c65d6cfc9-cskt4" in "kube-system" namespace has status "Ready":"True"
	I0920 18:17:54.653373   75086 pod_ready.go:82] duration metric: took 6.011015799s for pod "coredns-7c65d6cfc9-cskt4" in "kube-system" namespace to be "Ready" ...
	I0920 18:17:54.653383   75086 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-768431" in "kube-system" namespace to be "Ready" ...
	I0920 18:17:56.660606   75086 pod_ready.go:103] pod "etcd-embed-certs-768431" in "kube-system" namespace has status "Ready":"False"
	I0920 18:17:58.162253   75086 pod_ready.go:93] pod "etcd-embed-certs-768431" in "kube-system" namespace has status "Ready":"True"
	I0920 18:17:58.162282   75086 pod_ready.go:82] duration metric: took 3.508893207s for pod "etcd-embed-certs-768431" in "kube-system" namespace to be "Ready" ...
	I0920 18:17:58.162292   75086 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-768431" in "kube-system" namespace to be "Ready" ...
	I0920 18:17:58.109419   75264 crio.go:462] duration metric: took 1.413913626s to copy over tarball
	I0920 18:17:58.109504   75264 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0920 18:18:00.360623   75264 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.25108327s)
	I0920 18:18:00.360655   75264 crio.go:469] duration metric: took 2.251201466s to extract the tarball
	I0920 18:18:00.360703   75264 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0920 18:18:00.400822   75264 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:18:00.451366   75264 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 18:18:00.451402   75264 cache_images.go:84] Images are preloaded, skipping loading
	I0920 18:18:00.451413   75264 kubeadm.go:934] updating node { 192.168.72.190 8444 v1.31.1 crio true true} ...
	I0920 18:18:00.451590   75264 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-553719 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.190
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-553719 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 18:18:00.451688   75264 ssh_runner.go:195] Run: crio config
	I0920 18:18:00.502703   75264 cni.go:84] Creating CNI manager for ""
	I0920 18:18:00.502729   75264 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 18:18:00.502740   75264 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 18:18:00.502778   75264 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.190 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-553719 NodeName:default-k8s-diff-port-553719 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.190"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.190 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 18:18:00.502927   75264 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.190
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-553719"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.190
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.190"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 18:18:00.502996   75264 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 18:18:00.513768   75264 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 18:18:00.513870   75264 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 18:18:00.523631   75264 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0920 18:18:00.540428   75264 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 18:18:00.556858   75264 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0920 18:18:00.574954   75264 ssh_runner.go:195] Run: grep 192.168.72.190	control-plane.minikube.internal$ /etc/hosts
	I0920 18:18:00.578951   75264 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.190	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 18:18:00.592592   75264 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:18:00.712035   75264 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:18:00.728657   75264 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/default-k8s-diff-port-553719 for IP: 192.168.72.190
	I0920 18:18:00.728684   75264 certs.go:194] generating shared ca certs ...
	I0920 18:18:00.728706   75264 certs.go:226] acquiring lock for ca certs: {Name:mkc7ef6c737c6bdc3fdd9dcff8f57029c020d8f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:18:00.728877   75264 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key
	I0920 18:18:00.728937   75264 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key
	I0920 18:18:00.728948   75264 certs.go:256] generating profile certs ...
	I0920 18:18:00.729055   75264 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/default-k8s-diff-port-553719/client.key
	I0920 18:18:00.729151   75264 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/default-k8s-diff-port-553719/apiserver.key.cc1f66e7
	I0920 18:18:00.729205   75264 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/default-k8s-diff-port-553719/proxy-client.key
	I0920 18:18:00.729368   75264 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973.pem (1338 bytes)
	W0920 18:18:00.729415   75264 certs.go:480] ignoring /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973_empty.pem, impossibly tiny 0 bytes
	I0920 18:18:00.729425   75264 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 18:18:00.729453   75264 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem (1082 bytes)
	I0920 18:18:00.729474   75264 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem (1123 bytes)
	I0920 18:18:00.729501   75264 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem (1675 bytes)
	I0920 18:18:00.729538   75264 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem (1708 bytes)
	I0920 18:18:00.730192   75264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 18:18:00.775634   75264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 18:18:00.812006   75264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 18:18:00.846904   75264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0920 18:18:00.877908   75264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/default-k8s-diff-port-553719/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0920 18:18:00.910057   75264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/default-k8s-diff-port-553719/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0920 18:18:00.935377   75264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/default-k8s-diff-port-553719/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 18:18:00.960143   75264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/default-k8s-diff-port-553719/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0920 18:18:00.983906   75264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 18:18:01.007105   75264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973.pem --> /usr/share/ca-certificates/15973.pem (1338 bytes)
	I0920 18:18:01.030331   75264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem --> /usr/share/ca-certificates/159732.pem (1708 bytes)
	I0920 18:18:01.055976   75264 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 18:18:01.073892   75264 ssh_runner.go:195] Run: openssl version
	I0920 18:18:01.081246   75264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 18:18:01.092174   75264 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:18:01.096541   75264 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:18:01.096595   75264 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:18:01.103564   75264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 18:18:01.117697   75264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15973.pem && ln -fs /usr/share/ca-certificates/15973.pem /etc/ssl/certs/15973.pem"
	I0920 18:18:01.130349   75264 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15973.pem
	I0920 18:18:01.135397   75264 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 17:04 /usr/share/ca-certificates/15973.pem
	I0920 18:18:01.135471   75264 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15973.pem
	I0920 18:18:01.141399   75264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15973.pem /etc/ssl/certs/51391683.0"
	I0920 18:18:01.153123   75264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/159732.pem && ln -fs /usr/share/ca-certificates/159732.pem /etc/ssl/certs/159732.pem"
	I0920 18:18:01.165157   75264 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/159732.pem
	I0920 18:18:01.170596   75264 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 17:04 /usr/share/ca-certificates/159732.pem
	I0920 18:18:01.170678   75264 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/159732.pem
	I0920 18:18:01.176534   75264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/159732.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 18:18:01.188576   75264 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 18:18:01.193401   75264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 18:18:01.199557   75264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 18:18:01.205410   75264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 18:18:01.211296   75264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 18:18:01.216953   75264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 18:18:01.223417   75264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0920 18:18:01.229172   75264 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-553719 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-553719 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.190 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:18:01.229428   75264 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 18:18:01.229488   75264 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 18:18:01.270537   75264 cri.go:89] found id: ""
	I0920 18:18:01.270610   75264 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 18:18:01.284566   75264 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0920 18:18:01.284588   75264 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0920 18:18:01.284638   75264 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0920 18:18:01.297884   75264 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0920 18:18:01.298879   75264 kubeconfig.go:125] found "default-k8s-diff-port-553719" server: "https://192.168.72.190:8444"
	I0920 18:18:01.300996   75264 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0920 18:18:01.312183   75264 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.190
	I0920 18:18:01.312251   75264 kubeadm.go:1160] stopping kube-system containers ...
	I0920 18:18:01.312266   75264 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0920 18:18:01.312324   75264 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 18:18:01.358873   75264 cri.go:89] found id: ""
	I0920 18:18:01.358959   75264 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0920 18:18:01.377027   75264 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 18:18:01.386872   75264 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 18:18:01.386890   75264 kubeadm.go:157] found existing configuration files:
	
	I0920 18:18:01.386931   75264 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0920 18:18:01.396044   75264 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 18:18:01.396118   75264 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 18:18:01.405783   75264 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0920 18:18:01.415296   75264 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 18:18:01.415367   75264 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 18:18:01.427782   75264 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0920 18:18:01.438838   75264 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 18:18:01.438897   75264 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 18:18:01.450255   75264 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0920 18:18:01.461237   75264 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 18:18:01.461313   75264 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 18:18:01.472543   75264 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 18:18:01.483456   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:18:01.607155   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:17:57.908137   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:17:57.908561   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:17:57.908589   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:17:57.908522   76990 retry.go:31] will retry after 749.044696ms: waiting for machine to come up
	I0920 18:17:58.658869   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:17:58.659393   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:17:58.659424   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:17:58.659342   76990 retry.go:31] will retry after 949.936088ms: waiting for machine to come up
	I0920 18:17:59.610936   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:17:59.611467   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:17:59.611513   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:17:59.611430   76990 retry.go:31] will retry after 762.437104ms: waiting for machine to come up
	I0920 18:18:00.375768   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:00.376207   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:18:00.376235   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:18:00.376152   76990 retry.go:31] will retry after 1.228102027s: waiting for machine to come up
	I0920 18:18:01.606490   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:01.606958   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:18:01.606982   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:18:01.606907   76990 retry.go:31] will retry after 1.383186524s: waiting for machine to come up
	I0920 18:18:00.169862   75086 pod_ready.go:103] pod "kube-apiserver-embed-certs-768431" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:01.669351   75086 pod_ready.go:93] pod "kube-apiserver-embed-certs-768431" in "kube-system" namespace has status "Ready":"True"
	I0920 18:18:01.669378   75086 pod_ready.go:82] duration metric: took 3.507078668s for pod "kube-apiserver-embed-certs-768431" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:01.669392   75086 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-768431" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:01.674981   75086 pod_ready.go:93] pod "kube-controller-manager-embed-certs-768431" in "kube-system" namespace has status "Ready":"True"
	I0920 18:18:01.675006   75086 pod_ready.go:82] duration metric: took 5.604682ms for pod "kube-controller-manager-embed-certs-768431" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:01.675018   75086 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-cshjm" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:01.680522   75086 pod_ready.go:93] pod "kube-proxy-cshjm" in "kube-system" namespace has status "Ready":"True"
	I0920 18:18:01.680547   75086 pod_ready.go:82] duration metric: took 5.521295ms for pod "kube-proxy-cshjm" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:01.680559   75086 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-768431" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:01.685097   75086 pod_ready.go:93] pod "kube-scheduler-embed-certs-768431" in "kube-system" namespace has status "Ready":"True"
	I0920 18:18:01.685120   75086 pod_ready.go:82] duration metric: took 4.551724ms for pod "kube-scheduler-embed-certs-768431" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:01.685131   75086 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:03.693234   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:02.268120   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:18:02.471363   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:18:02.530979   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:18:02.580339   75264 api_server.go:52] waiting for apiserver process to appear ...
	I0920 18:18:02.580455   75264 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:03.081156   75264 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:03.580564   75264 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:04.080991   75264 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:04.095108   75264 api_server.go:72] duration metric: took 1.514770034s to wait for apiserver process to appear ...
	I0920 18:18:04.095139   75264 api_server.go:88] waiting for apiserver healthz status ...
	I0920 18:18:04.095164   75264 api_server.go:253] Checking apiserver healthz at https://192.168.72.190:8444/healthz ...
	I0920 18:18:04.095704   75264 api_server.go:269] stopped: https://192.168.72.190:8444/healthz: Get "https://192.168.72.190:8444/healthz": dial tcp 192.168.72.190:8444: connect: connection refused
	I0920 18:18:04.595270   75264 api_server.go:253] Checking apiserver healthz at https://192.168.72.190:8444/healthz ...
	I0920 18:18:06.378496   75264 api_server.go:279] https://192.168.72.190:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 18:18:06.378532   75264 api_server.go:103] status: https://192.168.72.190:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 18:18:06.378550   75264 api_server.go:253] Checking apiserver healthz at https://192.168.72.190:8444/healthz ...
	I0920 18:18:06.425353   75264 api_server.go:279] https://192.168.72.190:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 18:18:06.425386   75264 api_server.go:103] status: https://192.168.72.190:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 18:18:06.595786   75264 api_server.go:253] Checking apiserver healthz at https://192.168.72.190:8444/healthz ...
	I0920 18:18:06.602114   75264 api_server.go:279] https://192.168.72.190:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 18:18:06.602148   75264 api_server.go:103] status: https://192.168.72.190:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 18:18:07.096095   75264 api_server.go:253] Checking apiserver healthz at https://192.168.72.190:8444/healthz ...
	I0920 18:18:07.102320   75264 api_server.go:279] https://192.168.72.190:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 18:18:07.102348   75264 api_server.go:103] status: https://192.168.72.190:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 18:18:07.595804   75264 api_server.go:253] Checking apiserver healthz at https://192.168.72.190:8444/healthz ...
	I0920 18:18:07.600025   75264 api_server.go:279] https://192.168.72.190:8444/healthz returned 200:
	ok
	I0920 18:18:07.606598   75264 api_server.go:141] control plane version: v1.31.1
	I0920 18:18:07.606626   75264 api_server.go:131] duration metric: took 3.511479158s to wait for apiserver health ...
	I0920 18:18:07.606637   75264 cni.go:84] Creating CNI manager for ""
	I0920 18:18:07.606645   75264 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 18:18:07.608423   75264 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 18:18:02.992412   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:02.992887   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:18:02.992919   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:18:02.992822   76990 retry.go:31] will retry after 2.100326569s: waiting for machine to come up
	I0920 18:18:05.095088   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:05.095546   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:18:05.095569   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:18:05.095507   76990 retry.go:31] will retry after 1.758181729s: waiting for machine to come up
	I0920 18:18:06.855172   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:06.855654   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:18:06.855678   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:18:06.855606   76990 retry.go:31] will retry after 2.478116743s: waiting for machine to come up
	I0920 18:18:05.694350   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:08.191644   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:07.609444   75264 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 18:18:07.620420   75264 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0920 18:18:07.637796   75264 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 18:18:07.652676   75264 system_pods.go:59] 8 kube-system pods found
	I0920 18:18:07.652710   75264 system_pods.go:61] "coredns-7c65d6cfc9-dmdfb" [8e4ffdb3-37e0-4498-bdf5-d6e7dbcad020] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0920 18:18:07.652721   75264 system_pods.go:61] "etcd-default-k8s-diff-port-553719" [e8de0e96-5f3a-4f3f-ae69-c5da7d3c2eb7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0920 18:18:07.652727   75264 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-553719" [5164e26c-7fcd-41dc-865f-429046a9ad61] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0920 18:18:07.652735   75264 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-553719" [b2c00a76-9645-4c63-9812-43e6ed9f1d4f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0920 18:18:07.652741   75264 system_pods.go:61] "kube-proxy-p9crq" [83e0f53d-6960-42c4-904d-ea85ba9160f4] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0920 18:18:07.652746   75264 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-553719" [42155f7b-95bb-467b-acf5-89eb4f80cf76] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0920 18:18:07.652751   75264 system_pods.go:61] "metrics-server-6867b74b74-vtl79" [29e0b6eb-22a9-4e37-97f9-83b48cc38193] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 18:18:07.652760   75264 system_pods.go:61] "storage-provisioner" [6fad2d07-f99e-45ac-9657-bce6d73d7fce] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0920 18:18:07.652769   75264 system_pods.go:74] duration metric: took 14.950839ms to wait for pod list to return data ...
	I0920 18:18:07.652780   75264 node_conditions.go:102] verifying NodePressure condition ...
	I0920 18:18:07.658890   75264 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 18:18:07.658927   75264 node_conditions.go:123] node cpu capacity is 2
	I0920 18:18:07.658941   75264 node_conditions.go:105] duration metric: took 6.15496ms to run NodePressure ...
	I0920 18:18:07.658964   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:18:07.932231   75264 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0920 18:18:07.936474   75264 kubeadm.go:739] kubelet initialised
	I0920 18:18:07.936500   75264 kubeadm.go:740] duration metric: took 4.236071ms waiting for restarted kubelet to initialise ...
	I0920 18:18:07.936507   75264 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:18:07.941918   75264 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-dmdfb" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:07.947609   75264 pod_ready.go:98] node "default-k8s-diff-port-553719" hosting pod "coredns-7c65d6cfc9-dmdfb" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-553719" has status "Ready":"False"
	I0920 18:18:07.947642   75264 pod_ready.go:82] duration metric: took 5.693968ms for pod "coredns-7c65d6cfc9-dmdfb" in "kube-system" namespace to be "Ready" ...
	E0920 18:18:07.947653   75264 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-553719" hosting pod "coredns-7c65d6cfc9-dmdfb" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-553719" has status "Ready":"False"
	I0920 18:18:07.947660   75264 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-553719" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:07.954317   75264 pod_ready.go:98] node "default-k8s-diff-port-553719" hosting pod "etcd-default-k8s-diff-port-553719" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-553719" has status "Ready":"False"
	I0920 18:18:07.954358   75264 pod_ready.go:82] duration metric: took 6.689603ms for pod "etcd-default-k8s-diff-port-553719" in "kube-system" namespace to be "Ready" ...
	E0920 18:18:07.954373   75264 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-553719" hosting pod "etcd-default-k8s-diff-port-553719" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-553719" has status "Ready":"False"
	I0920 18:18:07.954382   75264 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-553719" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:07.966099   75264 pod_ready.go:98] node "default-k8s-diff-port-553719" hosting pod "kube-apiserver-default-k8s-diff-port-553719" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-553719" has status "Ready":"False"
	I0920 18:18:07.966126   75264 pod_ready.go:82] duration metric: took 11.735727ms for pod "kube-apiserver-default-k8s-diff-port-553719" in "kube-system" namespace to be "Ready" ...
	E0920 18:18:07.966141   75264 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-553719" hosting pod "kube-apiserver-default-k8s-diff-port-553719" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-553719" has status "Ready":"False"
	I0920 18:18:07.966154   75264 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-553719" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:08.041768   75264 pod_ready.go:98] node "default-k8s-diff-port-553719" hosting pod "kube-controller-manager-default-k8s-diff-port-553719" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-553719" has status "Ready":"False"
	I0920 18:18:08.041794   75264 pod_ready.go:82] duration metric: took 75.630916ms for pod "kube-controller-manager-default-k8s-diff-port-553719" in "kube-system" namespace to be "Ready" ...
	E0920 18:18:08.041805   75264 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-553719" hosting pod "kube-controller-manager-default-k8s-diff-port-553719" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-553719" has status "Ready":"False"
	I0920 18:18:08.041816   75264 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-p9crq" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:08.440423   75264 pod_ready.go:98] node "default-k8s-diff-port-553719" hosting pod "kube-proxy-p9crq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-553719" has status "Ready":"False"
	I0920 18:18:08.440459   75264 pod_ready.go:82] duration metric: took 398.635469ms for pod "kube-proxy-p9crq" in "kube-system" namespace to be "Ready" ...
	E0920 18:18:08.440471   75264 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-553719" hosting pod "kube-proxy-p9crq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-553719" has status "Ready":"False"
	I0920 18:18:08.440480   75264 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-553719" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:08.841917   75264 pod_ready.go:98] node "default-k8s-diff-port-553719" hosting pod "kube-scheduler-default-k8s-diff-port-553719" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-553719" has status "Ready":"False"
	I0920 18:18:08.841953   75264 pod_ready.go:82] duration metric: took 401.459059ms for pod "kube-scheduler-default-k8s-diff-port-553719" in "kube-system" namespace to be "Ready" ...
	E0920 18:18:08.841968   75264 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-553719" hosting pod "kube-scheduler-default-k8s-diff-port-553719" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-553719" has status "Ready":"False"
	I0920 18:18:08.841977   75264 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:09.241198   75264 pod_ready.go:98] node "default-k8s-diff-port-553719" hosting pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-553719" has status "Ready":"False"
	I0920 18:18:09.241225   75264 pod_ready.go:82] duration metric: took 399.238784ms for pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace to be "Ready" ...
	E0920 18:18:09.241237   75264 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-553719" hosting pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-553719" has status "Ready":"False"
	I0920 18:18:09.241245   75264 pod_ready.go:39] duration metric: took 1.304729447s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:18:09.241259   75264 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 18:18:09.253519   75264 ops.go:34] apiserver oom_adj: -16
	I0920 18:18:09.253546   75264 kubeadm.go:597] duration metric: took 7.96895197s to restartPrimaryControlPlane
	I0920 18:18:09.253558   75264 kubeadm.go:394] duration metric: took 8.024395552s to StartCluster
	I0920 18:18:09.253586   75264 settings.go:142] acquiring lock: {Name:mk2a13c58cdc0faf8cddca5d6716175d45db9bfd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:18:09.253682   75264 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19672-8777/kubeconfig
	I0920 18:18:09.255907   75264 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/kubeconfig: {Name:mkf32a4c736808e023459b2f0e40188618a38db1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:18:09.256208   75264 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.190 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 18:18:09.256322   75264 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0920 18:18:09.256394   75264 config.go:182] Loaded profile config "default-k8s-diff-port-553719": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:18:09.256420   75264 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-553719"
	I0920 18:18:09.256428   75264 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-553719"
	I0920 18:18:09.256440   75264 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-553719"
	I0920 18:18:09.256448   75264 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-553719"
	I0920 18:18:09.256457   75264 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-553719"
	W0920 18:18:09.256468   75264 addons.go:243] addon metrics-server should already be in state true
	I0920 18:18:09.256496   75264 host.go:66] Checking if "default-k8s-diff-port-553719" exists ...
	I0920 18:18:09.256441   75264 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-553719"
	W0920 18:18:09.256569   75264 addons.go:243] addon storage-provisioner should already be in state true
	I0920 18:18:09.256602   75264 host.go:66] Checking if "default-k8s-diff-port-553719" exists ...
	I0920 18:18:09.256814   75264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:18:09.256844   75264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:18:09.256882   75264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:18:09.256926   75264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:18:09.256930   75264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:18:09.256957   75264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:18:09.258738   75264 out.go:177] * Verifying Kubernetes components...
	I0920 18:18:09.259893   75264 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:18:09.272294   75264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45635
	I0920 18:18:09.272357   75264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39993
	I0920 18:18:09.272304   75264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32923
	I0920 18:18:09.272766   75264 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:18:09.272889   75264 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:18:09.272918   75264 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:18:09.273279   75264 main.go:141] libmachine: Using API Version  1
	I0920 18:18:09.273294   75264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:18:09.273416   75264 main.go:141] libmachine: Using API Version  1
	I0920 18:18:09.273438   75264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:18:09.273457   75264 main.go:141] libmachine: Using API Version  1
	I0920 18:18:09.273478   75264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:18:09.273657   75264 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:18:09.273850   75264 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:18:09.273922   75264 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:18:09.274017   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetState
	I0920 18:18:09.274244   75264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:18:09.274287   75264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:18:09.274399   75264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:18:09.274430   75264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:18:09.277535   75264 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-553719"
	W0920 18:18:09.277556   75264 addons.go:243] addon default-storageclass should already be in state true
	I0920 18:18:09.277586   75264 host.go:66] Checking if "default-k8s-diff-port-553719" exists ...
	I0920 18:18:09.277990   75264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:18:09.278041   75264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:18:09.290955   75264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39845
	I0920 18:18:09.291487   75264 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:18:09.292058   75264 main.go:141] libmachine: Using API Version  1
	I0920 18:18:09.292087   75264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:18:09.292409   75264 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:18:09.292607   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetState
	I0920 18:18:09.293351   75264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44575
	I0920 18:18:09.293790   75264 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:18:09.294056   75264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44517
	I0920 18:18:09.294412   75264 main.go:141] libmachine: Using API Version  1
	I0920 18:18:09.294438   75264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:18:09.294540   75264 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:18:09.294854   75264 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:18:09.294902   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .DriverName
	I0920 18:18:09.295362   75264 main.go:141] libmachine: Using API Version  1
	I0920 18:18:09.295387   75264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:18:09.295521   75264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:18:09.295569   75264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:18:09.295783   75264 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:18:09.295993   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetState
	I0920 18:18:09.297231   75264 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0920 18:18:09.298214   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .DriverName
	I0920 18:18:09.298825   75264 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0920 18:18:09.298849   75264 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0920 18:18:09.298870   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHHostname
	I0920 18:18:09.299849   75264 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:18:09.301018   75264 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 18:18:09.301034   75264 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 18:18:09.301048   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHHostname
	I0920 18:18:09.302841   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:18:09.303141   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:18:09.303165   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:18:09.303335   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHPort
	I0920 18:18:09.303491   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHKeyPath
	I0920 18:18:09.303627   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHUsername
	I0920 18:18:09.303772   75264 sshutil.go:53] new ssh client: &{IP:192.168.72.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/default-k8s-diff-port-553719/id_rsa Username:docker}
	I0920 18:18:09.304104   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:18:09.304507   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:18:09.304552   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:18:09.304623   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHPort
	I0920 18:18:09.304786   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHKeyPath
	I0920 18:18:09.304946   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHUsername
	I0920 18:18:09.305085   75264 sshutil.go:53] new ssh client: &{IP:192.168.72.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/default-k8s-diff-port-553719/id_rsa Username:docker}
	I0920 18:18:09.312912   75264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35085
	I0920 18:18:09.313277   75264 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:18:09.313780   75264 main.go:141] libmachine: Using API Version  1
	I0920 18:18:09.313801   75264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:18:09.314355   75264 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:18:09.314525   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetState
	I0920 18:18:09.316088   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .DriverName
	I0920 18:18:09.316415   75264 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 18:18:09.316429   75264 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 18:18:09.316448   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHHostname
	I0920 18:18:09.319116   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:18:09.319484   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:18:09.319512   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:18:09.319664   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHPort
	I0920 18:18:09.319832   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHKeyPath
	I0920 18:18:09.319984   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHUsername
	I0920 18:18:09.320098   75264 sshutil.go:53] new ssh client: &{IP:192.168.72.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/default-k8s-diff-port-553719/id_rsa Username:docker}
	I0920 18:18:09.457570   75264 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:18:09.476315   75264 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-553719" to be "Ready" ...
	I0920 18:18:09.599420   75264 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0920 18:18:09.599442   75264 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0920 18:18:09.600777   75264 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 18:18:09.621891   75264 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 18:18:09.626217   75264 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0920 18:18:09.626251   75264 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0920 18:18:09.675537   75264 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 18:18:09.675569   75264 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0920 18:18:09.723167   75264 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 18:18:10.813006   75264 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.191083617s)
	I0920 18:18:10.813080   75264 main.go:141] libmachine: Making call to close driver server
	I0920 18:18:10.813095   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .Close
	I0920 18:18:10.813403   75264 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:18:10.813452   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | Closing plugin on server side
	I0920 18:18:10.813455   75264 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:18:10.813497   75264 main.go:141] libmachine: Making call to close driver server
	I0920 18:18:10.813518   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .Close
	I0920 18:18:10.813748   75264 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:18:10.813764   75264 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:18:10.813783   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | Closing plugin on server side
	I0920 18:18:10.813975   75264 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.213167075s)
	I0920 18:18:10.814018   75264 main.go:141] libmachine: Making call to close driver server
	I0920 18:18:10.814034   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .Close
	I0920 18:18:10.814323   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | Closing plugin on server side
	I0920 18:18:10.814325   75264 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:18:10.814345   75264 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:18:10.814358   75264 main.go:141] libmachine: Making call to close driver server
	I0920 18:18:10.814368   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .Close
	I0920 18:18:10.814653   75264 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:18:10.814673   75264 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:18:10.821768   75264 main.go:141] libmachine: Making call to close driver server
	I0920 18:18:10.821785   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .Close
	I0920 18:18:10.822016   75264 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:18:10.822034   75264 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:18:10.846062   75264 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.122849108s)
	I0920 18:18:10.846122   75264 main.go:141] libmachine: Making call to close driver server
	I0920 18:18:10.846139   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .Close
	I0920 18:18:10.846441   75264 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:18:10.846469   75264 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:18:10.846469   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | Closing plugin on server side
	I0920 18:18:10.846479   75264 main.go:141] libmachine: Making call to close driver server
	I0920 18:18:10.846488   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .Close
	I0920 18:18:10.846715   75264 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:18:10.846737   75264 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:18:10.846748   75264 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-553719"
	I0920 18:18:10.846749   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | Closing plugin on server side
	I0920 18:18:10.848702   75264 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0920 18:18:10.849990   75264 addons.go:510] duration metric: took 1.593667614s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0920 18:18:11.480490   75264 node_ready.go:53] node "default-k8s-diff-port-553719" has status "Ready":"False"
	I0920 18:18:09.334928   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:09.335367   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:18:09.335402   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:18:09.335314   76990 retry.go:31] will retry after 4.194120768s: waiting for machine to come up
	I0920 18:18:10.192078   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:12.192186   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:14.778926   74753 start.go:364] duration metric: took 54.584395971s to acquireMachinesLock for "no-preload-956403"
	I0920 18:18:14.778979   74753 start.go:96] Skipping create...Using existing machine configuration
	I0920 18:18:14.778987   74753 fix.go:54] fixHost starting: 
	I0920 18:18:14.779392   74753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:18:14.779430   74753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:18:14.796004   74753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45761
	I0920 18:18:14.796509   74753 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:18:14.797006   74753 main.go:141] libmachine: Using API Version  1
	I0920 18:18:14.797028   74753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:18:14.797330   74753 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:18:14.797499   74753 main.go:141] libmachine: (no-preload-956403) Calling .DriverName
	I0920 18:18:14.797650   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetState
	I0920 18:18:14.799376   74753 fix.go:112] recreateIfNeeded on no-preload-956403: state=Stopped err=<nil>
	I0920 18:18:14.799400   74753 main.go:141] libmachine: (no-preload-956403) Calling .DriverName
	W0920 18:18:14.799564   74753 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 18:18:14.802451   74753 out.go:177] * Restarting existing kvm2 VM for "no-preload-956403" ...
	I0920 18:18:13.533702   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:13.534281   75577 main.go:141] libmachine: (old-k8s-version-744025) Found IP for machine: 192.168.39.207
	I0920 18:18:13.534309   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has current primary IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:13.534314   75577 main.go:141] libmachine: (old-k8s-version-744025) Reserving static IP address...
	I0920 18:18:13.534725   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "old-k8s-version-744025", mac: "52:54:00:e5:57:41", ip: "192.168.39.207"} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:13.534761   75577 main.go:141] libmachine: (old-k8s-version-744025) Reserved static IP address: 192.168.39.207
	I0920 18:18:13.534783   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | skip adding static IP to network mk-old-k8s-version-744025 - found existing host DHCP lease matching {name: "old-k8s-version-744025", mac: "52:54:00:e5:57:41", ip: "192.168.39.207"}
	I0920 18:18:13.534800   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | Getting to WaitForSSH function...
	I0920 18:18:13.534816   75577 main.go:141] libmachine: (old-k8s-version-744025) Waiting for SSH to be available...
	I0920 18:18:13.536879   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:13.537271   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:13.537301   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:13.537391   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | Using SSH client type: external
	I0920 18:18:13.537439   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | Using SSH private key: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/old-k8s-version-744025/id_rsa (-rw-------)
	I0920 18:18:13.537476   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.207 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19672-8777/.minikube/machines/old-k8s-version-744025/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 18:18:13.537486   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | About to run SSH command:
	I0920 18:18:13.537498   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | exit 0
	I0920 18:18:13.662214   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | SSH cmd err, output: <nil>: 
	I0920 18:18:13.662565   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetConfigRaw
	I0920 18:18:13.663269   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetIP
	I0920 18:18:13.666068   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:13.666530   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:13.666561   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:13.666899   75577 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/old-k8s-version-744025/config.json ...
	I0920 18:18:13.667111   75577 machine.go:93] provisionDockerMachine start ...
	I0920 18:18:13.667129   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .DriverName
	I0920 18:18:13.667331   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHHostname
	I0920 18:18:13.669347   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:13.669717   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:13.669743   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:13.669908   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHPort
	I0920 18:18:13.670167   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:18:13.670334   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:18:13.670453   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHUsername
	I0920 18:18:13.670583   75577 main.go:141] libmachine: Using SSH client type: native
	I0920 18:18:13.670759   75577 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.207 22 <nil> <nil>}
	I0920 18:18:13.670770   75577 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 18:18:13.774059   75577 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0920 18:18:13.774093   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetMachineName
	I0920 18:18:13.774406   75577 buildroot.go:166] provisioning hostname "old-k8s-version-744025"
	I0920 18:18:13.774434   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetMachineName
	I0920 18:18:13.774618   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHHostname
	I0920 18:18:13.777175   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:13.777482   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:13.777513   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:13.777633   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHPort
	I0920 18:18:13.777803   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:18:13.777966   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:18:13.778082   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHUsername
	I0920 18:18:13.778235   75577 main.go:141] libmachine: Using SSH client type: native
	I0920 18:18:13.778404   75577 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.207 22 <nil> <nil>}
	I0920 18:18:13.778417   75577 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-744025 && echo "old-k8s-version-744025" | sudo tee /etc/hostname
	I0920 18:18:13.900180   75577 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-744025
	
	I0920 18:18:13.900224   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHHostname
	I0920 18:18:13.902958   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:13.903309   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:13.903340   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:13.903543   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHPort
	I0920 18:18:13.903762   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:18:13.903931   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:18:13.904051   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHUsername
	I0920 18:18:13.904214   75577 main.go:141] libmachine: Using SSH client type: native
	I0920 18:18:13.904393   75577 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.207 22 <nil> <nil>}
	I0920 18:18:13.904409   75577 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-744025' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-744025/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-744025' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 18:18:14.023748   75577 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 18:18:14.023781   75577 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19672-8777/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-8777/.minikube}
	I0920 18:18:14.023834   75577 buildroot.go:174] setting up certificates
	I0920 18:18:14.023851   75577 provision.go:84] configureAuth start
	I0920 18:18:14.023866   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetMachineName
	I0920 18:18:14.024154   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetIP
	I0920 18:18:14.027240   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.027640   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:14.027778   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.027867   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHHostname
	I0920 18:18:14.030383   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.030741   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:14.030765   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.030915   75577 provision.go:143] copyHostCerts
	I0920 18:18:14.030979   75577 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem, removing ...
	I0920 18:18:14.030999   75577 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem
	I0920 18:18:14.031072   75577 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem (1082 bytes)
	I0920 18:18:14.031188   75577 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem, removing ...
	I0920 18:18:14.031205   75577 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem
	I0920 18:18:14.031240   75577 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem (1123 bytes)
	I0920 18:18:14.031340   75577 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem, removing ...
	I0920 18:18:14.031351   75577 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem
	I0920 18:18:14.031378   75577 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem (1675 bytes)
	I0920 18:18:14.031455   75577 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-744025 san=[127.0.0.1 192.168.39.207 localhost minikube old-k8s-version-744025]
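The provision step above generates a server certificate whose SAN list mixes IP addresses and DNS names (127.0.0.1, 192.168.39.207, localhost, minikube, old-k8s-version-744025). A minimal Go sketch of how such a SAN list can be split into x509 template fields; this is an illustration only, not minikube's provision.go, and the helper name and validity period are assumptions:

package main

import (
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

// serverCertTemplate is a hypothetical helper: it sorts SAN entries into
// IPAddresses and DNSNames on an x509 template that a CA would then sign.
func serverCertTemplate(org string, sans []string) *x509.Certificate {
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{org}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0), // assumed validity, not minikube's
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	for _, san := range sans {
		if ip := net.ParseIP(san); ip != nil {
			tmpl.IPAddresses = append(tmpl.IPAddresses, ip)
		} else {
			tmpl.DNSNames = append(tmpl.DNSNames, san)
		}
	}
	return tmpl
}

func main() {
	t := serverCertTemplate("jenkins.old-k8s-version-744025",
		[]string{"127.0.0.1", "192.168.39.207", "localhost", "minikube", "old-k8s-version-744025"})
	fmt.Println(len(t.IPAddresses), "IP SANs,", len(t.DNSNames), "DNS SANs")
}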
	I0920 18:18:14.140775   75577 provision.go:177] copyRemoteCerts
	I0920 18:18:14.140847   75577 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 18:18:14.140883   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHHostname
	I0920 18:18:14.143599   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.144016   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:14.144062   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.144199   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHPort
	I0920 18:18:14.144377   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:18:14.144526   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHUsername
	I0920 18:18:14.144656   75577 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/old-k8s-version-744025/id_rsa Username:docker}
	I0920 18:18:14.228286   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 18:18:14.257293   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0920 18:18:14.281335   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0920 18:18:14.305736   75577 provision.go:87] duration metric: took 281.853458ms to configureAuth
	I0920 18:18:14.305762   75577 buildroot.go:189] setting minikube options for container-runtime
	I0920 18:18:14.305983   75577 config.go:182] Loaded profile config "old-k8s-version-744025": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0920 18:18:14.306076   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHHostname
	I0920 18:18:14.308833   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.309220   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:14.309244   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.309535   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHPort
	I0920 18:18:14.309779   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:18:14.309974   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:18:14.310124   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHUsername
	I0920 18:18:14.310316   75577 main.go:141] libmachine: Using SSH client type: native
	I0920 18:18:14.310535   75577 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.207 22 <nil> <nil>}
	I0920 18:18:14.310551   75577 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 18:18:14.536416   75577 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 18:18:14.536442   75577 machine.go:96] duration metric: took 869.318772ms to provisionDockerMachine
	I0920 18:18:14.536456   75577 start.go:293] postStartSetup for "old-k8s-version-744025" (driver="kvm2")
	I0920 18:18:14.536468   75577 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 18:18:14.536510   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .DriverName
	I0920 18:18:14.536803   75577 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 18:18:14.536831   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHHostname
	I0920 18:18:14.539503   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.539923   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:14.539951   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.540126   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHPort
	I0920 18:18:14.540303   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:18:14.540463   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHUsername
	I0920 18:18:14.540649   75577 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/old-k8s-version-744025/id_rsa Username:docker}
	I0920 18:18:14.625866   75577 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 18:18:14.630527   75577 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 18:18:14.630551   75577 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8777/.minikube/addons for local assets ...
	I0920 18:18:14.630637   75577 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8777/.minikube/files for local assets ...
	I0920 18:18:14.630727   75577 filesync.go:149] local asset: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem -> 159732.pem in /etc/ssl/certs
	I0920 18:18:14.630840   75577 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 18:18:14.640965   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem --> /etc/ssl/certs/159732.pem (1708 bytes)
	I0920 18:18:14.668227   75577 start.go:296] duration metric: took 131.756559ms for postStartSetup
	I0920 18:18:14.668268   75577 fix.go:56] duration metric: took 19.581104117s for fixHost
	I0920 18:18:14.668295   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHHostname
	I0920 18:18:14.671138   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.671520   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:14.671549   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.671777   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHPort
	I0920 18:18:14.671981   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:18:14.672141   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:18:14.672280   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHUsername
	I0920 18:18:14.672436   75577 main.go:141] libmachine: Using SSH client type: native
	I0920 18:18:14.672606   75577 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.207 22 <nil> <nil>}
	I0920 18:18:14.672616   75577 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 18:18:14.778752   75577 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726856294.753230182
	
	I0920 18:18:14.778785   75577 fix.go:216] guest clock: 1726856294.753230182
	I0920 18:18:14.778796   75577 fix.go:229] Guest: 2024-09-20 18:18:14.753230182 +0000 UTC Remote: 2024-09-20 18:18:14.668273351 +0000 UTC m=+241.967312055 (delta=84.956831ms)
	I0920 18:18:14.778827   75577 fix.go:200] guest clock delta is within tolerance: 84.956831ms
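fix.go compares the guest clock (read over SSH with `date +%s.%N`) against the host-side timestamp and accepts it when the delta is within tolerance, as in the lines above. A minimal sketch of that comparison, assuming a 1-second tolerance (the actual threshold is not shown in the log):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses the guest's `date +%s.%N` output and returns the signed
// difference from the host reference time. Hypothetical helper for illustration.
func clockDelta(guestOutput string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOutput), 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(host), nil
}

func main() {
	host := time.Unix(1726856294, 668273351) // remote-side timestamp from the log
	delta, err := clockDelta("1726856294.753230182\n", host)
	if err != nil {
		panic(err)
	}
	const tolerance = time.Second // assumed tolerance for this sketch
	fmt.Printf("delta=%v within tolerance=%v\n", delta, delta > -tolerance && delta < tolerance)
}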
	I0920 18:18:14.778836   75577 start.go:83] releasing machines lock for "old-k8s-version-744025", held for 19.691716285s
	I0920 18:18:14.778874   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .DriverName
	I0920 18:18:14.779135   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetIP
	I0920 18:18:14.781932   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.782357   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:14.782386   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.782572   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .DriverName
	I0920 18:18:14.783124   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .DriverName
	I0920 18:18:14.783328   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .DriverName
	I0920 18:18:14.783401   75577 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 18:18:14.783465   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHHostname
	I0920 18:18:14.783519   75577 ssh_runner.go:195] Run: cat /version.json
	I0920 18:18:14.783552   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHHostname
	I0920 18:18:14.786645   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.786817   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.786994   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:14.787023   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.787207   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:14.787259   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.787264   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHPort
	I0920 18:18:14.787404   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHPort
	I0920 18:18:14.787469   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:18:14.787561   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:18:14.787612   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHUsername
	I0920 18:18:14.787711   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHUsername
	I0920 18:18:14.787845   75577 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/old-k8s-version-744025/id_rsa Username:docker}
	I0920 18:18:14.787845   75577 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/old-k8s-version-744025/id_rsa Username:docker}
	I0920 18:18:14.872183   75577 ssh_runner.go:195] Run: systemctl --version
	I0920 18:18:14.913275   75577 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 18:18:15.071299   75577 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 18:18:15.078725   75577 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 18:18:15.078806   75577 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 18:18:15.101497   75577 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 18:18:15.101522   75577 start.go:495] detecting cgroup driver to use...
	I0920 18:18:15.101579   75577 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 18:18:15.120626   75577 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 18:18:15.137297   75577 docker.go:217] disabling cri-docker service (if available) ...
	I0920 18:18:15.137401   75577 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 18:18:15.152152   75577 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 18:18:15.167359   75577 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 18:18:15.288763   75577 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 18:18:15.475429   75577 docker.go:233] disabling docker service ...
	I0920 18:18:15.475512   75577 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 18:18:15.493331   75577 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 18:18:15.510326   75577 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 18:18:15.629119   75577 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 18:18:15.758923   75577 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 18:18:15.776778   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 18:18:15.798980   75577 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0920 18:18:15.799042   75577 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:18:15.810264   75577 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 18:18:15.810322   75577 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:18:15.821060   75577 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:18:15.832685   75577 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:18:15.843026   75577 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 18:18:15.853996   75577 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 18:18:15.864126   75577 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 18:18:15.864183   75577 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 18:18:15.878216   75577 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 18:18:15.889820   75577 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:18:16.015555   75577 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 18:18:16.118797   75577 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 18:18:16.118880   75577 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 18:18:16.124430   75577 start.go:563] Will wait 60s for crictl version
	I0920 18:18:16.124485   75577 ssh_runner.go:195] Run: which crictl
	I0920 18:18:16.128632   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 18:18:16.165596   75577 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 18:18:16.165707   75577 ssh_runner.go:195] Run: crio --version
	I0920 18:18:16.198772   75577 ssh_runner.go:195] Run: crio --version
	I0920 18:18:16.238511   75577 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0920 18:18:13.980331   75264 node_ready.go:53] node "default-k8s-diff-port-553719" has status "Ready":"False"
	I0920 18:18:15.981435   75264 node_ready.go:53] node "default-k8s-diff-port-553719" has status "Ready":"False"
	I0920 18:18:16.982412   75264 node_ready.go:49] node "default-k8s-diff-port-553719" has status "Ready":"True"
	I0920 18:18:16.982441   75264 node_ready.go:38] duration metric: took 7.506086526s for node "default-k8s-diff-port-553719" to be "Ready" ...
	I0920 18:18:16.982457   75264 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:18:16.989870   75264 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-dmdfb" in "kube-system" namespace to be "Ready" ...
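The node_ready/pod_ready waits above poll the cluster until the Ready condition reports True. A rough equivalent driving kubectl from Go (the context, namespace and pod names are taken from the log; the polling interval and attempt count are assumptions, not minikube's pod_ready.go policy):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// podReady shells out to kubectl and reads the pod's Ready condition status.
func podReady(kubecontext, namespace, pod string) (bool, error) {
	out, err := exec.Command("kubectl", "--context", kubecontext, "-n", namespace, "get", "pod", pod,
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) == "True", nil
}

func main() {
	for i := 0; i < 30; i++ {
		ok, err := podReady("default-k8s-diff-port-553719", "kube-system", "coredns-7c65d6cfc9-dmdfb")
		if err == nil && ok {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}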
	I0920 18:18:14.803646   74753 main.go:141] libmachine: (no-preload-956403) Calling .Start
	I0920 18:18:14.803856   74753 main.go:141] libmachine: (no-preload-956403) Ensuring networks are active...
	I0920 18:18:14.804594   74753 main.go:141] libmachine: (no-preload-956403) Ensuring network default is active
	I0920 18:18:14.804920   74753 main.go:141] libmachine: (no-preload-956403) Ensuring network mk-no-preload-956403 is active
	I0920 18:18:14.805341   74753 main.go:141] libmachine: (no-preload-956403) Getting domain xml...
	I0920 18:18:14.806068   74753 main.go:141] libmachine: (no-preload-956403) Creating domain...
	I0920 18:18:16.196663   74753 main.go:141] libmachine: (no-preload-956403) Waiting to get IP...
	I0920 18:18:16.197381   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:16.197762   74753 main.go:141] libmachine: (no-preload-956403) DBG | unable to find current IP address of domain no-preload-956403 in network mk-no-preload-956403
	I0920 18:18:16.197885   74753 main.go:141] libmachine: (no-preload-956403) DBG | I0920 18:18:16.197764   77223 retry.go:31] will retry after 214.087819ms: waiting for machine to come up
	I0920 18:18:16.413365   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:16.413951   74753 main.go:141] libmachine: (no-preload-956403) DBG | unable to find current IP address of domain no-preload-956403 in network mk-no-preload-956403
	I0920 18:18:16.413977   74753 main.go:141] libmachine: (no-preload-956403) DBG | I0920 18:18:16.413916   77223 retry.go:31] will retry after 249.35647ms: waiting for machine to come up
	I0920 18:18:16.665587   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:16.666168   74753 main.go:141] libmachine: (no-preload-956403) DBG | unable to find current IP address of domain no-preload-956403 in network mk-no-preload-956403
	I0920 18:18:16.666203   74753 main.go:141] libmachine: (no-preload-956403) DBG | I0920 18:18:16.666132   77223 retry.go:31] will retry after 374.598012ms: waiting for machine to come up
	I0920 18:18:17.042911   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:17.043594   74753 main.go:141] libmachine: (no-preload-956403) DBG | unable to find current IP address of domain no-preload-956403 in network mk-no-preload-956403
	I0920 18:18:17.043618   74753 main.go:141] libmachine: (no-preload-956403) DBG | I0920 18:18:17.043550   77223 retry.go:31] will retry after 536.252353ms: waiting for machine to come up
	I0920 18:18:17.581141   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:17.581582   74753 main.go:141] libmachine: (no-preload-956403) DBG | unable to find current IP address of domain no-preload-956403 in network mk-no-preload-956403
	I0920 18:18:17.581616   74753 main.go:141] libmachine: (no-preload-956403) DBG | I0920 18:18:17.581528   77223 retry.go:31] will retry after 459.241867ms: waiting for machine to come up
	I0920 18:18:16.239946   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetIP
	I0920 18:18:16.242727   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:16.243147   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:16.243184   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:16.243561   75577 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0920 18:18:16.247928   75577 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 18:18:16.262165   75577 kubeadm.go:883] updating cluster {Name:old-k8s-version-744025 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-744025 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.207 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 18:18:16.262310   75577 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0920 18:18:16.262358   75577 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:18:16.313771   75577 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0920 18:18:16.313854   75577 ssh_runner.go:195] Run: which lz4
	I0920 18:18:16.318361   75577 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 18:18:16.322529   75577 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 18:18:16.322570   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0920 18:18:14.192319   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:16.194161   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:18.699498   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:18.999985   75264 pod_ready.go:103] pod "coredns-7c65d6cfc9-dmdfb" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:21.497825   75264 pod_ready.go:103] pod "coredns-7c65d6cfc9-dmdfb" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:18.042075   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:18.042566   74753 main.go:141] libmachine: (no-preload-956403) DBG | unable to find current IP address of domain no-preload-956403 in network mk-no-preload-956403
	I0920 18:18:18.042603   74753 main.go:141] libmachine: (no-preload-956403) DBG | I0920 18:18:18.042533   77223 retry.go:31] will retry after 833.585895ms: waiting for machine to come up
	I0920 18:18:18.877534   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:18.878023   74753 main.go:141] libmachine: (no-preload-956403) DBG | unable to find current IP address of domain no-preload-956403 in network mk-no-preload-956403
	I0920 18:18:18.878047   74753 main.go:141] libmachine: (no-preload-956403) DBG | I0920 18:18:18.877988   77223 retry.go:31] will retry after 1.035805905s: waiting for machine to come up
	I0920 18:18:19.915316   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:19.915735   74753 main.go:141] libmachine: (no-preload-956403) DBG | unable to find current IP address of domain no-preload-956403 in network mk-no-preload-956403
	I0920 18:18:19.915859   74753 main.go:141] libmachine: (no-preload-956403) DBG | I0920 18:18:19.915787   77223 retry.go:31] will retry after 978.827371ms: waiting for machine to come up
	I0920 18:18:20.896532   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:20.897185   74753 main.go:141] libmachine: (no-preload-956403) DBG | unable to find current IP address of domain no-preload-956403 in network mk-no-preload-956403
	I0920 18:18:20.897216   74753 main.go:141] libmachine: (no-preload-956403) DBG | I0920 18:18:20.897122   77223 retry.go:31] will retry after 1.808853939s: waiting for machine to come up
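The repeated "will retry after ...: waiting for machine to come up" lines come from a retry helper that polls libvirt for the domain's DHCP lease with a growing delay. A minimal stand-in sketch (the growth factor and attempt cap are assumptions, not retry.go's actual policy):

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForIP polls lookup until it returns an IP, sleeping a little longer
// after each failed attempt, roughly like the increasing intervals in the log.
func waitForIP(lookup func() (string, error), attempts int, base time.Duration) (string, error) {
	delay := base
	for i := 0; i < attempts; i++ {
		ip, err := lookup()
		if err == nil && ip != "" {
			return ip, nil
		}
		fmt.Printf("will retry after %s: waiting for machine to come up\n", delay)
		time.Sleep(delay)
		delay += delay / 2 // assumed growth factor for this sketch
	}
	return "", errors.New("machine did not report an IP in time")
}

func main() {
	n := 0
	ip, err := waitForIP(func() (string, error) {
		n++
		if n < 3 {
			return "", errors.New("no DHCP lease yet")
		}
		return "192.168.39.207", nil
	}, 10, 200*time.Millisecond)
	fmt.Println(ip, err)
}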
	I0920 18:18:17.953626   75577 crio.go:462] duration metric: took 1.635321078s to copy over tarball
	I0920 18:18:17.953717   75577 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0920 18:18:21.049561   75577 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.095809102s)
	I0920 18:18:21.049588   75577 crio.go:469] duration metric: took 3.095926273s to extract the tarball
	I0920 18:18:21.049596   75577 ssh_runner.go:146] rm: /preloaded.tar.lz4
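The sequence above is: stat /preloaded.tar.lz4, scp the preload tarball over when it is missing, extract it under /var with lz4, then remove it; when the expected images are still missing afterwards, the run falls back to loading individual cached images (seen just below). A local, simplified sketch of the check-and-extract step, assuming tar and lz4 are on PATH; it does not reproduce the SSH transport:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// restorePreload extracts a preload tarball if present, otherwise reports that
// the caller should fall back to loading cached images one by one.
func restorePreload(tarball, dest string) error {
	if _, err := os.Stat(tarball); err != nil {
		return fmt.Errorf("no preload at %s, falling back to cached images: %w", tarball, err)
	}
	// Mirrors the command in the log:
	// tar --xattrs --xattrs-include security.capability -I lz4 -C <dest> -xf <tarball>
	cmd := exec.Command("tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", dest, "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	if err := restorePreload("/preloaded.tar.lz4", "/var"); err != nil {
		fmt.Println(err)
	}
}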
	I0920 18:18:21.093521   75577 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:18:21.132318   75577 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0920 18:18:21.132361   75577 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0920 18:18:21.132426   75577 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:18:21.132449   75577 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0920 18:18:21.132432   75577 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0920 18:18:21.132551   75577 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 18:18:21.132587   75577 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0920 18:18:21.132708   75577 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0920 18:18:21.132819   75577 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0920 18:18:21.132557   75577 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0920 18:18:21.134217   75577 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0920 18:18:21.134256   75577 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0920 18:18:21.134277   75577 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0920 18:18:21.134313   75577 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0920 18:18:21.134327   75577 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 18:18:21.134341   75577 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0920 18:18:21.134266   75577 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0920 18:18:21.134356   75577 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:18:21.421546   75577 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0920 18:18:21.467311   75577 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0920 18:18:21.467349   75577 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0920 18:18:21.467400   75577 ssh_runner.go:195] Run: which crictl
	I0920 18:18:21.471158   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0920 18:18:21.474543   75577 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0920 18:18:21.480501   75577 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 18:18:21.499826   75577 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0920 18:18:21.499835   75577 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0920 18:18:21.505712   75577 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0920 18:18:21.514377   75577 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0920 18:18:21.514897   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0920 18:18:21.601531   75577 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0920 18:18:21.601582   75577 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0920 18:18:21.601635   75577 ssh_runner.go:195] Run: which crictl
	I0920 18:18:21.660417   75577 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0920 18:18:21.660457   75577 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 18:18:21.660504   75577 ssh_runner.go:195] Run: which crictl
	I0920 18:18:21.660528   75577 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0920 18:18:21.660571   75577 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0920 18:18:21.660625   75577 ssh_runner.go:195] Run: which crictl
	I0920 18:18:21.667538   75577 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0920 18:18:21.667558   75577 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0920 18:18:21.667583   75577 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0920 18:18:21.667595   75577 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0920 18:18:21.667633   75577 ssh_runner.go:195] Run: which crictl
	I0920 18:18:21.667652   75577 ssh_runner.go:195] Run: which crictl
	I0920 18:18:21.676529   75577 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0920 18:18:21.676580   75577 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0920 18:18:21.676602   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0920 18:18:21.676633   75577 ssh_runner.go:195] Run: which crictl
	I0920 18:18:21.676643   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0920 18:18:21.680041   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 18:18:21.681078   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0920 18:18:21.681180   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0920 18:18:21.683409   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0920 18:18:21.804439   75577 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0920 18:18:21.804492   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0920 18:18:21.804530   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 18:18:21.804585   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0920 18:18:21.823434   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0920 18:18:21.823497   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0920 18:18:21.823539   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0920 18:18:21.932568   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 18:18:21.932619   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0920 18:18:21.932639   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0920 18:18:21.958001   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0920 18:18:21.971089   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0920 18:18:21.975643   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0920 18:18:22.100118   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0920 18:18:22.100173   75577 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0920 18:18:22.100197   75577 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0920 18:18:22.100276   75577 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0920 18:18:22.100333   75577 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0920 18:18:22.109895   75577 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0920 18:18:22.137798   75577 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0920 18:18:22.331884   75577 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:18:22.476470   75577 cache_images.go:92] duration metric: took 1.344086626s to LoadCachedImages
	W0920 18:18:22.476575   75577 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
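	The cache paths above follow a simple layout: the image reference keeps its registry/repository directories under cache/images/<arch>/, and the tag separator ':' becomes '_' in the file name. A minimal Go sketch of that mapping, for illustration only (this is not minikube's actual code; the helper name is made up):
	
	package main
	
	import (
		"fmt"
		"path/filepath"
		"strings"
	)
	
	// cachedImagePath maps an image reference to the on-disk cache file seen in the log,
	// e.g. registry.k8s.io/kube-apiserver:v1.20.0 ->
	// <miniHome>/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0.
	// Simplified sketch of the observed layout, not minikube's implementation.
	func cachedImagePath(miniHome, arch, imageRef string) string {
		// The tag separator ':' becomes '_' in the cached file name.
		file := strings.ReplaceAll(imageRef, ":", "_")
		return filepath.Join(miniHome, "cache", "images", arch, file)
	}
	
	func main() {
		fmt.Println(cachedImagePath(
			"/home/jenkins/minikube-integration/19672-8777/.minikube",
			"amd64",
			"registry.k8s.io/kube-apiserver:v1.20.0",
		))
	}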
	I0920 18:18:22.476595   75577 kubeadm.go:934] updating node { 192.168.39.207 8443 v1.20.0 crio true true} ...
	I0920 18:18:22.476742   75577 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-744025 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.207
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-744025 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 18:18:22.476833   75577 ssh_runner.go:195] Run: crio config
	I0920 18:18:22.528964   75577 cni.go:84] Creating CNI manager for ""
	I0920 18:18:22.528990   75577 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 18:18:22.528999   75577 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 18:18:22.529016   75577 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.207 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-744025 NodeName:old-k8s-version-744025 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.207"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.207 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0920 18:18:22.529173   75577 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.207
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-744025"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.207
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.207"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 18:18:22.529233   75577 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0920 18:18:22.540849   75577 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 18:18:22.540935   75577 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 18:18:22.551148   75577 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0920 18:18:22.569199   75577 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 18:18:22.585909   75577 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0920 18:18:22.604987   75577 ssh_runner.go:195] Run: grep 192.168.39.207	control-plane.minikube.internal$ /etc/hosts
	I0920 18:18:22.609152   75577 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.207	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 18:18:22.628042   75577 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:18:21.191366   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:23.193152   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:23.914443   75264 pod_ready.go:103] pod "coredns-7c65d6cfc9-dmdfb" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:25.002152   75264 pod_ready.go:93] pod "coredns-7c65d6cfc9-dmdfb" in "kube-system" namespace has status "Ready":"True"
	I0920 18:18:25.002185   75264 pod_ready.go:82] duration metric: took 8.012280973s for pod "coredns-7c65d6cfc9-dmdfb" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:25.002198   75264 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-553719" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:25.010541   75264 pod_ready.go:93] pod "etcd-default-k8s-diff-port-553719" in "kube-system" namespace has status "Ready":"True"
	I0920 18:18:25.010570   75264 pod_ready.go:82] duration metric: took 8.362504ms for pod "etcd-default-k8s-diff-port-553719" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:25.010591   75264 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-553719" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:25.023701   75264 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-553719" in "kube-system" namespace has status "Ready":"True"
	I0920 18:18:25.023741   75264 pod_ready.go:82] duration metric: took 13.139423ms for pod "kube-apiserver-default-k8s-diff-port-553719" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:25.023758   75264 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-553719" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:25.031247   75264 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-553719" in "kube-system" namespace has status "Ready":"True"
	I0920 18:18:25.031282   75264 pod_ready.go:82] duration metric: took 7.515474ms for pod "kube-controller-manager-default-k8s-diff-port-553719" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:25.031307   75264 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-p9crq" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:25.119242   75264 pod_ready.go:93] pod "kube-proxy-p9crq" in "kube-system" namespace has status "Ready":"True"
	I0920 18:18:25.119267   75264 pod_ready.go:82] duration metric: took 87.951791ms for pod "kube-proxy-p9crq" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:25.119280   75264 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-553719" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:25.518243   75264 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-553719" in "kube-system" namespace has status "Ready":"True"
	I0920 18:18:25.518267   75264 pod_ready.go:82] duration metric: took 398.979314ms for pod "kube-scheduler-default-k8s-diff-port-553719" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:25.518281   75264 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:22.707778   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:22.708271   74753 main.go:141] libmachine: (no-preload-956403) DBG | unable to find current IP address of domain no-preload-956403 in network mk-no-preload-956403
	I0920 18:18:22.708333   74753 main.go:141] libmachine: (no-preload-956403) DBG | I0920 18:18:22.708161   77223 retry.go:31] will retry after 1.516042611s: waiting for machine to come up
	I0920 18:18:24.225560   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:24.225987   74753 main.go:141] libmachine: (no-preload-956403) DBG | unable to find current IP address of domain no-preload-956403 in network mk-no-preload-956403
	I0920 18:18:24.226041   74753 main.go:141] libmachine: (no-preload-956403) DBG | I0920 18:18:24.225968   77223 retry.go:31] will retry after 2.135874415s: waiting for machine to come up
	I0920 18:18:26.363371   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:26.363861   74753 main.go:141] libmachine: (no-preload-956403) DBG | unable to find current IP address of domain no-preload-956403 in network mk-no-preload-956403
	I0920 18:18:26.363895   74753 main.go:141] libmachine: (no-preload-956403) DBG | I0920 18:18:26.363777   77223 retry.go:31] will retry after 3.29383193s: waiting for machine to come up
	I0920 18:18:22.796651   75577 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:18:22.813789   75577 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/old-k8s-version-744025 for IP: 192.168.39.207
	I0920 18:18:22.813817   75577 certs.go:194] generating shared ca certs ...
	I0920 18:18:22.813848   75577 certs.go:226] acquiring lock for ca certs: {Name:mkc7ef6c737c6bdc3fdd9dcff8f57029c020d8f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:18:22.814045   75577 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key
	I0920 18:18:22.814101   75577 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key
	I0920 18:18:22.814118   75577 certs.go:256] generating profile certs ...
	I0920 18:18:22.814290   75577 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/old-k8s-version-744025/client.key
	I0920 18:18:22.814383   75577 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/old-k8s-version-744025/apiserver.key.3105b99d
	I0920 18:18:22.814445   75577 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/old-k8s-version-744025/proxy-client.key
	I0920 18:18:22.814626   75577 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973.pem (1338 bytes)
	W0920 18:18:22.814660   75577 certs.go:480] ignoring /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973_empty.pem, impossibly tiny 0 bytes
	I0920 18:18:22.814666   75577 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 18:18:22.814691   75577 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem (1082 bytes)
	I0920 18:18:22.814717   75577 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem (1123 bytes)
	I0920 18:18:22.814749   75577 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem (1675 bytes)
	I0920 18:18:22.814813   75577 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem (1708 bytes)
	I0920 18:18:22.815736   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 18:18:22.863832   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 18:18:22.921747   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 18:18:22.959556   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0920 18:18:22.992097   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/old-k8s-version-744025/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0920 18:18:23.027565   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/old-k8s-version-744025/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0920 18:18:23.057374   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/old-k8s-version-744025/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 18:18:23.094290   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/old-k8s-version-744025/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 18:18:23.120095   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973.pem --> /usr/share/ca-certificates/15973.pem (1338 bytes)
	I0920 18:18:23.144374   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem --> /usr/share/ca-certificates/159732.pem (1708 bytes)
	I0920 18:18:23.169431   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 18:18:23.195779   75577 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 18:18:23.212820   75577 ssh_runner.go:195] Run: openssl version
	I0920 18:18:23.218876   75577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 18:18:23.229684   75577 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:18:23.234533   75577 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:18:23.234603   75577 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:18:23.240460   75577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 18:18:23.251940   75577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15973.pem && ln -fs /usr/share/ca-certificates/15973.pem /etc/ssl/certs/15973.pem"
	I0920 18:18:23.263308   75577 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15973.pem
	I0920 18:18:23.268059   75577 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 17:04 /usr/share/ca-certificates/15973.pem
	I0920 18:18:23.268128   75577 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15973.pem
	I0920 18:18:23.274199   75577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15973.pem /etc/ssl/certs/51391683.0"
	I0920 18:18:23.286362   75577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/159732.pem && ln -fs /usr/share/ca-certificates/159732.pem /etc/ssl/certs/159732.pem"
	I0920 18:18:23.303962   75577 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/159732.pem
	I0920 18:18:23.310907   75577 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 17:04 /usr/share/ca-certificates/159732.pem
	I0920 18:18:23.310981   75577 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/159732.pem
	I0920 18:18:23.317881   75577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/159732.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 18:18:23.329247   75577 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 18:18:23.334223   75577 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 18:18:23.340565   75577 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 18:18:23.346929   75577 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 18:18:23.353681   75577 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 18:18:23.359699   75577 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 18:18:23.365749   75577 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
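	The openssl x509 -checkend 86400 runs above ask whether each certificate expires within the next 24 hours. The same check can be done directly against a PEM file; a minimal Go sketch (an assumed standalone helper, not part of minikube):
	
	package main
	
	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)
	
	// checkend reports whether the certificate at path expires within the given window,
	// mirroring `openssl x509 -checkend <seconds>`. Illustrative helper, not minikube code.
	func checkend(path string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		// Expiring if "now + window" is already past NotAfter.
		return time.Now().Add(window).After(cert.NotAfter), nil
	}
	
	func main() {
		expiring, err := checkend("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("expires within 24h:", expiring)
	}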
	I0920 18:18:23.371899   75577 kubeadm.go:392] StartCluster: {Name:old-k8s-version-744025 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-744025 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.207 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:18:23.371981   75577 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 18:18:23.372027   75577 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 18:18:23.419904   75577 cri.go:89] found id: ""
	I0920 18:18:23.419982   75577 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 18:18:23.431761   75577 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0920 18:18:23.431782   75577 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0920 18:18:23.431833   75577 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0920 18:18:23.444358   75577 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0920 18:18:23.445545   75577 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-744025" does not appear in /home/jenkins/minikube-integration/19672-8777/kubeconfig
	I0920 18:18:23.446531   75577 kubeconfig.go:62] /home/jenkins/minikube-integration/19672-8777/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-744025" cluster setting kubeconfig missing "old-k8s-version-744025" context setting]
	I0920 18:18:23.447808   75577 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/kubeconfig: {Name:mkf32a4c736808e023459b2f0e40188618a38db1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:18:23.568927   75577 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0920 18:18:23.579991   75577 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.207
	I0920 18:18:23.580025   75577 kubeadm.go:1160] stopping kube-system containers ...
	I0920 18:18:23.580038   75577 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0920 18:18:23.580097   75577 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 18:18:23.625568   75577 cri.go:89] found id: ""
	I0920 18:18:23.625648   75577 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0920 18:18:23.643938   75577 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 18:18:23.654375   75577 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 18:18:23.654398   75577 kubeadm.go:157] found existing configuration files:
	
	I0920 18:18:23.654453   75577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 18:18:23.664335   75577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 18:18:23.664409   75577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 18:18:23.674996   75577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 18:18:23.685310   75577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 18:18:23.685401   75577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 18:18:23.696241   75577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 18:18:23.706386   75577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 18:18:23.706465   75577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 18:18:23.716491   75577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 18:18:23.726566   75577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 18:18:23.726626   75577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 18:18:23.738576   75577 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 18:18:23.749510   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:18:23.877503   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:18:24.789322   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:18:25.054969   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:18:25.152117   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:18:25.249140   75577 api_server.go:52] waiting for apiserver process to appear ...
	I0920 18:18:25.249245   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:25.749427   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:26.249360   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:26.749895   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:27.249636   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:25.194678   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:27.693680   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:27.524378   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:29.524998   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:32.025310   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:29.661278   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:29.661743   74753 main.go:141] libmachine: (no-preload-956403) DBG | unable to find current IP address of domain no-preload-956403 in network mk-no-preload-956403
	I0920 18:18:29.661762   74753 main.go:141] libmachine: (no-preload-956403) DBG | I0920 18:18:29.661714   77223 retry.go:31] will retry after 3.154777794s: waiting for machine to come up
	I0920 18:18:27.749629   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:28.250139   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:28.749691   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:29.249709   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:29.749550   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:30.250186   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:30.749562   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:31.249622   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:31.749925   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:32.250324   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:30.191419   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:32.191873   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:32.820331   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:32.820870   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has current primary IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:32.820895   74753 main.go:141] libmachine: (no-preload-956403) Found IP for machine: 192.168.50.47
	I0920 18:18:32.820908   74753 main.go:141] libmachine: (no-preload-956403) Reserving static IP address...
	I0920 18:18:32.821313   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "no-preload-956403", mac: "52:54:00:b6:13:30", ip: "192.168.50.47"} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:32.821339   74753 main.go:141] libmachine: (no-preload-956403) Reserved static IP address: 192.168.50.47
	I0920 18:18:32.821355   74753 main.go:141] libmachine: (no-preload-956403) DBG | skip adding static IP to network mk-no-preload-956403 - found existing host DHCP lease matching {name: "no-preload-956403", mac: "52:54:00:b6:13:30", ip: "192.168.50.47"}
	I0920 18:18:32.821372   74753 main.go:141] libmachine: (no-preload-956403) DBG | Getting to WaitForSSH function...
	I0920 18:18:32.821389   74753 main.go:141] libmachine: (no-preload-956403) Waiting for SSH to be available...
	I0920 18:18:32.823590   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:32.823894   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:32.823928   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:32.824019   74753 main.go:141] libmachine: (no-preload-956403) DBG | Using SSH client type: external
	I0920 18:18:32.824046   74753 main.go:141] libmachine: (no-preload-956403) DBG | Using SSH private key: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/no-preload-956403/id_rsa (-rw-------)
	I0920 18:18:32.824077   74753 main.go:141] libmachine: (no-preload-956403) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.47 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19672-8777/.minikube/machines/no-preload-956403/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 18:18:32.824090   74753 main.go:141] libmachine: (no-preload-956403) DBG | About to run SSH command:
	I0920 18:18:32.824102   74753 main.go:141] libmachine: (no-preload-956403) DBG | exit 0
	I0920 18:18:32.950122   74753 main.go:141] libmachine: (no-preload-956403) DBG | SSH cmd err, output: <nil>: 
	I0920 18:18:32.950537   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetConfigRaw
	I0920 18:18:32.951187   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetIP
	I0920 18:18:32.953731   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:32.954073   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:32.954102   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:32.954365   74753 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/no-preload-956403/config.json ...
	I0920 18:18:32.954603   74753 machine.go:93] provisionDockerMachine start ...
	I0920 18:18:32.954621   74753 main.go:141] libmachine: (no-preload-956403) Calling .DriverName
	I0920 18:18:32.954814   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHHostname
	I0920 18:18:32.957049   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:32.957379   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:32.957425   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:32.957538   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHPort
	I0920 18:18:32.957717   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHKeyPath
	I0920 18:18:32.957920   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHKeyPath
	I0920 18:18:32.958094   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHUsername
	I0920 18:18:32.958282   74753 main.go:141] libmachine: Using SSH client type: native
	I0920 18:18:32.958494   74753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.47 22 <nil> <nil>}
	I0920 18:18:32.958506   74753 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 18:18:33.058187   74753 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0920 18:18:33.058222   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetMachineName
	I0920 18:18:33.058477   74753 buildroot.go:166] provisioning hostname "no-preload-956403"
	I0920 18:18:33.058508   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetMachineName
	I0920 18:18:33.058668   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHHostname
	I0920 18:18:33.061310   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.061616   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:33.061649   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.061783   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHPort
	I0920 18:18:33.061961   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHKeyPath
	I0920 18:18:33.062089   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHKeyPath
	I0920 18:18:33.062197   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHUsername
	I0920 18:18:33.062340   74753 main.go:141] libmachine: Using SSH client type: native
	I0920 18:18:33.062536   74753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.47 22 <nil> <nil>}
	I0920 18:18:33.062553   74753 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-956403 && echo "no-preload-956403" | sudo tee /etc/hostname
	I0920 18:18:33.175825   74753 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-956403
	
	I0920 18:18:33.175860   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHHostname
	I0920 18:18:33.178483   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.178749   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:33.178769   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.178904   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHPort
	I0920 18:18:33.179077   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHKeyPath
	I0920 18:18:33.179226   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHKeyPath
	I0920 18:18:33.179366   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHUsername
	I0920 18:18:33.179545   74753 main.go:141] libmachine: Using SSH client type: native
	I0920 18:18:33.179710   74753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.47 22 <nil> <nil>}
	I0920 18:18:33.179726   74753 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-956403' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-956403/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-956403' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 18:18:33.290675   74753 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 18:18:33.290701   74753 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19672-8777/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-8777/.minikube}
	I0920 18:18:33.290722   74753 buildroot.go:174] setting up certificates
	I0920 18:18:33.290735   74753 provision.go:84] configureAuth start
	I0920 18:18:33.290747   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetMachineName
	I0920 18:18:33.291015   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetIP
	I0920 18:18:33.293810   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.294244   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:33.294282   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.294337   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHHostname
	I0920 18:18:33.296376   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.296749   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:33.296776   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.296930   74753 provision.go:143] copyHostCerts
	I0920 18:18:33.296985   74753 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem, removing ...
	I0920 18:18:33.296995   74753 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem
	I0920 18:18:33.297048   74753 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem (1082 bytes)
	I0920 18:18:33.297166   74753 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem, removing ...
	I0920 18:18:33.297177   74753 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem
	I0920 18:18:33.297211   74753 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem (1123 bytes)
	I0920 18:18:33.297287   74753 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem, removing ...
	I0920 18:18:33.297296   74753 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem
	I0920 18:18:33.297320   74753 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem (1675 bytes)
	I0920 18:18:33.297387   74753 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem org=jenkins.no-preload-956403 san=[127.0.0.1 192.168.50.47 localhost minikube no-preload-956403]
	I0920 18:18:33.366768   74753 provision.go:177] copyRemoteCerts
	I0920 18:18:33.366830   74753 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 18:18:33.366852   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHHostname
	I0920 18:18:33.369441   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.369755   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:33.369787   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.369958   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHPort
	I0920 18:18:33.370127   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHKeyPath
	I0920 18:18:33.370293   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHUsername
	I0920 18:18:33.370490   74753 sshutil.go:53] new ssh client: &{IP:192.168.50.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/no-preload-956403/id_rsa Username:docker}
	I0920 18:18:33.452070   74753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0920 18:18:33.477564   74753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0920 18:18:33.501002   74753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 18:18:33.526725   74753 provision.go:87] duration metric: took 235.975552ms to configureAuth
	I0920 18:18:33.526755   74753 buildroot.go:189] setting minikube options for container-runtime
	I0920 18:18:33.526943   74753 config.go:182] Loaded profile config "no-preload-956403": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:18:33.527011   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHHostname
	I0920 18:18:33.529870   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.530338   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:33.530371   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.530485   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHPort
	I0920 18:18:33.530708   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHKeyPath
	I0920 18:18:33.530897   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHKeyPath
	I0920 18:18:33.531057   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHUsername
	I0920 18:18:33.531276   74753 main.go:141] libmachine: Using SSH client type: native
	I0920 18:18:33.531497   74753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.47 22 <nil> <nil>}
	I0920 18:18:33.531519   74753 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 18:18:33.758711   74753 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 18:18:33.758739   74753 machine.go:96] duration metric: took 804.122904ms to provisionDockerMachine
	I0920 18:18:33.758753   74753 start.go:293] postStartSetup for "no-preload-956403" (driver="kvm2")
	I0920 18:18:33.758771   74753 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 18:18:33.758796   74753 main.go:141] libmachine: (no-preload-956403) Calling .DriverName
	I0920 18:18:33.759207   74753 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 18:18:33.759262   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHHostname
	I0920 18:18:33.762524   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.762991   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:33.763027   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.763207   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHPort
	I0920 18:18:33.763471   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHKeyPath
	I0920 18:18:33.763643   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHUsername
	I0920 18:18:33.763861   74753 sshutil.go:53] new ssh client: &{IP:192.168.50.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/no-preload-956403/id_rsa Username:docker}
	I0920 18:18:33.844910   74753 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 18:18:33.849111   74753 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 18:18:33.849137   74753 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8777/.minikube/addons for local assets ...
	I0920 18:18:33.849205   74753 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8777/.minikube/files for local assets ...
	I0920 18:18:33.849280   74753 filesync.go:149] local asset: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem -> 159732.pem in /etc/ssl/certs
	I0920 18:18:33.849367   74753 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 18:18:33.858757   74753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem --> /etc/ssl/certs/159732.pem (1708 bytes)
	I0920 18:18:33.883430   74753 start.go:296] duration metric: took 124.663757ms for postStartSetup
	I0920 18:18:33.883468   74753 fix.go:56] duration metric: took 19.104481875s for fixHost
	I0920 18:18:33.883488   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHHostname
	I0920 18:18:33.886703   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.887047   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:33.887076   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.887276   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHPort
	I0920 18:18:33.887502   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHKeyPath
	I0920 18:18:33.887685   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHKeyPath
	I0920 18:18:33.887816   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHUsername
	I0920 18:18:33.887962   74753 main.go:141] libmachine: Using SSH client type: native
	I0920 18:18:33.888137   74753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.47 22 <nil> <nil>}
	I0920 18:18:33.888148   74753 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 18:18:33.994635   74753 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726856313.964367484
	
	I0920 18:18:33.994660   74753 fix.go:216] guest clock: 1726856313.964367484
	I0920 18:18:33.994670   74753 fix.go:229] Guest: 2024-09-20 18:18:33.964367484 +0000 UTC Remote: 2024-09-20 18:18:33.883472234 +0000 UTC m=+356.278282007 (delta=80.89525ms)
	I0920 18:18:33.994695   74753 fix.go:200] guest clock delta is within tolerance: 80.89525ms
	I0920 18:18:33.994701   74753 start.go:83] releasing machines lock for "no-preload-956403", held for 19.215743841s
	I0920 18:18:33.994726   74753 main.go:141] libmachine: (no-preload-956403) Calling .DriverName
	I0920 18:18:33.994976   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetIP
	I0920 18:18:33.997685   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.998089   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:33.998114   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.998291   74753 main.go:141] libmachine: (no-preload-956403) Calling .DriverName
	I0920 18:18:33.998765   74753 main.go:141] libmachine: (no-preload-956403) Calling .DriverName
	I0920 18:18:33.998923   74753 main.go:141] libmachine: (no-preload-956403) Calling .DriverName
	I0920 18:18:33.999007   74753 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 18:18:33.999054   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHHostname
	I0920 18:18:33.999142   74753 ssh_runner.go:195] Run: cat /version.json
	I0920 18:18:33.999168   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHHostname
	I0920 18:18:34.001725   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:34.001891   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:34.002197   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:34.002225   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:34.002369   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHPort
	I0920 18:18:34.002424   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:34.002452   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:34.002533   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHKeyPath
	I0920 18:18:34.002625   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHPort
	I0920 18:18:34.002695   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHUsername
	I0920 18:18:34.002744   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHKeyPath
	I0920 18:18:34.002813   74753 sshutil.go:53] new ssh client: &{IP:192.168.50.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/no-preload-956403/id_rsa Username:docker}
	I0920 18:18:34.002888   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHUsername
	I0920 18:18:34.003032   74753 sshutil.go:53] new ssh client: &{IP:192.168.50.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/no-preload-956403/id_rsa Username:docker}
	I0920 18:18:34.079092   74753 ssh_runner.go:195] Run: systemctl --version
	I0920 18:18:34.112066   74753 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 18:18:34.257205   74753 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 18:18:34.265774   74753 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 18:18:34.265871   74753 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 18:18:34.285222   74753 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
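Note: the two steps above disable any pre-existing bridge/podman CNI configs by renaming them with a ".mk_disabled" suffix rather than deleting them. A minimal stand-alone sketch of the same rename (file paths and name patterns taken from the log; the rest is illustrative):

    # Rename conflicting CNI configs so the runtime's own bridge config wins; reversible by renaming back.
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;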
	I0920 18:18:34.285247   74753 start.go:495] detecting cgroup driver to use...
	I0920 18:18:34.285320   74753 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 18:18:34.302192   74753 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 18:18:34.316624   74753 docker.go:217] disabling cri-docker service (if available) ...
	I0920 18:18:34.316695   74753 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 18:18:34.331098   74753 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 18:18:34.345433   74753 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 18:18:34.481886   74753 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 18:18:34.644442   74753 docker.go:233] disabling docker service ...
	I0920 18:18:34.644530   74753 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 18:18:34.658714   74753 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 18:18:34.671506   74753 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 18:18:34.809548   74753 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 18:18:34.957438   74753 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 18:18:34.972102   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 18:18:34.993129   74753 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 18:18:34.993199   74753 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:18:35.004196   74753 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 18:18:35.004273   74753 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:18:35.015258   74753 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:18:35.026240   74753 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:18:35.037658   74753 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 18:18:35.048637   74753 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:18:35.059494   74753 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:18:35.077264   74753 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
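Note: the sed chain above edits CRI-O's drop-in in place: pause image, cgroupfs as cgroup manager, conmon cgroup, and the unprivileged-port sysctl. A hedged sketch of the resulting fragment and a way to confirm it (file path and values from the log):

    # Expected /etc/crio/crio.conf.d/02-crio.conf fragment after the edits above:
    #   pause_image = "registry.k8s.io/pause:3.10"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   default_sysctls = [
    #     "net.ipv4.ip_unprivileged_port_start=0",
    #   ]
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf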
	I0920 18:18:35.087777   74753 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 18:18:35.097812   74753 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 18:18:35.097947   74753 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 18:18:35.112200   74753 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
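Note: the status-255 failure above only means the bridge-netfilter sysctl does not exist yet, so the module is loaded and IPv4 forwarding is switched on before CRI-O is restarted. The same one-shot step as a sketch (not persisted across reboots):

    sudo modprobe br_netfilter                        # creates /proc/sys/net/bridge/*
    echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward >/dev/null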
	I0920 18:18:35.123381   74753 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:18:35.262386   74753 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 18:18:35.361152   74753 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 18:18:35.361257   74753 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 18:18:35.366321   74753 start.go:563] Will wait 60s for crictl version
	I0920 18:18:35.366390   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:18:35.370379   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 18:18:35.415080   74753 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 18:18:35.415182   74753 ssh_runner.go:195] Run: crio --version
	I0920 18:18:35.446934   74753 ssh_runner.go:195] Run: crio --version
	I0920 18:18:35.478823   74753 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 18:18:34.026138   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:36.525133   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:35.479956   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetIP
	I0920 18:18:35.483428   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:35.483820   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:35.483848   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:35.484079   74753 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0920 18:18:35.488686   74753 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
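Note: the /etc/hosts rewrite above is the usual idempotent pattern: drop any existing line for the name, append a fresh one, and copy the temp file back as root. A generalized sketch (the helper name is illustrative; IP and hostname come from the log):

    add_hosts_entry() {                 # illustrative helper, not part of minikube
      local ip="$1" name="$2" tmp="/tmp/hosts.$$"
      { grep -v $'\t'"${name}"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > "$tmp"
      sudo cp "$tmp" /etc/hosts && rm -f "$tmp"
    }
    add_hosts_entry 192.168.50.1 host.minikube.internal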
	I0920 18:18:35.501184   74753 kubeadm.go:883] updating cluster {Name:no-preload-956403 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-956403 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.47 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 18:18:35.501348   74753 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 18:18:35.501391   74753 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:18:35.535377   74753 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0920 18:18:35.535400   74753 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0920 18:18:35.535466   74753 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 18:18:35.535499   74753 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0920 18:18:35.535510   74753 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0920 18:18:35.535534   74753 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0920 18:18:35.535538   74753 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0920 18:18:35.535513   74753 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0920 18:18:35.535625   74753 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0920 18:18:35.535483   74753 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:18:35.537100   74753 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 18:18:35.537104   74753 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0920 18:18:35.537126   74753 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0920 18:18:35.537091   74753 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0920 18:18:35.537164   74753 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0920 18:18:35.537180   74753 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0920 18:18:35.537191   74753 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0920 18:18:35.537105   74753 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:18:35.770046   74753 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0920 18:18:35.796364   74753 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I0920 18:18:35.800776   74753 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I0920 18:18:35.802291   74753 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I0920 18:18:35.845969   74753 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0920 18:18:35.860263   74753 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I0920 18:18:35.892349   74753 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 18:18:35.919269   74753 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I0920 18:18:35.919323   74753 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I0920 18:18:35.919375   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:18:35.929045   74753 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I0920 18:18:35.929096   74753 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I0920 18:18:35.929143   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:18:35.929181   74753 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I0920 18:18:35.929235   74753 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I0920 18:18:35.929299   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:18:35.968513   74753 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0920 18:18:35.968557   74753 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0920 18:18:35.968553   74753 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I0920 18:18:35.968589   74753 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I0920 18:18:35.968612   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:18:35.968636   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:18:35.984335   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0920 18:18:35.984366   74753 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I0920 18:18:35.984379   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0920 18:18:35.984403   74753 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 18:18:35.984433   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0920 18:18:35.984450   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:18:35.984474   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0920 18:18:35.984505   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0920 18:18:36.106947   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0920 18:18:36.106964   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 18:18:36.106954   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0920 18:18:36.107025   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0920 18:18:36.107112   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0920 18:18:36.107142   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0920 18:18:36.231892   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0920 18:18:36.231989   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0920 18:18:36.232026   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0920 18:18:36.232143   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0920 18:18:36.232193   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0920 18:18:36.232281   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 18:18:36.355534   74753 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I0920 18:18:36.355632   74753 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0920 18:18:36.355650   74753 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I0920 18:18:36.355730   74753 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I0920 18:18:36.358584   74753 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I0920 18:18:36.358653   74753 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I0920 18:18:36.358678   74753 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0920 18:18:36.358677   74753 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0920 18:18:36.358723   74753 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0920 18:18:36.358751   74753 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0920 18:18:36.367463   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 18:18:36.372389   74753 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I0920 18:18:36.372412   74753 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I0920 18:18:36.372457   74753 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I0920 18:18:36.372580   74753 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I0920 18:18:36.374496   74753 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0920 18:18:36.374538   74753 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I0920 18:18:36.374638   74753 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I0920 18:18:36.410860   74753 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0920 18:18:36.410961   74753 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0920 18:18:36.738003   74753 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:18:32.749877   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:33.249391   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:33.749510   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:34.250003   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:34.749967   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:35.249953   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:35.749992   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:36.250161   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:36.750339   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:37.249600   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:34.192782   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:36.692485   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:38.692626   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:39.024337   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:41.025364   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:38.480788   74753 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.108302901s)
	I0920 18:18:38.480819   74753 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I0920 18:18:38.480839   74753 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I0920 18:18:38.480869   74753 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1: (2.069867717s)
	I0920 18:18:38.480893   74753 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I0920 18:18:38.480896   74753 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I0920 18:18:38.480922   74753 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.742864605s)
	I0920 18:18:38.480970   74753 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0920 18:18:38.480993   74753 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:18:38.481031   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:18:40.340748   74753 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (1.85982838s)
	I0920 18:18:40.340782   74753 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I0920 18:18:40.340790   74753 ssh_runner.go:235] Completed: which crictl: (1.859743083s)
	I0920 18:18:40.340812   74753 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0920 18:18:40.340850   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:18:40.340869   74753 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0920 18:18:37.750269   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:38.249855   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:38.749845   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:39.249342   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:39.750306   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:40.250103   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:40.750346   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:41.250134   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:41.750136   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:42.249945   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:41.191300   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:43.193536   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:43.025860   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:45.527119   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:44.427309   74753 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (4.086436233s)
	I0920 18:18:44.427365   74753 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (4.086473186s)
	I0920 18:18:44.427395   74753 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0920 18:18:44.427400   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:18:44.427430   74753 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0920 18:18:44.427510   74753 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0920 18:18:46.393094   74753 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (1.965557326s)
	I0920 18:18:46.393133   74753 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I0920 18:18:46.393163   74753 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0920 18:18:46.393170   74753 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.965750119s)
	I0920 18:18:46.393216   74753 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0920 18:18:46.393241   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:18:42.749980   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:43.249416   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:43.750087   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:44.249972   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:44.749649   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:45.250128   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:45.750346   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:46.249611   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:46.749566   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:47.249814   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:45.193713   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:47.691882   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:48.027047   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:50.527076   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:47.771559   74753 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (1.37829872s)
	I0920 18:18:47.771595   74753 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I0920 18:18:47.771607   74753 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.378351302s)
	I0920 18:18:47.771618   74753 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0920 18:18:47.771645   74753 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0920 18:18:47.771668   74753 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0920 18:18:47.771726   74753 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0920 18:18:49.544117   74753 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.772423068s)
	I0920 18:18:49.544147   74753 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I0920 18:18:49.544159   74753 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.772401848s)
	I0920 18:18:49.544187   74753 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0920 18:18:49.544199   74753 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0920 18:18:49.544275   74753 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0920 18:18:50.198691   74753 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0920 18:18:50.198744   74753 cache_images.go:123] Successfully loaded all cached images
	I0920 18:18:50.198752   74753 cache_images.go:92] duration metric: took 14.66333409s to LoadCachedImages
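Note: the roughly 14.7 s spent in LoadCachedImages above is the fallback path taken when no preload tarball matches: each required image is taken from the host-side cache, copied into /var/lib/minikube/images on the guest if missing, loaded with podman, and the stale tag removed via crictl. A compressed sketch of that loop (image list and paths from the log; the loop itself is illustrative):

    for tar in kube-apiserver_v1.31.1 kube-controller-manager_v1.31.1 kube-scheduler_v1.31.1 \
               kube-proxy_v1.31.1 etcd_3.5.15-0 coredns_v1.11.3 storage-provisioner_v5; do
      # an scp from the host cache happens first when the file is absent on the guest
      sudo podman load -i "/var/lib/minikube/images/$tar"
    done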
	I0920 18:18:50.198766   74753 kubeadm.go:934] updating node { 192.168.50.47 8443 v1.31.1 crio true true} ...
	I0920 18:18:50.198900   74753 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-956403 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.47
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-956403 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
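Note: the kubelet unit shown above is installed as a systemd drop-in (the log writes it to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below). A sketch of the same drop-in written by hand, with paths and flags taken from the log:

    sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
    [Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-956403 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.47

    [Install]
    EOF
    sudo systemctl daemon-reload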
	I0920 18:18:50.198988   74753 ssh_runner.go:195] Run: crio config
	I0920 18:18:50.249876   74753 cni.go:84] Creating CNI manager for ""
	I0920 18:18:50.249901   74753 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 18:18:50.249915   74753 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 18:18:50.249942   74753 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.47 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-956403 NodeName:no-preload-956403 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.47"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.47 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 18:18:50.250150   74753 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.47
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-956403"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.47
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.47"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 18:18:50.250264   74753 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 18:18:50.262805   74753 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 18:18:50.262886   74753 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 18:18:50.272958   74753 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0920 18:18:50.290269   74753 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 18:18:50.306981   74753 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0920 18:18:50.324360   74753 ssh_runner.go:195] Run: grep 192.168.50.47	control-plane.minikube.internal$ /etc/hosts
	I0920 18:18:50.328382   74753 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.47	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 18:18:50.341021   74753 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:18:50.462105   74753 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:18:50.478780   74753 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/no-preload-956403 for IP: 192.168.50.47
	I0920 18:18:50.478804   74753 certs.go:194] generating shared ca certs ...
	I0920 18:18:50.478824   74753 certs.go:226] acquiring lock for ca certs: {Name:mkc7ef6c737c6bdc3fdd9dcff8f57029c020d8f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:18:50.479010   74753 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key
	I0920 18:18:50.479069   74753 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key
	I0920 18:18:50.479082   74753 certs.go:256] generating profile certs ...
	I0920 18:18:50.479188   74753 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/no-preload-956403/client.key
	I0920 18:18:50.479270   74753 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/no-preload-956403/apiserver.key.859cf278
	I0920 18:18:50.479335   74753 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/no-preload-956403/proxy-client.key
	I0920 18:18:50.479491   74753 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973.pem (1338 bytes)
	W0920 18:18:50.479534   74753 certs.go:480] ignoring /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973_empty.pem, impossibly tiny 0 bytes
	I0920 18:18:50.479549   74753 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 18:18:50.479596   74753 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem (1082 bytes)
	I0920 18:18:50.479633   74753 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem (1123 bytes)
	I0920 18:18:50.479668   74753 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem (1675 bytes)
	I0920 18:18:50.479771   74753 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem (1708 bytes)
	I0920 18:18:50.480696   74753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 18:18:50.514559   74753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 18:18:50.548790   74753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 18:18:50.581384   74753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0920 18:18:50.609138   74753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/no-preload-956403/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0920 18:18:50.641098   74753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/no-preload-956403/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 18:18:50.680479   74753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/no-preload-956403/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 18:18:50.705168   74753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/no-preload-956403/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0920 18:18:50.727603   74753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem --> /usr/share/ca-certificates/159732.pem (1708 bytes)
	I0920 18:18:50.750272   74753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 18:18:50.776117   74753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973.pem --> /usr/share/ca-certificates/15973.pem (1338 bytes)
	I0920 18:18:50.799799   74753 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 18:18:50.816250   74753 ssh_runner.go:195] Run: openssl version
	I0920 18:18:50.821680   74753 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/159732.pem && ln -fs /usr/share/ca-certificates/159732.pem /etc/ssl/certs/159732.pem"
	I0920 18:18:50.832295   74753 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/159732.pem
	I0920 18:18:50.836833   74753 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 17:04 /usr/share/ca-certificates/159732.pem
	I0920 18:18:50.836896   74753 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/159732.pem
	I0920 18:18:50.842626   74753 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/159732.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 18:18:50.853297   74753 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 18:18:50.864400   74753 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:18:50.868951   74753 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:18:50.869011   74753 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:18:50.874547   74753 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 18:18:50.885615   74753 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15973.pem && ln -fs /usr/share/ca-certificates/15973.pem /etc/ssl/certs/15973.pem"
	I0920 18:18:50.896960   74753 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15973.pem
	I0920 18:18:50.901798   74753 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 17:04 /usr/share/ca-certificates/15973.pem
	I0920 18:18:50.901879   74753 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15973.pem
	I0920 18:18:50.907920   74753 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15973.pem /etc/ssl/certs/51391683.0"
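Note: the symlink names created above (3ec20f2e.0, b5213941.0, 51391683.0) are OpenSSL subject-hash values; that is how the system trust directory looks up a CA by hash. A sketch showing where one of them comes from (cert path and link name from the log):

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints b5213941 for this CA
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"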
	I0920 18:18:50.919345   74753 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 18:18:50.923671   74753 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 18:18:50.929701   74753 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 18:18:50.935649   74753 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 18:18:50.942162   74753 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 18:18:50.948012   74753 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 18:18:50.954097   74753 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
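Note: each "-checkend 86400" call above asks whether the certificate will still be valid 86400 seconds (24 hours) from now; exit status 0 means it will, non-zero would trigger regeneration. A minimal sketch:

    if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
      echo "still valid for at least 24h"
    else
      echo "expires within 24h - would be regenerated"
    fi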
	I0920 18:18:50.960442   74753 kubeadm.go:392] StartCluster: {Name:no-preload-956403 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-956403 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.47 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:18:50.960535   74753 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 18:18:50.960600   74753 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 18:18:50.998528   74753 cri.go:89] found id: ""
	I0920 18:18:50.998608   74753 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 18:18:51.008964   74753 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0920 18:18:51.008985   74753 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0920 18:18:51.009043   74753 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0920 18:18:51.018457   74753 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0920 18:18:51.019553   74753 kubeconfig.go:125] found "no-preload-956403" server: "https://192.168.50.47:8443"
	I0920 18:18:51.021712   74753 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0920 18:18:51.033439   74753 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.47
	I0920 18:18:51.033470   74753 kubeadm.go:1160] stopping kube-system containers ...
	I0920 18:18:51.033481   74753 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0920 18:18:51.033538   74753 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 18:18:51.070049   74753 cri.go:89] found id: ""
	I0920 18:18:51.070137   74753 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0920 18:18:51.087472   74753 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 18:18:51.098582   74753 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 18:18:51.098606   74753 kubeadm.go:157] found existing configuration files:
	
	I0920 18:18:51.098654   74753 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 18:18:51.107201   74753 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 18:18:51.107276   74753 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 18:18:51.116058   74753 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 18:18:51.124563   74753 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 18:18:51.124630   74753 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 18:18:51.134174   74753 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 18:18:51.142880   74753 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 18:18:51.142944   74753 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 18:18:51.152181   74753 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 18:18:51.161942   74753 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 18:18:51.162012   74753 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 18:18:51.171615   74753 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 18:18:51.180728   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:18:51.292140   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:18:52.019018   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:18:52.239327   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:18:52.317900   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:18:52.433910   74753 api_server.go:52] waiting for apiserver process to appear ...
	I0920 18:18:52.433991   74753 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:47.749487   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:48.249612   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:48.750324   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:49.250006   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:49.749667   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:50.249802   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:50.749597   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:51.250236   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:51.750203   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:52.250132   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:49.693075   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:52.192547   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:53.024979   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:55.025225   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:52.934956   74753 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:53.434681   74753 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:53.451797   74753 api_server.go:72] duration metric: took 1.017867426s to wait for apiserver process to appear ...
	I0920 18:18:53.451828   74753 api_server.go:88] waiting for apiserver healthz status ...
	I0920 18:18:53.451851   74753 api_server.go:253] Checking apiserver healthz at https://192.168.50.47:8443/healthz ...
	I0920 18:18:53.452366   74753 api_server.go:269] stopped: https://192.168.50.47:8443/healthz: Get "https://192.168.50.47:8443/healthz": dial tcp 192.168.50.47:8443: connect: connection refused
	I0920 18:18:53.952175   74753 api_server.go:253] Checking apiserver healthz at https://192.168.50.47:8443/healthz ...
	I0920 18:18:55.972801   74753 api_server.go:279] https://192.168.50.47:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 18:18:55.972835   74753 api_server.go:103] status: https://192.168.50.47:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 18:18:55.972854   74753 api_server.go:253] Checking apiserver healthz at https://192.168.50.47:8443/healthz ...
	I0920 18:18:56.018532   74753 api_server.go:279] https://192.168.50.47:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 18:18:56.018563   74753 api_server.go:103] status: https://192.168.50.47:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 18:18:56.452127   74753 api_server.go:253] Checking apiserver healthz at https://192.168.50.47:8443/healthz ...
	I0920 18:18:56.459752   74753 api_server.go:279] https://192.168.50.47:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 18:18:56.459796   74753 api_server.go:103] status: https://192.168.50.47:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 18:18:56.952284   74753 api_server.go:253] Checking apiserver healthz at https://192.168.50.47:8443/healthz ...
	I0920 18:18:56.959496   74753 api_server.go:279] https://192.168.50.47:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 18:18:56.959553   74753 api_server.go:103] status: https://192.168.50.47:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 18:18:57.452049   74753 api_server.go:253] Checking apiserver healthz at https://192.168.50.47:8443/healthz ...
	I0920 18:18:57.456586   74753 api_server.go:279] https://192.168.50.47:8443/healthz returned 200:
	ok
	I0920 18:18:57.463754   74753 api_server.go:141] control plane version: v1.31.1
	I0920 18:18:57.463785   74753 api_server.go:131] duration metric: took 4.011948835s to wait for apiserver health ...
	I0920 18:18:57.463795   74753 cni.go:84] Creating CNI manager for ""
	I0920 18:18:57.463804   74753 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 18:18:57.465916   74753 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 18:18:57.467416   74753 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 18:18:57.485487   74753 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0920 18:18:57.529426   74753 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 18:18:57.540046   74753 system_pods.go:59] 8 kube-system pods found
	I0920 18:18:57.540100   74753 system_pods.go:61] "coredns-7c65d6cfc9-j2t5h" [dd2636f1-3200-4f22-957c-046277c9be8c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0920 18:18:57.540113   74753 system_pods.go:61] "etcd-no-preload-956403" [daf84be0-e673-4685-a998-47ec64664484] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0920 18:18:57.540127   74753 system_pods.go:61] "kube-apiserver-no-preload-956403" [416217e6-3c91-49ec-bd69-812a49b4afd9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0920 18:18:57.540139   74753 system_pods.go:61] "kube-controller-manager-no-preload-956403" [30fae740-b073-4619-bca5-010ca90b2667] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0920 18:18:57.540148   74753 system_pods.go:61] "kube-proxy-sz4bm" [269600fb-ef65-4b17-8c07-76c79e35f5a8] Running
	I0920 18:18:57.540160   74753 system_pods.go:61] "kube-scheduler-no-preload-956403" [46432927-e506-4665-8825-c5f6a2ed0458] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0920 18:18:57.540174   74753 system_pods.go:61] "metrics-server-6867b74b74-tfsff" [599ba06a-6d4d-483b-b390-a3595a814757] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 18:18:57.540184   74753 system_pods.go:61] "storage-provisioner" [df661627-7d32-4467-9805-1ae65d4fa35c] Running
	I0920 18:18:57.540196   74753 system_pods.go:74] duration metric: took 10.749595ms to wait for pod list to return data ...
	I0920 18:18:57.540208   74753 node_conditions.go:102] verifying NodePressure condition ...
	I0920 18:18:57.544401   74753 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 18:18:57.544438   74753 node_conditions.go:123] node cpu capacity is 2
	I0920 18:18:57.544451   74753 node_conditions.go:105] duration metric: took 4.237618ms to run NodePressure ...
	I0920 18:18:57.544471   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:18:52.749468   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:53.249519   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:53.750056   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:54.250286   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:54.750240   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:55.249980   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:55.750192   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:56.249940   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:56.749289   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:57.249615   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:54.193519   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:56.693152   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:57.818736   74753 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0920 18:18:57.823058   74753 kubeadm.go:739] kubelet initialised
	I0920 18:18:57.823082   74753 kubeadm.go:740] duration metric: took 4.316691ms waiting for restarted kubelet to initialise ...
	I0920 18:18:57.823091   74753 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:18:57.827594   74753 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-j2t5h" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:57.833207   74753 pod_ready.go:98] node "no-preload-956403" hosting pod "coredns-7c65d6cfc9-j2t5h" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956403" has status "Ready":"False"
	I0920 18:18:57.833240   74753 pod_ready.go:82] duration metric: took 5.620335ms for pod "coredns-7c65d6cfc9-j2t5h" in "kube-system" namespace to be "Ready" ...
	E0920 18:18:57.833252   74753 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-956403" hosting pod "coredns-7c65d6cfc9-j2t5h" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956403" has status "Ready":"False"
	I0920 18:18:57.833269   74753 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-956403" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:57.838687   74753 pod_ready.go:98] node "no-preload-956403" hosting pod "etcd-no-preload-956403" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956403" has status "Ready":"False"
	I0920 18:18:57.838723   74753 pod_ready.go:82] duration metric: took 5.441795ms for pod "etcd-no-preload-956403" in "kube-system" namespace to be "Ready" ...
	E0920 18:18:57.838737   74753 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-956403" hosting pod "etcd-no-preload-956403" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956403" has status "Ready":"False"
	I0920 18:18:57.838747   74753 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-956403" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:57.848481   74753 pod_ready.go:98] node "no-preload-956403" hosting pod "kube-apiserver-no-preload-956403" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956403" has status "Ready":"False"
	I0920 18:18:57.848510   74753 pod_ready.go:82] duration metric: took 9.754053ms for pod "kube-apiserver-no-preload-956403" in "kube-system" namespace to be "Ready" ...
	E0920 18:18:57.848520   74753 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-956403" hosting pod "kube-apiserver-no-preload-956403" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956403" has status "Ready":"False"
	I0920 18:18:57.848530   74753 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-956403" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:57.934356   74753 pod_ready.go:98] node "no-preload-956403" hosting pod "kube-controller-manager-no-preload-956403" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956403" has status "Ready":"False"
	I0920 18:18:57.934383   74753 pod_ready.go:82] duration metric: took 85.842265ms for pod "kube-controller-manager-no-preload-956403" in "kube-system" namespace to be "Ready" ...
	E0920 18:18:57.934392   74753 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-956403" hosting pod "kube-controller-manager-no-preload-956403" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956403" has status "Ready":"False"
	I0920 18:18:57.934397   74753 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-sz4bm" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:58.332467   74753 pod_ready.go:98] node "no-preload-956403" hosting pod "kube-proxy-sz4bm" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956403" has status "Ready":"False"
	I0920 18:18:58.332495   74753 pod_ready.go:82] duration metric: took 398.088821ms for pod "kube-proxy-sz4bm" in "kube-system" namespace to be "Ready" ...
	E0920 18:18:58.332504   74753 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-956403" hosting pod "kube-proxy-sz4bm" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956403" has status "Ready":"False"
	I0920 18:18:58.332510   74753 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-956403" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:58.732836   74753 pod_ready.go:98] node "no-preload-956403" hosting pod "kube-scheduler-no-preload-956403" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956403" has status "Ready":"False"
	I0920 18:18:58.732859   74753 pod_ready.go:82] duration metric: took 400.343274ms for pod "kube-scheduler-no-preload-956403" in "kube-system" namespace to be "Ready" ...
	E0920 18:18:58.732868   74753 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-956403" hosting pod "kube-scheduler-no-preload-956403" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956403" has status "Ready":"False"
	I0920 18:18:58.732874   74753 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:59.132465   74753 pod_ready.go:98] node "no-preload-956403" hosting pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956403" has status "Ready":"False"
	I0920 18:18:59.132489   74753 pod_ready.go:82] duration metric: took 399.606625ms for pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace to be "Ready" ...
	E0920 18:18:59.132503   74753 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-956403" hosting pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956403" has status "Ready":"False"
	I0920 18:18:59.132510   74753 pod_ready.go:39] duration metric: took 1.309409155s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:18:59.132526   74753 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 18:18:59.147163   74753 ops.go:34] apiserver oom_adj: -16
	I0920 18:18:59.147190   74753 kubeadm.go:597] duration metric: took 8.138198351s to restartPrimaryControlPlane
	I0920 18:18:59.147200   74753 kubeadm.go:394] duration metric: took 8.186768244s to StartCluster
	I0920 18:18:59.147214   74753 settings.go:142] acquiring lock: {Name:mk2a13c58cdc0faf8cddca5d6716175d45db9bfd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:18:59.147287   74753 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19672-8777/kubeconfig
	I0920 18:18:59.149036   74753 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/kubeconfig: {Name:mkf32a4c736808e023459b2f0e40188618a38db1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:18:59.149303   74753 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.47 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 18:18:59.149381   74753 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0920 18:18:59.149479   74753 addons.go:69] Setting storage-provisioner=true in profile "no-preload-956403"
	I0920 18:18:59.149503   74753 addons.go:234] Setting addon storage-provisioner=true in "no-preload-956403"
	I0920 18:18:59.149501   74753 addons.go:69] Setting default-storageclass=true in profile "no-preload-956403"
	W0920 18:18:59.149515   74753 addons.go:243] addon storage-provisioner should already be in state true
	I0920 18:18:59.149526   74753 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-956403"
	I0920 18:18:59.149520   74753 addons.go:69] Setting metrics-server=true in profile "no-preload-956403"
	I0920 18:18:59.149546   74753 host.go:66] Checking if "no-preload-956403" exists ...
	I0920 18:18:59.149558   74753 addons.go:234] Setting addon metrics-server=true in "no-preload-956403"
	W0920 18:18:59.149575   74753 addons.go:243] addon metrics-server should already be in state true
	I0920 18:18:59.149588   74753 config.go:182] Loaded profile config "no-preload-956403": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:18:59.149610   74753 host.go:66] Checking if "no-preload-956403" exists ...
	I0920 18:18:59.149847   74753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:18:59.149889   74753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:18:59.149944   74753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:18:59.149954   74753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:18:59.150003   74753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:18:59.150075   74753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:18:59.151276   74753 out.go:177] * Verifying Kubernetes components...
	I0920 18:18:59.152577   74753 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:18:59.178294   74753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42565
	I0920 18:18:59.178917   74753 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:18:59.179414   74753 main.go:141] libmachine: Using API Version  1
	I0920 18:18:59.179435   74753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:18:59.179869   74753 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:18:59.180059   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetState
	I0920 18:18:59.183556   74753 addons.go:234] Setting addon default-storageclass=true in "no-preload-956403"
	W0920 18:18:59.183575   74753 addons.go:243] addon default-storageclass should already be in state true
	I0920 18:18:59.183606   74753 host.go:66] Checking if "no-preload-956403" exists ...
	I0920 18:18:59.183970   74753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:18:59.184011   74753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:18:59.195338   74753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43759
	I0920 18:18:59.195739   74753 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:18:59.196290   74753 main.go:141] libmachine: Using API Version  1
	I0920 18:18:59.196316   74753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:18:59.196742   74753 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:18:59.197211   74753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33531
	I0920 18:18:59.197327   74753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:18:59.197375   74753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:18:59.197552   74753 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:18:59.198055   74753 main.go:141] libmachine: Using API Version  1
	I0920 18:18:59.198080   74753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:18:59.198420   74753 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:18:59.198997   74753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:18:59.199037   74753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:18:59.203164   74753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40055
	I0920 18:18:59.203700   74753 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:18:59.204320   74753 main.go:141] libmachine: Using API Version  1
	I0920 18:18:59.204341   74753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:18:59.204745   74753 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:18:59.205399   74753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:18:59.205440   74753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:18:59.215457   74753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37991
	I0920 18:18:59.215953   74753 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:18:59.216312   74753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35461
	I0920 18:18:59.216521   74753 main.go:141] libmachine: Using API Version  1
	I0920 18:18:59.216723   74753 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:18:59.216826   74753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:18:59.217221   74753 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:18:59.217403   74753 main.go:141] libmachine: Using API Version  1
	I0920 18:18:59.217414   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetState
	I0920 18:18:59.217452   74753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:18:59.217942   74753 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:18:59.218403   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetState
	I0920 18:18:59.219340   74753 main.go:141] libmachine: (no-preload-956403) Calling .DriverName
	I0920 18:18:59.220536   74753 main.go:141] libmachine: (no-preload-956403) Calling .DriverName
	I0920 18:18:59.221934   74753 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0920 18:18:59.222788   74753 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:18:59.223744   74753 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0920 18:18:59.223766   74753 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0920 18:18:59.223788   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHHostname
	I0920 18:18:59.224567   74753 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 18:18:59.224588   74753 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 18:18:59.224607   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHHostname
	I0920 18:18:59.225333   74753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45207
	I0920 18:18:59.225992   74753 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:18:59.226975   74753 main.go:141] libmachine: Using API Version  1
	I0920 18:18:59.226991   74753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:18:59.227472   74753 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:18:59.227788   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetState
	I0920 18:18:59.227818   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:59.228234   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:59.228282   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:59.228441   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHPort
	I0920 18:18:59.228636   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHKeyPath
	I0920 18:18:59.228671   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:59.228821   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHUsername
	I0920 18:18:59.228960   74753 sshutil.go:53] new ssh client: &{IP:192.168.50.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/no-preload-956403/id_rsa Username:docker}
	I0920 18:18:59.229280   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:59.229297   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:59.229478   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHPort
	I0920 18:18:59.229651   74753 main.go:141] libmachine: (no-preload-956403) Calling .DriverName
	I0920 18:18:59.229807   74753 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 18:18:59.229817   74753 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 18:18:59.229843   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHHostname
	I0920 18:18:59.230195   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHKeyPath
	I0920 18:18:59.230729   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHUsername
	I0920 18:18:59.230899   74753 sshutil.go:53] new ssh client: &{IP:192.168.50.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/no-preload-956403/id_rsa Username:docker}
	I0920 18:18:59.232419   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:59.232844   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:59.232877   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:59.233020   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHPort
	I0920 18:18:59.233201   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHKeyPath
	I0920 18:18:59.233311   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHUsername
	I0920 18:18:59.233424   74753 sshutil.go:53] new ssh client: &{IP:192.168.50.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/no-preload-956403/id_rsa Username:docker}
	I0920 18:18:59.373284   74753 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:18:59.391114   74753 node_ready.go:35] waiting up to 6m0s for node "no-preload-956403" to be "Ready" ...
	I0920 18:18:59.475607   74753 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 18:18:59.530301   74753 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0920 18:18:59.530320   74753 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0920 18:18:59.530804   74753 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 18:18:59.607246   74753 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0920 18:18:59.607272   74753 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0920 18:18:59.685234   74753 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 18:18:59.685273   74753 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0920 18:18:59.753902   74753 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 18:19:00.029605   74753 main.go:141] libmachine: Making call to close driver server
	I0920 18:19:00.029630   74753 main.go:141] libmachine: (no-preload-956403) Calling .Close
	I0920 18:19:00.029968   74753 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:19:00.030016   74753 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:19:00.030032   74753 main.go:141] libmachine: Making call to close driver server
	I0920 18:19:00.030039   74753 main.go:141] libmachine: (no-preload-956403) Calling .Close
	I0920 18:19:00.030285   74753 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:19:00.030299   74753 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:19:00.049472   74753 main.go:141] libmachine: Making call to close driver server
	I0920 18:19:00.049500   74753 main.go:141] libmachine: (no-preload-956403) Calling .Close
	I0920 18:19:00.049765   74753 main.go:141] libmachine: (no-preload-956403) DBG | Closing plugin on server side
	I0920 18:19:00.049781   74753 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:19:00.049797   74753 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:19:00.790635   74753 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.259791892s)
	I0920 18:19:00.790694   74753 main.go:141] libmachine: Making call to close driver server
	I0920 18:19:00.790703   74753 main.go:141] libmachine: (no-preload-956403) Calling .Close
	I0920 18:19:00.791004   74753 main.go:141] libmachine: (no-preload-956403) DBG | Closing plugin on server side
	I0920 18:19:00.791007   74753 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:19:00.791075   74753 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:19:00.791100   74753 main.go:141] libmachine: Making call to close driver server
	I0920 18:19:00.791110   74753 main.go:141] libmachine: (no-preload-956403) Calling .Close
	I0920 18:19:00.791339   74753 main.go:141] libmachine: (no-preload-956403) DBG | Closing plugin on server side
	I0920 18:19:00.791347   74753 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:19:00.791358   74753 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:19:00.829250   74753 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.075301587s)
	I0920 18:19:00.829297   74753 main.go:141] libmachine: Making call to close driver server
	I0920 18:19:00.829311   74753 main.go:141] libmachine: (no-preload-956403) Calling .Close
	I0920 18:19:00.829607   74753 main.go:141] libmachine: (no-preload-956403) DBG | Closing plugin on server side
	I0920 18:19:00.829612   74753 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:19:00.829632   74753 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:19:00.829641   74753 main.go:141] libmachine: Making call to close driver server
	I0920 18:19:00.829647   74753 main.go:141] libmachine: (no-preload-956403) Calling .Close
	I0920 18:19:00.829905   74753 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:19:00.829927   74753 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:19:00.829926   74753 main.go:141] libmachine: (no-preload-956403) DBG | Closing plugin on server side
	I0920 18:19:00.829938   74753 addons.go:475] Verifying addon metrics-server=true in "no-preload-956403"
	I0920 18:19:00.831897   74753 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0920 18:18:57.524979   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:59.526500   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:02.024579   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:00.833093   74753 addons.go:510] duration metric: took 1.683722999s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0920 18:19:01.394507   74753 node_ready.go:53] node "no-preload-956403" has status "Ready":"False"
	I0920 18:18:57.750113   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:58.250100   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:58.750133   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:59.249427   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:59.749350   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:00.250267   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:00.749723   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:01.249549   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:01.749698   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:02.250043   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:59.195097   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:01.692391   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:03.692510   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:04.024713   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:06.525591   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:03.395035   74753 node_ready.go:53] node "no-preload-956403" has status "Ready":"False"
	I0920 18:19:05.895477   74753 node_ready.go:53] node "no-preload-956403" has status "Ready":"False"
	I0920 18:19:06.395177   74753 node_ready.go:49] node "no-preload-956403" has status "Ready":"True"
	I0920 18:19:06.395211   74753 node_ready.go:38] duration metric: took 7.0040677s for node "no-preload-956403" to be "Ready" ...
	I0920 18:19:06.395224   74753 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:19:06.400929   74753 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-j2t5h" in "kube-system" namespace to be "Ready" ...
	I0920 18:19:06.406052   74753 pod_ready.go:93] pod "coredns-7c65d6cfc9-j2t5h" in "kube-system" namespace has status "Ready":"True"
	I0920 18:19:06.406076   74753 pod_ready.go:82] duration metric: took 5.118178ms for pod "coredns-7c65d6cfc9-j2t5h" in "kube-system" namespace to be "Ready" ...
	I0920 18:19:06.406088   74753 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-956403" in "kube-system" namespace to be "Ready" ...
	I0920 18:19:07.412930   74753 pod_ready.go:93] pod "etcd-no-preload-956403" in "kube-system" namespace has status "Ready":"True"
	I0920 18:19:07.412946   74753 pod_ready.go:82] duration metric: took 1.006851075s for pod "etcd-no-preload-956403" in "kube-system" namespace to be "Ready" ...
	I0920 18:19:07.412955   74753 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-956403" in "kube-system" namespace to be "Ready" ...
	I0920 18:19:07.420312   74753 pod_ready.go:93] pod "kube-apiserver-no-preload-956403" in "kube-system" namespace has status "Ready":"True"
	I0920 18:19:07.420342   74753 pod_ready.go:82] duration metric: took 7.380815ms for pod "kube-apiserver-no-preload-956403" in "kube-system" namespace to be "Ready" ...
	I0920 18:19:07.420354   74753 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-956403" in "kube-system" namespace to be "Ready" ...
	I0920 18:19:02.750082   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:03.249861   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:03.749400   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:04.250213   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:04.749806   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:05.249569   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:05.750115   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:06.250182   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:06.750188   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:07.250041   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:06.191328   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:08.192222   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:09.024510   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:11.025023   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:08.426149   74753 pod_ready.go:93] pod "kube-controller-manager-no-preload-956403" in "kube-system" namespace has status "Ready":"True"
	I0920 18:19:08.426173   74753 pod_ready.go:82] duration metric: took 1.005811855s for pod "kube-controller-manager-no-preload-956403" in "kube-system" namespace to be "Ready" ...
	I0920 18:19:08.426182   74753 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-sz4bm" in "kube-system" namespace to be "Ready" ...
	I0920 18:19:08.431446   74753 pod_ready.go:93] pod "kube-proxy-sz4bm" in "kube-system" namespace has status "Ready":"True"
	I0920 18:19:08.431474   74753 pod_ready.go:82] duration metric: took 5.284488ms for pod "kube-proxy-sz4bm" in "kube-system" namespace to be "Ready" ...
	I0920 18:19:08.431486   74753 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-956403" in "kube-system" namespace to be "Ready" ...
	I0920 18:19:08.794973   74753 pod_ready.go:93] pod "kube-scheduler-no-preload-956403" in "kube-system" namespace has status "Ready":"True"
	I0920 18:19:08.794997   74753 pod_ready.go:82] duration metric: took 363.504269ms for pod "kube-scheduler-no-preload-956403" in "kube-system" namespace to be "Ready" ...
	I0920 18:19:08.795005   74753 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace to be "Ready" ...
	I0920 18:19:10.802181   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:07.749479   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:08.249641   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:08.749637   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:09.249803   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:09.749534   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:10.249365   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:10.749366   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:11.250045   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:11.750298   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:12.249766   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:10.192631   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:12.692470   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:13.525217   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:16.025516   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:13.300755   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:15.301345   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:12.749447   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:13.249356   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:13.749514   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:14.249942   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:14.749988   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:15.249405   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:15.749620   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:16.249962   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:16.750325   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:17.250198   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:15.191379   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:17.692235   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:18.525176   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:21.023910   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:17.801315   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:20.302369   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:22.302578   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:17.749509   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:18.250076   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:18.749518   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:19.249440   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:19.750156   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:20.249686   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:20.750315   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:21.249755   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:21.749370   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:22.249767   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:20.192027   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:22.192925   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:23.024677   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:25.025216   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:24.801558   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:26.802796   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:22.749289   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:23.249386   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:23.749870   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:24.249424   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:24.750349   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:25.249582   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:19:25.249657   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:19:25.287604   75577 cri.go:89] found id: ""
	I0920 18:19:25.287633   75577 logs.go:276] 0 containers: []
	W0920 18:19:25.287641   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:19:25.287647   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:19:25.287692   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:19:25.322093   75577 cri.go:89] found id: ""
	I0920 18:19:25.322127   75577 logs.go:276] 0 containers: []
	W0920 18:19:25.322137   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:19:25.322144   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:19:25.322194   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:19:25.354812   75577 cri.go:89] found id: ""
	I0920 18:19:25.354840   75577 logs.go:276] 0 containers: []
	W0920 18:19:25.354851   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:19:25.354863   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:19:25.354928   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:19:25.387777   75577 cri.go:89] found id: ""
	I0920 18:19:25.387804   75577 logs.go:276] 0 containers: []
	W0920 18:19:25.387813   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:19:25.387819   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:19:25.387867   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:19:25.420325   75577 cri.go:89] found id: ""
	I0920 18:19:25.420354   75577 logs.go:276] 0 containers: []
	W0920 18:19:25.420364   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:19:25.420372   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:19:25.420437   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:19:25.454824   75577 cri.go:89] found id: ""
	I0920 18:19:25.454853   75577 logs.go:276] 0 containers: []
	W0920 18:19:25.454864   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:19:25.454871   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:19:25.454933   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:19:25.493520   75577 cri.go:89] found id: ""
	I0920 18:19:25.493548   75577 logs.go:276] 0 containers: []
	W0920 18:19:25.493559   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:19:25.493566   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:19:25.493639   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:19:25.528795   75577 cri.go:89] found id: ""
	I0920 18:19:25.528820   75577 logs.go:276] 0 containers: []
	W0920 18:19:25.528828   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:19:25.528836   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:19:25.528847   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:19:25.571411   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:19:25.571442   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:19:25.625648   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:19:25.625693   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:19:25.639891   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:19:25.639920   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:19:25.769263   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:19:25.769289   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:19:25.769303   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:19:24.691757   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:26.695197   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:27.524327   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:29.525192   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:32.024450   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:29.301400   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:31.301988   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:28.346894   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:28.359891   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:19:28.359967   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:19:28.396117   75577 cri.go:89] found id: ""
	I0920 18:19:28.396144   75577 logs.go:276] 0 containers: []
	W0920 18:19:28.396152   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:19:28.396158   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:19:28.396206   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:19:28.447888   75577 cri.go:89] found id: ""
	I0920 18:19:28.447923   75577 logs.go:276] 0 containers: []
	W0920 18:19:28.447933   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:19:28.447941   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:19:28.447998   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:19:28.485018   75577 cri.go:89] found id: ""
	I0920 18:19:28.485046   75577 logs.go:276] 0 containers: []
	W0920 18:19:28.485054   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:19:28.485060   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:19:28.485108   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:19:28.520904   75577 cri.go:89] found id: ""
	I0920 18:19:28.520934   75577 logs.go:276] 0 containers: []
	W0920 18:19:28.520942   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:19:28.520948   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:19:28.520996   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:19:28.561375   75577 cri.go:89] found id: ""
	I0920 18:19:28.561407   75577 logs.go:276] 0 containers: []
	W0920 18:19:28.561415   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:19:28.561421   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:19:28.561467   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:19:28.597643   75577 cri.go:89] found id: ""
	I0920 18:19:28.597671   75577 logs.go:276] 0 containers: []
	W0920 18:19:28.597679   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:19:28.597685   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:19:28.597747   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:19:28.632885   75577 cri.go:89] found id: ""
	I0920 18:19:28.632912   75577 logs.go:276] 0 containers: []
	W0920 18:19:28.632921   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:19:28.632926   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:19:28.632973   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:19:28.670639   75577 cri.go:89] found id: ""
	I0920 18:19:28.670665   75577 logs.go:276] 0 containers: []
	W0920 18:19:28.670675   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:19:28.670699   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:19:28.670714   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:19:28.749623   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:19:28.749659   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:19:28.790808   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:19:28.790831   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:19:28.842027   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:19:28.842070   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:19:28.855927   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:19:28.855954   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:19:28.935807   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:19:31.436321   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:31.450159   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:19:31.450221   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:19:31.485444   75577 cri.go:89] found id: ""
	I0920 18:19:31.485483   75577 logs.go:276] 0 containers: []
	W0920 18:19:31.485494   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:19:31.485502   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:19:31.485562   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:19:31.520888   75577 cri.go:89] found id: ""
	I0920 18:19:31.520918   75577 logs.go:276] 0 containers: []
	W0920 18:19:31.520927   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:19:31.520941   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:19:31.521007   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:19:31.555001   75577 cri.go:89] found id: ""
	I0920 18:19:31.555029   75577 logs.go:276] 0 containers: []
	W0920 18:19:31.555040   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:19:31.555047   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:19:31.555111   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:19:31.588766   75577 cri.go:89] found id: ""
	I0920 18:19:31.588794   75577 logs.go:276] 0 containers: []
	W0920 18:19:31.588802   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:19:31.588808   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:19:31.588872   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:19:31.622006   75577 cri.go:89] found id: ""
	I0920 18:19:31.622037   75577 logs.go:276] 0 containers: []
	W0920 18:19:31.622048   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:19:31.622056   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:19:31.622110   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:19:31.659475   75577 cri.go:89] found id: ""
	I0920 18:19:31.659508   75577 logs.go:276] 0 containers: []
	W0920 18:19:31.659519   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:19:31.659527   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:19:31.659589   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:19:31.695380   75577 cri.go:89] found id: ""
	I0920 18:19:31.695415   75577 logs.go:276] 0 containers: []
	W0920 18:19:31.695426   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:19:31.695436   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:19:31.695521   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:19:31.729507   75577 cri.go:89] found id: ""
	I0920 18:19:31.729540   75577 logs.go:276] 0 containers: []
	W0920 18:19:31.729550   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:19:31.729561   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:19:31.729574   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:19:31.781857   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:19:31.781908   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:19:31.795325   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:19:31.795356   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:19:31.868684   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:19:31.868708   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:19:31.868722   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:19:31.945334   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:19:31.945371   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:19:29.191780   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:31.192712   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:33.693996   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:34.024591   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:36.024999   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:33.801443   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:36.300715   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:34.485723   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:34.499039   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:19:34.499106   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:19:34.534664   75577 cri.go:89] found id: ""
	I0920 18:19:34.534697   75577 logs.go:276] 0 containers: []
	W0920 18:19:34.534709   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:19:34.534717   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:19:34.534777   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:19:34.567515   75577 cri.go:89] found id: ""
	I0920 18:19:34.567545   75577 logs.go:276] 0 containers: []
	W0920 18:19:34.567556   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:19:34.567564   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:19:34.567624   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:19:34.601653   75577 cri.go:89] found id: ""
	I0920 18:19:34.601693   75577 logs.go:276] 0 containers: []
	W0920 18:19:34.601704   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:19:34.601712   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:19:34.601775   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:19:34.635238   75577 cri.go:89] found id: ""
	I0920 18:19:34.635271   75577 logs.go:276] 0 containers: []
	W0920 18:19:34.635282   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:19:34.635291   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:19:34.635361   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:19:34.669665   75577 cri.go:89] found id: ""
	I0920 18:19:34.669689   75577 logs.go:276] 0 containers: []
	W0920 18:19:34.669697   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:19:34.669703   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:19:34.669751   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:19:34.705984   75577 cri.go:89] found id: ""
	I0920 18:19:34.706012   75577 logs.go:276] 0 containers: []
	W0920 18:19:34.706022   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:19:34.706032   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:19:34.706110   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:19:34.740052   75577 cri.go:89] found id: ""
	I0920 18:19:34.740079   75577 logs.go:276] 0 containers: []
	W0920 18:19:34.740087   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:19:34.740092   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:19:34.740139   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:19:34.774930   75577 cri.go:89] found id: ""
	I0920 18:19:34.774962   75577 logs.go:276] 0 containers: []
	W0920 18:19:34.774973   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:19:34.774984   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:19:34.775081   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:19:34.791674   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:19:34.791714   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:19:34.894988   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:19:34.895019   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:19:34.895035   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:19:34.973785   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:19:34.973823   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:19:35.010928   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:19:35.010964   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:19:37.561543   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:37.574865   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:19:37.574925   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:19:37.609145   75577 cri.go:89] found id: ""
	I0920 18:19:37.609170   75577 logs.go:276] 0 containers: []
	W0920 18:19:37.609178   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:19:37.609183   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:19:37.609247   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:19:37.645080   75577 cri.go:89] found id: ""
	I0920 18:19:37.645107   75577 logs.go:276] 0 containers: []
	W0920 18:19:37.645115   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:19:37.645121   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:19:37.645168   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:19:37.680060   75577 cri.go:89] found id: ""
	I0920 18:19:37.680094   75577 logs.go:276] 0 containers: []
	W0920 18:19:37.680105   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:19:37.680113   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:19:37.680173   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:19:37.713586   75577 cri.go:89] found id: ""
	I0920 18:19:37.713618   75577 logs.go:276] 0 containers: []
	W0920 18:19:37.713629   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:19:37.713636   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:19:37.713694   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:19:36.192483   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:38.692017   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:38.025974   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:40.524671   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:38.302403   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:40.802748   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:37.750249   75577 cri.go:89] found id: ""
	I0920 18:19:37.750274   75577 logs.go:276] 0 containers: []
	W0920 18:19:37.750282   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:19:37.750289   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:19:37.750351   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:19:37.785613   75577 cri.go:89] found id: ""
	I0920 18:19:37.785642   75577 logs.go:276] 0 containers: []
	W0920 18:19:37.785650   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:19:37.785656   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:19:37.785705   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:19:37.824240   75577 cri.go:89] found id: ""
	I0920 18:19:37.824267   75577 logs.go:276] 0 containers: []
	W0920 18:19:37.824278   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:19:37.824286   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:19:37.824348   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:19:37.858522   75577 cri.go:89] found id: ""
	I0920 18:19:37.858547   75577 logs.go:276] 0 containers: []
	W0920 18:19:37.858556   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:19:37.858564   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:19:37.858575   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:19:37.939852   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:19:37.939891   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:19:37.981029   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:19:37.981062   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:19:38.030606   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:19:38.030633   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:19:38.043914   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:19:38.043953   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:19:38.124846   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:19:40.625196   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:40.640638   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:19:40.640708   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:19:40.687107   75577 cri.go:89] found id: ""
	I0920 18:19:40.687131   75577 logs.go:276] 0 containers: []
	W0920 18:19:40.687140   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:19:40.687148   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:19:40.687206   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:19:40.725727   75577 cri.go:89] found id: ""
	I0920 18:19:40.725858   75577 logs.go:276] 0 containers: []
	W0920 18:19:40.725875   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:19:40.725889   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:19:40.725967   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:19:40.762965   75577 cri.go:89] found id: ""
	I0920 18:19:40.762988   75577 logs.go:276] 0 containers: []
	W0920 18:19:40.762996   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:19:40.763002   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:19:40.763049   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:19:40.797387   75577 cri.go:89] found id: ""
	I0920 18:19:40.797415   75577 logs.go:276] 0 containers: []
	W0920 18:19:40.797427   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:19:40.797439   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:19:40.797498   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:19:40.834482   75577 cri.go:89] found id: ""
	I0920 18:19:40.834513   75577 logs.go:276] 0 containers: []
	W0920 18:19:40.834521   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:19:40.834528   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:19:40.834578   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:19:40.870062   75577 cri.go:89] found id: ""
	I0920 18:19:40.870090   75577 logs.go:276] 0 containers: []
	W0920 18:19:40.870099   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:19:40.870106   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:19:40.870164   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:19:40.906613   75577 cri.go:89] found id: ""
	I0920 18:19:40.906642   75577 logs.go:276] 0 containers: []
	W0920 18:19:40.906653   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:19:40.906662   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:19:40.906721   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:19:40.953033   75577 cri.go:89] found id: ""
	I0920 18:19:40.953056   75577 logs.go:276] 0 containers: []
	W0920 18:19:40.953065   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:19:40.953073   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:19:40.953083   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:19:40.998490   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:19:40.998523   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:19:41.051488   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:19:41.051525   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:19:41.067908   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:19:41.067937   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:19:41.144854   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:19:41.144878   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:19:41.144890   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:19:40.692456   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:43.192532   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:43.024238   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:45.024998   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:47.025427   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:43.302334   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:45.801415   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:43.723613   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:43.736857   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:19:43.736924   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:19:43.772588   75577 cri.go:89] found id: ""
	I0920 18:19:43.772624   75577 logs.go:276] 0 containers: []
	W0920 18:19:43.772635   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:19:43.772643   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:19:43.772712   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:19:43.808580   75577 cri.go:89] found id: ""
	I0920 18:19:43.808611   75577 logs.go:276] 0 containers: []
	W0920 18:19:43.808622   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:19:43.808629   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:19:43.808695   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:19:43.847167   75577 cri.go:89] found id: ""
	I0920 18:19:43.847207   75577 logs.go:276] 0 containers: []
	W0920 18:19:43.847218   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:19:43.847243   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:19:43.847302   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:19:43.883614   75577 cri.go:89] found id: ""
	I0920 18:19:43.883646   75577 logs.go:276] 0 containers: []
	W0920 18:19:43.883658   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:19:43.883667   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:19:43.883738   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:19:43.917602   75577 cri.go:89] found id: ""
	I0920 18:19:43.917627   75577 logs.go:276] 0 containers: []
	W0920 18:19:43.917635   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:19:43.917641   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:19:43.917694   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:19:43.953232   75577 cri.go:89] found id: ""
	I0920 18:19:43.953259   75577 logs.go:276] 0 containers: []
	W0920 18:19:43.953268   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:19:43.953273   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:19:43.953325   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:19:43.991204   75577 cri.go:89] found id: ""
	I0920 18:19:43.991234   75577 logs.go:276] 0 containers: []
	W0920 18:19:43.991246   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:19:43.991253   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:19:43.991334   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:19:44.028117   75577 cri.go:89] found id: ""
	I0920 18:19:44.028140   75577 logs.go:276] 0 containers: []
	W0920 18:19:44.028147   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:19:44.028154   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:19:44.028164   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:19:44.068175   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:19:44.068203   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:19:44.119953   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:19:44.119993   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:19:44.134127   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:19:44.134154   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:19:44.211563   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:19:44.211592   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:19:44.211604   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:19:46.787328   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:46.803429   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:19:46.803516   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:19:46.844813   75577 cri.go:89] found id: ""
	I0920 18:19:46.844839   75577 logs.go:276] 0 containers: []
	W0920 18:19:46.844850   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:19:46.844856   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:19:46.844914   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:19:46.878441   75577 cri.go:89] found id: ""
	I0920 18:19:46.878483   75577 logs.go:276] 0 containers: []
	W0920 18:19:46.878497   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:19:46.878506   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:19:46.878580   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:19:46.913933   75577 cri.go:89] found id: ""
	I0920 18:19:46.913976   75577 logs.go:276] 0 containers: []
	W0920 18:19:46.913986   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:19:46.913993   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:19:46.914066   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:19:46.948577   75577 cri.go:89] found id: ""
	I0920 18:19:46.948609   75577 logs.go:276] 0 containers: []
	W0920 18:19:46.948618   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:19:46.948625   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:19:46.948689   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:19:46.982742   75577 cri.go:89] found id: ""
	I0920 18:19:46.982770   75577 logs.go:276] 0 containers: []
	W0920 18:19:46.982778   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:19:46.982785   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:19:46.982836   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:19:47.017075   75577 cri.go:89] found id: ""
	I0920 18:19:47.017107   75577 logs.go:276] 0 containers: []
	W0920 18:19:47.017120   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:19:47.017128   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:19:47.017190   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:19:47.055494   75577 cri.go:89] found id: ""
	I0920 18:19:47.055520   75577 logs.go:276] 0 containers: []
	W0920 18:19:47.055528   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:19:47.055534   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:19:47.055586   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:19:47.091966   75577 cri.go:89] found id: ""
	I0920 18:19:47.091998   75577 logs.go:276] 0 containers: []
	W0920 18:19:47.092006   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:19:47.092019   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:19:47.092035   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:19:47.142916   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:19:47.142955   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:19:47.158471   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:19:47.158502   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:19:47.241530   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:19:47.241553   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:19:47.241567   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:19:47.316958   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:19:47.316992   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:19:45.192724   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:47.692046   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:49.524915   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:52.026403   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:48.302289   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:50.303276   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:49.854403   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:49.867317   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:19:49.867431   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:19:49.903627   75577 cri.go:89] found id: ""
	I0920 18:19:49.903661   75577 logs.go:276] 0 containers: []
	W0920 18:19:49.903674   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:19:49.903682   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:19:49.903745   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:19:49.936857   75577 cri.go:89] found id: ""
	I0920 18:19:49.936892   75577 logs.go:276] 0 containers: []
	W0920 18:19:49.936902   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:19:49.936909   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:19:49.936975   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:19:49.974403   75577 cri.go:89] found id: ""
	I0920 18:19:49.974433   75577 logs.go:276] 0 containers: []
	W0920 18:19:49.974441   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:19:49.974447   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:19:49.974505   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:19:50.011810   75577 cri.go:89] found id: ""
	I0920 18:19:50.011839   75577 logs.go:276] 0 containers: []
	W0920 18:19:50.011850   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:19:50.011857   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:19:50.011921   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:19:50.050491   75577 cri.go:89] found id: ""
	I0920 18:19:50.050527   75577 logs.go:276] 0 containers: []
	W0920 18:19:50.050538   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:19:50.050546   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:19:50.050610   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:19:50.088270   75577 cri.go:89] found id: ""
	I0920 18:19:50.088299   75577 logs.go:276] 0 containers: []
	W0920 18:19:50.088308   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:19:50.088314   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:19:50.088375   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:19:50.126355   75577 cri.go:89] found id: ""
	I0920 18:19:50.126382   75577 logs.go:276] 0 containers: []
	W0920 18:19:50.126392   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:19:50.126399   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:19:50.126460   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:19:50.160770   75577 cri.go:89] found id: ""
	I0920 18:19:50.160798   75577 logs.go:276] 0 containers: []
	W0920 18:19:50.160808   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:19:50.160819   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:19:50.160834   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:19:50.212866   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:19:50.212914   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:19:50.227269   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:19:50.227295   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:19:50.301363   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:19:50.301392   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:19:50.301406   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:19:50.376293   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:19:50.376330   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:19:50.192002   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:52.192488   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:54.524436   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:56.526032   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:52.801396   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:54.802393   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:57.301066   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:52.916567   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:52.930445   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:19:52.930525   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:19:52.965360   75577 cri.go:89] found id: ""
	I0920 18:19:52.965392   75577 logs.go:276] 0 containers: []
	W0920 18:19:52.965403   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:19:52.965411   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:19:52.965468   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:19:53.000439   75577 cri.go:89] found id: ""
	I0920 18:19:53.000480   75577 logs.go:276] 0 containers: []
	W0920 18:19:53.000495   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:19:53.000503   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:19:53.000577   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:19:53.034598   75577 cri.go:89] found id: ""
	I0920 18:19:53.034630   75577 logs.go:276] 0 containers: []
	W0920 18:19:53.034640   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:19:53.034647   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:19:53.034744   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:19:53.068636   75577 cri.go:89] found id: ""
	I0920 18:19:53.068664   75577 logs.go:276] 0 containers: []
	W0920 18:19:53.068673   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:19:53.068678   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:19:53.068750   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:19:53.104381   75577 cri.go:89] found id: ""
	I0920 18:19:53.104408   75577 logs.go:276] 0 containers: []
	W0920 18:19:53.104416   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:19:53.104421   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:19:53.104474   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:19:53.137865   75577 cri.go:89] found id: ""
	I0920 18:19:53.137897   75577 logs.go:276] 0 containers: []
	W0920 18:19:53.137909   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:19:53.137922   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:19:53.137983   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:19:53.171825   75577 cri.go:89] found id: ""
	I0920 18:19:53.171861   75577 logs.go:276] 0 containers: []
	W0920 18:19:53.171874   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:19:53.171883   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:19:53.171952   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:19:53.210742   75577 cri.go:89] found id: ""
	I0920 18:19:53.210774   75577 logs.go:276] 0 containers: []
	W0920 18:19:53.210784   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:19:53.210796   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:19:53.210811   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:19:53.285597   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:19:53.285637   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:19:53.329821   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:19:53.329871   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:19:53.381362   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:19:53.381415   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:19:53.396044   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:19:53.396074   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:19:53.471582   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:19:55.972369   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:55.987205   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:19:55.987281   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:19:56.031322   75577 cri.go:89] found id: ""
	I0920 18:19:56.031355   75577 logs.go:276] 0 containers: []
	W0920 18:19:56.031363   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:19:56.031368   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:19:56.031420   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:19:56.066442   75577 cri.go:89] found id: ""
	I0920 18:19:56.066487   75577 logs.go:276] 0 containers: []
	W0920 18:19:56.066576   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:19:56.066617   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:19:56.066695   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:19:56.101818   75577 cri.go:89] found id: ""
	I0920 18:19:56.101864   75577 logs.go:276] 0 containers: []
	W0920 18:19:56.101876   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:19:56.101883   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:19:56.101947   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:19:56.139513   75577 cri.go:89] found id: ""
	I0920 18:19:56.139545   75577 logs.go:276] 0 containers: []
	W0920 18:19:56.139557   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:19:56.139565   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:19:56.139618   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:19:56.173638   75577 cri.go:89] found id: ""
	I0920 18:19:56.173669   75577 logs.go:276] 0 containers: []
	W0920 18:19:56.173680   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:19:56.173688   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:19:56.173752   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:19:56.210657   75577 cri.go:89] found id: ""
	I0920 18:19:56.210689   75577 logs.go:276] 0 containers: []
	W0920 18:19:56.210700   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:19:56.210709   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:19:56.210768   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:19:56.246035   75577 cri.go:89] found id: ""
	I0920 18:19:56.246063   75577 logs.go:276] 0 containers: []
	W0920 18:19:56.246071   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:19:56.246077   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:19:56.246123   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:19:56.280766   75577 cri.go:89] found id: ""
	I0920 18:19:56.280796   75577 logs.go:276] 0 containers: []
	W0920 18:19:56.280807   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:19:56.280818   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:19:56.280834   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:19:56.320511   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:19:56.320540   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:19:56.373746   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:19:56.373785   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:19:56.389294   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:19:56.389322   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:19:56.460079   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:19:56.460100   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:19:56.460112   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:19:54.692781   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:57.192706   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:59.025015   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:01.025196   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:59.302190   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:01.801923   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:59.044541   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:59.058395   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:19:59.058464   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:19:59.097442   75577 cri.go:89] found id: ""
	I0920 18:19:59.097482   75577 logs.go:276] 0 containers: []
	W0920 18:19:59.097495   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:19:59.097512   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:19:59.097593   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:19:59.133091   75577 cri.go:89] found id: ""
	I0920 18:19:59.133116   75577 logs.go:276] 0 containers: []
	W0920 18:19:59.133128   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:19:59.133135   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:19:59.133264   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:19:59.166902   75577 cri.go:89] found id: ""
	I0920 18:19:59.166927   75577 logs.go:276] 0 containers: []
	W0920 18:19:59.166938   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:19:59.166945   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:19:59.167008   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:19:59.202551   75577 cri.go:89] found id: ""
	I0920 18:19:59.202573   75577 logs.go:276] 0 containers: []
	W0920 18:19:59.202581   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:19:59.202586   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:19:59.202633   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:19:59.235197   75577 cri.go:89] found id: ""
	I0920 18:19:59.235222   75577 logs.go:276] 0 containers: []
	W0920 18:19:59.235230   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:19:59.235236   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:19:59.235286   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:19:59.269148   75577 cri.go:89] found id: ""
	I0920 18:19:59.269176   75577 logs.go:276] 0 containers: []
	W0920 18:19:59.269187   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:19:59.269196   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:19:59.269262   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:19:59.307670   75577 cri.go:89] found id: ""
	I0920 18:19:59.307692   75577 logs.go:276] 0 containers: []
	W0920 18:19:59.307699   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:19:59.307705   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:19:59.307753   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:19:59.340554   75577 cri.go:89] found id: ""
	I0920 18:19:59.340589   75577 logs.go:276] 0 containers: []
	W0920 18:19:59.340599   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:19:59.340610   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:19:59.340623   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:19:59.382339   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:19:59.382470   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:19:59.432176   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:19:59.432211   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:19:59.445889   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:19:59.445922   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:19:59.516564   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:19:59.516590   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:19:59.516609   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:02.101918   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:02.115064   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:02.115128   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:02.148718   75577 cri.go:89] found id: ""
	I0920 18:20:02.148749   75577 logs.go:276] 0 containers: []
	W0920 18:20:02.148757   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:02.148764   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:02.148822   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:02.182673   75577 cri.go:89] found id: ""
	I0920 18:20:02.182711   75577 logs.go:276] 0 containers: []
	W0920 18:20:02.182722   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:02.182729   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:02.182797   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:02.218734   75577 cri.go:89] found id: ""
	I0920 18:20:02.218771   75577 logs.go:276] 0 containers: []
	W0920 18:20:02.218782   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:02.218791   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:02.218856   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:02.258956   75577 cri.go:89] found id: ""
	I0920 18:20:02.258981   75577 logs.go:276] 0 containers: []
	W0920 18:20:02.258989   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:02.258995   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:02.259066   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:02.294879   75577 cri.go:89] found id: ""
	I0920 18:20:02.294910   75577 logs.go:276] 0 containers: []
	W0920 18:20:02.294919   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:02.294925   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:02.294971   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:02.331010   75577 cri.go:89] found id: ""
	I0920 18:20:02.331039   75577 logs.go:276] 0 containers: []
	W0920 18:20:02.331049   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:02.331057   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:02.331122   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:02.365023   75577 cri.go:89] found id: ""
	I0920 18:20:02.365056   75577 logs.go:276] 0 containers: []
	W0920 18:20:02.365066   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:02.365072   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:02.365122   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:02.400501   75577 cri.go:89] found id: ""
	I0920 18:20:02.400528   75577 logs.go:276] 0 containers: []
	W0920 18:20:02.400537   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:02.400545   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:02.400556   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:02.450597   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:02.450636   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:02.467637   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:02.467671   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:02.540668   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:02.540690   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:02.540706   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:02.629706   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:02.629752   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:19:59.192799   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:01.691537   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:03.692613   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:03.524517   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:05.525418   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:04.300845   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:06.301250   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:05.168119   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:05.182954   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:05.183043   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:05.221938   75577 cri.go:89] found id: ""
	I0920 18:20:05.221971   75577 logs.go:276] 0 containers: []
	W0920 18:20:05.221980   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:05.221990   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:05.222051   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:05.258454   75577 cri.go:89] found id: ""
	I0920 18:20:05.258479   75577 logs.go:276] 0 containers: []
	W0920 18:20:05.258487   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:05.258492   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:05.258540   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:05.293091   75577 cri.go:89] found id: ""
	I0920 18:20:05.293125   75577 logs.go:276] 0 containers: []
	W0920 18:20:05.293138   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:05.293146   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:05.293196   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:05.332992   75577 cri.go:89] found id: ""
	I0920 18:20:05.333025   75577 logs.go:276] 0 containers: []
	W0920 18:20:05.333034   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:05.333040   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:05.333088   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:05.368743   75577 cri.go:89] found id: ""
	I0920 18:20:05.368778   75577 logs.go:276] 0 containers: []
	W0920 18:20:05.368790   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:05.368798   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:05.368859   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:05.404913   75577 cri.go:89] found id: ""
	I0920 18:20:05.404941   75577 logs.go:276] 0 containers: []
	W0920 18:20:05.404948   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:05.404954   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:05.405003   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:05.442111   75577 cri.go:89] found id: ""
	I0920 18:20:05.442143   75577 logs.go:276] 0 containers: []
	W0920 18:20:05.442154   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:05.442163   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:05.442228   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:05.478808   75577 cri.go:89] found id: ""
	I0920 18:20:05.478842   75577 logs.go:276] 0 containers: []
	W0920 18:20:05.478853   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:05.478865   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:05.478879   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:05.531653   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:05.531691   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:05.545181   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:05.545210   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:05.615009   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:05.615041   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:05.615059   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:05.690842   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:05.690871   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:06.193177   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:08.691596   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:08.034009   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:10.524419   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:08.301457   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:10.801788   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:08.230851   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:08.244539   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:08.244609   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:08.281123   75577 cri.go:89] found id: ""
	I0920 18:20:08.281155   75577 logs.go:276] 0 containers: []
	W0920 18:20:08.281167   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:08.281174   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:08.281226   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:08.319704   75577 cri.go:89] found id: ""
	I0920 18:20:08.319740   75577 logs.go:276] 0 containers: []
	W0920 18:20:08.319754   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:08.319763   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:08.319828   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:08.354589   75577 cri.go:89] found id: ""
	I0920 18:20:08.354619   75577 logs.go:276] 0 containers: []
	W0920 18:20:08.354631   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:08.354638   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:08.354703   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:08.391580   75577 cri.go:89] found id: ""
	I0920 18:20:08.391603   75577 logs.go:276] 0 containers: []
	W0920 18:20:08.391612   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:08.391617   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:08.391666   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:08.425596   75577 cri.go:89] found id: ""
	I0920 18:20:08.425622   75577 logs.go:276] 0 containers: []
	W0920 18:20:08.425630   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:08.425638   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:08.425704   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:08.458720   75577 cri.go:89] found id: ""
	I0920 18:20:08.458747   75577 logs.go:276] 0 containers: []
	W0920 18:20:08.458758   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:08.458764   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:08.458812   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:08.496104   75577 cri.go:89] found id: ""
	I0920 18:20:08.496137   75577 logs.go:276] 0 containers: []
	W0920 18:20:08.496148   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:08.496155   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:08.496210   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:08.530961   75577 cri.go:89] found id: ""
	I0920 18:20:08.530989   75577 logs.go:276] 0 containers: []
	W0920 18:20:08.531000   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:08.531010   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:08.531023   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:08.568512   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:08.568541   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:08.619716   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:08.619754   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:08.634358   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:08.634390   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:08.721465   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:08.721488   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:08.721501   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:11.303942   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:11.316686   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:11.316759   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:11.353137   75577 cri.go:89] found id: ""
	I0920 18:20:11.353161   75577 logs.go:276] 0 containers: []
	W0920 18:20:11.353169   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:11.353176   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:11.353229   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:11.388271   75577 cri.go:89] found id: ""
	I0920 18:20:11.388298   75577 logs.go:276] 0 containers: []
	W0920 18:20:11.388315   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:11.388322   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:11.388388   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:11.422673   75577 cri.go:89] found id: ""
	I0920 18:20:11.422700   75577 logs.go:276] 0 containers: []
	W0920 18:20:11.422708   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:11.422714   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:11.422768   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:11.458869   75577 cri.go:89] found id: ""
	I0920 18:20:11.458900   75577 logs.go:276] 0 containers: []
	W0920 18:20:11.458910   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:11.458917   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:11.459068   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:11.494128   75577 cri.go:89] found id: ""
	I0920 18:20:11.494161   75577 logs.go:276] 0 containers: []
	W0920 18:20:11.494172   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:11.494180   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:11.494246   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:11.529111   75577 cri.go:89] found id: ""
	I0920 18:20:11.529135   75577 logs.go:276] 0 containers: []
	W0920 18:20:11.529150   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:11.529157   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:11.529223   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:11.562287   75577 cri.go:89] found id: ""
	I0920 18:20:11.562314   75577 logs.go:276] 0 containers: []
	W0920 18:20:11.562323   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:11.562329   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:11.562381   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:11.600066   75577 cri.go:89] found id: ""
	I0920 18:20:11.600106   75577 logs.go:276] 0 containers: []
	W0920 18:20:11.600117   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:11.600128   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:11.600143   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:11.681628   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:11.681665   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:11.722173   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:11.722205   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:11.773132   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:11.773171   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:11.787183   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:11.787215   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:11.856304   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:10.695174   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:13.191743   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:13.025069   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:15.525020   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:12.803677   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:15.301431   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:14.356881   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:14.371658   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:14.371729   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:14.406983   75577 cri.go:89] found id: ""
	I0920 18:20:14.407009   75577 logs.go:276] 0 containers: []
	W0920 18:20:14.407017   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:14.407022   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:14.407075   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:14.481048   75577 cri.go:89] found id: ""
	I0920 18:20:14.481075   75577 logs.go:276] 0 containers: []
	W0920 18:20:14.481086   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:14.481094   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:14.481156   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:14.513683   75577 cri.go:89] found id: ""
	I0920 18:20:14.513711   75577 logs.go:276] 0 containers: []
	W0920 18:20:14.513719   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:14.513725   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:14.513797   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:14.553330   75577 cri.go:89] found id: ""
	I0920 18:20:14.553363   75577 logs.go:276] 0 containers: []
	W0920 18:20:14.553375   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:14.553381   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:14.553446   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:14.590803   75577 cri.go:89] found id: ""
	I0920 18:20:14.590837   75577 logs.go:276] 0 containers: []
	W0920 18:20:14.590848   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:14.590856   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:14.590927   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:14.625100   75577 cri.go:89] found id: ""
	I0920 18:20:14.625130   75577 logs.go:276] 0 containers: []
	W0920 18:20:14.625141   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:14.625151   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:14.625219   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:14.659313   75577 cri.go:89] found id: ""
	I0920 18:20:14.659342   75577 logs.go:276] 0 containers: []
	W0920 18:20:14.659351   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:14.659357   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:14.659418   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:14.694901   75577 cri.go:89] found id: ""
	I0920 18:20:14.694931   75577 logs.go:276] 0 containers: []
	W0920 18:20:14.694939   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:14.694951   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:14.694966   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:14.708406   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:14.708437   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:14.785174   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:14.785200   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:14.785214   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:14.873622   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:14.873666   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:14.919130   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:14.919166   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:17.468595   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:17.483397   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:17.483473   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:17.522415   75577 cri.go:89] found id: ""
	I0920 18:20:17.522445   75577 logs.go:276] 0 containers: []
	W0920 18:20:17.522455   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:17.522463   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:17.522523   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:17.558957   75577 cri.go:89] found id: ""
	I0920 18:20:17.558991   75577 logs.go:276] 0 containers: []
	W0920 18:20:17.559002   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:17.559010   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:17.559174   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:17.595183   75577 cri.go:89] found id: ""
	I0920 18:20:17.595217   75577 logs.go:276] 0 containers: []
	W0920 18:20:17.595229   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:17.595237   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:17.595302   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:17.631727   75577 cri.go:89] found id: ""
	I0920 18:20:17.631757   75577 logs.go:276] 0 containers: []
	W0920 18:20:17.631768   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:17.631775   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:17.631894   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:17.666374   75577 cri.go:89] found id: ""
	I0920 18:20:17.666409   75577 logs.go:276] 0 containers: []
	W0920 18:20:17.666420   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:17.666427   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:17.666488   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:17.702256   75577 cri.go:89] found id: ""
	I0920 18:20:17.702281   75577 logs.go:276] 0 containers: []
	W0920 18:20:17.702291   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:17.702299   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:17.702370   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:15.191971   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:17.192990   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:17.525552   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:20.025229   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:17.802045   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:20.302538   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:17.742133   75577 cri.go:89] found id: ""
	I0920 18:20:17.742161   75577 logs.go:276] 0 containers: []
	W0920 18:20:17.742172   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:17.742179   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:17.742249   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:17.781269   75577 cri.go:89] found id: ""
	I0920 18:20:17.781300   75577 logs.go:276] 0 containers: []
	W0920 18:20:17.781311   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:17.781321   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:17.781336   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:17.835886   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:17.835923   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:17.851365   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:17.851396   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:17.932682   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:17.932711   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:17.932726   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:18.013680   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:18.013731   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:20.555074   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:20.568007   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:20.568071   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:20.602250   75577 cri.go:89] found id: ""
	I0920 18:20:20.602278   75577 logs.go:276] 0 containers: []
	W0920 18:20:20.602287   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:20.602293   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:20.602353   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:20.636711   75577 cri.go:89] found id: ""
	I0920 18:20:20.636739   75577 logs.go:276] 0 containers: []
	W0920 18:20:20.636748   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:20.636753   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:20.636807   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:20.669529   75577 cri.go:89] found id: ""
	I0920 18:20:20.669570   75577 logs.go:276] 0 containers: []
	W0920 18:20:20.669583   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:20.669593   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:20.669651   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:20.710330   75577 cri.go:89] found id: ""
	I0920 18:20:20.710364   75577 logs.go:276] 0 containers: []
	W0920 18:20:20.710372   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:20.710378   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:20.710427   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:20.747664   75577 cri.go:89] found id: ""
	I0920 18:20:20.747690   75577 logs.go:276] 0 containers: []
	W0920 18:20:20.747699   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:20.747705   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:20.747760   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:20.785680   75577 cri.go:89] found id: ""
	I0920 18:20:20.785717   75577 logs.go:276] 0 containers: []
	W0920 18:20:20.785726   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:20.785732   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:20.785794   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:20.822570   75577 cri.go:89] found id: ""
	I0920 18:20:20.822599   75577 logs.go:276] 0 containers: []
	W0920 18:20:20.822608   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:20.822613   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:20.822659   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:20.856739   75577 cri.go:89] found id: ""
	I0920 18:20:20.856772   75577 logs.go:276] 0 containers: []
	W0920 18:20:20.856786   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:20.856806   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:20.856822   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:20.896606   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:20.896643   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:20.948275   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:20.948313   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:20.962576   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:20.962606   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:21.033511   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:21.033534   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:21.033547   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:19.691723   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:21.694099   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:22.524982   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:24.527065   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:27.026374   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:22.803300   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:25.302416   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:23.615731   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:23.628619   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:23.628698   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:23.664344   75577 cri.go:89] found id: ""
	I0920 18:20:23.664367   75577 logs.go:276] 0 containers: []
	W0920 18:20:23.664375   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:23.664381   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:23.664431   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:23.698791   75577 cri.go:89] found id: ""
	I0920 18:20:23.698823   75577 logs.go:276] 0 containers: []
	W0920 18:20:23.698832   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:23.698839   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:23.698889   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:23.732552   75577 cri.go:89] found id: ""
	I0920 18:20:23.732590   75577 logs.go:276] 0 containers: []
	W0920 18:20:23.732602   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:23.732610   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:23.732676   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:23.769463   75577 cri.go:89] found id: ""
	I0920 18:20:23.769490   75577 logs.go:276] 0 containers: []
	W0920 18:20:23.769501   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:23.769508   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:23.769567   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:23.807329   75577 cri.go:89] found id: ""
	I0920 18:20:23.807361   75577 logs.go:276] 0 containers: []
	W0920 18:20:23.807374   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:23.807382   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:23.807442   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:23.847889   75577 cri.go:89] found id: ""
	I0920 18:20:23.847913   75577 logs.go:276] 0 containers: []
	W0920 18:20:23.847920   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:23.847927   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:23.847985   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:23.883287   75577 cri.go:89] found id: ""
	I0920 18:20:23.883314   75577 logs.go:276] 0 containers: []
	W0920 18:20:23.883322   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:23.883335   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:23.883389   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:23.920006   75577 cri.go:89] found id: ""
	I0920 18:20:23.920034   75577 logs.go:276] 0 containers: []
	W0920 18:20:23.920045   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:23.920057   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:23.920070   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:23.995572   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:23.995608   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:24.035953   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:24.035983   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:24.085803   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:24.085860   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:24.100226   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:24.100255   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:24.173555   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:26.673726   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:26.687372   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:26.687449   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:26.726544   75577 cri.go:89] found id: ""
	I0920 18:20:26.726574   75577 logs.go:276] 0 containers: []
	W0920 18:20:26.726583   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:26.726590   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:26.726651   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:26.761542   75577 cri.go:89] found id: ""
	I0920 18:20:26.761571   75577 logs.go:276] 0 containers: []
	W0920 18:20:26.761580   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:26.761587   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:26.761639   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:26.803869   75577 cri.go:89] found id: ""
	I0920 18:20:26.803896   75577 logs.go:276] 0 containers: []
	W0920 18:20:26.803904   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:26.803910   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:26.803970   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:26.854835   75577 cri.go:89] found id: ""
	I0920 18:20:26.854876   75577 logs.go:276] 0 containers: []
	W0920 18:20:26.854888   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:26.854896   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:26.854958   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:26.896113   75577 cri.go:89] found id: ""
	I0920 18:20:26.896151   75577 logs.go:276] 0 containers: []
	W0920 18:20:26.896162   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:26.896170   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:26.896255   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:26.942828   75577 cri.go:89] found id: ""
	I0920 18:20:26.942853   75577 logs.go:276] 0 containers: []
	W0920 18:20:26.942863   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:26.942930   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:26.943007   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:26.977246   75577 cri.go:89] found id: ""
	I0920 18:20:26.977278   75577 logs.go:276] 0 containers: []
	W0920 18:20:26.977289   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:26.977296   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:26.977367   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:27.012408   75577 cri.go:89] found id: ""
	I0920 18:20:27.012440   75577 logs.go:276] 0 containers: []
	W0920 18:20:27.012451   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:27.012462   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:27.012477   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:27.063970   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:27.064017   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:27.078082   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:27.078119   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:27.148050   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:27.148079   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:27.148094   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:27.230836   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:27.230880   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:24.192842   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:26.196350   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:28.692682   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:29.525508   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:32.025275   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:27.802268   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:30.301519   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:29.771845   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:29.785479   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:29.785553   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:29.823101   75577 cri.go:89] found id: ""
	I0920 18:20:29.823132   75577 logs.go:276] 0 containers: []
	W0920 18:20:29.823143   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:29.823150   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:29.823228   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:29.858679   75577 cri.go:89] found id: ""
	I0920 18:20:29.858713   75577 logs.go:276] 0 containers: []
	W0920 18:20:29.858724   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:29.858732   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:29.858796   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:29.901044   75577 cri.go:89] found id: ""
	I0920 18:20:29.901073   75577 logs.go:276] 0 containers: []
	W0920 18:20:29.901083   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:29.901091   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:29.901160   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:29.937743   75577 cri.go:89] found id: ""
	I0920 18:20:29.937775   75577 logs.go:276] 0 containers: []
	W0920 18:20:29.937792   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:29.937800   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:29.937884   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:29.972813   75577 cri.go:89] found id: ""
	I0920 18:20:29.972838   75577 logs.go:276] 0 containers: []
	W0920 18:20:29.972846   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:29.972852   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:29.972901   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:30.008024   75577 cri.go:89] found id: ""
	I0920 18:20:30.008053   75577 logs.go:276] 0 containers: []
	W0920 18:20:30.008062   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:30.008068   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:30.008117   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:30.048544   75577 cri.go:89] found id: ""
	I0920 18:20:30.048577   75577 logs.go:276] 0 containers: []
	W0920 18:20:30.048585   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:30.048591   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:30.048643   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:30.084501   75577 cri.go:89] found id: ""
	I0920 18:20:30.084525   75577 logs.go:276] 0 containers: []
	W0920 18:20:30.084534   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:30.084545   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:30.084559   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:30.136234   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:30.136279   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:30.149557   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:30.149591   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:30.229765   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:30.229792   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:30.229806   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:30.307786   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:30.307825   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:31.191475   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:33.192864   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:34.025515   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:36.027276   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:32.801952   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:34.802033   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:37.301059   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:32.845220   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:32.859734   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:32.859810   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:32.896390   75577 cri.go:89] found id: ""
	I0920 18:20:32.896434   75577 logs.go:276] 0 containers: []
	W0920 18:20:32.896446   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:32.896464   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:32.896538   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:32.934548   75577 cri.go:89] found id: ""
	I0920 18:20:32.934572   75577 logs.go:276] 0 containers: []
	W0920 18:20:32.934580   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:32.934587   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:32.934640   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:32.982987   75577 cri.go:89] found id: ""
	I0920 18:20:32.983013   75577 logs.go:276] 0 containers: []
	W0920 18:20:32.983020   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:32.983026   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:32.983079   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:33.019059   75577 cri.go:89] found id: ""
	I0920 18:20:33.019085   75577 logs.go:276] 0 containers: []
	W0920 18:20:33.019093   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:33.019099   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:33.019148   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:33.057704   75577 cri.go:89] found id: ""
	I0920 18:20:33.057738   75577 logs.go:276] 0 containers: []
	W0920 18:20:33.057750   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:33.057759   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:33.057821   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:33.093702   75577 cri.go:89] found id: ""
	I0920 18:20:33.093732   75577 logs.go:276] 0 containers: []
	W0920 18:20:33.093743   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:33.093751   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:33.093809   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:33.128476   75577 cri.go:89] found id: ""
	I0920 18:20:33.128504   75577 logs.go:276] 0 containers: []
	W0920 18:20:33.128516   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:33.128523   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:33.128591   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:33.165513   75577 cri.go:89] found id: ""
	I0920 18:20:33.165542   75577 logs.go:276] 0 containers: []
	W0920 18:20:33.165550   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:33.165559   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:33.165569   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:33.219613   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:33.219650   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:33.232291   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:33.232317   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:33.304172   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:33.304197   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:33.304212   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:33.380057   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:33.380094   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:35.920842   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:35.935500   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:35.935593   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:35.974445   75577 cri.go:89] found id: ""
	I0920 18:20:35.974471   75577 logs.go:276] 0 containers: []
	W0920 18:20:35.974479   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:35.974485   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:35.974548   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:36.009487   75577 cri.go:89] found id: ""
	I0920 18:20:36.009518   75577 logs.go:276] 0 containers: []
	W0920 18:20:36.009530   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:36.009538   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:36.009706   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:36.049428   75577 cri.go:89] found id: ""
	I0920 18:20:36.049451   75577 logs.go:276] 0 containers: []
	W0920 18:20:36.049463   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:36.049469   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:36.049518   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:36.083973   75577 cri.go:89] found id: ""
	I0920 18:20:36.084011   75577 logs.go:276] 0 containers: []
	W0920 18:20:36.084022   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:36.084030   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:36.084119   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:36.117952   75577 cri.go:89] found id: ""
	I0920 18:20:36.117985   75577 logs.go:276] 0 containers: []
	W0920 18:20:36.117993   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:36.117998   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:36.118052   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:36.151222   75577 cri.go:89] found id: ""
	I0920 18:20:36.151256   75577 logs.go:276] 0 containers: []
	W0920 18:20:36.151265   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:36.151271   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:36.151319   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:36.184665   75577 cri.go:89] found id: ""
	I0920 18:20:36.184697   75577 logs.go:276] 0 containers: []
	W0920 18:20:36.184708   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:36.184716   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:36.184771   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:36.222974   75577 cri.go:89] found id: ""
	I0920 18:20:36.223003   75577 logs.go:276] 0 containers: []
	W0920 18:20:36.223012   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:36.223021   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:36.223033   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:36.300403   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:36.300424   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:36.300437   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:36.381220   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:36.381260   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:36.419010   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:36.419042   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:36.471758   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:36.471799   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:35.192949   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:37.693384   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:38.526219   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:41.024309   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:39.301511   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:41.801558   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:38.985359   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:38.998817   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:38.998898   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:39.036705   75577 cri.go:89] found id: ""
	I0920 18:20:39.036737   75577 logs.go:276] 0 containers: []
	W0920 18:20:39.036747   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:39.036755   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:39.036815   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:39.074254   75577 cri.go:89] found id: ""
	I0920 18:20:39.074285   75577 logs.go:276] 0 containers: []
	W0920 18:20:39.074294   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:39.074300   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:39.074367   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:39.108385   75577 cri.go:89] found id: ""
	I0920 18:20:39.108420   75577 logs.go:276] 0 containers: []
	W0920 18:20:39.108493   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:39.108506   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:39.108561   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:39.142283   75577 cri.go:89] found id: ""
	I0920 18:20:39.142313   75577 logs.go:276] 0 containers: []
	W0920 18:20:39.142325   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:39.142332   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:39.142396   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:39.176881   75577 cri.go:89] found id: ""
	I0920 18:20:39.176918   75577 logs.go:276] 0 containers: []
	W0920 18:20:39.176933   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:39.176941   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:39.177002   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:39.217734   75577 cri.go:89] found id: ""
	I0920 18:20:39.217759   75577 logs.go:276] 0 containers: []
	W0920 18:20:39.217767   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:39.217773   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:39.217852   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:39.250823   75577 cri.go:89] found id: ""
	I0920 18:20:39.250852   75577 logs.go:276] 0 containers: []
	W0920 18:20:39.250860   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:39.250868   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:39.250936   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:39.287494   75577 cri.go:89] found id: ""
	I0920 18:20:39.287519   75577 logs.go:276] 0 containers: []
	W0920 18:20:39.287528   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:39.287540   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:39.287552   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:39.343091   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:39.343126   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:39.359028   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:39.359063   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:39.425293   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:39.425321   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:39.425336   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:39.503303   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:39.503350   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:42.044784   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:42.057764   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:42.057876   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:42.091798   75577 cri.go:89] found id: ""
	I0920 18:20:42.091825   75577 logs.go:276] 0 containers: []
	W0920 18:20:42.091833   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:42.091839   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:42.091888   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:42.125325   75577 cri.go:89] found id: ""
	I0920 18:20:42.125352   75577 logs.go:276] 0 containers: []
	W0920 18:20:42.125362   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:42.125370   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:42.125423   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:42.158641   75577 cri.go:89] found id: ""
	I0920 18:20:42.158680   75577 logs.go:276] 0 containers: []
	W0920 18:20:42.158692   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:42.158701   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:42.158771   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:42.194065   75577 cri.go:89] found id: ""
	I0920 18:20:42.194090   75577 logs.go:276] 0 containers: []
	W0920 18:20:42.194101   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:42.194109   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:42.194174   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:42.227237   75577 cri.go:89] found id: ""
	I0920 18:20:42.227262   75577 logs.go:276] 0 containers: []
	W0920 18:20:42.227270   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:42.227275   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:42.227324   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:42.260205   75577 cri.go:89] found id: ""
	I0920 18:20:42.260240   75577 logs.go:276] 0 containers: []
	W0920 18:20:42.260251   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:42.260258   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:42.260325   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:42.300124   75577 cri.go:89] found id: ""
	I0920 18:20:42.300159   75577 logs.go:276] 0 containers: []
	W0920 18:20:42.300169   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:42.300175   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:42.300273   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:42.335638   75577 cri.go:89] found id: ""
	I0920 18:20:42.335674   75577 logs.go:276] 0 containers: []
	W0920 18:20:42.335685   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:42.335695   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:42.335710   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:42.386176   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:42.386214   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:42.400113   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:42.400147   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:42.479877   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:42.479900   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:42.479912   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:42.554654   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:42.554701   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:40.191251   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:42.192910   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:43.026185   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:45.525362   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:43.801927   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:45.802309   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:45.093962   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:45.107747   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:45.107811   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:45.147091   75577 cri.go:89] found id: ""
	I0920 18:20:45.147120   75577 logs.go:276] 0 containers: []
	W0920 18:20:45.147132   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:45.147140   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:45.147205   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:45.185332   75577 cri.go:89] found id: ""
	I0920 18:20:45.185396   75577 logs.go:276] 0 containers: []
	W0920 18:20:45.185431   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:45.185446   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:45.185523   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:45.224496   75577 cri.go:89] found id: ""
	I0920 18:20:45.224525   75577 logs.go:276] 0 containers: []
	W0920 18:20:45.224535   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:45.224542   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:45.224612   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:45.260686   75577 cri.go:89] found id: ""
	I0920 18:20:45.260718   75577 logs.go:276] 0 containers: []
	W0920 18:20:45.260729   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:45.260737   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:45.260801   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:45.301231   75577 cri.go:89] found id: ""
	I0920 18:20:45.301259   75577 logs.go:276] 0 containers: []
	W0920 18:20:45.301269   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:45.301277   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:45.301343   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:45.346441   75577 cri.go:89] found id: ""
	I0920 18:20:45.346468   75577 logs.go:276] 0 containers: []
	W0920 18:20:45.346476   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:45.346482   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:45.346537   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:45.385042   75577 cri.go:89] found id: ""
	I0920 18:20:45.385071   75577 logs.go:276] 0 containers: []
	W0920 18:20:45.385082   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:45.385090   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:45.385149   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:45.422208   75577 cri.go:89] found id: ""
	I0920 18:20:45.422238   75577 logs.go:276] 0 containers: []
	W0920 18:20:45.422249   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:45.422260   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:45.422275   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:45.472692   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:45.472742   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:45.487981   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:45.488011   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:45.563012   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:45.563035   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:45.563051   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:45.640750   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:45.640786   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:44.691791   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:46.692359   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:48.693165   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:48.024959   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:50.025944   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:48.302699   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:50.801740   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:48.181093   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:48.194433   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:48.194516   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:48.234349   75577 cri.go:89] found id: ""
	I0920 18:20:48.234383   75577 logs.go:276] 0 containers: []
	W0920 18:20:48.234394   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:48.234403   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:48.234467   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:48.269424   75577 cri.go:89] found id: ""
	I0920 18:20:48.269450   75577 logs.go:276] 0 containers: []
	W0920 18:20:48.269457   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:48.269462   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:48.269514   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:48.303968   75577 cri.go:89] found id: ""
	I0920 18:20:48.303990   75577 logs.go:276] 0 containers: []
	W0920 18:20:48.303997   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:48.304002   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:48.304061   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:48.339149   75577 cri.go:89] found id: ""
	I0920 18:20:48.339180   75577 logs.go:276] 0 containers: []
	W0920 18:20:48.339191   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:48.339198   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:48.339259   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:48.374529   75577 cri.go:89] found id: ""
	I0920 18:20:48.374559   75577 logs.go:276] 0 containers: []
	W0920 18:20:48.374571   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:48.374578   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:48.374644   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:48.409170   75577 cri.go:89] found id: ""
	I0920 18:20:48.409203   75577 logs.go:276] 0 containers: []
	W0920 18:20:48.409211   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:48.409217   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:48.409292   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:48.443960   75577 cri.go:89] found id: ""
	I0920 18:20:48.443990   75577 logs.go:276] 0 containers: []
	W0920 18:20:48.444009   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:48.444017   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:48.444074   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:48.480934   75577 cri.go:89] found id: ""
	I0920 18:20:48.480965   75577 logs.go:276] 0 containers: []
	W0920 18:20:48.480978   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:48.480990   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:48.481006   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:48.533261   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:48.533295   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:48.546460   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:48.546488   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:48.615183   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:48.615206   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:48.615219   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:48.695299   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:48.695337   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:51.240343   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:51.255327   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:51.255411   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:51.294611   75577 cri.go:89] found id: ""
	I0920 18:20:51.294642   75577 logs.go:276] 0 containers: []
	W0920 18:20:51.294650   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:51.294656   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:51.294710   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:51.328569   75577 cri.go:89] found id: ""
	I0920 18:20:51.328598   75577 logs.go:276] 0 containers: []
	W0920 18:20:51.328613   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:51.328621   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:51.328675   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:51.364255   75577 cri.go:89] found id: ""
	I0920 18:20:51.364283   75577 logs.go:276] 0 containers: []
	W0920 18:20:51.364291   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:51.364297   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:51.364347   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:51.406178   75577 cri.go:89] found id: ""
	I0920 18:20:51.406204   75577 logs.go:276] 0 containers: []
	W0920 18:20:51.406215   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:51.406223   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:51.406284   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:51.449491   75577 cri.go:89] found id: ""
	I0920 18:20:51.449519   75577 logs.go:276] 0 containers: []
	W0920 18:20:51.449529   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:51.449536   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:51.449600   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:51.483243   75577 cri.go:89] found id: ""
	I0920 18:20:51.483269   75577 logs.go:276] 0 containers: []
	W0920 18:20:51.483278   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:51.483284   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:51.483334   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:51.517280   75577 cri.go:89] found id: ""
	I0920 18:20:51.517304   75577 logs.go:276] 0 containers: []
	W0920 18:20:51.517311   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:51.517316   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:51.517378   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:51.553517   75577 cri.go:89] found id: ""
	I0920 18:20:51.553545   75577 logs.go:276] 0 containers: []
	W0920 18:20:51.553556   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:51.553565   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:51.553576   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:51.607330   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:51.607369   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:51.620628   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:51.620662   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:51.687586   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:51.687618   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:51.687638   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:51.764882   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:51.764924   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:51.191732   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:53.193078   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:52.524529   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:54.526138   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:57.025365   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:52.802328   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:54.804955   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:57.301987   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:54.307276   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:54.321777   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:54.321860   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:54.355565   75577 cri.go:89] found id: ""
	I0920 18:20:54.355598   75577 logs.go:276] 0 containers: []
	W0920 18:20:54.355609   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:54.355618   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:54.355680   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:54.391184   75577 cri.go:89] found id: ""
	I0920 18:20:54.391216   75577 logs.go:276] 0 containers: []
	W0920 18:20:54.391227   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:54.391236   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:54.391301   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:54.425680   75577 cri.go:89] found id: ""
	I0920 18:20:54.425709   75577 logs.go:276] 0 containers: []
	W0920 18:20:54.425718   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:54.425725   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:54.425775   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:54.461514   75577 cri.go:89] found id: ""
	I0920 18:20:54.461541   75577 logs.go:276] 0 containers: []
	W0920 18:20:54.461552   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:54.461559   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:54.461625   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:54.495290   75577 cri.go:89] found id: ""
	I0920 18:20:54.495319   75577 logs.go:276] 0 containers: []
	W0920 18:20:54.495327   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:54.495333   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:54.495384   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:54.530014   75577 cri.go:89] found id: ""
	I0920 18:20:54.530038   75577 logs.go:276] 0 containers: []
	W0920 18:20:54.530046   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:54.530052   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:54.530103   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:54.562559   75577 cri.go:89] found id: ""
	I0920 18:20:54.562597   75577 logs.go:276] 0 containers: []
	W0920 18:20:54.562611   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:54.562621   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:54.562694   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:54.599894   75577 cri.go:89] found id: ""
	I0920 18:20:54.599920   75577 logs.go:276] 0 containers: []
	W0920 18:20:54.599928   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:54.599946   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:54.599982   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:54.636853   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:54.636880   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:54.687932   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:54.687978   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:54.701642   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:54.701673   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:54.769649   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:54.769678   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:54.769695   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
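The block above is one full iteration of minikube's log-gathering cycle for this node: it probes for a running kube-apiserver process, lists CRI containers for every control-plane component (each listing comes back empty), then collects kubelet, dmesg, describe-nodes, and CRI-O output. Below is a minimal Go sketch of the same crictl sweep for reproducing the check by hand; the component names and crictl flags are copied from the log, while running it directly on the node with sudo is an assumption of this sketch.

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// Re-runs the per-component container listing performed in the cycle above.
// In the run above every component returns an empty ID list ("found id: \"\"").
func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, name := range components {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("%-24s crictl error: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		fmt.Printf("%-24s %d container(s)\n", name, len(ids))
	}
}
```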
	I0920 18:20:57.356015   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:57.368860   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:57.368923   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:57.409339   75577 cri.go:89] found id: ""
	I0920 18:20:57.409365   75577 logs.go:276] 0 containers: []
	W0920 18:20:57.409375   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:57.409382   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:57.409444   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:57.443055   75577 cri.go:89] found id: ""
	I0920 18:20:57.443085   75577 logs.go:276] 0 containers: []
	W0920 18:20:57.443095   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:57.443102   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:57.443158   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:57.481816   75577 cri.go:89] found id: ""
	I0920 18:20:57.481859   75577 logs.go:276] 0 containers: []
	W0920 18:20:57.481871   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:57.481879   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:57.481942   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:57.517327   75577 cri.go:89] found id: ""
	I0920 18:20:57.517361   75577 logs.go:276] 0 containers: []
	W0920 18:20:57.517372   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:57.517379   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:57.517442   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:57.555121   75577 cri.go:89] found id: ""
	I0920 18:20:57.555151   75577 logs.go:276] 0 containers: []
	W0920 18:20:57.555159   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:57.555164   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:57.555222   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:57.592632   75577 cri.go:89] found id: ""
	I0920 18:20:57.592666   75577 logs.go:276] 0 containers: []
	W0920 18:20:57.592679   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:57.592685   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:57.592734   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:57.627519   75577 cri.go:89] found id: ""
	I0920 18:20:57.627556   75577 logs.go:276] 0 containers: []
	W0920 18:20:57.627567   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:57.627574   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:57.627636   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:57.663817   75577 cri.go:89] found id: ""
	I0920 18:20:57.663844   75577 logs.go:276] 0 containers: []
	W0920 18:20:57.663853   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:57.663862   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:57.663877   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:57.704896   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:57.704924   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:55.692746   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:58.191973   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:59.524732   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:01.525346   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:59.801537   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:01.802088   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:57.757933   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:57.757972   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:57.772646   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:57.772673   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:57.850634   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:57.850665   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:57.850681   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:00.431504   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:00.445175   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:00.445241   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:00.482050   75577 cri.go:89] found id: ""
	I0920 18:21:00.482076   75577 logs.go:276] 0 containers: []
	W0920 18:21:00.482088   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:00.482095   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:00.482150   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:00.515793   75577 cri.go:89] found id: ""
	I0920 18:21:00.515822   75577 logs.go:276] 0 containers: []
	W0920 18:21:00.515833   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:00.515841   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:00.515903   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:00.550250   75577 cri.go:89] found id: ""
	I0920 18:21:00.550278   75577 logs.go:276] 0 containers: []
	W0920 18:21:00.550288   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:00.550296   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:00.550374   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:00.585977   75577 cri.go:89] found id: ""
	I0920 18:21:00.586011   75577 logs.go:276] 0 containers: []
	W0920 18:21:00.586034   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:00.586051   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:00.586118   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:00.621828   75577 cri.go:89] found id: ""
	I0920 18:21:00.621869   75577 logs.go:276] 0 containers: []
	W0920 18:21:00.621879   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:00.621886   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:00.621937   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:00.655492   75577 cri.go:89] found id: ""
	I0920 18:21:00.655528   75577 logs.go:276] 0 containers: []
	W0920 18:21:00.655540   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:00.655600   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:00.655673   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:00.709445   75577 cri.go:89] found id: ""
	I0920 18:21:00.709476   75577 logs.go:276] 0 containers: []
	W0920 18:21:00.709488   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:00.709496   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:00.709566   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:00.744098   75577 cri.go:89] found id: ""
	I0920 18:21:00.744123   75577 logs.go:276] 0 containers: []
	W0920 18:21:00.744134   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:00.744144   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:00.744164   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:00.792576   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:00.792612   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:00.807766   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:00.807794   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:00.883626   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:00.883649   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:00.883663   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:00.966233   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:00.966269   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:00.692181   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:03.192227   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:04.024582   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:06.029376   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:04.301078   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:06.301727   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:03.505502   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:03.520407   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:03.520534   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:03.557891   75577 cri.go:89] found id: ""
	I0920 18:21:03.557926   75577 logs.go:276] 0 containers: []
	W0920 18:21:03.557936   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:03.557944   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:03.558006   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:03.593869   75577 cri.go:89] found id: ""
	I0920 18:21:03.593898   75577 logs.go:276] 0 containers: []
	W0920 18:21:03.593908   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:03.593916   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:03.593982   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:03.629270   75577 cri.go:89] found id: ""
	I0920 18:21:03.629302   75577 logs.go:276] 0 containers: []
	W0920 18:21:03.629311   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:03.629317   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:03.629366   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:03.664658   75577 cri.go:89] found id: ""
	I0920 18:21:03.664688   75577 logs.go:276] 0 containers: []
	W0920 18:21:03.664699   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:03.664706   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:03.664769   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:03.700846   75577 cri.go:89] found id: ""
	I0920 18:21:03.700868   75577 logs.go:276] 0 containers: []
	W0920 18:21:03.700875   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:03.700882   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:03.700941   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:03.736311   75577 cri.go:89] found id: ""
	I0920 18:21:03.736345   75577 logs.go:276] 0 containers: []
	W0920 18:21:03.736355   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:03.736363   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:03.736421   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:03.770760   75577 cri.go:89] found id: ""
	I0920 18:21:03.770788   75577 logs.go:276] 0 containers: []
	W0920 18:21:03.770800   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:03.770808   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:03.770868   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:03.808724   75577 cri.go:89] found id: ""
	I0920 18:21:03.808749   75577 logs.go:276] 0 containers: []
	W0920 18:21:03.808756   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:03.808764   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:03.808775   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:03.851231   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:03.851265   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:03.899607   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:03.899641   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:03.915051   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:03.915079   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:03.984016   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:03.984038   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:03.984053   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:06.564776   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:06.578524   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:06.578604   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:06.614305   75577 cri.go:89] found id: ""
	I0920 18:21:06.614340   75577 logs.go:276] 0 containers: []
	W0920 18:21:06.614351   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:06.614365   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:06.614427   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:06.648929   75577 cri.go:89] found id: ""
	I0920 18:21:06.648958   75577 logs.go:276] 0 containers: []
	W0920 18:21:06.648968   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:06.648976   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:06.649036   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:06.689975   75577 cri.go:89] found id: ""
	I0920 18:21:06.690016   75577 logs.go:276] 0 containers: []
	W0920 18:21:06.690027   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:06.690034   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:06.690092   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:06.729721   75577 cri.go:89] found id: ""
	I0920 18:21:06.729747   75577 logs.go:276] 0 containers: []
	W0920 18:21:06.729755   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:06.729762   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:06.729808   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:06.766310   75577 cri.go:89] found id: ""
	I0920 18:21:06.766343   75577 logs.go:276] 0 containers: []
	W0920 18:21:06.766354   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:06.766361   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:06.766437   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:06.800801   75577 cri.go:89] found id: ""
	I0920 18:21:06.800829   75577 logs.go:276] 0 containers: []
	W0920 18:21:06.800839   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:06.800847   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:06.800909   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:06.836391   75577 cri.go:89] found id: ""
	I0920 18:21:06.836429   75577 logs.go:276] 0 containers: []
	W0920 18:21:06.836447   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:06.836455   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:06.836521   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:06.873014   75577 cri.go:89] found id: ""
	I0920 18:21:06.873041   75577 logs.go:276] 0 containers: []
	W0920 18:21:06.873049   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:06.873057   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:06.873070   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:06.953084   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:06.953110   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:06.953122   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:07.032398   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:07.032434   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:07.082196   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:07.082224   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:07.159604   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:07.159640   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:05.691350   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:07.692127   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:08.526757   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:10.526990   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:08.801736   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:11.301001   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
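The interleaved pod_ready lines are three parallel tests polling their metrics-server pods and repeatedly observing Ready=False. The following is a hypothetical standalone client-go sketch of that kind of readiness poll; the pod name and kubeconfig path are taken from the log, while the polling interval, timeout, and the idea of running it outside minikube are assumptions, not minikube's own pod_ready implementation.

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// Polls a pod's Ready condition the way the pod_ready lines suggest:
// fetch the pod, inspect status.conditions, and retry while it is not True.
func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(5 * time.Minute) // assumed timeout for this sketch
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(
			context.TODO(), "metrics-server-6867b74b74-dwnt6", metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					fmt.Printf("pod %q Ready=%v\n", pod.Name, c.Status)
					if c.Status == corev1.ConditionTrue {
						return
					}
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}
```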
	I0920 18:21:09.675022   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:09.687924   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:09.687999   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:09.722963   75577 cri.go:89] found id: ""
	I0920 18:21:09.722994   75577 logs.go:276] 0 containers: []
	W0920 18:21:09.723005   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:09.723017   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:09.723084   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:09.756433   75577 cri.go:89] found id: ""
	I0920 18:21:09.756458   75577 logs.go:276] 0 containers: []
	W0920 18:21:09.756472   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:09.756486   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:09.756549   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:09.792200   75577 cri.go:89] found id: ""
	I0920 18:21:09.792232   75577 logs.go:276] 0 containers: []
	W0920 18:21:09.792248   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:09.792256   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:09.792323   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:09.829058   75577 cri.go:89] found id: ""
	I0920 18:21:09.829081   75577 logs.go:276] 0 containers: []
	W0920 18:21:09.829098   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:09.829104   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:09.829150   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:09.863225   75577 cri.go:89] found id: ""
	I0920 18:21:09.863252   75577 logs.go:276] 0 containers: []
	W0920 18:21:09.863259   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:09.863265   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:09.863312   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:09.897682   75577 cri.go:89] found id: ""
	I0920 18:21:09.897708   75577 logs.go:276] 0 containers: []
	W0920 18:21:09.897725   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:09.897731   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:09.897789   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:09.936799   75577 cri.go:89] found id: ""
	I0920 18:21:09.936826   75577 logs.go:276] 0 containers: []
	W0920 18:21:09.936836   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:09.936843   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:09.936904   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:09.970502   75577 cri.go:89] found id: ""
	I0920 18:21:09.970539   75577 logs.go:276] 0 containers: []
	W0920 18:21:09.970550   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:09.970560   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:09.970574   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:09.983676   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:09.983703   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:10.058844   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:10.058865   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:10.058874   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:10.136998   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:10.137040   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:10.178902   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:10.178933   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:12.729619   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:09.692361   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:12.192257   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:13.025403   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:15.025667   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:13.302087   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:15.802019   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:12.743816   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:12.743876   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:12.779407   75577 cri.go:89] found id: ""
	I0920 18:21:12.779431   75577 logs.go:276] 0 containers: []
	W0920 18:21:12.779439   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:12.779446   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:12.779503   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:12.820700   75577 cri.go:89] found id: ""
	I0920 18:21:12.820730   75577 logs.go:276] 0 containers: []
	W0920 18:21:12.820740   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:12.820748   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:12.820812   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:12.862179   75577 cri.go:89] found id: ""
	I0920 18:21:12.862210   75577 logs.go:276] 0 containers: []
	W0920 18:21:12.862221   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:12.862228   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:12.862284   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:12.908976   75577 cri.go:89] found id: ""
	I0920 18:21:12.908999   75577 logs.go:276] 0 containers: []
	W0920 18:21:12.909007   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:12.909013   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:12.909072   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:12.953653   75577 cri.go:89] found id: ""
	I0920 18:21:12.953688   75577 logs.go:276] 0 containers: []
	W0920 18:21:12.953696   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:12.953702   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:12.953757   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:12.998519   75577 cri.go:89] found id: ""
	I0920 18:21:12.998546   75577 logs.go:276] 0 containers: []
	W0920 18:21:12.998557   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:12.998564   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:12.998640   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:13.035518   75577 cri.go:89] found id: ""
	I0920 18:21:13.035541   75577 logs.go:276] 0 containers: []
	W0920 18:21:13.035549   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:13.035555   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:13.035609   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:13.073551   75577 cri.go:89] found id: ""
	I0920 18:21:13.073591   75577 logs.go:276] 0 containers: []
	W0920 18:21:13.073605   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:13.073617   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:13.073638   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:13.125118   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:13.125155   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:13.139384   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:13.139415   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:13.204980   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:13.205006   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:13.205021   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:13.289400   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:13.289446   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:15.829405   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:15.842291   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:15.842354   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:15.875886   75577 cri.go:89] found id: ""
	I0920 18:21:15.875921   75577 logs.go:276] 0 containers: []
	W0920 18:21:15.875930   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:15.875937   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:15.875990   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:15.911706   75577 cri.go:89] found id: ""
	I0920 18:21:15.911746   75577 logs.go:276] 0 containers: []
	W0920 18:21:15.911760   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:15.911768   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:15.911831   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:15.950117   75577 cri.go:89] found id: ""
	I0920 18:21:15.950150   75577 logs.go:276] 0 containers: []
	W0920 18:21:15.950161   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:15.950170   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:15.950243   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:15.986029   75577 cri.go:89] found id: ""
	I0920 18:21:15.986066   75577 logs.go:276] 0 containers: []
	W0920 18:21:15.986083   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:15.986091   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:15.986159   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:16.020307   75577 cri.go:89] found id: ""
	I0920 18:21:16.020335   75577 logs.go:276] 0 containers: []
	W0920 18:21:16.020346   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:16.020354   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:16.020412   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:16.054744   75577 cri.go:89] found id: ""
	I0920 18:21:16.054769   75577 logs.go:276] 0 containers: []
	W0920 18:21:16.054777   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:16.054782   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:16.054831   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:16.091756   75577 cri.go:89] found id: ""
	I0920 18:21:16.091789   75577 logs.go:276] 0 containers: []
	W0920 18:21:16.091800   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:16.091807   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:16.091868   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:16.127818   75577 cri.go:89] found id: ""
	I0920 18:21:16.127843   75577 logs.go:276] 0 containers: []
	W0920 18:21:16.127851   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:16.127861   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:16.127877   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:16.200114   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:16.200138   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:16.200149   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:16.279473   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:16.279508   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:16.319139   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:16.319171   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:16.370721   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:16.370769   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:14.192782   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:16.192866   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:18.691599   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:17.524026   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:19.524385   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:21.525244   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:18.301696   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:20.301993   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:18.884966   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:18.900270   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:18.900355   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:18.938541   75577 cri.go:89] found id: ""
	I0920 18:21:18.938582   75577 logs.go:276] 0 containers: []
	W0920 18:21:18.938594   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:18.938602   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:18.938673   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:18.972343   75577 cri.go:89] found id: ""
	I0920 18:21:18.972380   75577 logs.go:276] 0 containers: []
	W0920 18:21:18.972391   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:18.972400   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:18.972458   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:19.011996   75577 cri.go:89] found id: ""
	I0920 18:21:19.012037   75577 logs.go:276] 0 containers: []
	W0920 18:21:19.012048   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:19.012054   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:19.012105   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:19.053789   75577 cri.go:89] found id: ""
	I0920 18:21:19.053818   75577 logs.go:276] 0 containers: []
	W0920 18:21:19.053839   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:19.053849   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:19.053907   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:19.094830   75577 cri.go:89] found id: ""
	I0920 18:21:19.094862   75577 logs.go:276] 0 containers: []
	W0920 18:21:19.094872   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:19.094881   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:19.094941   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:19.133894   75577 cri.go:89] found id: ""
	I0920 18:21:19.133923   75577 logs.go:276] 0 containers: []
	W0920 18:21:19.133934   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:19.133943   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:19.134001   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:19.171640   75577 cri.go:89] found id: ""
	I0920 18:21:19.171662   75577 logs.go:276] 0 containers: []
	W0920 18:21:19.171670   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:19.171676   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:19.171730   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:19.207821   75577 cri.go:89] found id: ""
	I0920 18:21:19.207852   75577 logs.go:276] 0 containers: []
	W0920 18:21:19.207861   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:19.207869   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:19.207880   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:19.292486   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:19.292530   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:19.332664   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:19.332695   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:19.383793   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:19.383828   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:19.399234   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:19.399269   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:19.470636   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
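Every "describe nodes" attempt in this stretch fails the same way: kubectl cannot reach an apiserver on localhost:8443, which is consistent with the empty kube-apiserver container listings above. Here is a tiny Go probe that reproduces just that connectivity check; the address comes from the error text, and running it on the node itself is an assumption.

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// Dials the apiserver endpoint that kubectl fails to reach above.
// While no apiserver is running it should report "connection refused".
func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is open")
}
```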
	I0920 18:21:21.971772   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:21.985558   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:21.985630   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:22.018745   75577 cri.go:89] found id: ""
	I0920 18:21:22.018775   75577 logs.go:276] 0 containers: []
	W0920 18:21:22.018785   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:22.018795   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:22.018854   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:22.053586   75577 cri.go:89] found id: ""
	I0920 18:21:22.053617   75577 logs.go:276] 0 containers: []
	W0920 18:21:22.053627   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:22.053635   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:22.053697   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:22.088290   75577 cri.go:89] found id: ""
	I0920 18:21:22.088320   75577 logs.go:276] 0 containers: []
	W0920 18:21:22.088337   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:22.088344   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:22.088394   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:22.121026   75577 cri.go:89] found id: ""
	I0920 18:21:22.121050   75577 logs.go:276] 0 containers: []
	W0920 18:21:22.121057   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:22.121063   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:22.121117   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:22.153669   75577 cri.go:89] found id: ""
	I0920 18:21:22.153702   75577 logs.go:276] 0 containers: []
	W0920 18:21:22.153715   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:22.153725   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:22.153793   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:22.192179   75577 cri.go:89] found id: ""
	I0920 18:21:22.192208   75577 logs.go:276] 0 containers: []
	W0920 18:21:22.192218   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:22.192226   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:22.192294   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:22.226115   75577 cri.go:89] found id: ""
	I0920 18:21:22.226142   75577 logs.go:276] 0 containers: []
	W0920 18:21:22.226153   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:22.226161   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:22.226231   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:22.259032   75577 cri.go:89] found id: ""
	I0920 18:21:22.259059   75577 logs.go:276] 0 containers: []
	W0920 18:21:22.259070   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:22.259080   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:22.259094   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:22.308989   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:22.309020   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:22.322084   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:22.322113   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:22.397567   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:22.397587   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:22.397598   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:22.476551   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:22.476596   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:20.691837   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:23.191939   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:24.024785   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:26.024824   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:22.801102   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:24.801697   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:26.801789   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:25.017659   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:25.032637   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:25.032698   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:25.067671   75577 cri.go:89] found id: ""
	I0920 18:21:25.067701   75577 logs.go:276] 0 containers: []
	W0920 18:21:25.067711   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:25.067718   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:25.067774   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:25.109354   75577 cri.go:89] found id: ""
	I0920 18:21:25.109385   75577 logs.go:276] 0 containers: []
	W0920 18:21:25.109396   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:25.109403   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:25.109463   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:25.142888   75577 cri.go:89] found id: ""
	I0920 18:21:25.142924   75577 logs.go:276] 0 containers: []
	W0920 18:21:25.142935   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:25.142942   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:25.143004   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:25.179005   75577 cri.go:89] found id: ""
	I0920 18:21:25.179032   75577 logs.go:276] 0 containers: []
	W0920 18:21:25.179043   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:25.179050   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:25.179114   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:25.211639   75577 cri.go:89] found id: ""
	I0920 18:21:25.211662   75577 logs.go:276] 0 containers: []
	W0920 18:21:25.211669   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:25.211676   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:25.211729   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:25.246678   75577 cri.go:89] found id: ""
	I0920 18:21:25.246709   75577 logs.go:276] 0 containers: []
	W0920 18:21:25.246718   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:25.246724   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:25.246780   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:25.284187   75577 cri.go:89] found id: ""
	I0920 18:21:25.284216   75577 logs.go:276] 0 containers: []
	W0920 18:21:25.284240   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:25.284247   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:25.284309   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:25.318882   75577 cri.go:89] found id: ""
	I0920 18:21:25.318917   75577 logs.go:276] 0 containers: []
	W0920 18:21:25.318929   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:25.318940   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:25.318954   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:25.357017   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:25.357046   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:25.407584   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:25.407627   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:25.421405   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:25.421437   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:25.492849   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:25.492870   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:25.492881   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:25.192551   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:27.691730   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:28.025042   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:30.025806   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:29.301637   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:31.302080   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:28.076101   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:28.088741   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:28.088805   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:28.124889   75577 cri.go:89] found id: ""
	I0920 18:21:28.124925   75577 logs.go:276] 0 containers: []
	W0920 18:21:28.124944   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:28.124952   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:28.125013   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:28.158590   75577 cri.go:89] found id: ""
	I0920 18:21:28.158615   75577 logs.go:276] 0 containers: []
	W0920 18:21:28.158623   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:28.158629   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:28.158677   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:28.195636   75577 cri.go:89] found id: ""
	I0920 18:21:28.195670   75577 logs.go:276] 0 containers: []
	W0920 18:21:28.195683   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:28.195692   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:28.195762   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:28.228714   75577 cri.go:89] found id: ""
	I0920 18:21:28.228759   75577 logs.go:276] 0 containers: []
	W0920 18:21:28.228771   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:28.228780   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:28.228840   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:28.261488   75577 cri.go:89] found id: ""
	I0920 18:21:28.261519   75577 logs.go:276] 0 containers: []
	W0920 18:21:28.261531   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:28.261538   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:28.261606   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:28.293534   75577 cri.go:89] found id: ""
	I0920 18:21:28.293560   75577 logs.go:276] 0 containers: []
	W0920 18:21:28.293568   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:28.293573   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:28.293629   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:28.330095   75577 cri.go:89] found id: ""
	I0920 18:21:28.330120   75577 logs.go:276] 0 containers: []
	W0920 18:21:28.330128   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:28.330134   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:28.330191   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:28.365643   75577 cri.go:89] found id: ""
	I0920 18:21:28.365675   75577 logs.go:276] 0 containers: []
	W0920 18:21:28.365684   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:28.365693   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:28.365712   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:28.379982   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:28.380007   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:28.456355   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:28.456386   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:28.456402   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:28.538725   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:28.538759   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:28.576067   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:28.576104   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:31.129705   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:31.145814   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:31.145919   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:31.181469   75577 cri.go:89] found id: ""
	I0920 18:21:31.181497   75577 logs.go:276] 0 containers: []
	W0920 18:21:31.181507   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:31.181514   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:31.181562   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:31.216391   75577 cri.go:89] found id: ""
	I0920 18:21:31.216416   75577 logs.go:276] 0 containers: []
	W0920 18:21:31.216426   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:31.216433   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:31.216492   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:31.250236   75577 cri.go:89] found id: ""
	I0920 18:21:31.250266   75577 logs.go:276] 0 containers: []
	W0920 18:21:31.250277   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:31.250285   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:31.250357   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:31.283406   75577 cri.go:89] found id: ""
	I0920 18:21:31.283430   75577 logs.go:276] 0 containers: []
	W0920 18:21:31.283439   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:31.283446   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:31.283519   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:31.317276   75577 cri.go:89] found id: ""
	I0920 18:21:31.317308   75577 logs.go:276] 0 containers: []
	W0920 18:21:31.317327   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:31.317335   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:31.317400   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:31.351563   75577 cri.go:89] found id: ""
	I0920 18:21:31.351588   75577 logs.go:276] 0 containers: []
	W0920 18:21:31.351595   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:31.351600   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:31.351652   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:31.385932   75577 cri.go:89] found id: ""
	I0920 18:21:31.385972   75577 logs.go:276] 0 containers: []
	W0920 18:21:31.385984   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:31.385992   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:31.386056   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:31.422679   75577 cri.go:89] found id: ""
	I0920 18:21:31.422718   75577 logs.go:276] 0 containers: []
	W0920 18:21:31.422729   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:31.422740   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:31.422755   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:31.475827   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:31.475867   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:31.489324   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:31.489354   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:31.560357   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:31.560377   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:31.560390   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:31.642137   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:31.642171   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:30.191820   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:32.692178   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:32.524953   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:35.025736   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:33.801586   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:36.301145   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:34.179063   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:34.193077   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:34.193157   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:34.227451   75577 cri.go:89] found id: ""
	I0920 18:21:34.227484   75577 logs.go:276] 0 containers: []
	W0920 18:21:34.227495   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:34.227503   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:34.227571   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:34.263740   75577 cri.go:89] found id: ""
	I0920 18:21:34.263766   75577 logs.go:276] 0 containers: []
	W0920 18:21:34.263774   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:34.263779   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:34.263838   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:34.298094   75577 cri.go:89] found id: ""
	I0920 18:21:34.298121   75577 logs.go:276] 0 containers: []
	W0920 18:21:34.298132   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:34.298139   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:34.298202   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:34.334253   75577 cri.go:89] found id: ""
	I0920 18:21:34.334281   75577 logs.go:276] 0 containers: []
	W0920 18:21:34.334290   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:34.334297   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:34.334361   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:34.370656   75577 cri.go:89] found id: ""
	I0920 18:21:34.370688   75577 logs.go:276] 0 containers: []
	W0920 18:21:34.370699   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:34.370706   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:34.370759   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:34.405658   75577 cri.go:89] found id: ""
	I0920 18:21:34.405681   75577 logs.go:276] 0 containers: []
	W0920 18:21:34.405689   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:34.405695   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:34.405748   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:34.442284   75577 cri.go:89] found id: ""
	I0920 18:21:34.442311   75577 logs.go:276] 0 containers: []
	W0920 18:21:34.442319   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:34.442325   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:34.442394   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:34.477895   75577 cri.go:89] found id: ""
	I0920 18:21:34.477927   75577 logs.go:276] 0 containers: []
	W0920 18:21:34.477939   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:34.477950   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:34.477964   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:34.531225   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:34.531256   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:34.544325   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:34.544359   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:34.615914   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:34.615933   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:34.615948   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:34.692119   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:34.692160   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:37.228930   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:37.242070   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:37.242151   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:37.276489   75577 cri.go:89] found id: ""
	I0920 18:21:37.276531   75577 logs.go:276] 0 containers: []
	W0920 18:21:37.276543   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:37.276551   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:37.276617   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:37.311106   75577 cri.go:89] found id: ""
	I0920 18:21:37.311140   75577 logs.go:276] 0 containers: []
	W0920 18:21:37.311148   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:37.311156   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:37.311217   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:37.343147   75577 cri.go:89] found id: ""
	I0920 18:21:37.343173   75577 logs.go:276] 0 containers: []
	W0920 18:21:37.343181   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:37.343188   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:37.343237   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:37.378858   75577 cri.go:89] found id: ""
	I0920 18:21:37.378934   75577 logs.go:276] 0 containers: []
	W0920 18:21:37.378947   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:37.378955   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:37.379005   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:37.412321   75577 cri.go:89] found id: ""
	I0920 18:21:37.412355   75577 logs.go:276] 0 containers: []
	W0920 18:21:37.412366   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:37.412374   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:37.412443   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:37.447478   75577 cri.go:89] found id: ""
	I0920 18:21:37.447510   75577 logs.go:276] 0 containers: []
	W0920 18:21:37.447520   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:37.447526   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:37.447580   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:37.481172   75577 cri.go:89] found id: ""
	I0920 18:21:37.481201   75577 logs.go:276] 0 containers: []
	W0920 18:21:37.481209   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:37.481216   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:37.481269   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:37.518001   75577 cri.go:89] found id: ""
	I0920 18:21:37.518032   75577 logs.go:276] 0 containers: []
	W0920 18:21:37.518041   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:37.518050   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:37.518062   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:37.567675   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:37.567707   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:37.582279   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:37.582308   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:37.654514   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:37.654546   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:37.654563   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:37.735895   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:37.735929   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:35.192406   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:37.692460   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:37.524744   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:39.526068   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:42.024627   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:38.301432   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:40.302489   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:40.276416   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:40.291651   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:40.291713   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:40.328309   75577 cri.go:89] found id: ""
	I0920 18:21:40.328346   75577 logs.go:276] 0 containers: []
	W0920 18:21:40.328359   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:40.328378   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:40.328441   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:40.368126   75577 cri.go:89] found id: ""
	I0920 18:21:40.368154   75577 logs.go:276] 0 containers: []
	W0920 18:21:40.368162   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:40.368167   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:40.368217   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:40.408324   75577 cri.go:89] found id: ""
	I0920 18:21:40.408359   75577 logs.go:276] 0 containers: []
	W0920 18:21:40.408371   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:40.408380   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:40.408448   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:40.453867   75577 cri.go:89] found id: ""
	I0920 18:21:40.453892   75577 logs.go:276] 0 containers: []
	W0920 18:21:40.453900   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:40.453906   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:40.453969   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:40.500625   75577 cri.go:89] found id: ""
	I0920 18:21:40.500660   75577 logs.go:276] 0 containers: []
	W0920 18:21:40.500670   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:40.500678   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:40.500750   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:40.533998   75577 cri.go:89] found id: ""
	I0920 18:21:40.534028   75577 logs.go:276] 0 containers: []
	W0920 18:21:40.534039   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:40.534048   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:40.534111   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:40.566279   75577 cri.go:89] found id: ""
	I0920 18:21:40.566308   75577 logs.go:276] 0 containers: []
	W0920 18:21:40.566319   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:40.566326   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:40.566392   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:40.599154   75577 cri.go:89] found id: ""
	I0920 18:21:40.599179   75577 logs.go:276] 0 containers: []
	W0920 18:21:40.599186   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:40.599194   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:40.599210   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:40.668568   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:40.668596   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:40.668608   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:40.747895   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:40.747969   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:40.789568   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:40.789604   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:40.839816   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:40.839852   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:39.693537   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:42.192617   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:44.025059   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:46.524700   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:42.801135   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:44.802083   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:46.802537   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:43.354847   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:43.369077   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:43.369167   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:43.404667   75577 cri.go:89] found id: ""
	I0920 18:21:43.404698   75577 logs.go:276] 0 containers: []
	W0920 18:21:43.404710   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:43.404718   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:43.404778   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:43.440968   75577 cri.go:89] found id: ""
	I0920 18:21:43.441001   75577 logs.go:276] 0 containers: []
	W0920 18:21:43.441012   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:43.441021   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:43.441088   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:43.474481   75577 cri.go:89] found id: ""
	I0920 18:21:43.474511   75577 logs.go:276] 0 containers: []
	W0920 18:21:43.474520   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:43.474529   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:43.474592   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:43.508196   75577 cri.go:89] found id: ""
	I0920 18:21:43.508228   75577 logs.go:276] 0 containers: []
	W0920 18:21:43.508241   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:43.508248   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:43.508307   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:43.543679   75577 cri.go:89] found id: ""
	I0920 18:21:43.543710   75577 logs.go:276] 0 containers: []
	W0920 18:21:43.543721   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:43.543728   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:43.543788   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:43.577115   75577 cri.go:89] found id: ""
	I0920 18:21:43.577138   75577 logs.go:276] 0 containers: []
	W0920 18:21:43.577145   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:43.577152   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:43.577198   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:43.611552   75577 cri.go:89] found id: ""
	I0920 18:21:43.611588   75577 logs.go:276] 0 containers: []
	W0920 18:21:43.611602   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:43.611616   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:43.611685   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:43.647913   75577 cri.go:89] found id: ""
	I0920 18:21:43.647946   75577 logs.go:276] 0 containers: []
	W0920 18:21:43.647957   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:43.647969   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:43.647983   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:43.661014   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:43.661043   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:43.736584   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:43.736607   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:43.736620   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:43.814340   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:43.814380   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:43.855968   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:43.855996   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:46.410577   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:46.423733   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:46.423800   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:46.456819   75577 cri.go:89] found id: ""
	I0920 18:21:46.456847   75577 logs.go:276] 0 containers: []
	W0920 18:21:46.456855   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:46.456861   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:46.456927   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:46.493259   75577 cri.go:89] found id: ""
	I0920 18:21:46.493291   75577 logs.go:276] 0 containers: []
	W0920 18:21:46.493301   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:46.493307   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:46.493372   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:46.531200   75577 cri.go:89] found id: ""
	I0920 18:21:46.531233   75577 logs.go:276] 0 containers: []
	W0920 18:21:46.531241   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:46.531247   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:46.531294   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:46.567508   75577 cri.go:89] found id: ""
	I0920 18:21:46.567530   75577 logs.go:276] 0 containers: []
	W0920 18:21:46.567538   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:46.567544   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:46.567601   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:46.600257   75577 cri.go:89] found id: ""
	I0920 18:21:46.600290   75577 logs.go:276] 0 containers: []
	W0920 18:21:46.600303   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:46.600311   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:46.600375   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:46.635568   75577 cri.go:89] found id: ""
	I0920 18:21:46.635598   75577 logs.go:276] 0 containers: []
	W0920 18:21:46.635606   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:46.635612   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:46.635668   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:46.668257   75577 cri.go:89] found id: ""
	I0920 18:21:46.668304   75577 logs.go:276] 0 containers: []
	W0920 18:21:46.668316   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:46.668324   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:46.668392   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:46.703631   75577 cri.go:89] found id: ""
	I0920 18:21:46.703654   75577 logs.go:276] 0 containers: []
	W0920 18:21:46.703662   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:46.703671   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:46.703680   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:46.780232   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:46.780295   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:46.816623   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:46.816647   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:46.868911   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:46.868954   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:46.882716   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:46.882743   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:46.965166   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:44.692655   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:47.192969   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:48.524828   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:50.525045   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:49.301263   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:51.802343   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:49.466238   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:49.480167   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:49.480228   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:49.513507   75577 cri.go:89] found id: ""
	I0920 18:21:49.513537   75577 logs.go:276] 0 containers: []
	W0920 18:21:49.513549   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:49.513558   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:49.513620   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:49.548708   75577 cri.go:89] found id: ""
	I0920 18:21:49.548739   75577 logs.go:276] 0 containers: []
	W0920 18:21:49.548750   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:49.548758   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:49.548810   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:49.583586   75577 cri.go:89] found id: ""
	I0920 18:21:49.583614   75577 logs.go:276] 0 containers: []
	W0920 18:21:49.583625   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:49.583633   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:49.583695   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:49.617574   75577 cri.go:89] found id: ""
	I0920 18:21:49.617605   75577 logs.go:276] 0 containers: []
	W0920 18:21:49.617616   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:49.617623   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:49.617684   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:49.656517   75577 cri.go:89] found id: ""
	I0920 18:21:49.656552   75577 logs.go:276] 0 containers: []
	W0920 18:21:49.656563   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:49.656571   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:49.656634   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:49.692848   75577 cri.go:89] found id: ""
	I0920 18:21:49.692870   75577 logs.go:276] 0 containers: []
	W0920 18:21:49.692877   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:49.692883   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:49.692950   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:49.728593   75577 cri.go:89] found id: ""
	I0920 18:21:49.728620   75577 logs.go:276] 0 containers: []
	W0920 18:21:49.728630   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:49.728643   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:49.728705   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:49.764187   75577 cri.go:89] found id: ""
	I0920 18:21:49.764218   75577 logs.go:276] 0 containers: []
	W0920 18:21:49.764229   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:49.764242   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:49.764260   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:49.837741   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:49.837764   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:49.837777   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:49.914941   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:49.914978   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:49.957609   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:49.957639   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:50.012075   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:50.012115   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:52.527722   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:52.542125   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:52.542206   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:52.579884   75577 cri.go:89] found id: ""
	I0920 18:21:52.579910   75577 logs.go:276] 0 containers: []
	W0920 18:21:52.579919   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:52.579924   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:52.579982   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:52.619138   75577 cri.go:89] found id: ""
	I0920 18:21:52.619167   75577 logs.go:276] 0 containers: []
	W0920 18:21:52.619180   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:52.619188   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:52.619246   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:52.654460   75577 cri.go:89] found id: ""
	I0920 18:21:52.654498   75577 logs.go:276] 0 containers: []
	W0920 18:21:52.654508   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:52.654515   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:52.654578   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:52.708015   75577 cri.go:89] found id: ""
	I0920 18:21:52.708042   75577 logs.go:276] 0 containers: []
	W0920 18:21:52.708051   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:52.708057   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:52.708127   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:49.692309   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:52.192466   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:53.025700   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:55.026088   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:53.802620   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:55.802856   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:52.743917   75577 cri.go:89] found id: ""
	I0920 18:21:52.743952   75577 logs.go:276] 0 containers: []
	W0920 18:21:52.743964   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:52.743972   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:52.744025   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:52.783458   75577 cri.go:89] found id: ""
	I0920 18:21:52.783481   75577 logs.go:276] 0 containers: []
	W0920 18:21:52.783488   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:52.783495   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:52.783552   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:52.817716   75577 cri.go:89] found id: ""
	I0920 18:21:52.817749   75577 logs.go:276] 0 containers: []
	W0920 18:21:52.817762   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:52.817771   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:52.817882   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:52.857141   75577 cri.go:89] found id: ""
	I0920 18:21:52.857169   75577 logs.go:276] 0 containers: []
	W0920 18:21:52.857180   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:52.857190   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:52.857204   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:52.910555   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:52.910597   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:52.923843   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:52.923873   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:52.994263   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:52.994296   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:52.994313   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:53.079782   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:53.079829   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:55.619418   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:55.633854   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:55.633922   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:55.669778   75577 cri.go:89] found id: ""
	I0920 18:21:55.669813   75577 logs.go:276] 0 containers: []
	W0920 18:21:55.669824   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:55.669853   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:55.669920   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:55.705687   75577 cri.go:89] found id: ""
	I0920 18:21:55.705724   75577 logs.go:276] 0 containers: []
	W0920 18:21:55.705736   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:55.705746   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:55.705811   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:55.739249   75577 cri.go:89] found id: ""
	I0920 18:21:55.739288   75577 logs.go:276] 0 containers: []
	W0920 18:21:55.739296   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:55.739302   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:55.739438   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:55.775077   75577 cri.go:89] found id: ""
	I0920 18:21:55.775109   75577 logs.go:276] 0 containers: []
	W0920 18:21:55.775121   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:55.775130   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:55.775181   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:55.810291   75577 cri.go:89] found id: ""
	I0920 18:21:55.810329   75577 logs.go:276] 0 containers: []
	W0920 18:21:55.810340   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:55.810349   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:55.810415   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:55.844440   75577 cri.go:89] found id: ""
	I0920 18:21:55.844468   75577 logs.go:276] 0 containers: []
	W0920 18:21:55.844476   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:55.844482   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:55.844551   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:55.878611   75577 cri.go:89] found id: ""
	I0920 18:21:55.878647   75577 logs.go:276] 0 containers: []
	W0920 18:21:55.878659   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:55.878667   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:55.878733   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:55.922064   75577 cri.go:89] found id: ""
	I0920 18:21:55.922101   75577 logs.go:276] 0 containers: []
	W0920 18:21:55.922114   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:55.922127   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:55.922147   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:55.979406   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:55.979445   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:55.993082   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:55.993111   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:56.065650   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:56.065701   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:56.065717   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:56.142640   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:56.142684   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:54.691482   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:56.691912   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:57.525280   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:00.025224   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:02.025722   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:58.300359   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:00.301252   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:02.302209   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:58.683524   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:58.698775   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:58.698833   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:58.737369   75577 cri.go:89] found id: ""
	I0920 18:21:58.737398   75577 logs.go:276] 0 containers: []
	W0920 18:21:58.737409   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:58.737417   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:58.737476   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:58.779504   75577 cri.go:89] found id: ""
	I0920 18:21:58.779533   75577 logs.go:276] 0 containers: []
	W0920 18:21:58.779544   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:58.779552   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:58.779610   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:58.814363   75577 cri.go:89] found id: ""
	I0920 18:21:58.814393   75577 logs.go:276] 0 containers: []
	W0920 18:21:58.814401   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:58.814407   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:58.814454   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:58.846219   75577 cri.go:89] found id: ""
	I0920 18:21:58.846242   75577 logs.go:276] 0 containers: []
	W0920 18:21:58.846251   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:58.846256   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:58.846307   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:58.882387   75577 cri.go:89] found id: ""
	I0920 18:21:58.882423   75577 logs.go:276] 0 containers: []
	W0920 18:21:58.882431   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:58.882437   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:58.882497   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:58.918632   75577 cri.go:89] found id: ""
	I0920 18:21:58.918668   75577 logs.go:276] 0 containers: []
	W0920 18:21:58.918679   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:58.918686   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:58.918751   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:58.953521   75577 cri.go:89] found id: ""
	I0920 18:21:58.953547   75577 logs.go:276] 0 containers: []
	W0920 18:21:58.953557   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:58.953564   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:58.953624   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:58.987412   75577 cri.go:89] found id: ""
	I0920 18:21:58.987439   75577 logs.go:276] 0 containers: []
	W0920 18:21:58.987447   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:58.987457   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:58.987471   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:59.077108   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:59.077169   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:59.141723   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:59.141758   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:59.208035   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:59.208081   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:59.221380   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:59.221404   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:59.297151   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:22:01.797684   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:22:01.812275   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:22:01.812338   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:22:01.846431   75577 cri.go:89] found id: ""
	I0920 18:22:01.846459   75577 logs.go:276] 0 containers: []
	W0920 18:22:01.846477   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:22:01.846483   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:22:01.846535   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:22:01.889016   75577 cri.go:89] found id: ""
	I0920 18:22:01.889049   75577 logs.go:276] 0 containers: []
	W0920 18:22:01.889061   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:22:01.889069   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:22:01.889144   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:22:01.926048   75577 cri.go:89] found id: ""
	I0920 18:22:01.926078   75577 logs.go:276] 0 containers: []
	W0920 18:22:01.926090   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:22:01.926108   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:22:01.926185   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:22:01.964510   75577 cri.go:89] found id: ""
	I0920 18:22:01.964536   75577 logs.go:276] 0 containers: []
	W0920 18:22:01.964547   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:22:01.964555   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:22:01.964624   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:22:02.005549   75577 cri.go:89] found id: ""
	I0920 18:22:02.005575   75577 logs.go:276] 0 containers: []
	W0920 18:22:02.005583   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:22:02.005588   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:22:02.005642   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:22:02.047359   75577 cri.go:89] found id: ""
	I0920 18:22:02.047385   75577 logs.go:276] 0 containers: []
	W0920 18:22:02.047393   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:22:02.047399   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:22:02.047455   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:22:02.082961   75577 cri.go:89] found id: ""
	I0920 18:22:02.082999   75577 logs.go:276] 0 containers: []
	W0920 18:22:02.083008   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:22:02.083015   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:22:02.083062   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:22:02.117696   75577 cri.go:89] found id: ""
	I0920 18:22:02.117732   75577 logs.go:276] 0 containers: []
	W0920 18:22:02.117741   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:22:02.117753   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:22:02.117769   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:22:02.131282   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:22:02.131311   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:22:02.200738   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:22:02.200760   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:22:02.200776   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:22:02.281162   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:22:02.281200   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:22:02.321854   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:22:02.321888   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:59.191434   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:01.192475   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:01.685900   75086 pod_ready.go:82] duration metric: took 4m0.000752648s for pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace to be "Ready" ...
	E0920 18:22:01.685945   75086 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace to be "Ready" (will not retry!)
	I0920 18:22:01.685972   75086 pod_ready.go:39] duration metric: took 4m13.051752581s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:22:01.686033   75086 kubeadm.go:597] duration metric: took 4m20.498687791s to restartPrimaryControlPlane
	W0920 18:22:01.686114   75086 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0920 18:22:01.686150   75086 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0920 18:22:04.026729   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:06.524404   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:04.803249   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:07.301696   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:04.874353   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:22:04.889710   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:22:04.889773   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:22:04.945719   75577 cri.go:89] found id: ""
	I0920 18:22:04.945745   75577 logs.go:276] 0 containers: []
	W0920 18:22:04.945753   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:22:04.945759   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:22:04.945802   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:22:04.995492   75577 cri.go:89] found id: ""
	I0920 18:22:04.995535   75577 logs.go:276] 0 containers: []
	W0920 18:22:04.995547   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:22:04.995555   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:22:04.995615   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:22:05.037827   75577 cri.go:89] found id: ""
	I0920 18:22:05.037871   75577 logs.go:276] 0 containers: []
	W0920 18:22:05.037882   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:22:05.037888   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:22:05.037935   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:22:05.077652   75577 cri.go:89] found id: ""
	I0920 18:22:05.077691   75577 logs.go:276] 0 containers: []
	W0920 18:22:05.077704   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:22:05.077712   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:22:05.077772   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:22:05.114472   75577 cri.go:89] found id: ""
	I0920 18:22:05.114509   75577 logs.go:276] 0 containers: []
	W0920 18:22:05.114520   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:22:05.114527   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:22:05.114590   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:22:05.150809   75577 cri.go:89] found id: ""
	I0920 18:22:05.150841   75577 logs.go:276] 0 containers: []
	W0920 18:22:05.150853   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:22:05.150860   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:22:05.150908   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:22:05.188880   75577 cri.go:89] found id: ""
	I0920 18:22:05.188910   75577 logs.go:276] 0 containers: []
	W0920 18:22:05.188921   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:22:05.188929   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:22:05.188990   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:22:05.224990   75577 cri.go:89] found id: ""
	I0920 18:22:05.225015   75577 logs.go:276] 0 containers: []
	W0920 18:22:05.225023   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:22:05.225032   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:22:05.225043   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:22:05.298000   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:22:05.298023   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:22:05.298037   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:22:05.382969   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:22:05.383008   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:22:05.424911   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:22:05.424940   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:22:05.475988   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:22:05.476024   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:22:08.525098   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:10.525321   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:09.801936   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:12.301073   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:07.990660   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:22:08.005572   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:22:08.005637   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:22:08.044354   75577 cri.go:89] found id: ""
	I0920 18:22:08.044381   75577 logs.go:276] 0 containers: []
	W0920 18:22:08.044390   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:22:08.044401   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:22:08.044449   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:22:08.082899   75577 cri.go:89] found id: ""
	I0920 18:22:08.082928   75577 logs.go:276] 0 containers: []
	W0920 18:22:08.082939   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:22:08.082948   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:22:08.083009   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:22:08.129297   75577 cri.go:89] found id: ""
	I0920 18:22:08.129325   75577 logs.go:276] 0 containers: []
	W0920 18:22:08.129335   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:22:08.129343   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:22:08.129404   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:22:08.168744   75577 cri.go:89] found id: ""
	I0920 18:22:08.168774   75577 logs.go:276] 0 containers: []
	W0920 18:22:08.168787   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:22:08.168792   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:22:08.168849   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:22:08.203830   75577 cri.go:89] found id: ""
	I0920 18:22:08.203860   75577 logs.go:276] 0 containers: []
	W0920 18:22:08.203871   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:22:08.203879   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:22:08.203940   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:22:08.238226   75577 cri.go:89] found id: ""
	I0920 18:22:08.238250   75577 logs.go:276] 0 containers: []
	W0920 18:22:08.238258   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:22:08.238263   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:22:08.238331   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:22:08.271457   75577 cri.go:89] found id: ""
	I0920 18:22:08.271485   75577 logs.go:276] 0 containers: []
	W0920 18:22:08.271495   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:22:08.271502   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:22:08.271563   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:22:08.306661   75577 cri.go:89] found id: ""
	I0920 18:22:08.306692   75577 logs.go:276] 0 containers: []
	W0920 18:22:08.306703   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:22:08.306712   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:22:08.306730   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:22:08.357760   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:22:08.357793   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:22:08.372625   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:22:08.372660   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:22:08.442477   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:22:08.442501   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:22:08.442517   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:22:08.528381   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:22:08.528412   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:22:11.064842   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:22:11.078086   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:22:11.078158   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:22:11.113454   75577 cri.go:89] found id: ""
	I0920 18:22:11.113485   75577 logs.go:276] 0 containers: []
	W0920 18:22:11.113497   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:22:11.113511   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:22:11.113575   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:22:11.148039   75577 cri.go:89] found id: ""
	I0920 18:22:11.148064   75577 logs.go:276] 0 containers: []
	W0920 18:22:11.148072   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:22:11.148078   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:22:11.148127   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:22:11.182667   75577 cri.go:89] found id: ""
	I0920 18:22:11.182697   75577 logs.go:276] 0 containers: []
	W0920 18:22:11.182708   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:22:11.182715   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:22:11.182776   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:22:11.217022   75577 cri.go:89] found id: ""
	I0920 18:22:11.217055   75577 logs.go:276] 0 containers: []
	W0920 18:22:11.217067   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:22:11.217075   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:22:11.217141   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:22:11.253970   75577 cri.go:89] found id: ""
	I0920 18:22:11.254001   75577 logs.go:276] 0 containers: []
	W0920 18:22:11.254012   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:22:11.254019   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:22:11.254085   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:22:11.288070   75577 cri.go:89] found id: ""
	I0920 18:22:11.288103   75577 logs.go:276] 0 containers: []
	W0920 18:22:11.288114   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:22:11.288121   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:22:11.288189   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:22:11.324215   75577 cri.go:89] found id: ""
	I0920 18:22:11.324240   75577 logs.go:276] 0 containers: []
	W0920 18:22:11.324254   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:22:11.324261   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:22:11.324319   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:22:11.359377   75577 cri.go:89] found id: ""
	I0920 18:22:11.359406   75577 logs.go:276] 0 containers: []
	W0920 18:22:11.359414   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:22:11.359423   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:22:11.359433   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:22:11.416479   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:22:11.416520   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:22:11.430240   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:22:11.430269   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:22:11.501531   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:22:11.501553   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:22:11.501565   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:22:11.580748   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:22:11.580787   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:22:13.024330   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:15.025065   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:17.026642   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:14.301652   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:16.802017   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:14.119084   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:22:14.132428   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:22:14.132505   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:22:14.167674   75577 cri.go:89] found id: ""
	I0920 18:22:14.167699   75577 logs.go:276] 0 containers: []
	W0920 18:22:14.167710   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:22:14.167717   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:22:14.167772   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:22:14.204570   75577 cri.go:89] found id: ""
	I0920 18:22:14.204595   75577 logs.go:276] 0 containers: []
	W0920 18:22:14.204603   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:22:14.204608   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:22:14.204655   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:22:14.241687   75577 cri.go:89] found id: ""
	I0920 18:22:14.241716   75577 logs.go:276] 0 containers: []
	W0920 18:22:14.241724   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:22:14.241731   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:22:14.241789   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:22:14.275776   75577 cri.go:89] found id: ""
	I0920 18:22:14.275802   75577 logs.go:276] 0 containers: []
	W0920 18:22:14.275810   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:22:14.275816   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:22:14.275872   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:22:14.309565   75577 cri.go:89] found id: ""
	I0920 18:22:14.309589   75577 logs.go:276] 0 containers: []
	W0920 18:22:14.309596   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:22:14.309602   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:22:14.309662   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:22:14.341858   75577 cri.go:89] found id: ""
	I0920 18:22:14.341884   75577 logs.go:276] 0 containers: []
	W0920 18:22:14.341892   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:22:14.341898   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:22:14.341963   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:22:14.377863   75577 cri.go:89] found id: ""
	I0920 18:22:14.377896   75577 logs.go:276] 0 containers: []
	W0920 18:22:14.377906   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:22:14.377912   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:22:14.377988   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:22:14.413182   75577 cri.go:89] found id: ""
	I0920 18:22:14.413207   75577 logs.go:276] 0 containers: []
	W0920 18:22:14.413214   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:22:14.413236   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:22:14.413253   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:22:14.466782   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:22:14.466820   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:22:14.480627   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:22:14.480668   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:22:14.552071   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:22:14.552104   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:22:14.552121   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:22:14.626481   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:22:14.626518   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:22:17.163264   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:22:17.181609   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:22:17.181684   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:22:17.219871   75577 cri.go:89] found id: ""
	I0920 18:22:17.219899   75577 logs.go:276] 0 containers: []
	W0920 18:22:17.219923   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:22:17.219932   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:22:17.220013   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:22:17.256315   75577 cri.go:89] found id: ""
	I0920 18:22:17.256346   75577 logs.go:276] 0 containers: []
	W0920 18:22:17.256356   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:22:17.256364   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:22:17.256434   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:22:17.294326   75577 cri.go:89] found id: ""
	I0920 18:22:17.294352   75577 logs.go:276] 0 containers: []
	W0920 18:22:17.294360   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:22:17.294366   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:22:17.294421   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:22:17.336461   75577 cri.go:89] found id: ""
	I0920 18:22:17.336491   75577 logs.go:276] 0 containers: []
	W0920 18:22:17.336500   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:22:17.336506   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:22:17.336568   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:22:17.376242   75577 cri.go:89] found id: ""
	I0920 18:22:17.376283   75577 logs.go:276] 0 containers: []
	W0920 18:22:17.376295   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:22:17.376302   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:22:17.376363   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:22:17.409147   75577 cri.go:89] found id: ""
	I0920 18:22:17.409180   75577 logs.go:276] 0 containers: []
	W0920 18:22:17.409190   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:22:17.409198   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:22:17.409269   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:22:17.444698   75577 cri.go:89] found id: ""
	I0920 18:22:17.444725   75577 logs.go:276] 0 containers: []
	W0920 18:22:17.444734   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:22:17.444738   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:22:17.444791   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:22:17.485914   75577 cri.go:89] found id: ""
	I0920 18:22:17.485948   75577 logs.go:276] 0 containers: []
	W0920 18:22:17.485959   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:22:17.485970   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:22:17.485983   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:22:17.523567   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:22:17.523601   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:22:17.588864   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:22:17.588905   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:22:17.605052   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:22:17.605092   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:22:17.677012   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:22:17.677039   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:22:17.677056   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:22:19.525154   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:22.025422   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:18.802052   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:20.804024   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:20.252258   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:22:20.267607   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:22:20.267698   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:22:20.302260   75577 cri.go:89] found id: ""
	I0920 18:22:20.302291   75577 logs.go:276] 0 containers: []
	W0920 18:22:20.302301   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:22:20.302309   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:22:20.302373   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:22:20.335343   75577 cri.go:89] found id: ""
	I0920 18:22:20.335377   75577 logs.go:276] 0 containers: []
	W0920 18:22:20.335389   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:22:20.335397   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:22:20.335460   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:22:20.369612   75577 cri.go:89] found id: ""
	I0920 18:22:20.369641   75577 logs.go:276] 0 containers: []
	W0920 18:22:20.369649   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:22:20.369655   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:22:20.369703   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:22:20.403703   75577 cri.go:89] found id: ""
	I0920 18:22:20.403732   75577 logs.go:276] 0 containers: []
	W0920 18:22:20.403740   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:22:20.403746   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:22:20.403804   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:22:20.436288   75577 cri.go:89] found id: ""
	I0920 18:22:20.436316   75577 logs.go:276] 0 containers: []
	W0920 18:22:20.436328   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:22:20.436336   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:22:20.436399   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:22:20.471536   75577 cri.go:89] found id: ""
	I0920 18:22:20.471572   75577 logs.go:276] 0 containers: []
	W0920 18:22:20.471584   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:22:20.471593   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:22:20.471657   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:22:20.506012   75577 cri.go:89] found id: ""
	I0920 18:22:20.506107   75577 logs.go:276] 0 containers: []
	W0920 18:22:20.506134   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:22:20.506143   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:22:20.506199   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:22:20.542614   75577 cri.go:89] found id: ""
	I0920 18:22:20.542650   75577 logs.go:276] 0 containers: []
	W0920 18:22:20.542660   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:22:20.542671   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:22:20.542687   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:22:20.596316   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:22:20.596357   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:22:20.610438   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:22:20.610465   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:22:20.688061   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:22:20.688079   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:22:20.688091   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:22:20.768249   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:22:20.768296   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:22:24.525259   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:25.524857   75264 pod_ready.go:82] duration metric: took 4m0.006563805s for pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace to be "Ready" ...
	E0920 18:22:25.524883   75264 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0920 18:22:25.524891   75264 pod_ready.go:39] duration metric: took 4m8.542422056s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:22:25.524906   75264 api_server.go:52] waiting for apiserver process to appear ...
	I0920 18:22:25.524977   75264 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:22:25.525029   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:22:25.579894   75264 cri.go:89] found id: "0ea0cfbd9902ae8e3797c77592bfe05a5f8162f75a391f5c2dc992eb5154c4ad"
	I0920 18:22:25.579914   75264 cri.go:89] found id: ""
	I0920 18:22:25.579923   75264 logs.go:276] 1 containers: [0ea0cfbd9902ae8e3797c77592bfe05a5f8162f75a391f5c2dc992eb5154c4ad]
	I0920 18:22:25.579979   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:25.585095   75264 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:22:25.585176   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:22:25.621769   75264 cri.go:89] found id: "65da0bae1c849a395cc241962bb6cbdf9f3edf11cdc95f4495ac2e427f9602d4"
	I0920 18:22:25.621796   75264 cri.go:89] found id: ""
	I0920 18:22:25.621805   75264 logs.go:276] 1 containers: [65da0bae1c849a395cc241962bb6cbdf9f3edf11cdc95f4495ac2e427f9602d4]
	I0920 18:22:25.621881   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:25.626318   75264 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:22:25.626400   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:22:25.662732   75264 cri.go:89] found id: "606f7c8a9a095f499ae422de76f76105607db72e57b9b0a6db3e3c5a81656c5c"
	I0920 18:22:25.662753   75264 cri.go:89] found id: ""
	I0920 18:22:25.662760   75264 logs.go:276] 1 containers: [606f7c8a9a095f499ae422de76f76105607db72e57b9b0a6db3e3c5a81656c5c]
	I0920 18:22:25.662818   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:25.667212   75264 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:22:25.667299   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:22:25.703639   75264 cri.go:89] found id: "6ba313deffc6185e59dac50051e933f6b2ea70d630cf36b2b8c2c07042f793ba"
	I0920 18:22:25.703663   75264 cri.go:89] found id: ""
	I0920 18:22:25.703670   75264 logs.go:276] 1 containers: [6ba313deffc6185e59dac50051e933f6b2ea70d630cf36b2b8c2c07042f793ba]
	I0920 18:22:25.703721   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:25.708034   75264 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:22:25.708115   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:22:25.745358   75264 cri.go:89] found id: "702f7f440eb6016c2c758a6be4e1072274ab6dcd169d7a081e5464869a7c299c"
	I0920 18:22:25.745387   75264 cri.go:89] found id: ""
	I0920 18:22:25.745397   75264 logs.go:276] 1 containers: [702f7f440eb6016c2c758a6be4e1072274ab6dcd169d7a081e5464869a7c299c]
	I0920 18:22:25.745455   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:25.749702   75264 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:22:25.749787   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:22:25.790592   75264 cri.go:89] found id: "4ca303b795795bd920f261c88630f30fd51a91b9694a9a6e67af8164bab8242b"
	I0920 18:22:25.790615   75264 cri.go:89] found id: ""
	I0920 18:22:25.790623   75264 logs.go:276] 1 containers: [4ca303b795795bd920f261c88630f30fd51a91b9694a9a6e67af8164bab8242b]
	I0920 18:22:25.790686   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:25.794993   75264 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:22:25.795062   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:22:25.836621   75264 cri.go:89] found id: ""
	I0920 18:22:25.836646   75264 logs.go:276] 0 containers: []
	W0920 18:22:25.836654   75264 logs.go:278] No container was found matching "kindnet"
	I0920 18:22:25.836661   75264 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0920 18:22:25.836723   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0920 18:22:25.874191   75264 cri.go:89] found id: "001bdc98537f0917f021c14a5d0733bd31cf19be1b4862502458a7a90e4300da"
	I0920 18:22:25.874215   75264 cri.go:89] found id: "c42201b6e3d55b6fd8beaf04d416658b228bf627e519ddb022dad6d17219cc1c"
	I0920 18:22:25.874220   75264 cri.go:89] found id: ""
	I0920 18:22:25.874229   75264 logs.go:276] 2 containers: [001bdc98537f0917f021c14a5d0733bd31cf19be1b4862502458a7a90e4300da c42201b6e3d55b6fd8beaf04d416658b228bf627e519ddb022dad6d17219cc1c]
	I0920 18:22:25.874316   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:25.878788   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:25.882840   75264 logs.go:123] Gathering logs for kube-proxy [702f7f440eb6016c2c758a6be4e1072274ab6dcd169d7a081e5464869a7c299c] ...
	I0920 18:22:25.882869   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 702f7f440eb6016c2c758a6be4e1072274ab6dcd169d7a081e5464869a7c299c"
	I0920 18:22:25.921609   75264 logs.go:123] Gathering logs for kube-controller-manager [4ca303b795795bd920f261c88630f30fd51a91b9694a9a6e67af8164bab8242b] ...
	I0920 18:22:25.921640   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ca303b795795bd920f261c88630f30fd51a91b9694a9a6e67af8164bab8242b"
	I0920 18:22:25.987199   75264 logs.go:123] Gathering logs for kubelet ...
	I0920 18:22:25.987236   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:22:26.061857   75264 logs.go:123] Gathering logs for kube-apiserver [0ea0cfbd9902ae8e3797c77592bfe05a5f8162f75a391f5c2dc992eb5154c4ad] ...
	I0920 18:22:26.061897   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ea0cfbd9902ae8e3797c77592bfe05a5f8162f75a391f5c2dc992eb5154c4ad"
	I0920 18:22:26.104901   75264 logs.go:123] Gathering logs for etcd [65da0bae1c849a395cc241962bb6cbdf9f3edf11cdc95f4495ac2e427f9602d4] ...
	I0920 18:22:26.104935   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 65da0bae1c849a395cc241962bb6cbdf9f3edf11cdc95f4495ac2e427f9602d4"
	I0920 18:22:26.160795   75264 logs.go:123] Gathering logs for kube-scheduler [6ba313deffc6185e59dac50051e933f6b2ea70d630cf36b2b8c2c07042f793ba] ...
	I0920 18:22:26.160833   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6ba313deffc6185e59dac50051e933f6b2ea70d630cf36b2b8c2c07042f793ba"
	I0920 18:22:26.199286   75264 logs.go:123] Gathering logs for storage-provisioner [001bdc98537f0917f021c14a5d0733bd31cf19be1b4862502458a7a90e4300da] ...
	I0920 18:22:26.199316   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 001bdc98537f0917f021c14a5d0733bd31cf19be1b4862502458a7a90e4300da"
	I0920 18:22:26.239560   75264 logs.go:123] Gathering logs for storage-provisioner [c42201b6e3d55b6fd8beaf04d416658b228bf627e519ddb022dad6d17219cc1c] ...
	I0920 18:22:26.239591   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c42201b6e3d55b6fd8beaf04d416658b228bf627e519ddb022dad6d17219cc1c"
	I0920 18:22:26.275424   75264 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:22:26.275450   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:22:26.786849   75264 logs.go:123] Gathering logs for container status ...
	I0920 18:22:26.786895   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:22:26.848751   75264 logs.go:123] Gathering logs for dmesg ...
	I0920 18:22:26.848783   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:22:26.867329   75264 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:22:26.867375   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 18:22:27.017385   75264 logs.go:123] Gathering logs for coredns [606f7c8a9a095f499ae422de76f76105607db72e57b9b0a6db3e3c5a81656c5c] ...
	I0920 18:22:27.017419   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 606f7c8a9a095f499ae422de76f76105607db72e57b9b0a6db3e3c5a81656c5c"
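The block above is minikube's standard diagnostics pass: for each control-plane component it lists matching container IDs with `crictl ps -a --quiet --name=<component>` and then tails each container's log with `crictl logs --tail 400 <id>`. Below is a rough standalone Go sketch of that loop, assuming crictl is available locally; minikube itself runs the same commands through ssh_runner, and gatherLogs is a hypothetical helper, not minikube's API.

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// gatherLogs lists all containers matching a component name and tails their logs,
// mirroring the "listing CRI containers" / "Gathering logs for ..." lines above.
func gatherLogs(component string) error {
	// Equivalent to: sudo crictl ps -a --quiet --name=<component>
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		return err
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		fmt.Printf("No container was found matching %q\n", component)
		return nil
	}
	for _, id := range ids {
		fmt.Printf("Gathering logs for %s [%s] ...\n", component, id)
		// Equivalent to: sudo /usr/bin/crictl logs --tail 400 <id>
		logs, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
		if err != nil {
			return err
		}
		fmt.Print(string(logs))
	}
	return nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "storage-provisioner"} {
		if err := gatherLogs(c); err != nil {
			fmt.Println("error gathering", c, ":", err)
		}
	}
}
```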
	I0920 18:22:23.300658   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:25.302131   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:27.303929   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:23.307149   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:22:23.319889   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:22:23.319972   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:22:23.356613   75577 cri.go:89] found id: ""
	I0920 18:22:23.356645   75577 logs.go:276] 0 containers: []
	W0920 18:22:23.356656   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:22:23.356663   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:22:23.356728   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:22:23.391632   75577 cri.go:89] found id: ""
	I0920 18:22:23.391663   75577 logs.go:276] 0 containers: []
	W0920 18:22:23.391675   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:22:23.391683   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:22:23.391748   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:22:23.426908   75577 cri.go:89] found id: ""
	I0920 18:22:23.426936   75577 logs.go:276] 0 containers: []
	W0920 18:22:23.426946   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:22:23.426952   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:22:23.427013   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:22:23.461890   75577 cri.go:89] found id: ""
	I0920 18:22:23.461925   75577 logs.go:276] 0 containers: []
	W0920 18:22:23.461938   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:22:23.461947   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:22:23.462014   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:22:23.503517   75577 cri.go:89] found id: ""
	I0920 18:22:23.503549   75577 logs.go:276] 0 containers: []
	W0920 18:22:23.503560   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:22:23.503566   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:22:23.503615   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:22:23.540699   75577 cri.go:89] found id: ""
	I0920 18:22:23.540722   75577 logs.go:276] 0 containers: []
	W0920 18:22:23.540731   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:22:23.540736   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:22:23.540783   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:22:23.575485   75577 cri.go:89] found id: ""
	I0920 18:22:23.575509   75577 logs.go:276] 0 containers: []
	W0920 18:22:23.575517   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:22:23.575523   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:22:23.575576   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:22:23.610998   75577 cri.go:89] found id: ""
	I0920 18:22:23.611028   75577 logs.go:276] 0 containers: []
	W0920 18:22:23.611039   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:22:23.611051   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:22:23.611065   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:22:23.687072   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:22:23.687109   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:22:23.724790   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:22:23.724824   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:22:23.778905   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:22:23.778945   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:22:23.794298   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:22:23.794329   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:22:23.873341   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:22:26.374541   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:22:26.388151   75577 kubeadm.go:597] duration metric: took 4m2.956358154s to restartPrimaryControlPlane
	W0920 18:22:26.388229   75577 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0920 18:22:26.388260   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0920 18:22:27.454521   75577 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.066231187s)
	I0920 18:22:27.454602   75577 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:22:27.469409   75577 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 18:22:27.480314   75577 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 18:22:27.491389   75577 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 18:22:27.491416   75577 kubeadm.go:157] found existing configuration files:
	
	I0920 18:22:27.491468   75577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 18:22:27.501448   75577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 18:22:27.501534   75577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 18:22:27.511370   75577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 18:22:27.521767   75577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 18:22:27.521855   75577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 18:22:27.531717   75577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 18:22:27.540517   75577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 18:22:27.540575   75577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 18:22:27.549644   75577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 18:22:27.558412   75577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 18:22:27.558480   75577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 18:22:27.567702   75577 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
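Before re-running `kubeadm init`, the log above shows the stale-config check: each kubeconfig under /etc/kubernetes is grepped for https://control-plane.minikube.internal:8443 and removed when that endpoint is not found (here the greps fail because the files do not exist at all). Below is a minimal standalone Go sketch of that check, assuming local file access rather than SSH; cleanStaleConfig is a hypothetical helper, not minikube's implementation.

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

const controlPlaneURL = "https://control-plane.minikube.internal:8443"

// cleanStaleConfig keeps a kubeconfig only if it already points at the expected
// control-plane endpoint; otherwise it is removed so kubeadm can regenerate it.
func cleanStaleConfig(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		if os.IsNotExist(err) {
			return nil // mirrors the "No such file or directory" case above
		}
		return err
	}
	if strings.Contains(string(data), controlPlaneURL) {
		return nil // config already targets the expected endpoint, keep it
	}
	return os.Remove(path) // stale config, drop it before "kubeadm init"
}

func main() {
	for _, conf := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		if err := cleanStaleConfig(conf); err != nil {
			fmt.Fprintln(os.Stderr, "cleanup failed:", conf, err)
		}
	}
}
```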
	I0920 18:22:28.070939   75086 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.384759009s)
	I0920 18:22:28.071025   75086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:22:28.087712   75086 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 18:22:28.097709   75086 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 18:22:28.107217   75086 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 18:22:28.107243   75086 kubeadm.go:157] found existing configuration files:
	
	I0920 18:22:28.107297   75086 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 18:22:28.116947   75086 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 18:22:28.117013   75086 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 18:22:28.126559   75086 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 18:22:28.135261   75086 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 18:22:28.135338   75086 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 18:22:28.144456   75086 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 18:22:28.153412   75086 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 18:22:28.153465   75086 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 18:22:28.165825   75086 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 18:22:28.175003   75086 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 18:22:28.175068   75086 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 18:22:28.184917   75086 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 18:22:28.236825   75086 kubeadm.go:310] W0920 18:22:28.216092    2540 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 18:22:28.237654   75086 kubeadm.go:310] W0920 18:22:28.216986    2540 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 18:22:28.352695   75086 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 18:22:29.557496   75264 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:22:29.578981   75264 api_server.go:72] duration metric: took 4m20.322735753s to wait for apiserver process to appear ...
	I0920 18:22:29.579009   75264 api_server.go:88] waiting for apiserver healthz status ...
	I0920 18:22:29.579044   75264 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:22:29.579093   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:22:29.616252   75264 cri.go:89] found id: "0ea0cfbd9902ae8e3797c77592bfe05a5f8162f75a391f5c2dc992eb5154c4ad"
	I0920 18:22:29.616281   75264 cri.go:89] found id: ""
	I0920 18:22:29.616292   75264 logs.go:276] 1 containers: [0ea0cfbd9902ae8e3797c77592bfe05a5f8162f75a391f5c2dc992eb5154c4ad]
	I0920 18:22:29.616345   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:29.620605   75264 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:22:29.620678   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:22:29.658077   75264 cri.go:89] found id: "65da0bae1c849a395cc241962bb6cbdf9f3edf11cdc95f4495ac2e427f9602d4"
	I0920 18:22:29.658106   75264 cri.go:89] found id: ""
	I0920 18:22:29.658114   75264 logs.go:276] 1 containers: [65da0bae1c849a395cc241962bb6cbdf9f3edf11cdc95f4495ac2e427f9602d4]
	I0920 18:22:29.658170   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:29.662227   75264 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:22:29.662288   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:22:29.695906   75264 cri.go:89] found id: "606f7c8a9a095f499ae422de76f76105607db72e57b9b0a6db3e3c5a81656c5c"
	I0920 18:22:29.695927   75264 cri.go:89] found id: ""
	I0920 18:22:29.695934   75264 logs.go:276] 1 containers: [606f7c8a9a095f499ae422de76f76105607db72e57b9b0a6db3e3c5a81656c5c]
	I0920 18:22:29.695979   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:29.699986   75264 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:22:29.700083   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:22:29.736560   75264 cri.go:89] found id: "6ba313deffc6185e59dac50051e933f6b2ea70d630cf36b2b8c2c07042f793ba"
	I0920 18:22:29.736589   75264 cri.go:89] found id: ""
	I0920 18:22:29.736600   75264 logs.go:276] 1 containers: [6ba313deffc6185e59dac50051e933f6b2ea70d630cf36b2b8c2c07042f793ba]
	I0920 18:22:29.736660   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:29.740751   75264 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:22:29.740803   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:22:29.778930   75264 cri.go:89] found id: "702f7f440eb6016c2c758a6be4e1072274ab6dcd169d7a081e5464869a7c299c"
	I0920 18:22:29.778952   75264 cri.go:89] found id: ""
	I0920 18:22:29.778959   75264 logs.go:276] 1 containers: [702f7f440eb6016c2c758a6be4e1072274ab6dcd169d7a081e5464869a7c299c]
	I0920 18:22:29.779011   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:29.783022   75264 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:22:29.783092   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:22:29.829491   75264 cri.go:89] found id: "4ca303b795795bd920f261c88630f30fd51a91b9694a9a6e67af8164bab8242b"
	I0920 18:22:29.829521   75264 cri.go:89] found id: ""
	I0920 18:22:29.829531   75264 logs.go:276] 1 containers: [4ca303b795795bd920f261c88630f30fd51a91b9694a9a6e67af8164bab8242b]
	I0920 18:22:29.829597   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:29.833853   75264 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:22:29.833924   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:22:29.872376   75264 cri.go:89] found id: ""
	I0920 18:22:29.872404   75264 logs.go:276] 0 containers: []
	W0920 18:22:29.872412   75264 logs.go:278] No container was found matching "kindnet"
	I0920 18:22:29.872419   75264 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0920 18:22:29.872482   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0920 18:22:29.923096   75264 cri.go:89] found id: "001bdc98537f0917f021c14a5d0733bd31cf19be1b4862502458a7a90e4300da"
	I0920 18:22:29.923136   75264 cri.go:89] found id: "c42201b6e3d55b6fd8beaf04d416658b228bf627e519ddb022dad6d17219cc1c"
	I0920 18:22:29.923143   75264 cri.go:89] found id: ""
	I0920 18:22:29.923152   75264 logs.go:276] 2 containers: [001bdc98537f0917f021c14a5d0733bd31cf19be1b4862502458a7a90e4300da c42201b6e3d55b6fd8beaf04d416658b228bf627e519ddb022dad6d17219cc1c]
	I0920 18:22:29.923226   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:29.930023   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:29.935091   75264 logs.go:123] Gathering logs for coredns [606f7c8a9a095f499ae422de76f76105607db72e57b9b0a6db3e3c5a81656c5c] ...
	I0920 18:22:29.935119   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 606f7c8a9a095f499ae422de76f76105607db72e57b9b0a6db3e3c5a81656c5c"
	I0920 18:22:29.974064   75264 logs.go:123] Gathering logs for kube-scheduler [6ba313deffc6185e59dac50051e933f6b2ea70d630cf36b2b8c2c07042f793ba] ...
	I0920 18:22:29.974101   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6ba313deffc6185e59dac50051e933f6b2ea70d630cf36b2b8c2c07042f793ba"
	I0920 18:22:30.018770   75264 logs.go:123] Gathering logs for kube-controller-manager [4ca303b795795bd920f261c88630f30fd51a91b9694a9a6e67af8164bab8242b] ...
	I0920 18:22:30.018805   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ca303b795795bd920f261c88630f30fd51a91b9694a9a6e67af8164bab8242b"
	I0920 18:22:30.077578   75264 logs.go:123] Gathering logs for storage-provisioner [c42201b6e3d55b6fd8beaf04d416658b228bf627e519ddb022dad6d17219cc1c] ...
	I0920 18:22:30.077625   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c42201b6e3d55b6fd8beaf04d416658b228bf627e519ddb022dad6d17219cc1c"
	I0920 18:22:30.123874   75264 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:22:30.123910   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:22:30.553215   75264 logs.go:123] Gathering logs for kube-proxy [702f7f440eb6016c2c758a6be4e1072274ab6dcd169d7a081e5464869a7c299c] ...
	I0920 18:22:30.553261   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 702f7f440eb6016c2c758a6be4e1072274ab6dcd169d7a081e5464869a7c299c"
	I0920 18:22:30.597397   75264 logs.go:123] Gathering logs for storage-provisioner [001bdc98537f0917f021c14a5d0733bd31cf19be1b4862502458a7a90e4300da] ...
	I0920 18:22:30.597428   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 001bdc98537f0917f021c14a5d0733bd31cf19be1b4862502458a7a90e4300da"
	I0920 18:22:30.643777   75264 logs.go:123] Gathering logs for container status ...
	I0920 18:22:30.643807   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:22:30.688174   75264 logs.go:123] Gathering logs for kubelet ...
	I0920 18:22:30.688205   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:22:30.760547   75264 logs.go:123] Gathering logs for dmesg ...
	I0920 18:22:30.760591   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:22:30.776702   75264 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:22:30.776729   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 18:22:30.913200   75264 logs.go:123] Gathering logs for kube-apiserver [0ea0cfbd9902ae8e3797c77592bfe05a5f8162f75a391f5c2dc992eb5154c4ad] ...
	I0920 18:22:30.913240   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ea0cfbd9902ae8e3797c77592bfe05a5f8162f75a391f5c2dc992eb5154c4ad"
	I0920 18:22:30.957548   75264 logs.go:123] Gathering logs for etcd [65da0bae1c849a395cc241962bb6cbdf9f3edf11cdc95f4495ac2e427f9602d4] ...
	I0920 18:22:30.957589   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 65da0bae1c849a395cc241962bb6cbdf9f3edf11cdc95f4495ac2e427f9602d4"
	I0920 18:22:29.803036   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:32.301986   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:27.790386   75577 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 18:22:36.572379   75086 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0920 18:22:36.572452   75086 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 18:22:36.572558   75086 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 18:22:36.572702   75086 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 18:22:36.572826   75086 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0920 18:22:36.572926   75086 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 18:22:36.574903   75086 out.go:235]   - Generating certificates and keys ...
	I0920 18:22:36.575032   75086 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 18:22:36.575117   75086 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 18:22:36.575224   75086 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0920 18:22:36.575342   75086 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0920 18:22:36.575446   75086 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0920 18:22:36.575535   75086 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0920 18:22:36.575606   75086 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0920 18:22:36.575687   75086 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0920 18:22:36.575790   75086 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0920 18:22:36.575867   75086 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0920 18:22:36.575903   75086 kubeadm.go:310] [certs] Using the existing "sa" key
	I0920 18:22:36.575970   75086 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 18:22:36.576054   75086 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 18:22:36.576141   75086 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0920 18:22:36.576223   75086 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 18:22:36.576317   75086 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 18:22:36.576373   75086 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 18:22:36.576442   75086 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 18:22:36.576506   75086 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 18:22:36.577982   75086 out.go:235]   - Booting up control plane ...
	I0920 18:22:36.578071   75086 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 18:22:36.578147   75086 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 18:22:36.578204   75086 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 18:22:36.578312   75086 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 18:22:36.578400   75086 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 18:22:36.578438   75086 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 18:22:36.578568   75086 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0920 18:22:36.578660   75086 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0920 18:22:36.578719   75086 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.174126ms
	I0920 18:22:36.578788   75086 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0920 18:22:36.578839   75086 kubeadm.go:310] [api-check] The API server is healthy after 5.501599925s
	I0920 18:22:36.578933   75086 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0920 18:22:36.579037   75086 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0920 18:22:36.579090   75086 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0920 18:22:36.579268   75086 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-768431 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0920 18:22:36.579335   75086 kubeadm.go:310] [bootstrap-token] Using token: junoe1.zmowu95mn9yb1z1f
	I0920 18:22:36.580810   75086 out.go:235]   - Configuring RBAC rules ...
	I0920 18:22:36.580919   75086 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0920 18:22:36.580989   75086 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0920 18:22:36.581127   75086 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0920 18:22:36.581290   75086 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0920 18:22:36.581456   75086 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0920 18:22:36.581589   75086 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0920 18:22:36.581742   75086 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0920 18:22:36.581827   75086 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0920 18:22:36.581916   75086 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0920 18:22:36.581925   75086 kubeadm.go:310] 
	I0920 18:22:36.581980   75086 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0920 18:22:36.581986   75086 kubeadm.go:310] 
	I0920 18:22:36.582086   75086 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0920 18:22:36.582097   75086 kubeadm.go:310] 
	I0920 18:22:36.582137   75086 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0920 18:22:36.582204   75086 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0920 18:22:36.582287   75086 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0920 18:22:36.582297   75086 kubeadm.go:310] 
	I0920 18:22:36.582389   75086 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0920 18:22:36.582398   75086 kubeadm.go:310] 
	I0920 18:22:36.582479   75086 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0920 18:22:36.582495   75086 kubeadm.go:310] 
	I0920 18:22:36.582540   75086 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0920 18:22:36.582605   75086 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0920 18:22:36.582682   75086 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0920 18:22:36.582697   75086 kubeadm.go:310] 
	I0920 18:22:36.582796   75086 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0920 18:22:36.582892   75086 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0920 18:22:36.582901   75086 kubeadm.go:310] 
	I0920 18:22:36.583026   75086 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token junoe1.zmowu95mn9yb1z1f \
	I0920 18:22:36.583163   75086 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:736f3fb521855813decb4a7214f2b8beff5f81c3971b60decfa20a8807626a2d \
	I0920 18:22:36.583199   75086 kubeadm.go:310] 	--control-plane 
	I0920 18:22:36.583208   75086 kubeadm.go:310] 
	I0920 18:22:36.583316   75086 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0920 18:22:36.583335   75086 kubeadm.go:310] 
	I0920 18:22:36.583489   75086 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token junoe1.zmowu95mn9yb1z1f \
	I0920 18:22:36.583648   75086 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:736f3fb521855813decb4a7214f2b8beff5f81c3971b60decfa20a8807626a2d 
	I0920 18:22:36.583664   75086 cni.go:84] Creating CNI manager for ""
	I0920 18:22:36.583673   75086 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 18:22:36.585378   75086 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 18:22:33.523585   75264 api_server.go:253] Checking apiserver healthz at https://192.168.72.190:8444/healthz ...
	I0920 18:22:33.528658   75264 api_server.go:279] https://192.168.72.190:8444/healthz returned 200:
	ok
	I0920 18:22:33.529826   75264 api_server.go:141] control plane version: v1.31.1
	I0920 18:22:33.529861   75264 api_server.go:131] duration metric: took 3.95084435s to wait for apiserver health ...
	I0920 18:22:33.529870   75264 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 18:22:33.529894   75264 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:22:33.529938   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:22:33.571762   75264 cri.go:89] found id: "0ea0cfbd9902ae8e3797c77592bfe05a5f8162f75a391f5c2dc992eb5154c4ad"
	I0920 18:22:33.571783   75264 cri.go:89] found id: ""
	I0920 18:22:33.571789   75264 logs.go:276] 1 containers: [0ea0cfbd9902ae8e3797c77592bfe05a5f8162f75a391f5c2dc992eb5154c4ad]
	I0920 18:22:33.571840   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:33.576180   75264 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:22:33.576268   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:22:33.621887   75264 cri.go:89] found id: "65da0bae1c849a395cc241962bb6cbdf9f3edf11cdc95f4495ac2e427f9602d4"
	I0920 18:22:33.621915   75264 cri.go:89] found id: ""
	I0920 18:22:33.621926   75264 logs.go:276] 1 containers: [65da0bae1c849a395cc241962bb6cbdf9f3edf11cdc95f4495ac2e427f9602d4]
	I0920 18:22:33.622013   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:33.626552   75264 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:22:33.626609   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:22:33.664879   75264 cri.go:89] found id: "606f7c8a9a095f499ae422de76f76105607db72e57b9b0a6db3e3c5a81656c5c"
	I0920 18:22:33.664902   75264 cri.go:89] found id: ""
	I0920 18:22:33.664912   75264 logs.go:276] 1 containers: [606f7c8a9a095f499ae422de76f76105607db72e57b9b0a6db3e3c5a81656c5c]
	I0920 18:22:33.664980   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:33.669155   75264 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:22:33.669231   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:22:33.710930   75264 cri.go:89] found id: "6ba313deffc6185e59dac50051e933f6b2ea70d630cf36b2b8c2c07042f793ba"
	I0920 18:22:33.710957   75264 cri.go:89] found id: ""
	I0920 18:22:33.710968   75264 logs.go:276] 1 containers: [6ba313deffc6185e59dac50051e933f6b2ea70d630cf36b2b8c2c07042f793ba]
	I0920 18:22:33.711030   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:33.715618   75264 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:22:33.715689   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:22:33.751646   75264 cri.go:89] found id: "702f7f440eb6016c2c758a6be4e1072274ab6dcd169d7a081e5464869a7c299c"
	I0920 18:22:33.751672   75264 cri.go:89] found id: ""
	I0920 18:22:33.751690   75264 logs.go:276] 1 containers: [702f7f440eb6016c2c758a6be4e1072274ab6dcd169d7a081e5464869a7c299c]
	I0920 18:22:33.751758   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:33.756376   75264 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:22:33.756441   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:22:33.803246   75264 cri.go:89] found id: "4ca303b795795bd920f261c88630f30fd51a91b9694a9a6e67af8164bab8242b"
	I0920 18:22:33.803267   75264 cri.go:89] found id: ""
	I0920 18:22:33.803276   75264 logs.go:276] 1 containers: [4ca303b795795bd920f261c88630f30fd51a91b9694a9a6e67af8164bab8242b]
	I0920 18:22:33.803334   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:33.807825   75264 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:22:33.807892   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:22:33.849531   75264 cri.go:89] found id: ""
	I0920 18:22:33.849559   75264 logs.go:276] 0 containers: []
	W0920 18:22:33.849569   75264 logs.go:278] No container was found matching "kindnet"
	I0920 18:22:33.849583   75264 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0920 18:22:33.849645   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0920 18:22:33.891058   75264 cri.go:89] found id: "001bdc98537f0917f021c14a5d0733bd31cf19be1b4862502458a7a90e4300da"
	I0920 18:22:33.891084   75264 cri.go:89] found id: "c42201b6e3d55b6fd8beaf04d416658b228bf627e519ddb022dad6d17219cc1c"
	I0920 18:22:33.891090   75264 cri.go:89] found id: ""
	I0920 18:22:33.891099   75264 logs.go:276] 2 containers: [001bdc98537f0917f021c14a5d0733bd31cf19be1b4862502458a7a90e4300da c42201b6e3d55b6fd8beaf04d416658b228bf627e519ddb022dad6d17219cc1c]
	I0920 18:22:33.891157   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:33.896028   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:33.900174   75264 logs.go:123] Gathering logs for container status ...
	I0920 18:22:33.900209   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:22:33.944777   75264 logs.go:123] Gathering logs for dmesg ...
	I0920 18:22:33.944803   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:22:33.960062   75264 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:22:33.960094   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 18:22:34.087507   75264 logs.go:123] Gathering logs for etcd [65da0bae1c849a395cc241962bb6cbdf9f3edf11cdc95f4495ac2e427f9602d4] ...
	I0920 18:22:34.087556   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 65da0bae1c849a395cc241962bb6cbdf9f3edf11cdc95f4495ac2e427f9602d4"
	I0920 18:22:34.140051   75264 logs.go:123] Gathering logs for kube-proxy [702f7f440eb6016c2c758a6be4e1072274ab6dcd169d7a081e5464869a7c299c] ...
	I0920 18:22:34.140088   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 702f7f440eb6016c2c758a6be4e1072274ab6dcd169d7a081e5464869a7c299c"
	I0920 18:22:34.183540   75264 logs.go:123] Gathering logs for kube-controller-manager [4ca303b795795bd920f261c88630f30fd51a91b9694a9a6e67af8164bab8242b] ...
	I0920 18:22:34.183582   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ca303b795795bd920f261c88630f30fd51a91b9694a9a6e67af8164bab8242b"
	I0920 18:22:34.251978   75264 logs.go:123] Gathering logs for storage-provisioner [001bdc98537f0917f021c14a5d0733bd31cf19be1b4862502458a7a90e4300da] ...
	I0920 18:22:34.252025   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 001bdc98537f0917f021c14a5d0733bd31cf19be1b4862502458a7a90e4300da"
	I0920 18:22:34.296522   75264 logs.go:123] Gathering logs for storage-provisioner [c42201b6e3d55b6fd8beaf04d416658b228bf627e519ddb022dad6d17219cc1c] ...
	I0920 18:22:34.296562   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c42201b6e3d55b6fd8beaf04d416658b228bf627e519ddb022dad6d17219cc1c"
	I0920 18:22:34.338227   75264 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:22:34.338254   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:22:34.703986   75264 logs.go:123] Gathering logs for kubelet ...
	I0920 18:22:34.704037   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:22:34.788091   75264 logs.go:123] Gathering logs for kube-apiserver [0ea0cfbd9902ae8e3797c77592bfe05a5f8162f75a391f5c2dc992eb5154c4ad] ...
	I0920 18:22:34.788139   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ea0cfbd9902ae8e3797c77592bfe05a5f8162f75a391f5c2dc992eb5154c4ad"
	I0920 18:22:34.861403   75264 logs.go:123] Gathering logs for coredns [606f7c8a9a095f499ae422de76f76105607db72e57b9b0a6db3e3c5a81656c5c] ...
	I0920 18:22:34.861435   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 606f7c8a9a095f499ae422de76f76105607db72e57b9b0a6db3e3c5a81656c5c"
	I0920 18:22:34.906740   75264 logs.go:123] Gathering logs for kube-scheduler [6ba313deffc6185e59dac50051e933f6b2ea70d630cf36b2b8c2c07042f793ba] ...
	I0920 18:22:34.906773   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6ba313deffc6185e59dac50051e933f6b2ea70d630cf36b2b8c2c07042f793ba"
	I0920 18:22:37.454205   75264 system_pods.go:59] 8 kube-system pods found
	I0920 18:22:37.454243   75264 system_pods.go:61] "coredns-7c65d6cfc9-dmdfb" [8e4ffdb3-37e0-4498-bdf5-d6e7dbcad020] Running
	I0920 18:22:37.454252   75264 system_pods.go:61] "etcd-default-k8s-diff-port-553719" [e8de0e96-5f3a-4f3f-ae69-c5da7d3c2eb7] Running
	I0920 18:22:37.454257   75264 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-553719" [5164e26c-7fcd-41dc-865f-429046a9ad61] Running
	I0920 18:22:37.454263   75264 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-553719" [b2c00a76-9645-4c63-9812-43e6ed9f1d4f] Running
	I0920 18:22:37.454268   75264 system_pods.go:61] "kube-proxy-p9crq" [83e0f53d-6960-42c4-904d-ea85ba9160f4] Running
	I0920 18:22:37.454273   75264 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-553719" [42155f7b-95bb-467b-acf5-89eb4f80cf76] Running
	I0920 18:22:37.454289   75264 system_pods.go:61] "metrics-server-6867b74b74-vtl79" [29e0b6eb-22a9-4e37-97f9-83b48cc38193] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 18:22:37.454296   75264 system_pods.go:61] "storage-provisioner" [6fad2d07-f99e-45ac-9657-bce6d73d7fce] Running
	I0920 18:22:37.454310   75264 system_pods.go:74] duration metric: took 3.924431145s to wait for pod list to return data ...
	I0920 18:22:37.454324   75264 default_sa.go:34] waiting for default service account to be created ...
	I0920 18:22:37.457599   75264 default_sa.go:45] found service account: "default"
	I0920 18:22:37.457619   75264 default_sa.go:55] duration metric: took 3.289936ms for default service account to be created ...
	I0920 18:22:37.457627   75264 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 18:22:37.461977   75264 system_pods.go:86] 8 kube-system pods found
	I0920 18:22:37.462001   75264 system_pods.go:89] "coredns-7c65d6cfc9-dmdfb" [8e4ffdb3-37e0-4498-bdf5-d6e7dbcad020] Running
	I0920 18:22:37.462006   75264 system_pods.go:89] "etcd-default-k8s-diff-port-553719" [e8de0e96-5f3a-4f3f-ae69-c5da7d3c2eb7] Running
	I0920 18:22:37.462011   75264 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-553719" [5164e26c-7fcd-41dc-865f-429046a9ad61] Running
	I0920 18:22:37.462015   75264 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-553719" [b2c00a76-9645-4c63-9812-43e6ed9f1d4f] Running
	I0920 18:22:37.462019   75264 system_pods.go:89] "kube-proxy-p9crq" [83e0f53d-6960-42c4-904d-ea85ba9160f4] Running
	I0920 18:22:37.462022   75264 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-553719" [42155f7b-95bb-467b-acf5-89eb4f80cf76] Running
	I0920 18:22:37.462028   75264 system_pods.go:89] "metrics-server-6867b74b74-vtl79" [29e0b6eb-22a9-4e37-97f9-83b48cc38193] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 18:22:37.462032   75264 system_pods.go:89] "storage-provisioner" [6fad2d07-f99e-45ac-9657-bce6d73d7fce] Running
	I0920 18:22:37.462040   75264 system_pods.go:126] duration metric: took 4.409042ms to wait for k8s-apps to be running ...
	I0920 18:22:37.462047   75264 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 18:22:37.462087   75264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:22:37.479350   75264 system_svc.go:56] duration metric: took 17.294926ms WaitForService to wait for kubelet
	I0920 18:22:37.479380   75264 kubeadm.go:582] duration metric: took 4m28.223143222s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 18:22:37.479402   75264 node_conditions.go:102] verifying NodePressure condition ...
	I0920 18:22:37.482497   75264 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 18:22:37.482517   75264 node_conditions.go:123] node cpu capacity is 2
	I0920 18:22:37.482532   75264 node_conditions.go:105] duration metric: took 3.125658ms to run NodePressure ...
	I0920 18:22:37.482543   75264 start.go:241] waiting for startup goroutines ...
	I0920 18:22:37.482549   75264 start.go:246] waiting for cluster config update ...
	I0920 18:22:37.482561   75264 start.go:255] writing updated cluster config ...
	I0920 18:22:37.482817   75264 ssh_runner.go:195] Run: rm -f paused
	I0920 18:22:37.532982   75264 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 18:22:37.534936   75264 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-553719" cluster and "default" namespace by default
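The tail end of this run (the api_server.go lines above) is a plain HTTPS health probe: https://192.168.72.190:8444/healthz is polled until it returns 200 "ok", after which the kube-system pods, the default service account, and the kubelet service are checked in turn. A rough Go sketch of such a healthz poll follows, assuming a self-signed apiserver certificate; waitForHealthz is a hypothetical helper, and the URL and timeout are taken from the log only for illustration.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200 or the
// timeout expires, roughly what the "Checking apiserver healthz" lines above do.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver serves a self-signed certificate, so verification is skipped here.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("%s returned 200: %s\n", url, body)
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.190:8444/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```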
	I0920 18:22:34.302204   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:36.802991   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
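The repeated pod_ready.go lines for metrics-server-6867b74b74-tfsff are a readiness poll: the pod is fetched until its PodReady condition reports True. Below is a hedged client-go sketch of that check, assuming a reachable kubeconfig at the default location; isPodReady is a hypothetical helper rather than minikube's own code.

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Illustrative kubeconfig location; the test uses the profile's own kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(10 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(),
			"metrics-server-6867b74b74-tfsff", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		fmt.Println(`pod "metrics-server-6867b74b74-tfsff" has status "Ready":"False"`)
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}
```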
	I0920 18:22:36.586809   75086 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 18:22:36.599904   75086 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0920 18:22:36.620050   75086 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 18:22:36.620133   75086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:22:36.620149   75086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-768431 minikube.k8s.io/updated_at=2024_09_20T18_22_36_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=0626f22cf0d915d75e291a5bce701f94395056e1 minikube.k8s.io/name=embed-certs-768431 minikube.k8s.io/primary=true
	I0920 18:22:36.636625   75086 ops.go:34] apiserver oom_adj: -16
	I0920 18:22:36.817434   75086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:22:37.318133   75086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:22:37.818515   75086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:22:38.318005   75086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:22:38.817759   75086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:22:39.318481   75086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:22:39.817943   75086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:22:40.317957   75086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:22:40.423342   75086 kubeadm.go:1113] duration metric: took 3.803277177s to wait for elevateKubeSystemPrivileges
	I0920 18:22:40.423389   75086 kubeadm.go:394] duration metric: took 4m59.290098215s to StartCluster
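The run of `kubectl get sa default` commands above is the elevateKubeSystemPrivileges wait: setup proceeds once the "default" service account exists in the default namespace. A hedged client-go sketch of the same wait, assuming the /var/lib/minikube/kubeconfig path shown in the log; waitForDefaultSA is a hypothetical helper, and minikube itself shells kubectl out over SSH instead of calling the API directly.

```go
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForDefaultSA polls until the "default" service account exists in the default
// namespace, the condition the repeated "kubectl get sa default" runs above check for.
func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return err
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := client.CoreV1().ServiceAccounts("default").Get(
			context.TODO(), "default", metav1.GetOptions{}); err == nil {
			return nil // service account exists, setup can continue
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence visible in the log timestamps
	}
	return fmt.Errorf("default service account did not appear within %s", timeout)
}

func main() {
	if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```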
	I0920 18:22:40.423413   75086 settings.go:142] acquiring lock: {Name:mk2a13c58cdc0faf8cddca5d6716175d45db9bfd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:22:40.423516   75086 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19672-8777/kubeconfig
	I0920 18:22:40.426054   75086 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/kubeconfig: {Name:mkf32a4c736808e023459b2f0e40188618a38db1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:22:40.426360   75086 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.202 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 18:22:40.426456   75086 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0920 18:22:40.426546   75086 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-768431"
	I0920 18:22:40.426566   75086 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-768431"
	W0920 18:22:40.426578   75086 addons.go:243] addon storage-provisioner should already be in state true
	I0920 18:22:40.426576   75086 addons.go:69] Setting default-storageclass=true in profile "embed-certs-768431"
	I0920 18:22:40.426608   75086 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-768431"
	I0920 18:22:40.426622   75086 host.go:66] Checking if "embed-certs-768431" exists ...
	I0920 18:22:40.426621   75086 addons.go:69] Setting metrics-server=true in profile "embed-certs-768431"
	I0920 18:22:40.426662   75086 config.go:182] Loaded profile config "embed-certs-768431": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:22:40.426669   75086 addons.go:234] Setting addon metrics-server=true in "embed-certs-768431"
	W0920 18:22:40.426699   75086 addons.go:243] addon metrics-server should already be in state true
	I0920 18:22:40.426739   75086 host.go:66] Checking if "embed-certs-768431" exists ...
	I0920 18:22:40.427075   75086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:22:40.427116   75086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:22:40.427120   75086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:22:40.427155   75086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:22:40.427161   75086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:22:40.427251   75086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:22:40.427977   75086 out.go:177] * Verifying Kubernetes components...
	I0920 18:22:40.429647   75086 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:22:40.443468   75086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40103
	I0920 18:22:40.443663   75086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37871
	I0920 18:22:40.444055   75086 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:22:40.444062   75086 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:22:40.444562   75086 main.go:141] libmachine: Using API Version  1
	I0920 18:22:40.444586   75086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:22:40.444732   75086 main.go:141] libmachine: Using API Version  1
	I0920 18:22:40.444750   75086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:22:40.445016   75086 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:22:40.445110   75086 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:22:40.445584   75086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:22:40.445634   75086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:22:40.445652   75086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:22:40.445654   75086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38771
	I0920 18:22:40.445684   75086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:22:40.446227   75086 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:22:40.446872   75086 main.go:141] libmachine: Using API Version  1
	I0920 18:22:40.446892   75086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:22:40.447280   75086 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:22:40.447692   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetState
	I0920 18:22:40.451332   75086 addons.go:234] Setting addon default-storageclass=true in "embed-certs-768431"
	W0920 18:22:40.451355   75086 addons.go:243] addon default-storageclass should already be in state true
	I0920 18:22:40.451382   75086 host.go:66] Checking if "embed-certs-768431" exists ...
	I0920 18:22:40.451753   75086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:22:40.451803   75086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:22:40.463317   75086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39353
	I0920 18:22:40.463816   75086 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:22:40.464544   75086 main.go:141] libmachine: Using API Version  1
	I0920 18:22:40.464577   75086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:22:40.464961   75086 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:22:40.467445   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetState
	I0920 18:22:40.469290   75086 main.go:141] libmachine: (embed-certs-768431) Calling .DriverName
	I0920 18:22:40.471212   75086 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0920 18:22:40.471862   75086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40689
	I0920 18:22:40.472254   75086 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:22:40.472515   75086 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0920 18:22:40.472535   75086 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0920 18:22:40.472557   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHHostname
	I0920 18:22:40.472718   75086 main.go:141] libmachine: Using API Version  1
	I0920 18:22:40.472741   75086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:22:40.473259   75086 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:22:40.473477   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetState
	I0920 18:22:40.473753   75086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36495
	I0920 18:22:40.474173   75086 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:22:40.474778   75086 main.go:141] libmachine: Using API Version  1
	I0920 18:22:40.474796   75086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:22:40.475166   75086 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:22:40.475726   75086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:22:40.475766   75086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:22:40.475984   75086 main.go:141] libmachine: (embed-certs-768431) Calling .DriverName
	I0920 18:22:40.476244   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:22:40.476583   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:22:40.476605   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:22:40.476773   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHPort
	I0920 18:22:40.476953   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHKeyPath
	I0920 18:22:40.477053   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHUsername
	I0920 18:22:40.477196   75086 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/embed-certs-768431/id_rsa Username:docker}
	I0920 18:22:40.478244   75086 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:22:40.479443   75086 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 18:22:40.479459   75086 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 18:22:40.479476   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHHostname
	I0920 18:22:40.482119   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:22:40.482400   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:22:40.482423   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:22:40.482639   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHPort
	I0920 18:22:40.482840   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHKeyPath
	I0920 18:22:40.482968   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHUsername
	I0920 18:22:40.483145   75086 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/embed-certs-768431/id_rsa Username:docker}
	I0920 18:22:40.494454   75086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35971
	I0920 18:22:40.494948   75086 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:22:40.495425   75086 main.go:141] libmachine: Using API Version  1
	I0920 18:22:40.495444   75086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:22:40.495798   75086 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:22:40.495990   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetState
	I0920 18:22:40.497591   75086 main.go:141] libmachine: (embed-certs-768431) Calling .DriverName
	I0920 18:22:40.497878   75086 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 18:22:40.497894   75086 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 18:22:40.497913   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHHostname
	I0920 18:22:40.500755   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:22:40.501130   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:22:40.501153   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:22:40.501355   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHPort
	I0920 18:22:40.501548   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHKeyPath
	I0920 18:22:40.501725   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHUsername
	I0920 18:22:40.502091   75086 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/embed-certs-768431/id_rsa Username:docker}
	I0920 18:22:40.646738   75086 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:22:40.673285   75086 node_ready.go:35] waiting up to 6m0s for node "embed-certs-768431" to be "Ready" ...
	I0920 18:22:40.684618   75086 node_ready.go:49] node "embed-certs-768431" has status "Ready":"True"
	I0920 18:22:40.684643   75086 node_ready.go:38] duration metric: took 11.298825ms for node "embed-certs-768431" to be "Ready" ...
	I0920 18:22:40.684653   75086 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:22:40.689151   75086 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-768431" in "kube-system" namespace to be "Ready" ...
	I0920 18:22:40.734882   75086 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 18:22:40.744479   75086 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 18:22:40.884534   75086 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0920 18:22:40.884556   75086 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0920 18:22:41.018287   75086 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0920 18:22:41.018313   75086 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0920 18:22:41.091216   75086 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 18:22:41.091247   75086 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0920 18:22:41.132887   75086 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 18:22:41.538445   75086 main.go:141] libmachine: Making call to close driver server
	I0920 18:22:41.538472   75086 main.go:141] libmachine: (embed-certs-768431) Calling .Close
	I0920 18:22:41.538774   75086 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:22:41.538795   75086 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:22:41.538806   75086 main.go:141] libmachine: Making call to close driver server
	I0920 18:22:41.538815   75086 main.go:141] libmachine: (embed-certs-768431) Calling .Close
	I0920 18:22:41.539010   75086 main.go:141] libmachine: Making call to close driver server
	I0920 18:22:41.539027   75086 main.go:141] libmachine: (embed-certs-768431) Calling .Close
	I0920 18:22:41.539118   75086 main.go:141] libmachine: (embed-certs-768431) DBG | Closing plugin on server side
	I0920 18:22:41.539137   75086 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:22:41.539205   75086 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:22:41.539273   75086 main.go:141] libmachine: (embed-certs-768431) DBG | Closing plugin on server side
	I0920 18:22:41.539286   75086 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:22:41.539300   75086 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:22:41.539309   75086 main.go:141] libmachine: Making call to close driver server
	I0920 18:22:41.539348   75086 main.go:141] libmachine: (embed-certs-768431) Calling .Close
	I0920 18:22:41.539629   75086 main.go:141] libmachine: (embed-certs-768431) DBG | Closing plugin on server side
	I0920 18:22:41.539693   75086 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:22:41.539712   75086 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:22:41.559164   75086 main.go:141] libmachine: Making call to close driver server
	I0920 18:22:41.559191   75086 main.go:141] libmachine: (embed-certs-768431) Calling .Close
	I0920 18:22:41.559517   75086 main.go:141] libmachine: (embed-certs-768431) DBG | Closing plugin on server side
	I0920 18:22:41.559580   75086 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:22:41.559596   75086 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:22:41.880601   75086 main.go:141] libmachine: Making call to close driver server
	I0920 18:22:41.880640   75086 main.go:141] libmachine: (embed-certs-768431) Calling .Close
	I0920 18:22:41.880998   75086 main.go:141] libmachine: (embed-certs-768431) DBG | Closing plugin on server side
	I0920 18:22:41.881026   75086 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:22:41.881039   75086 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:22:41.881047   75086 main.go:141] libmachine: Making call to close driver server
	I0920 18:22:41.881052   75086 main.go:141] libmachine: (embed-certs-768431) Calling .Close
	I0920 18:22:41.881304   75086 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:22:41.881322   75086 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:22:41.881336   75086 addons.go:475] Verifying addon metrics-server=true in "embed-certs-768431"
	I0920 18:22:41.883315   75086 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0920 18:22:39.302453   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:41.802428   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:41.885367   75086 addons.go:510] duration metric: took 1.45891561s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0920 18:22:42.696266   75086 pod_ready.go:103] pod "etcd-embed-certs-768431" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:44.300965   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:46.301969   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:45.195160   75086 pod_ready.go:103] pod "etcd-embed-certs-768431" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:46.200142   75086 pod_ready.go:93] pod "etcd-embed-certs-768431" in "kube-system" namespace has status "Ready":"True"
	I0920 18:22:46.200166   75086 pod_ready.go:82] duration metric: took 5.510989222s for pod "etcd-embed-certs-768431" in "kube-system" namespace to be "Ready" ...
	I0920 18:22:46.200175   75086 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-768431" in "kube-system" namespace to be "Ready" ...
	I0920 18:22:46.205619   75086 pod_ready.go:93] pod "kube-apiserver-embed-certs-768431" in "kube-system" namespace has status "Ready":"True"
	I0920 18:22:46.205647   75086 pod_ready.go:82] duration metric: took 5.464252ms for pod "kube-apiserver-embed-certs-768431" in "kube-system" namespace to be "Ready" ...
	I0920 18:22:46.205659   75086 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-768431" in "kube-system" namespace to be "Ready" ...
	I0920 18:22:46.210040   75086 pod_ready.go:93] pod "kube-controller-manager-embed-certs-768431" in "kube-system" namespace has status "Ready":"True"
	I0920 18:22:46.210066   75086 pod_ready.go:82] duration metric: took 4.397776ms for pod "kube-controller-manager-embed-certs-768431" in "kube-system" namespace to be "Ready" ...
	I0920 18:22:46.210079   75086 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-768431" in "kube-system" namespace to be "Ready" ...
	I0920 18:22:48.216587   75086 pod_ready.go:103] pod "kube-scheduler-embed-certs-768431" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:49.217996   75086 pod_ready.go:93] pod "kube-scheduler-embed-certs-768431" in "kube-system" namespace has status "Ready":"True"
	I0920 18:22:49.218019   75086 pod_ready.go:82] duration metric: took 3.007931797s for pod "kube-scheduler-embed-certs-768431" in "kube-system" namespace to be "Ready" ...
	I0920 18:22:49.218025   75086 pod_ready.go:39] duration metric: took 8.533358652s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:22:49.218039   75086 api_server.go:52] waiting for apiserver process to appear ...
	I0920 18:22:49.218100   75086 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:22:49.235456   75086 api_server.go:72] duration metric: took 8.809056301s to wait for apiserver process to appear ...
	I0920 18:22:49.235480   75086 api_server.go:88] waiting for apiserver healthz status ...
	I0920 18:22:49.235499   75086 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0920 18:22:49.239562   75086 api_server.go:279] https://192.168.61.202:8443/healthz returned 200:
	ok
	I0920 18:22:49.240999   75086 api_server.go:141] control plane version: v1.31.1
	I0920 18:22:49.241024   75086 api_server.go:131] duration metric: took 5.537177ms to wait for apiserver health ...
	I0920 18:22:49.241033   75086 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 18:22:49.246297   75086 system_pods.go:59] 9 kube-system pods found
	I0920 18:22:49.246335   75086 system_pods.go:61] "coredns-7c65d6cfc9-g5tkc" [8877c0e8-6c8f-4a62-94bd-508982faee3c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0920 18:22:49.246342   75086 system_pods.go:61] "coredns-7c65d6cfc9-jkkdn" [f64288e4-009c-4ba8-93e7-30ca5296af46] Running
	I0920 18:22:49.246348   75086 system_pods.go:61] "etcd-embed-certs-768431" [f0ba7c9a-cc8b-43d4-aa48-a8f923d3c515] Running
	I0920 18:22:49.246352   75086 system_pods.go:61] "kube-apiserver-embed-certs-768431" [dba3b6cc-95db-47a0-ba41-3aa3ed3e8adb] Running
	I0920 18:22:49.246356   75086 system_pods.go:61] "kube-controller-manager-embed-certs-768431" [ff933e33-4c5c-4a8c-8ee4-6f0cb3ea4c3a] Running
	I0920 18:22:49.246359   75086 system_pods.go:61] "kube-proxy-c4527" [2e2d5102-0c42-4b87-8a27-dd53b8eb41f9] Running
	I0920 18:22:49.246362   75086 system_pods.go:61] "kube-scheduler-embed-certs-768431" [8cca3406-a395-4dcd-97ac-d8ce3e8deac6] Running
	I0920 18:22:49.246367   75086 system_pods.go:61] "metrics-server-6867b74b74-9snmf" [5fb654f5-5e73-436e-bc9d-04ef5077deb4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 18:22:49.246373   75086 system_pods.go:61] "storage-provisioner" [09227b45-ea2a-4fcc-b082-9978e5f00a4b] Running
	I0920 18:22:49.246379   75086 system_pods.go:74] duration metric: took 5.340615ms to wait for pod list to return data ...
	I0920 18:22:49.246388   75086 default_sa.go:34] waiting for default service account to be created ...
	I0920 18:22:49.249593   75086 default_sa.go:45] found service account: "default"
	I0920 18:22:49.249617   75086 default_sa.go:55] duration metric: took 3.222486ms for default service account to be created ...
	I0920 18:22:49.249626   75086 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 18:22:49.255747   75086 system_pods.go:86] 9 kube-system pods found
	I0920 18:22:49.255780   75086 system_pods.go:89] "coredns-7c65d6cfc9-g5tkc" [8877c0e8-6c8f-4a62-94bd-508982faee3c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0920 18:22:49.255790   75086 system_pods.go:89] "coredns-7c65d6cfc9-jkkdn" [f64288e4-009c-4ba8-93e7-30ca5296af46] Running
	I0920 18:22:49.255799   75086 system_pods.go:89] "etcd-embed-certs-768431" [f0ba7c9a-cc8b-43d4-aa48-a8f923d3c515] Running
	I0920 18:22:49.255805   75086 system_pods.go:89] "kube-apiserver-embed-certs-768431" [dba3b6cc-95db-47a0-ba41-3aa3ed3e8adb] Running
	I0920 18:22:49.255811   75086 system_pods.go:89] "kube-controller-manager-embed-certs-768431" [ff933e33-4c5c-4a8c-8ee4-6f0cb3ea4c3a] Running
	I0920 18:22:49.255817   75086 system_pods.go:89] "kube-proxy-c4527" [2e2d5102-0c42-4b87-8a27-dd53b8eb41f9] Running
	I0920 18:22:49.255826   75086 system_pods.go:89] "kube-scheduler-embed-certs-768431" [8cca3406-a395-4dcd-97ac-d8ce3e8deac6] Running
	I0920 18:22:49.255834   75086 system_pods.go:89] "metrics-server-6867b74b74-9snmf" [5fb654f5-5e73-436e-bc9d-04ef5077deb4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 18:22:49.255845   75086 system_pods.go:89] "storage-provisioner" [09227b45-ea2a-4fcc-b082-9978e5f00a4b] Running
	I0920 18:22:49.255852   75086 system_pods.go:126] duration metric: took 6.220419ms to wait for k8s-apps to be running ...
	I0920 18:22:49.255863   75086 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 18:22:49.255918   75086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:22:49.271916   75086 system_svc.go:56] duration metric: took 16.046857ms WaitForService to wait for kubelet
	I0920 18:22:49.271946   75086 kubeadm.go:582] duration metric: took 8.845551531s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 18:22:49.271962   75086 node_conditions.go:102] verifying NodePressure condition ...
	I0920 18:22:49.277150   75086 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 18:22:49.277190   75086 node_conditions.go:123] node cpu capacity is 2
	I0920 18:22:49.277203   75086 node_conditions.go:105] duration metric: took 5.234938ms to run NodePressure ...
	I0920 18:22:49.277216   75086 start.go:241] waiting for startup goroutines ...
	I0920 18:22:49.277226   75086 start.go:246] waiting for cluster config update ...
	I0920 18:22:49.277240   75086 start.go:255] writing updated cluster config ...
	I0920 18:22:49.278062   75086 ssh_runner.go:195] Run: rm -f paused
	I0920 18:22:49.329596   75086 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 18:22:49.331593   75086 out.go:177] * Done! kubectl is now configured to use "embed-certs-768431" cluster and "default" namespace by default
	I0920 18:22:48.802304   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:51.301288   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:53.801957   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:56.301310   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:58.302165   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:23:00.801931   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:23:03.301289   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:23:05.801777   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:23:08.301539   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:23:08.801990   74753 pod_ready.go:82] duration metric: took 4m0.006972987s for pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace to be "Ready" ...
	E0920 18:23:08.802010   74753 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0920 18:23:08.802019   74753 pod_ready.go:39] duration metric: took 4m2.40678087s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:23:08.802035   74753 api_server.go:52] waiting for apiserver process to appear ...
	I0920 18:23:08.802062   74753 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:23:08.802110   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:23:08.848687   74753 cri.go:89] found id: "334e4df5baa4f8f7d58af2308fab7f78de22f7e07d6dfbcbeaa0df2a52f6b207"
	I0920 18:23:08.848709   74753 cri.go:89] found id: ""
	I0920 18:23:08.848716   74753 logs.go:276] 1 containers: [334e4df5baa4f8f7d58af2308fab7f78de22f7e07d6dfbcbeaa0df2a52f6b207]
	I0920 18:23:08.848764   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:08.853180   74753 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:23:08.853239   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:23:08.889768   74753 cri.go:89] found id: "98aa96314cf8f863121b9953ad6335628481f536fcab49b7c00505a17eb35ff2"
	I0920 18:23:08.889793   74753 cri.go:89] found id: ""
	I0920 18:23:08.889803   74753 logs.go:276] 1 containers: [98aa96314cf8f863121b9953ad6335628481f536fcab49b7c00505a17eb35ff2]
	I0920 18:23:08.889869   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:08.893865   74753 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:23:08.893937   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:23:08.936592   74753 cri.go:89] found id: "35f0d8dd053d4c376ccf21215492d8509ff701e7d09b345fdbe06d505ba2d6b4"
	I0920 18:23:08.936619   74753 cri.go:89] found id: ""
	I0920 18:23:08.936629   74753 logs.go:276] 1 containers: [35f0d8dd053d4c376ccf21215492d8509ff701e7d09b345fdbe06d505ba2d6b4]
	I0920 18:23:08.936768   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:08.941222   74753 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:23:08.941311   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:23:08.978894   74753 cri.go:89] found id: "8153479cebb05353bec31c41778746c485cb294bf41c3f98e30559ad7da64c64"
	I0920 18:23:08.978918   74753 cri.go:89] found id: ""
	I0920 18:23:08.978929   74753 logs.go:276] 1 containers: [8153479cebb05353bec31c41778746c485cb294bf41c3f98e30559ad7da64c64]
	I0920 18:23:08.978977   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:08.982783   74753 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:23:08.982845   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:23:09.018400   74753 cri.go:89] found id: "6df198ca54e8004a532de9b15cfe7a48a659ae06623c00a78cad528762944497"
	I0920 18:23:09.018422   74753 cri.go:89] found id: ""
	I0920 18:23:09.018430   74753 logs.go:276] 1 containers: [6df198ca54e8004a532de9b15cfe7a48a659ae06623c00a78cad528762944497]
	I0920 18:23:09.018485   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:09.022403   74753 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:23:09.022470   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:23:09.062315   74753 cri.go:89] found id: "3ebf4c520d684127cd0b6146d44e863090b9c2db3ac866e9ce3ef2d10d2d6da1"
	I0920 18:23:09.062385   74753 cri.go:89] found id: ""
	I0920 18:23:09.062397   74753 logs.go:276] 1 containers: [3ebf4c520d684127cd0b6146d44e863090b9c2db3ac866e9ce3ef2d10d2d6da1]
	I0920 18:23:09.062453   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:09.066976   74753 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:23:09.067049   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:23:09.107467   74753 cri.go:89] found id: ""
	I0920 18:23:09.107504   74753 logs.go:276] 0 containers: []
	W0920 18:23:09.107515   74753 logs.go:278] No container was found matching "kindnet"
	I0920 18:23:09.107523   74753 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0920 18:23:09.107583   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0920 18:23:09.150489   74753 cri.go:89] found id: "179e4a02f3459a613d3307806b477d54581555b18babca62d0c5e553b9562442"
	I0920 18:23:09.150519   74753 cri.go:89] found id: "3eb9abdf57de51979118cfa77b613c22e28122fcb1da9e72fbf6f3a946ed3531"
	I0920 18:23:09.150526   74753 cri.go:89] found id: ""
	I0920 18:23:09.150536   74753 logs.go:276] 2 containers: [179e4a02f3459a613d3307806b477d54581555b18babca62d0c5e553b9562442 3eb9abdf57de51979118cfa77b613c22e28122fcb1da9e72fbf6f3a946ed3531]
	I0920 18:23:09.150589   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:09.154618   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:09.158682   74753 logs.go:123] Gathering logs for container status ...
	I0920 18:23:09.158708   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:23:09.198393   74753 logs.go:123] Gathering logs for kubelet ...
	I0920 18:23:09.198420   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:23:09.266161   74753 logs.go:123] Gathering logs for kube-apiserver [334e4df5baa4f8f7d58af2308fab7f78de22f7e07d6dfbcbeaa0df2a52f6b207] ...
	I0920 18:23:09.266197   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 334e4df5baa4f8f7d58af2308fab7f78de22f7e07d6dfbcbeaa0df2a52f6b207"
	I0920 18:23:09.311406   74753 logs.go:123] Gathering logs for etcd [98aa96314cf8f863121b9953ad6335628481f536fcab49b7c00505a17eb35ff2] ...
	I0920 18:23:09.311438   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 98aa96314cf8f863121b9953ad6335628481f536fcab49b7c00505a17eb35ff2"
	I0920 18:23:09.353233   74753 logs.go:123] Gathering logs for kube-controller-manager [3ebf4c520d684127cd0b6146d44e863090b9c2db3ac866e9ce3ef2d10d2d6da1] ...
	I0920 18:23:09.353271   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ebf4c520d684127cd0b6146d44e863090b9c2db3ac866e9ce3ef2d10d2d6da1"
	I0920 18:23:09.415966   74753 logs.go:123] Gathering logs for storage-provisioner [3eb9abdf57de51979118cfa77b613c22e28122fcb1da9e72fbf6f3a946ed3531] ...
	I0920 18:23:09.416010   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3eb9abdf57de51979118cfa77b613c22e28122fcb1da9e72fbf6f3a946ed3531"
	I0920 18:23:09.462018   74753 logs.go:123] Gathering logs for storage-provisioner [179e4a02f3459a613d3307806b477d54581555b18babca62d0c5e553b9562442] ...
	I0920 18:23:09.462054   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 179e4a02f3459a613d3307806b477d54581555b18babca62d0c5e553b9562442"
	I0920 18:23:09.499851   74753 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:23:09.499883   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:23:10.018536   74753 logs.go:123] Gathering logs for dmesg ...
	I0920 18:23:10.018586   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:23:10.033607   74753 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:23:10.033639   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 18:23:10.167598   74753 logs.go:123] Gathering logs for coredns [35f0d8dd053d4c376ccf21215492d8509ff701e7d09b345fdbe06d505ba2d6b4] ...
	I0920 18:23:10.167645   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35f0d8dd053d4c376ccf21215492d8509ff701e7d09b345fdbe06d505ba2d6b4"
	I0920 18:23:10.201819   74753 logs.go:123] Gathering logs for kube-scheduler [8153479cebb05353bec31c41778746c485cb294bf41c3f98e30559ad7da64c64] ...
	I0920 18:23:10.201856   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8153479cebb05353bec31c41778746c485cb294bf41c3f98e30559ad7da64c64"
	I0920 18:23:10.237079   74753 logs.go:123] Gathering logs for kube-proxy [6df198ca54e8004a532de9b15cfe7a48a659ae06623c00a78cad528762944497] ...
	I0920 18:23:10.237108   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6df198ca54e8004a532de9b15cfe7a48a659ae06623c00a78cad528762944497"
	I0920 18:23:12.778360   74753 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:23:12.796411   74753 api_server.go:72] duration metric: took 4m13.647076581s to wait for apiserver process to appear ...
	I0920 18:23:12.796438   74753 api_server.go:88] waiting for apiserver healthz status ...
	I0920 18:23:12.796475   74753 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:23:12.796525   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:23:12.833259   74753 cri.go:89] found id: "334e4df5baa4f8f7d58af2308fab7f78de22f7e07d6dfbcbeaa0df2a52f6b207"
	I0920 18:23:12.833279   74753 cri.go:89] found id: ""
	I0920 18:23:12.833287   74753 logs.go:276] 1 containers: [334e4df5baa4f8f7d58af2308fab7f78de22f7e07d6dfbcbeaa0df2a52f6b207]
	I0920 18:23:12.833334   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:12.837497   74753 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:23:12.837568   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:23:12.877602   74753 cri.go:89] found id: "98aa96314cf8f863121b9953ad6335628481f536fcab49b7c00505a17eb35ff2"
	I0920 18:23:12.877628   74753 cri.go:89] found id: ""
	I0920 18:23:12.877639   74753 logs.go:276] 1 containers: [98aa96314cf8f863121b9953ad6335628481f536fcab49b7c00505a17eb35ff2]
	I0920 18:23:12.877688   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:12.881869   74753 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:23:12.881951   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:23:12.924617   74753 cri.go:89] found id: "35f0d8dd053d4c376ccf21215492d8509ff701e7d09b345fdbe06d505ba2d6b4"
	I0920 18:23:12.924641   74753 cri.go:89] found id: ""
	I0920 18:23:12.924650   74753 logs.go:276] 1 containers: [35f0d8dd053d4c376ccf21215492d8509ff701e7d09b345fdbe06d505ba2d6b4]
	I0920 18:23:12.924710   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:12.929317   74753 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:23:12.929381   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:23:12.968642   74753 cri.go:89] found id: "8153479cebb05353bec31c41778746c485cb294bf41c3f98e30559ad7da64c64"
	I0920 18:23:12.968672   74753 cri.go:89] found id: ""
	I0920 18:23:12.968682   74753 logs.go:276] 1 containers: [8153479cebb05353bec31c41778746c485cb294bf41c3f98e30559ad7da64c64]
	I0920 18:23:12.968745   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:12.974588   74753 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:23:12.974665   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:23:13.010036   74753 cri.go:89] found id: "6df198ca54e8004a532de9b15cfe7a48a659ae06623c00a78cad528762944497"
	I0920 18:23:13.010067   74753 cri.go:89] found id: ""
	I0920 18:23:13.010078   74753 logs.go:276] 1 containers: [6df198ca54e8004a532de9b15cfe7a48a659ae06623c00a78cad528762944497]
	I0920 18:23:13.010136   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:13.014300   74753 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:23:13.014358   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:23:13.055560   74753 cri.go:89] found id: "3ebf4c520d684127cd0b6146d44e863090b9c2db3ac866e9ce3ef2d10d2d6da1"
	I0920 18:23:13.055591   74753 cri.go:89] found id: ""
	I0920 18:23:13.055601   74753 logs.go:276] 1 containers: [3ebf4c520d684127cd0b6146d44e863090b9c2db3ac866e9ce3ef2d10d2d6da1]
	I0920 18:23:13.055663   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:13.059828   74753 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:23:13.059887   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:23:13.100161   74753 cri.go:89] found id: ""
	I0920 18:23:13.100189   74753 logs.go:276] 0 containers: []
	W0920 18:23:13.100199   74753 logs.go:278] No container was found matching "kindnet"
	I0920 18:23:13.100207   74753 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0920 18:23:13.100269   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0920 18:23:13.136788   74753 cri.go:89] found id: "179e4a02f3459a613d3307806b477d54581555b18babca62d0c5e553b9562442"
	I0920 18:23:13.136818   74753 cri.go:89] found id: "3eb9abdf57de51979118cfa77b613c22e28122fcb1da9e72fbf6f3a946ed3531"
	I0920 18:23:13.136824   74753 cri.go:89] found id: ""
	I0920 18:23:13.136831   74753 logs.go:276] 2 containers: [179e4a02f3459a613d3307806b477d54581555b18babca62d0c5e553b9562442 3eb9abdf57de51979118cfa77b613c22e28122fcb1da9e72fbf6f3a946ed3531]
	I0920 18:23:13.136894   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:13.140956   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:13.145991   74753 logs.go:123] Gathering logs for container status ...
	I0920 18:23:13.146023   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:23:13.187274   74753 logs.go:123] Gathering logs for kubelet ...
	I0920 18:23:13.187303   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:23:13.257111   74753 logs.go:123] Gathering logs for dmesg ...
	I0920 18:23:13.257147   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:23:13.271678   74753 logs.go:123] Gathering logs for kube-controller-manager [3ebf4c520d684127cd0b6146d44e863090b9c2db3ac866e9ce3ef2d10d2d6da1] ...
	I0920 18:23:13.271704   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ebf4c520d684127cd0b6146d44e863090b9c2db3ac866e9ce3ef2d10d2d6da1"
	I0920 18:23:13.327163   74753 logs.go:123] Gathering logs for storage-provisioner [179e4a02f3459a613d3307806b477d54581555b18babca62d0c5e553b9562442] ...
	I0920 18:23:13.327191   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 179e4a02f3459a613d3307806b477d54581555b18babca62d0c5e553b9562442"
	I0920 18:23:13.362002   74753 logs.go:123] Gathering logs for storage-provisioner [3eb9abdf57de51979118cfa77b613c22e28122fcb1da9e72fbf6f3a946ed3531] ...
	I0920 18:23:13.362039   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3eb9abdf57de51979118cfa77b613c22e28122fcb1da9e72fbf6f3a946ed3531"
	I0920 18:23:13.409907   74753 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:23:13.409938   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:23:13.840233   74753 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:23:13.840275   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 18:23:13.956466   74753 logs.go:123] Gathering logs for kube-apiserver [334e4df5baa4f8f7d58af2308fab7f78de22f7e07d6dfbcbeaa0df2a52f6b207] ...
	I0920 18:23:13.956499   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 334e4df5baa4f8f7d58af2308fab7f78de22f7e07d6dfbcbeaa0df2a52f6b207"
	I0920 18:23:14.006028   74753 logs.go:123] Gathering logs for etcd [98aa96314cf8f863121b9953ad6335628481f536fcab49b7c00505a17eb35ff2] ...
	I0920 18:23:14.006063   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 98aa96314cf8f863121b9953ad6335628481f536fcab49b7c00505a17eb35ff2"
	I0920 18:23:14.051354   74753 logs.go:123] Gathering logs for coredns [35f0d8dd053d4c376ccf21215492d8509ff701e7d09b345fdbe06d505ba2d6b4] ...
	I0920 18:23:14.051382   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35f0d8dd053d4c376ccf21215492d8509ff701e7d09b345fdbe06d505ba2d6b4"
	I0920 18:23:14.086486   74753 logs.go:123] Gathering logs for kube-scheduler [8153479cebb05353bec31c41778746c485cb294bf41c3f98e30559ad7da64c64] ...
	I0920 18:23:14.086518   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8153479cebb05353bec31c41778746c485cb294bf41c3f98e30559ad7da64c64"
	I0920 18:23:14.128119   74753 logs.go:123] Gathering logs for kube-proxy [6df198ca54e8004a532de9b15cfe7a48a659ae06623c00a78cad528762944497] ...
	I0920 18:23:14.128149   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6df198ca54e8004a532de9b15cfe7a48a659ae06623c00a78cad528762944497"
	I0920 18:23:16.668901   74753 api_server.go:253] Checking apiserver healthz at https://192.168.50.47:8443/healthz ...
	I0920 18:23:16.674235   74753 api_server.go:279] https://192.168.50.47:8443/healthz returned 200:
	ok
	I0920 18:23:16.675431   74753 api_server.go:141] control plane version: v1.31.1
	I0920 18:23:16.675449   74753 api_server.go:131] duration metric: took 3.879005012s to wait for apiserver health ...
	I0920 18:23:16.675456   74753 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 18:23:16.675481   74753 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:23:16.675527   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:23:16.711645   74753 cri.go:89] found id: "334e4df5baa4f8f7d58af2308fab7f78de22f7e07d6dfbcbeaa0df2a52f6b207"
	I0920 18:23:16.711673   74753 cri.go:89] found id: ""
	I0920 18:23:16.711683   74753 logs.go:276] 1 containers: [334e4df5baa4f8f7d58af2308fab7f78de22f7e07d6dfbcbeaa0df2a52f6b207]
	I0920 18:23:16.711743   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:16.716466   74753 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:23:16.716536   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:23:16.763061   74753 cri.go:89] found id: "98aa96314cf8f863121b9953ad6335628481f536fcab49b7c00505a17eb35ff2"
	I0920 18:23:16.763085   74753 cri.go:89] found id: ""
	I0920 18:23:16.763095   74753 logs.go:276] 1 containers: [98aa96314cf8f863121b9953ad6335628481f536fcab49b7c00505a17eb35ff2]
	I0920 18:23:16.763155   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:16.767018   74753 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:23:16.767075   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:23:16.801111   74753 cri.go:89] found id: "35f0d8dd053d4c376ccf21215492d8509ff701e7d09b345fdbe06d505ba2d6b4"
	I0920 18:23:16.801131   74753 cri.go:89] found id: ""
	I0920 18:23:16.801138   74753 logs.go:276] 1 containers: [35f0d8dd053d4c376ccf21215492d8509ff701e7d09b345fdbe06d505ba2d6b4]
	I0920 18:23:16.801192   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:16.805372   74753 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:23:16.805436   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:23:16.843368   74753 cri.go:89] found id: "8153479cebb05353bec31c41778746c485cb294bf41c3f98e30559ad7da64c64"
	I0920 18:23:16.843399   74753 cri.go:89] found id: ""
	I0920 18:23:16.843410   74753 logs.go:276] 1 containers: [8153479cebb05353bec31c41778746c485cb294bf41c3f98e30559ad7da64c64]
	I0920 18:23:16.843461   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:16.847284   74753 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:23:16.847341   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:23:16.882749   74753 cri.go:89] found id: "6df198ca54e8004a532de9b15cfe7a48a659ae06623c00a78cad528762944497"
	I0920 18:23:16.882770   74753 cri.go:89] found id: ""
	I0920 18:23:16.882777   74753 logs.go:276] 1 containers: [6df198ca54e8004a532de9b15cfe7a48a659ae06623c00a78cad528762944497]
	I0920 18:23:16.882838   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:16.887003   74753 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:23:16.887083   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:23:16.923261   74753 cri.go:89] found id: "3ebf4c520d684127cd0b6146d44e863090b9c2db3ac866e9ce3ef2d10d2d6da1"
	I0920 18:23:16.923289   74753 cri.go:89] found id: ""
	I0920 18:23:16.923303   74753 logs.go:276] 1 containers: [3ebf4c520d684127cd0b6146d44e863090b9c2db3ac866e9ce3ef2d10d2d6da1]
	I0920 18:23:16.923357   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:16.927722   74753 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:23:16.927782   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:23:16.963753   74753 cri.go:89] found id: ""
	I0920 18:23:16.963780   74753 logs.go:276] 0 containers: []
	W0920 18:23:16.963791   74753 logs.go:278] No container was found matching "kindnet"
	I0920 18:23:16.963799   74753 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0920 18:23:16.963866   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0920 18:23:17.000548   74753 cri.go:89] found id: "179e4a02f3459a613d3307806b477d54581555b18babca62d0c5e553b9562442"
	I0920 18:23:17.000566   74753 cri.go:89] found id: "3eb9abdf57de51979118cfa77b613c22e28122fcb1da9e72fbf6f3a946ed3531"
	I0920 18:23:17.000571   74753 cri.go:89] found id: ""
	I0920 18:23:17.000578   74753 logs.go:276] 2 containers: [179e4a02f3459a613d3307806b477d54581555b18babca62d0c5e553b9562442 3eb9abdf57de51979118cfa77b613c22e28122fcb1da9e72fbf6f3a946ed3531]
	I0920 18:23:17.000627   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:17.004708   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:17.008489   74753 logs.go:123] Gathering logs for coredns [35f0d8dd053d4c376ccf21215492d8509ff701e7d09b345fdbe06d505ba2d6b4] ...
	I0920 18:23:17.008518   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35f0d8dd053d4c376ccf21215492d8509ff701e7d09b345fdbe06d505ba2d6b4"
	I0920 18:23:17.045971   74753 logs.go:123] Gathering logs for kube-scheduler [8153479cebb05353bec31c41778746c485cb294bf41c3f98e30559ad7da64c64] ...
	I0920 18:23:17.046005   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8153479cebb05353bec31c41778746c485cb294bf41c3f98e30559ad7da64c64"
	I0920 18:23:17.083976   74753 logs.go:123] Gathering logs for kube-controller-manager [3ebf4c520d684127cd0b6146d44e863090b9c2db3ac866e9ce3ef2d10d2d6da1] ...
	I0920 18:23:17.084005   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ebf4c520d684127cd0b6146d44e863090b9c2db3ac866e9ce3ef2d10d2d6da1"
	I0920 18:23:17.137192   74753 logs.go:123] Gathering logs for storage-provisioner [179e4a02f3459a613d3307806b477d54581555b18babca62d0c5e553b9562442] ...
	I0920 18:23:17.137228   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 179e4a02f3459a613d3307806b477d54581555b18babca62d0c5e553b9562442"
	I0920 18:23:17.177203   74753 logs.go:123] Gathering logs for dmesg ...
	I0920 18:23:17.177234   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:23:17.192612   74753 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:23:17.192639   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 18:23:17.300681   74753 logs.go:123] Gathering logs for kube-apiserver [334e4df5baa4f8f7d58af2308fab7f78de22f7e07d6dfbcbeaa0df2a52f6b207] ...
	I0920 18:23:17.300714   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 334e4df5baa4f8f7d58af2308fab7f78de22f7e07d6dfbcbeaa0df2a52f6b207"
	I0920 18:23:17.346835   74753 logs.go:123] Gathering logs for storage-provisioner [3eb9abdf57de51979118cfa77b613c22e28122fcb1da9e72fbf6f3a946ed3531] ...
	I0920 18:23:17.346866   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3eb9abdf57de51979118cfa77b613c22e28122fcb1da9e72fbf6f3a946ed3531"
	I0920 18:23:17.382625   74753 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:23:17.382650   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:23:17.750803   74753 logs.go:123] Gathering logs for container status ...
	I0920 18:23:17.750853   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:23:17.801114   74753 logs.go:123] Gathering logs for kubelet ...
	I0920 18:23:17.801142   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:23:17.866547   74753 logs.go:123] Gathering logs for etcd [98aa96314cf8f863121b9953ad6335628481f536fcab49b7c00505a17eb35ff2] ...
	I0920 18:23:17.866589   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 98aa96314cf8f863121b9953ad6335628481f536fcab49b7c00505a17eb35ff2"
	I0920 18:23:17.909489   74753 logs.go:123] Gathering logs for kube-proxy [6df198ca54e8004a532de9b15cfe7a48a659ae06623c00a78cad528762944497] ...
	I0920 18:23:17.909519   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6df198ca54e8004a532de9b15cfe7a48a659ae06623c00a78cad528762944497"
	I0920 18:23:20.456014   74753 system_pods.go:59] 8 kube-system pods found
	I0920 18:23:20.456055   74753 system_pods.go:61] "coredns-7c65d6cfc9-j2t5h" [dd2636f1-3200-4f22-957c-046277c9be8c] Running
	I0920 18:23:20.456063   74753 system_pods.go:61] "etcd-no-preload-956403" [daf84be0-e673-4685-a998-47ec64664484] Running
	I0920 18:23:20.456068   74753 system_pods.go:61] "kube-apiserver-no-preload-956403" [416217e6-3c91-49ec-bd69-812a49b4afd9] Running
	I0920 18:23:20.456074   74753 system_pods.go:61] "kube-controller-manager-no-preload-956403" [30fae740-b073-4619-bca5-010ca90b2667] Running
	I0920 18:23:20.456079   74753 system_pods.go:61] "kube-proxy-sz4bm" [269600fb-ef65-4b17-8c07-76c79e35f5a8] Running
	I0920 18:23:20.456085   74753 system_pods.go:61] "kube-scheduler-no-preload-956403" [46432927-e506-4665-8825-c5f6a2ed0458] Running
	I0920 18:23:20.456095   74753 system_pods.go:61] "metrics-server-6867b74b74-tfsff" [599ba06a-6d4d-483b-b390-a3595a814757] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 18:23:20.456101   74753 system_pods.go:61] "storage-provisioner" [df661627-7d32-4467-9805-1ae65d4fa35c] Running
	I0920 18:23:20.456109   74753 system_pods.go:74] duration metric: took 3.780646412s to wait for pod list to return data ...
	I0920 18:23:20.456116   74753 default_sa.go:34] waiting for default service account to be created ...
	I0920 18:23:20.459571   74753 default_sa.go:45] found service account: "default"
	I0920 18:23:20.459598   74753 default_sa.go:55] duration metric: took 3.475906ms for default service account to be created ...
	I0920 18:23:20.459607   74753 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 18:23:20.464453   74753 system_pods.go:86] 8 kube-system pods found
	I0920 18:23:20.464489   74753 system_pods.go:89] "coredns-7c65d6cfc9-j2t5h" [dd2636f1-3200-4f22-957c-046277c9be8c] Running
	I0920 18:23:20.464495   74753 system_pods.go:89] "etcd-no-preload-956403" [daf84be0-e673-4685-a998-47ec64664484] Running
	I0920 18:23:20.464499   74753 system_pods.go:89] "kube-apiserver-no-preload-956403" [416217e6-3c91-49ec-bd69-812a49b4afd9] Running
	I0920 18:23:20.464544   74753 system_pods.go:89] "kube-controller-manager-no-preload-956403" [30fae740-b073-4619-bca5-010ca90b2667] Running
	I0920 18:23:20.464559   74753 system_pods.go:89] "kube-proxy-sz4bm" [269600fb-ef65-4b17-8c07-76c79e35f5a8] Running
	I0920 18:23:20.464563   74753 system_pods.go:89] "kube-scheduler-no-preload-956403" [46432927-e506-4665-8825-c5f6a2ed0458] Running
	I0920 18:23:20.464570   74753 system_pods.go:89] "metrics-server-6867b74b74-tfsff" [599ba06a-6d4d-483b-b390-a3595a814757] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 18:23:20.464577   74753 system_pods.go:89] "storage-provisioner" [df661627-7d32-4467-9805-1ae65d4fa35c] Running
	I0920 18:23:20.464583   74753 system_pods.go:126] duration metric: took 4.971732ms to wait for k8s-apps to be running ...
	I0920 18:23:20.464592   74753 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 18:23:20.464637   74753 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:23:20.483425   74753 system_svc.go:56] duration metric: took 18.826162ms WaitForService to wait for kubelet
	I0920 18:23:20.483452   74753 kubeadm.go:582] duration metric: took 4m21.334124064s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 18:23:20.483469   74753 node_conditions.go:102] verifying NodePressure condition ...
	I0920 18:23:20.486703   74753 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 18:23:20.486725   74753 node_conditions.go:123] node cpu capacity is 2
	I0920 18:23:20.486736   74753 node_conditions.go:105] duration metric: took 3.263035ms to run NodePressure ...
	I0920 18:23:20.486745   74753 start.go:241] waiting for startup goroutines ...
	I0920 18:23:20.486752   74753 start.go:246] waiting for cluster config update ...
	I0920 18:23:20.486763   74753 start.go:255] writing updated cluster config ...
	I0920 18:23:20.487023   74753 ssh_runner.go:195] Run: rm -f paused
	I0920 18:23:20.538569   74753 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 18:23:20.540783   74753 out.go:177] * Done! kubectl is now configured to use "no-preload-956403" cluster and "default" namespace by default
	I0920 18:24:23.891635   75577 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0920 18:24:23.891735   75577 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0920 18:24:23.893591   75577 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0920 18:24:23.893681   75577 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 18:24:23.893785   75577 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 18:24:23.893913   75577 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 18:24:23.894056   75577 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0920 18:24:23.894134   75577 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 18:24:23.895929   75577 out.go:235]   - Generating certificates and keys ...
	I0920 18:24:23.896024   75577 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 18:24:23.896127   75577 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 18:24:23.896245   75577 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0920 18:24:23.896329   75577 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0920 18:24:23.896426   75577 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0920 18:24:23.896510   75577 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0920 18:24:23.896603   75577 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0920 18:24:23.896686   75577 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0920 18:24:23.896783   75577 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0920 18:24:23.896865   75577 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0920 18:24:23.896916   75577 kubeadm.go:310] [certs] Using the existing "sa" key
	I0920 18:24:23.896984   75577 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 18:24:23.897054   75577 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 18:24:23.897112   75577 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 18:24:23.897174   75577 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 18:24:23.897237   75577 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 18:24:23.897367   75577 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 18:24:23.897488   75577 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 18:24:23.897554   75577 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 18:24:23.897660   75577 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 18:24:23.899056   75577 out.go:235]   - Booting up control plane ...
	I0920 18:24:23.899173   75577 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 18:24:23.899287   75577 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 18:24:23.899371   75577 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 18:24:23.899460   75577 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 18:24:23.899639   75577 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0920 18:24:23.899719   75577 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0920 18:24:23.899823   75577 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:24:23.900047   75577 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:24:23.900124   75577 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:24:23.900322   75577 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:24:23.900392   75577 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:24:23.900556   75577 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:24:23.900634   75577 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:24:23.900826   75577 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:24:23.900884   75577 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:24:23.901044   75577 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:24:23.901052   75577 kubeadm.go:310] 
	I0920 18:24:23.901094   75577 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0920 18:24:23.901129   75577 kubeadm.go:310] 		timed out waiting for the condition
	I0920 18:24:23.901135   75577 kubeadm.go:310] 
	I0920 18:24:23.901179   75577 kubeadm.go:310] 	This error is likely caused by:
	I0920 18:24:23.901209   75577 kubeadm.go:310] 		- The kubelet is not running
	I0920 18:24:23.901314   75577 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0920 18:24:23.901335   75577 kubeadm.go:310] 
	I0920 18:24:23.901458   75577 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0920 18:24:23.901515   75577 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0920 18:24:23.901547   75577 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0920 18:24:23.901556   75577 kubeadm.go:310] 
	I0920 18:24:23.901719   75577 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0920 18:24:23.901859   75577 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0920 18:24:23.901870   75577 kubeadm.go:310] 
	I0920 18:24:23.902030   75577 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0920 18:24:23.902122   75577 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0920 18:24:23.902186   75577 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0920 18:24:23.902264   75577 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0920 18:24:23.902299   75577 kubeadm.go:310] 
	W0920 18:24:23.902388   75577 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0920 18:24:23.902435   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0920 18:24:24.370490   75577 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:24:24.385383   75577 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 18:24:24.395443   75577 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 18:24:24.395463   75577 kubeadm.go:157] found existing configuration files:
	
	I0920 18:24:24.395505   75577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 18:24:24.404947   75577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 18:24:24.405021   75577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 18:24:24.415145   75577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 18:24:24.424199   75577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 18:24:24.424279   75577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 18:24:24.435108   75577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 18:24:24.446684   75577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 18:24:24.446742   75577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 18:24:24.457199   75577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 18:24:24.467240   75577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 18:24:24.467315   75577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 18:24:24.479293   75577 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 18:24:24.704601   75577 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 18:26:21.167677   75577 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0920 18:26:21.167760   75577 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0920 18:26:21.170176   75577 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0920 18:26:21.170249   75577 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 18:26:21.170353   75577 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 18:26:21.170485   75577 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 18:26:21.170653   75577 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0920 18:26:21.170778   75577 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 18:26:21.172479   75577 out.go:235]   - Generating certificates and keys ...
	I0920 18:26:21.172559   75577 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 18:26:21.172623   75577 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 18:26:21.172734   75577 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0920 18:26:21.172820   75577 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0920 18:26:21.172901   75577 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0920 18:26:21.172948   75577 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0920 18:26:21.173054   75577 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0920 18:26:21.173145   75577 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0920 18:26:21.173238   75577 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0920 18:26:21.173325   75577 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0920 18:26:21.173381   75577 kubeadm.go:310] [certs] Using the existing "sa" key
	I0920 18:26:21.173468   75577 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 18:26:21.173517   75577 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 18:26:21.173562   75577 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 18:26:21.173657   75577 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 18:26:21.173753   75577 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 18:26:21.173931   75577 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 18:26:21.174042   75577 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 18:26:21.174120   75577 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 18:26:21.174226   75577 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 18:26:21.175785   75577 out.go:235]   - Booting up control plane ...
	I0920 18:26:21.175897   75577 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 18:26:21.176062   75577 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 18:26:21.176179   75577 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 18:26:21.176283   75577 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 18:26:21.176482   75577 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0920 18:26:21.176567   75577 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0920 18:26:21.176654   75577 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:26:21.176850   75577 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:26:21.176952   75577 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:26:21.177146   75577 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:26:21.177255   75577 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:26:21.177460   75577 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:26:21.177527   75577 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:26:21.177681   75577 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:26:21.177758   75577 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:26:21.177952   75577 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:26:21.177966   75577 kubeadm.go:310] 
	I0920 18:26:21.178001   75577 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0920 18:26:21.178035   75577 kubeadm.go:310] 		timed out waiting for the condition
	I0920 18:26:21.178041   75577 kubeadm.go:310] 
	I0920 18:26:21.178070   75577 kubeadm.go:310] 	This error is likely caused by:
	I0920 18:26:21.178099   75577 kubeadm.go:310] 		- The kubelet is not running
	I0920 18:26:21.178239   75577 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0920 18:26:21.178251   75577 kubeadm.go:310] 
	I0920 18:26:21.178348   75577 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0920 18:26:21.178378   75577 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0920 18:26:21.178415   75577 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0920 18:26:21.178421   75577 kubeadm.go:310] 
	I0920 18:26:21.178505   75577 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0920 18:26:21.178581   75577 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0920 18:26:21.178590   75577 kubeadm.go:310] 
	I0920 18:26:21.178695   75577 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0920 18:26:21.178770   75577 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0920 18:26:21.178832   75577 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0920 18:26:21.178893   75577 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0920 18:26:21.178950   75577 kubeadm.go:394] duration metric: took 7m57.80706374s to StartCluster
	I0920 18:26:21.178961   75577 kubeadm.go:310] 
	I0920 18:26:21.178989   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:26:21.179047   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:26:21.229528   75577 cri.go:89] found id: ""
	I0920 18:26:21.229554   75577 logs.go:276] 0 containers: []
	W0920 18:26:21.229564   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:26:21.229572   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:26:21.229644   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:26:21.265997   75577 cri.go:89] found id: ""
	I0920 18:26:21.266021   75577 logs.go:276] 0 containers: []
	W0920 18:26:21.266029   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:26:21.266034   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:26:21.266081   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:26:21.309358   75577 cri.go:89] found id: ""
	I0920 18:26:21.309391   75577 logs.go:276] 0 containers: []
	W0920 18:26:21.309402   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:26:21.309409   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:26:21.309469   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:26:21.349418   75577 cri.go:89] found id: ""
	I0920 18:26:21.349442   75577 logs.go:276] 0 containers: []
	W0920 18:26:21.349451   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:26:21.349457   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:26:21.349537   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:26:21.388067   75577 cri.go:89] found id: ""
	I0920 18:26:21.388092   75577 logs.go:276] 0 containers: []
	W0920 18:26:21.388105   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:26:21.388111   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:26:21.388165   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:26:21.428474   75577 cri.go:89] found id: ""
	I0920 18:26:21.428501   75577 logs.go:276] 0 containers: []
	W0920 18:26:21.428512   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:26:21.428519   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:26:21.428580   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:26:21.466188   75577 cri.go:89] found id: ""
	I0920 18:26:21.466215   75577 logs.go:276] 0 containers: []
	W0920 18:26:21.466225   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:26:21.466232   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:26:21.466291   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:26:21.504396   75577 cri.go:89] found id: ""
	I0920 18:26:21.504422   75577 logs.go:276] 0 containers: []
	W0920 18:26:21.504432   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:26:21.504443   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:26:21.504457   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:26:21.608057   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:26:21.608098   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:26:21.672536   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:26:21.672564   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:26:21.723219   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:26:21.723251   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:26:21.736436   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:26:21.736463   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:26:21.809055   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0920 18:26:21.809079   75577 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0920 18:26:21.809128   75577 out.go:270] * 
	W0920 18:26:21.809197   75577 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0920 18:26:21.809217   75577 out.go:270] * 
	W0920 18:26:21.810193   75577 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 18:26:21.814385   75577 out.go:201] 
	W0920 18:26:21.816118   75577 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0920 18:26:21.816186   75577 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0920 18:26:21.816225   75577 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0920 18:26:21.818478   75577 out.go:201] 
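	The old-k8s-version start above ends with minikube's own suggestion to check the kubelet output and retry with an explicit cgroup driver. A minimal sketch of acting on that suggestion, assuming the profile name old-k8s-version-744025 taken from the node logs; any flags beyond the one quoted in the Suggestion line are omitted, and these commands are illustrative rather than part of the test run:

		# See why the kubelet never answered on localhost:10248 (command quoted in the log)
		journalctl -xeu kubelet --no-pager | tail -n 100

		# Retry the start with the cgroup driver override the log suggests
		minikube start -p old-k8s-version-744025 --extra-config=kubelet.cgroup-driver=systemd
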
	
	
	==> CRI-O <==
	Sep 20 18:26:23 old-k8s-version-744025 crio[628]: time="2024-09-20 18:26:23.773684310Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726856783773653308,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b8eda2fd-bfd2-40dd-af65-f838f2e80b69 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:26:23 old-k8s-version-744025 crio[628]: time="2024-09-20 18:26:23.774349297Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5a4ea792-6760-4333-a8dc-1633848f09f0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:26:23 old-k8s-version-744025 crio[628]: time="2024-09-20 18:26:23.774412295Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5a4ea792-6760-4333-a8dc-1633848f09f0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:26:23 old-k8s-version-744025 crio[628]: time="2024-09-20 18:26:23.774450059Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=5a4ea792-6760-4333-a8dc-1633848f09f0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:26:23 old-k8s-version-744025 crio[628]: time="2024-09-20 18:26:23.810055425Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=70c1d295-937b-4b94-a3b2-c626c59f7602 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:26:23 old-k8s-version-744025 crio[628]: time="2024-09-20 18:26:23.810193049Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=70c1d295-937b-4b94-a3b2-c626c59f7602 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:26:23 old-k8s-version-744025 crio[628]: time="2024-09-20 18:26:23.811312630Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f6554e5d-0ea3-4faa-ad89-40c55c61c9e9 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:26:23 old-k8s-version-744025 crio[628]: time="2024-09-20 18:26:23.811719490Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726856783811690197,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f6554e5d-0ea3-4faa-ad89-40c55c61c9e9 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:26:23 old-k8s-version-744025 crio[628]: time="2024-09-20 18:26:23.812213248Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=706b8704-ccd7-4d5e-87a8-f9946d4d9cfd name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:26:23 old-k8s-version-744025 crio[628]: time="2024-09-20 18:26:23.812276207Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=706b8704-ccd7-4d5e-87a8-f9946d4d9cfd name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:26:23 old-k8s-version-744025 crio[628]: time="2024-09-20 18:26:23.812313624Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=706b8704-ccd7-4d5e-87a8-f9946d4d9cfd name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:26:23 old-k8s-version-744025 crio[628]: time="2024-09-20 18:26:23.844748737Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a12d816a-1697-4ec0-bdaa-2dc2e01d1902 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:26:23 old-k8s-version-744025 crio[628]: time="2024-09-20 18:26:23.844835495Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a12d816a-1697-4ec0-bdaa-2dc2e01d1902 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:26:23 old-k8s-version-744025 crio[628]: time="2024-09-20 18:26:23.847121050Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=faf59e35-d696-488b-a23d-526281f42e99 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:26:23 old-k8s-version-744025 crio[628]: time="2024-09-20 18:26:23.847953692Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726856783847920907,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=faf59e35-d696-488b-a23d-526281f42e99 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:26:23 old-k8s-version-744025 crio[628]: time="2024-09-20 18:26:23.849426274Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ecd27236-ef75-42c3-96db-b424d4f14e4e name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:26:23 old-k8s-version-744025 crio[628]: time="2024-09-20 18:26:23.849486212Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ecd27236-ef75-42c3-96db-b424d4f14e4e name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:26:23 old-k8s-version-744025 crio[628]: time="2024-09-20 18:26:23.849532762Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=ecd27236-ef75-42c3-96db-b424d4f14e4e name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:26:23 old-k8s-version-744025 crio[628]: time="2024-09-20 18:26:23.882559774Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=15cf4075-462b-4715-9462-04f77716aaab name=/runtime.v1.RuntimeService/Version
	Sep 20 18:26:23 old-k8s-version-744025 crio[628]: time="2024-09-20 18:26:23.882637374Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=15cf4075-462b-4715-9462-04f77716aaab name=/runtime.v1.RuntimeService/Version
	Sep 20 18:26:23 old-k8s-version-744025 crio[628]: time="2024-09-20 18:26:23.884636068Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=49b35fc4-5b4a-4960-b952-e2be4c0a71f4 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:26:23 old-k8s-version-744025 crio[628]: time="2024-09-20 18:26:23.885119934Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726856783885057580,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=49b35fc4-5b4a-4960-b952-e2be4c0a71f4 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:26:23 old-k8s-version-744025 crio[628]: time="2024-09-20 18:26:23.885724498Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1fec69de-b308-45e2-973e-1448b1db67fc name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:26:23 old-k8s-version-744025 crio[628]: time="2024-09-20 18:26:23.885774715Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1fec69de-b308-45e2-973e-1448b1db67fc name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:26:23 old-k8s-version-744025 crio[628]: time="2024-09-20 18:26:23.885813677Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=1fec69de-b308-45e2-973e-1448b1db67fc name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Sep20 18:17] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050746] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037709] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Sep20 18:18] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.985932] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.595098] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.239386] systemd-fstab-generator[555]: Ignoring "noauto" option for root device
	[  +0.063752] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.070306] systemd-fstab-generator[567]: Ignoring "noauto" option for root device
	[  +0.206728] systemd-fstab-generator[581]: Ignoring "noauto" option for root device
	[  +0.121183] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.259648] systemd-fstab-generator[619]: Ignoring "noauto" option for root device
	[  +6.744745] systemd-fstab-generator[874]: Ignoring "noauto" option for root device
	[  +0.100047] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.166710] systemd-fstab-generator[998]: Ignoring "noauto" option for root device
	[ +12.315396] kauditd_printk_skb: 46 callbacks suppressed
	[Sep20 18:22] systemd-fstab-generator[4992]: Ignoring "noauto" option for root device
	[Sep20 18:24] systemd-fstab-generator[5273]: Ignoring "noauto" option for root device
	[  +0.069847] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 18:26:24 up 8 min,  0 users,  load average: 0.08, 0.16, 0.09
	Linux old-k8s-version-744025 5.10.207 #1 SMP Fri Sep 20 03:13:51 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Sep 20 18:26:20 old-k8s-version-744025 kubelet[5454]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:138 +0x185
	Sep 20 18:26:20 old-k8s-version-744025 kubelet[5454]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run.func1()
	Sep 20 18:26:20 old-k8s-version-744025 kubelet[5454]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:222 +0x70
	Sep 20 18:26:20 old-k8s-version-744025 kubelet[5454]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc0009db6f0)
	Sep 20 18:26:20 old-k8s-version-744025 kubelet[5454]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
	Sep 20 18:26:20 old-k8s-version-744025 kubelet[5454]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000c17ef0, 0x4f0ac20, 0xc0009f00a0, 0x1, 0xc0001020c0)
	Sep 20 18:26:20 old-k8s-version-744025 kubelet[5454]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xad
	Sep 20 18:26:20 old-k8s-version-744025 kubelet[5454]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc0000ce460, 0xc0001020c0)
	Sep 20 18:26:20 old-k8s-version-744025 kubelet[5454]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Sep 20 18:26:20 old-k8s-version-744025 kubelet[5454]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Sep 20 18:26:20 old-k8s-version-744025 kubelet[5454]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Sep 20 18:26:20 old-k8s-version-744025 kubelet[5454]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc0008cb240, 0xc00098f2a0)
	Sep 20 18:26:20 old-k8s-version-744025 kubelet[5454]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Sep 20 18:26:20 old-k8s-version-744025 kubelet[5454]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Sep 20 18:26:20 old-k8s-version-744025 kubelet[5454]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Sep 20 18:26:20 old-k8s-version-744025 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Sep 20 18:26:20 old-k8s-version-744025 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Sep 20 18:26:21 old-k8s-version-744025 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Sep 20 18:26:21 old-k8s-version-744025 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Sep 20 18:26:21 old-k8s-version-744025 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Sep 20 18:26:21 old-k8s-version-744025 kubelet[5504]: I0920 18:26:21.647068    5504 server.go:416] Version: v1.20.0
	Sep 20 18:26:21 old-k8s-version-744025 kubelet[5504]: I0920 18:26:21.647457    5504 server.go:837] Client rotation is on, will bootstrap in background
	Sep 20 18:26:21 old-k8s-version-744025 kubelet[5504]: I0920 18:26:21.649916    5504 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Sep 20 18:26:21 old-k8s-version-744025 kubelet[5504]: I0920 18:26:21.651412    5504 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Sep 20 18:26:21 old-k8s-version-744025 kubelet[5504]: W0920 18:26:21.651582    5504 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-744025 -n old-k8s-version-744025
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-744025 -n old-k8s-version-744025: exit status 2 (236.500961ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-744025" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (732.79s)
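For reference, a quick manual triage of this failure mode (the profile name is taken from the logs above; the commands are an illustrative sketch and not part of the test harness) would be to look at the crash-looping kubelet and the cri-o container list from inside the node:

    # why kubelet keeps exiting with status 255 and restarting
    out/minikube-linux-amd64 ssh -p old-k8s-version-744025 "sudo journalctl -u kubelet --no-pager -n 50"
    # an empty listing here matches the empty '==> container status <==' section above
    out/minikube-linux-amd64 ssh -p old-k8s-version-744025 "sudo crictl ps -a"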

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.43s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0920 18:22:43.197076   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-553719 -n default-k8s-diff-port-553719
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-09-20 18:31:38.078099277 +0000 UTC m=+6479.722180825
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
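The label selector the harness polls can also be checked by hand once the apiserver responds; a minimal sketch, assuming the kubeconfig context carries the same name as the profile:

    kubectl --context default-k8s-diff-port-553719 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard -o wide
    kubectl --context default-k8s-diff-port-553719 -n kubernetes-dashboard describe pods -l k8s-app=kubernetes-dashboard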
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-553719 -n default-k8s-diff-port-553719
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-553719 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-553719 logs -n 25: (2.240704187s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p flannel-833505 sudo cat                             | flannel-833505               | jenkins | v1.34.0 | 20 Sep 24 18:09 UTC | 20 Sep 24 18:09 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p flannel-833505 sudo                                 | flannel-833505               | jenkins | v1.34.0 | 20 Sep 24 18:09 UTC | 20 Sep 24 18:09 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p flannel-833505 sudo                                 | flannel-833505               | jenkins | v1.34.0 | 20 Sep 24 18:09 UTC | 20 Sep 24 18:09 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p flannel-833505 sudo                                 | flannel-833505               | jenkins | v1.34.0 | 20 Sep 24 18:09 UTC | 20 Sep 24 18:09 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p flannel-833505 sudo find                            | flannel-833505               | jenkins | v1.34.0 | 20 Sep 24 18:09 UTC | 20 Sep 24 18:09 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p flannel-833505 sudo crio                            | flannel-833505               | jenkins | v1.34.0 | 20 Sep 24 18:09 UTC | 20 Sep 24 18:09 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p flannel-833505                                      | flannel-833505               | jenkins | v1.34.0 | 20 Sep 24 18:09 UTC | 20 Sep 24 18:09 UTC |
	| delete  | -p                                                     | disable-driver-mounts-739804 | jenkins | v1.34.0 | 20 Sep 24 18:09 UTC | 20 Sep 24 18:09 UTC |
	|         | disable-driver-mounts-739804                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-553719 | jenkins | v1.34.0 | 20 Sep 24 18:09 UTC | 20 Sep 24 18:10 UTC |
	|         | default-k8s-diff-port-553719                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-956403             | no-preload-956403            | jenkins | v1.34.0 | 20 Sep 24 18:10 UTC | 20 Sep 24 18:10 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-956403                                   | no-preload-956403            | jenkins | v1.34.0 | 20 Sep 24 18:10 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-768431            | embed-certs-768431           | jenkins | v1.34.0 | 20 Sep 24 18:10 UTC | 20 Sep 24 18:10 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-768431                                  | embed-certs-768431           | jenkins | v1.34.0 | 20 Sep 24 18:10 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-553719  | default-k8s-diff-port-553719 | jenkins | v1.34.0 | 20 Sep 24 18:11 UTC | 20 Sep 24 18:11 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-553719 | jenkins | v1.34.0 | 20 Sep 24 18:11 UTC |                     |
	|         | default-k8s-diff-port-553719                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-956403                  | no-preload-956403            | jenkins | v1.34.0 | 20 Sep 24 18:12 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-744025        | old-k8s-version-744025       | jenkins | v1.34.0 | 20 Sep 24 18:12 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p no-preload-956403                                   | no-preload-956403            | jenkins | v1.34.0 | 20 Sep 24 18:12 UTC | 20 Sep 24 18:23 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-768431                 | embed-certs-768431           | jenkins | v1.34.0 | 20 Sep 24 18:13 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-768431                                  | embed-certs-768431           | jenkins | v1.34.0 | 20 Sep 24 18:13 UTC | 20 Sep 24 18:22 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-553719       | default-k8s-diff-port-553719 | jenkins | v1.34.0 | 20 Sep 24 18:13 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-553719 | jenkins | v1.34.0 | 20 Sep 24 18:13 UTC | 20 Sep 24 18:22 UTC |
	|         | default-k8s-diff-port-553719                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-744025                              | old-k8s-version-744025       | jenkins | v1.34.0 | 20 Sep 24 18:14 UTC | 20 Sep 24 18:14 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-744025             | old-k8s-version-744025       | jenkins | v1.34.0 | 20 Sep 24 18:14 UTC | 20 Sep 24 18:14 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-744025                              | old-k8s-version-744025       | jenkins | v1.34.0 | 20 Sep 24 18:14 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 18:14:12
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 18:14:12.735020   75577 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:14:12.735385   75577 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:14:12.735400   75577 out.go:358] Setting ErrFile to fd 2...
	I0920 18:14:12.735407   75577 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:14:12.735610   75577 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-8777/.minikube/bin
	I0920 18:14:12.736161   75577 out.go:352] Setting JSON to false
	I0920 18:14:12.737033   75577 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6996,"bootTime":1726849057,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 18:14:12.737131   75577 start.go:139] virtualization: kvm guest
	I0920 18:14:12.739127   75577 out.go:177] * [old-k8s-version-744025] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 18:14:12.740288   75577 notify.go:220] Checking for updates...
	I0920 18:14:12.740310   75577 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 18:14:12.741542   75577 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 18:14:12.742727   75577 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19672-8777/kubeconfig
	I0920 18:14:12.743806   75577 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-8777/.minikube
	I0920 18:14:12.744863   75577 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 18:14:12.745877   75577 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 18:14:12.747201   75577 config.go:182] Loaded profile config "old-k8s-version-744025": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0920 18:14:12.747636   75577 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:14:12.747678   75577 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:14:12.762352   75577 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42889
	I0920 18:14:12.762734   75577 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:14:12.763321   75577 main.go:141] libmachine: Using API Version  1
	I0920 18:14:12.763348   75577 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:14:12.763742   75577 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:14:12.763968   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .DriverName
	I0920 18:14:12.765767   75577 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0920 18:14:12.767036   75577 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 18:14:12.767377   75577 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:14:12.767449   75577 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:14:12.782473   75577 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40777
	I0920 18:14:12.782971   75577 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:14:12.783455   75577 main.go:141] libmachine: Using API Version  1
	I0920 18:14:12.783477   75577 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:14:12.783807   75577 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:14:12.784035   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .DriverName
	I0920 18:14:12.820265   75577 out.go:177] * Using the kvm2 driver based on existing profile
	I0920 18:14:12.821397   75577 start.go:297] selected driver: kvm2
	I0920 18:14:12.821409   75577 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-744025 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-744025 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.207 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:14:12.821519   75577 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 18:14:12.822217   75577 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 18:14:12.822309   75577 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19672-8777/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 18:14:12.837641   75577 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 18:14:12.838107   75577 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 18:14:12.838143   75577 cni.go:84] Creating CNI manager for ""
	I0920 18:14:12.838193   75577 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 18:14:12.838231   75577 start.go:340] cluster config:
	{Name:old-k8s-version-744025 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-744025 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.207 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:14:12.838329   75577 iso.go:125] acquiring lock: {Name:mkba95ef0488e46f622333e9f317f43def93040b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 18:14:12.840059   75577 out.go:177] * Starting "old-k8s-version-744025" primary control-plane node in "old-k8s-version-744025" cluster
	I0920 18:14:15.230099   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:14:12.841339   75577 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0920 18:14:12.841384   75577 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0920 18:14:12.841394   75577 cache.go:56] Caching tarball of preloaded images
	I0920 18:14:12.841473   75577 preload.go:172] Found /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 18:14:12.841482   75577 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0920 18:14:12.841594   75577 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/old-k8s-version-744025/config.json ...
	I0920 18:14:12.841781   75577 start.go:360] acquireMachinesLock for old-k8s-version-744025: {Name:mkfeedb385cf08b5d2aa00913e85815d02a180c2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 18:14:18.302232   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:14:24.382087   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:14:27.454110   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:14:33.534076   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:14:36.606161   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:14:42.686057   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:14:45.758152   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:14:51.838087   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:14:54.910159   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:15:00.990183   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:15:04.062105   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:15:10.142153   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:15:13.218090   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:15:19.294132   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:15:22.366139   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:15:28.446103   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:15:31.518062   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:15:37.598126   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:15:40.670145   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:15:46.750116   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:15:49.822142   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:15:55.902120   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:15:58.974169   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:16:05.054139   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:16:08.126122   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:16:14.206089   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:16:17.278137   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:16:23.358156   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:16:26.430180   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:16:32.510115   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:16:35.582114   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:16:41.662126   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:16:44.734140   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:16:50.814123   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:16:53.886188   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:16:59.966139   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:17:03.038067   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:17:09.118169   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:17:12.190091   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:17:15.194165   75086 start.go:364] duration metric: took 4m1.091611985s to acquireMachinesLock for "embed-certs-768431"
	I0920 18:17:15.194223   75086 start.go:96] Skipping create...Using existing machine configuration
	I0920 18:17:15.194231   75086 fix.go:54] fixHost starting: 
	I0920 18:17:15.194737   75086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:17:15.194794   75086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:17:15.210558   75086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43191
	I0920 18:17:15.211084   75086 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:17:15.211527   75086 main.go:141] libmachine: Using API Version  1
	I0920 18:17:15.211550   75086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:17:15.211882   75086 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:17:15.212099   75086 main.go:141] libmachine: (embed-certs-768431) Calling .DriverName
	I0920 18:17:15.212217   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetState
	I0920 18:17:15.213955   75086 fix.go:112] recreateIfNeeded on embed-certs-768431: state=Stopped err=<nil>
	I0920 18:17:15.213995   75086 main.go:141] libmachine: (embed-certs-768431) Calling .DriverName
	W0920 18:17:15.214129   75086 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 18:17:15.215650   75086 out.go:177] * Restarting existing kvm2 VM for "embed-certs-768431" ...
	I0920 18:17:15.191760   74753 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 18:17:15.191794   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetMachineName
	I0920 18:17:15.192105   74753 buildroot.go:166] provisioning hostname "no-preload-956403"
	I0920 18:17:15.192133   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetMachineName
	I0920 18:17:15.192325   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHHostname
	I0920 18:17:15.194039   74753 machine.go:96] duration metric: took 4m37.421116123s to provisionDockerMachine
	I0920 18:17:15.194077   74753 fix.go:56] duration metric: took 4m37.444193238s for fixHost
	I0920 18:17:15.194087   74753 start.go:83] releasing machines lock for "no-preload-956403", held for 4m37.444234787s
	W0920 18:17:15.194109   74753 start.go:714] error starting host: provision: host is not running
	W0920 18:17:15.194214   74753 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0920 18:17:15.194222   74753 start.go:729] Will try again in 5 seconds ...
	I0920 18:17:15.216590   75086 main.go:141] libmachine: (embed-certs-768431) Calling .Start
	I0920 18:17:15.216773   75086 main.go:141] libmachine: (embed-certs-768431) Ensuring networks are active...
	I0920 18:17:15.217439   75086 main.go:141] libmachine: (embed-certs-768431) Ensuring network default is active
	I0920 18:17:15.217769   75086 main.go:141] libmachine: (embed-certs-768431) Ensuring network mk-embed-certs-768431 is active
	I0920 18:17:15.218058   75086 main.go:141] libmachine: (embed-certs-768431) Getting domain xml...
	I0920 18:17:15.218674   75086 main.go:141] libmachine: (embed-certs-768431) Creating domain...
	I0920 18:17:16.454922   75086 main.go:141] libmachine: (embed-certs-768431) Waiting to get IP...
	I0920 18:17:16.456011   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:16.456437   75086 main.go:141] libmachine: (embed-certs-768431) DBG | unable to find current IP address of domain embed-certs-768431 in network mk-embed-certs-768431
	I0920 18:17:16.456518   75086 main.go:141] libmachine: (embed-certs-768431) DBG | I0920 18:17:16.456416   76214 retry.go:31] will retry after 282.081004ms: waiting for machine to come up
	I0920 18:17:16.740062   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:16.740455   75086 main.go:141] libmachine: (embed-certs-768431) DBG | unable to find current IP address of domain embed-certs-768431 in network mk-embed-certs-768431
	I0920 18:17:16.740505   75086 main.go:141] libmachine: (embed-certs-768431) DBG | I0920 18:17:16.740420   76214 retry.go:31] will retry after 335.306847ms: waiting for machine to come up
	I0920 18:17:17.077169   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:17.077564   75086 main.go:141] libmachine: (embed-certs-768431) DBG | unable to find current IP address of domain embed-certs-768431 in network mk-embed-certs-768431
	I0920 18:17:17.077594   75086 main.go:141] libmachine: (embed-certs-768431) DBG | I0920 18:17:17.077513   76214 retry.go:31] will retry after 455.255315ms: waiting for machine to come up
	I0920 18:17:17.534166   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:17.534545   75086 main.go:141] libmachine: (embed-certs-768431) DBG | unable to find current IP address of domain embed-certs-768431 in network mk-embed-certs-768431
	I0920 18:17:17.534570   75086 main.go:141] libmachine: (embed-certs-768431) DBG | I0920 18:17:17.534511   76214 retry.go:31] will retry after 476.184378ms: waiting for machine to come up
	I0920 18:17:18.011940   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:18.012405   75086 main.go:141] libmachine: (embed-certs-768431) DBG | unable to find current IP address of domain embed-certs-768431 in network mk-embed-certs-768431
	I0920 18:17:18.012436   75086 main.go:141] libmachine: (embed-certs-768431) DBG | I0920 18:17:18.012359   76214 retry.go:31] will retry after 597.678215ms: waiting for machine to come up
	I0920 18:17:18.611179   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:18.611560   75086 main.go:141] libmachine: (embed-certs-768431) DBG | unable to find current IP address of domain embed-certs-768431 in network mk-embed-certs-768431
	I0920 18:17:18.611588   75086 main.go:141] libmachine: (embed-certs-768431) DBG | I0920 18:17:18.611521   76214 retry.go:31] will retry after 696.074491ms: waiting for machine to come up
	I0920 18:17:20.194460   74753 start.go:360] acquireMachinesLock for no-preload-956403: {Name:mkfeedb385cf08b5d2aa00913e85815d02a180c2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 18:17:19.309493   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:19.309940   75086 main.go:141] libmachine: (embed-certs-768431) DBG | unable to find current IP address of domain embed-certs-768431 in network mk-embed-certs-768431
	I0920 18:17:19.309958   75086 main.go:141] libmachine: (embed-certs-768431) DBG | I0920 18:17:19.309909   76214 retry.go:31] will retry after 825.638908ms: waiting for machine to come up
	I0920 18:17:20.137018   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:20.137380   75086 main.go:141] libmachine: (embed-certs-768431) DBG | unable to find current IP address of domain embed-certs-768431 in network mk-embed-certs-768431
	I0920 18:17:20.137407   75086 main.go:141] libmachine: (embed-certs-768431) DBG | I0920 18:17:20.137338   76214 retry.go:31] will retry after 997.909719ms: waiting for machine to come up
	I0920 18:17:21.137154   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:21.137608   75086 main.go:141] libmachine: (embed-certs-768431) DBG | unable to find current IP address of domain embed-certs-768431 in network mk-embed-certs-768431
	I0920 18:17:21.137630   75086 main.go:141] libmachine: (embed-certs-768431) DBG | I0920 18:17:21.137552   76214 retry.go:31] will retry after 1.368594293s: waiting for machine to come up
	I0920 18:17:22.507834   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:22.508202   75086 main.go:141] libmachine: (embed-certs-768431) DBG | unable to find current IP address of domain embed-certs-768431 in network mk-embed-certs-768431
	I0920 18:17:22.508228   75086 main.go:141] libmachine: (embed-certs-768431) DBG | I0920 18:17:22.508152   76214 retry.go:31] will retry after 1.922265011s: waiting for machine to come up
	I0920 18:17:24.431977   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:24.432422   75086 main.go:141] libmachine: (embed-certs-768431) DBG | unable to find current IP address of domain embed-certs-768431 in network mk-embed-certs-768431
	I0920 18:17:24.432452   75086 main.go:141] libmachine: (embed-certs-768431) DBG | I0920 18:17:24.432370   76214 retry.go:31] will retry after 2.875158038s: waiting for machine to come up
	I0920 18:17:27.309993   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:27.310512   75086 main.go:141] libmachine: (embed-certs-768431) DBG | unable to find current IP address of domain embed-certs-768431 in network mk-embed-certs-768431
	I0920 18:17:27.310539   75086 main.go:141] libmachine: (embed-certs-768431) DBG | I0920 18:17:27.310459   76214 retry.go:31] will retry after 3.089759463s: waiting for machine to come up
	I0920 18:17:30.402254   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:30.402718   75086 main.go:141] libmachine: (embed-certs-768431) DBG | unable to find current IP address of domain embed-certs-768431 in network mk-embed-certs-768431
	I0920 18:17:30.402757   75086 main.go:141] libmachine: (embed-certs-768431) DBG | I0920 18:17:30.402671   76214 retry.go:31] will retry after 3.42897838s: waiting for machine to come up
	I0920 18:17:33.835196   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:33.835576   75086 main.go:141] libmachine: (embed-certs-768431) Found IP for machine: 192.168.61.202
	I0920 18:17:33.835606   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has current primary IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:33.835615   75086 main.go:141] libmachine: (embed-certs-768431) Reserving static IP address...
	I0920 18:17:33.835974   75086 main.go:141] libmachine: (embed-certs-768431) Reserved static IP address: 192.168.61.202
	I0920 18:17:33.836010   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "embed-certs-768431", mac: "52:54:00:d2:f2:e2", ip: "192.168.61.202"} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:17:33.836024   75086 main.go:141] libmachine: (embed-certs-768431) Waiting for SSH to be available...
	I0920 18:17:33.836053   75086 main.go:141] libmachine: (embed-certs-768431) DBG | skip adding static IP to network mk-embed-certs-768431 - found existing host DHCP lease matching {name: "embed-certs-768431", mac: "52:54:00:d2:f2:e2", ip: "192.168.61.202"}
	I0920 18:17:33.836065   75086 main.go:141] libmachine: (embed-certs-768431) DBG | Getting to WaitForSSH function...
	I0920 18:17:33.838215   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:33.838561   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:17:33.838593   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:33.838735   75086 main.go:141] libmachine: (embed-certs-768431) DBG | Using SSH client type: external
	I0920 18:17:33.838768   75086 main.go:141] libmachine: (embed-certs-768431) DBG | Using SSH private key: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/embed-certs-768431/id_rsa (-rw-------)
	I0920 18:17:33.838809   75086 main.go:141] libmachine: (embed-certs-768431) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.202 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19672-8777/.minikube/machines/embed-certs-768431/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 18:17:33.838826   75086 main.go:141] libmachine: (embed-certs-768431) DBG | About to run SSH command:
	I0920 18:17:33.838838   75086 main.go:141] libmachine: (embed-certs-768431) DBG | exit 0
	I0920 18:17:33.962092   75086 main.go:141] libmachine: (embed-certs-768431) DBG | SSH cmd err, output: <nil>: 
	I0920 18:17:33.962513   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetConfigRaw
	I0920 18:17:33.963115   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetIP
	I0920 18:17:33.965714   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:33.966036   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:17:33.966056   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:33.966391   75086 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/embed-certs-768431/config.json ...
	I0920 18:17:33.966621   75086 machine.go:93] provisionDockerMachine start ...
	I0920 18:17:33.966641   75086 main.go:141] libmachine: (embed-certs-768431) Calling .DriverName
	I0920 18:17:33.966848   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHHostname
	I0920 18:17:33.968954   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:33.969235   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:17:33.969266   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:33.969358   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHPort
	I0920 18:17:33.969542   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHKeyPath
	I0920 18:17:33.969694   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHKeyPath
	I0920 18:17:33.969854   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHUsername
	I0920 18:17:33.970001   75086 main.go:141] libmachine: Using SSH client type: native
	I0920 18:17:33.970194   75086 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.202 22 <nil> <nil>}
	I0920 18:17:33.970204   75086 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 18:17:35.178887   75264 start.go:364] duration metric: took 3m58.041934266s to acquireMachinesLock for "default-k8s-diff-port-553719"
	I0920 18:17:35.178955   75264 start.go:96] Skipping create...Using existing machine configuration
	I0920 18:17:35.178968   75264 fix.go:54] fixHost starting: 
	I0920 18:17:35.179541   75264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:17:35.179604   75264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:17:35.199776   75264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39643
	I0920 18:17:35.200255   75264 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:17:35.200832   75264 main.go:141] libmachine: Using API Version  1
	I0920 18:17:35.200858   75264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:17:35.201213   75264 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:17:35.201427   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .DriverName
	I0920 18:17:35.201575   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetState
	I0920 18:17:35.203239   75264 fix.go:112] recreateIfNeeded on default-k8s-diff-port-553719: state=Stopped err=<nil>
	I0920 18:17:35.203279   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .DriverName
	W0920 18:17:35.203415   75264 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 18:17:35.205529   75264 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-553719" ...
	I0920 18:17:34.078412   75086 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0920 18:17:34.078447   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetMachineName
	I0920 18:17:34.078648   75086 buildroot.go:166] provisioning hostname "embed-certs-768431"
	I0920 18:17:34.078676   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetMachineName
	I0920 18:17:34.078854   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHHostname
	I0920 18:17:34.081703   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:34.082042   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:17:34.082079   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:34.082175   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHPort
	I0920 18:17:34.082371   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHKeyPath
	I0920 18:17:34.082525   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHKeyPath
	I0920 18:17:34.082656   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHUsername
	I0920 18:17:34.082778   75086 main.go:141] libmachine: Using SSH client type: native
	I0920 18:17:34.082937   75086 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.202 22 <nil> <nil>}
	I0920 18:17:34.082949   75086 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-768431 && echo "embed-certs-768431" | sudo tee /etc/hostname
	I0920 18:17:34.199661   75086 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-768431
	
	I0920 18:17:34.199726   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHHostname
	I0920 18:17:34.202468   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:34.202875   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:17:34.202903   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:34.203084   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHPort
	I0920 18:17:34.203328   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHKeyPath
	I0920 18:17:34.203477   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHKeyPath
	I0920 18:17:34.203630   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHUsername
	I0920 18:17:34.203769   75086 main.go:141] libmachine: Using SSH client type: native
	I0920 18:17:34.203936   75086 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.202 22 <nil> <nil>}
	I0920 18:17:34.203952   75086 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-768431' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-768431/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-768431' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 18:17:34.314604   75086 main.go:141] libmachine: SSH cmd err, output: <nil>: 
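The command block above is how the provisioner makes the guest's /etc/hosts resolve its new hostname: it only appends or rewrites the 127.0.1.1 entry when no existing line already ends in the hostname. A minimal Go sketch of assembling that snippet (the hostsPatch helper is hypothetical, not minikube's own code):

// Illustrative only: build the /etc/hosts patch shown in the log above.
package main

import "fmt"

// hostsPatch returns a shell snippet that maps 127.0.1.1 to the given
// hostname, rewriting an existing 127.0.1.1 entry when one is present.
func hostsPatch(hostname string) string {
	return fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
  fi
fi`, hostname)
}

func main() {
	fmt.Println(hostsPatch("embed-certs-768431"))
}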
	I0920 18:17:34.314632   75086 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19672-8777/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-8777/.minikube}
	I0920 18:17:34.314656   75086 buildroot.go:174] setting up certificates
	I0920 18:17:34.314666   75086 provision.go:84] configureAuth start
	I0920 18:17:34.314674   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetMachineName
	I0920 18:17:34.314940   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetIP
	I0920 18:17:34.317999   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:34.318306   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:17:34.318326   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:34.318538   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHHostname
	I0920 18:17:34.320715   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:34.321046   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:17:34.321073   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:34.321187   75086 provision.go:143] copyHostCerts
	I0920 18:17:34.321256   75086 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem, removing ...
	I0920 18:17:34.321276   75086 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem
	I0920 18:17:34.321359   75086 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem (1082 bytes)
	I0920 18:17:34.321452   75086 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem, removing ...
	I0920 18:17:34.321459   75086 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem
	I0920 18:17:34.321493   75086 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem (1123 bytes)
	I0920 18:17:34.321549   75086 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem, removing ...
	I0920 18:17:34.321558   75086 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem
	I0920 18:17:34.321580   75086 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem (1675 bytes)
	I0920 18:17:34.321692   75086 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem org=jenkins.embed-certs-768431 san=[127.0.0.1 192.168.61.202 embed-certs-768431 localhost minikube]
	I0920 18:17:34.547266   75086 provision.go:177] copyRemoteCerts
	I0920 18:17:34.547343   75086 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 18:17:34.547373   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHHostname
	I0920 18:17:34.550265   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:34.550648   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:17:34.550680   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:34.550895   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHPort
	I0920 18:17:34.551084   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHKeyPath
	I0920 18:17:34.551227   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHUsername
	I0920 18:17:34.551359   75086 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/embed-certs-768431/id_rsa Username:docker}
	I0920 18:17:34.632404   75086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0920 18:17:34.656468   75086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 18:17:34.681704   75086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0920 18:17:34.706486   75086 provision.go:87] duration metric: took 391.807931ms to configureAuth
	I0920 18:17:34.706514   75086 buildroot.go:189] setting minikube options for container-runtime
	I0920 18:17:34.706681   75086 config.go:182] Loaded profile config "embed-certs-768431": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:17:34.706750   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHHostname
	I0920 18:17:34.709500   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:34.709851   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:17:34.709915   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:34.709972   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHPort
	I0920 18:17:34.710199   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHKeyPath
	I0920 18:17:34.710337   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHKeyPath
	I0920 18:17:34.710511   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHUsername
	I0920 18:17:34.710669   75086 main.go:141] libmachine: Using SSH client type: native
	I0920 18:17:34.710854   75086 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.202 22 <nil> <nil>}
	I0920 18:17:34.710875   75086 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 18:17:34.941246   75086 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 18:17:34.941277   75086 machine.go:96] duration metric: took 974.641764ms to provisionDockerMachine
	I0920 18:17:34.941293   75086 start.go:293] postStartSetup for "embed-certs-768431" (driver="kvm2")
	I0920 18:17:34.941341   75086 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 18:17:34.941370   75086 main.go:141] libmachine: (embed-certs-768431) Calling .DriverName
	I0920 18:17:34.941757   75086 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 18:17:34.941792   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHHostname
	I0920 18:17:34.944366   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:34.944712   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:17:34.944749   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:34.944861   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHPort
	I0920 18:17:34.945051   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHKeyPath
	I0920 18:17:34.945198   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHUsername
	I0920 18:17:34.945320   75086 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/embed-certs-768431/id_rsa Username:docker}
	I0920 18:17:35.028570   75086 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 18:17:35.032711   75086 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 18:17:35.032751   75086 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8777/.minikube/addons for local assets ...
	I0920 18:17:35.032859   75086 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8777/.minikube/files for local assets ...
	I0920 18:17:35.032973   75086 filesync.go:149] local asset: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem -> 159732.pem in /etc/ssl/certs
	I0920 18:17:35.033127   75086 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 18:17:35.042212   75086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem --> /etc/ssl/certs/159732.pem (1708 bytes)
	I0920 18:17:35.066379   75086 start.go:296] duration metric: took 125.0653ms for postStartSetup
	I0920 18:17:35.066429   75086 fix.go:56] duration metric: took 19.872197784s for fixHost
	I0920 18:17:35.066456   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHHostname
	I0920 18:17:35.069888   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:35.070261   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:17:35.070286   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:35.070472   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHPort
	I0920 18:17:35.070693   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHKeyPath
	I0920 18:17:35.070836   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHKeyPath
	I0920 18:17:35.070989   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHUsername
	I0920 18:17:35.071204   75086 main.go:141] libmachine: Using SSH client type: native
	I0920 18:17:35.071396   75086 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.202 22 <nil> <nil>}
	I0920 18:17:35.071407   75086 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 18:17:35.178674   75086 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726856255.154624802
	
	I0920 18:17:35.178704   75086 fix.go:216] guest clock: 1726856255.154624802
	I0920 18:17:35.178712   75086 fix.go:229] Guest: 2024-09-20 18:17:35.154624802 +0000 UTC Remote: 2024-09-20 18:17:35.066435161 +0000 UTC m=+261.108982198 (delta=88.189641ms)
	I0920 18:17:35.178734   75086 fix.go:200] guest clock delta is within tolerance: 88.189641ms
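The fix.go lines above read the guest clock over SSH with "date +%s.%N", compare it to the host clock, and accept the ~88ms delta as within tolerance. A hedged sketch of that comparison, using an assumed one-second tolerance rather than minikube's real threshold:

// Hypothetical sketch of the guest-clock skew check; the tolerance value is
// an assumption, not the one used by minikube's fix.go.
package main

import (
	"fmt"
	"time"
)

func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	guest := time.Unix(0, 1726856255154624802)     // guest clock reading from the log
	host := guest.Add(-88189641 * time.Nanosecond) // remote clock, 88.189641ms behind
	if d, ok := clockDeltaOK(guest, host, time.Second); ok {
		fmt.Printf("guest clock delta %v is within tolerance\n", d)
	}
}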
	I0920 18:17:35.178760   75086 start.go:83] releasing machines lock for "embed-certs-768431", held for 19.984535459s
	I0920 18:17:35.178801   75086 main.go:141] libmachine: (embed-certs-768431) Calling .DriverName
	I0920 18:17:35.179101   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetIP
	I0920 18:17:35.181807   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:35.182179   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:17:35.182208   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:35.182353   75086 main.go:141] libmachine: (embed-certs-768431) Calling .DriverName
	I0920 18:17:35.182850   75086 main.go:141] libmachine: (embed-certs-768431) Calling .DriverName
	I0920 18:17:35.182993   75086 main.go:141] libmachine: (embed-certs-768431) Calling .DriverName
	I0920 18:17:35.183097   75086 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 18:17:35.183171   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHHostname
	I0920 18:17:35.183171   75086 ssh_runner.go:195] Run: cat /version.json
	I0920 18:17:35.183230   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHHostname
	I0920 18:17:35.185905   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:35.185942   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:35.186249   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:17:35.186279   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:35.186317   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:17:35.186331   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:35.186476   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHPort
	I0920 18:17:35.186597   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHPort
	I0920 18:17:35.186687   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHKeyPath
	I0920 18:17:35.186791   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHKeyPath
	I0920 18:17:35.186816   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHUsername
	I0920 18:17:35.186974   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHUsername
	I0920 18:17:35.186992   75086 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/embed-certs-768431/id_rsa Username:docker}
	I0920 18:17:35.187127   75086 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/embed-certs-768431/id_rsa Username:docker}
	I0920 18:17:35.311021   75086 ssh_runner.go:195] Run: systemctl --version
	I0920 18:17:35.317587   75086 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 18:17:35.462634   75086 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 18:17:35.469331   75086 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 18:17:35.469444   75086 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 18:17:35.485752   75086 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 18:17:35.485780   75086 start.go:495] detecting cgroup driver to use...
	I0920 18:17:35.485903   75086 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 18:17:35.507004   75086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 18:17:35.525868   75086 docker.go:217] disabling cri-docker service (if available) ...
	I0920 18:17:35.525934   75086 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 18:17:35.542533   75086 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 18:17:35.557622   75086 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 18:17:35.669845   75086 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 18:17:35.827784   75086 docker.go:233] disabling docker service ...
	I0920 18:17:35.827855   75086 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 18:17:35.842877   75086 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 18:17:35.859468   75086 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 18:17:35.994094   75086 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 18:17:36.123353   75086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 18:17:36.137941   75086 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 18:17:36.157781   75086 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 18:17:36.157871   75086 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:17:36.168857   75086 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 18:17:36.168929   75086 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:17:36.179807   75086 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:17:36.192260   75086 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:17:36.203220   75086 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 18:17:36.214388   75086 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:17:36.224686   75086 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:17:36.242143   75086 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:17:36.257047   75086 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 18:17:36.272014   75086 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 18:17:36.272130   75086 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 18:17:36.286083   75086 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
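The sequence above is a standard netfilter fallback: reading net.bridge.bridge-nf-call-iptables fails with status 255 because the br_netfilter module is not loaded yet, so the module is loaded with modprobe and IPv4 forwarding is then switched on. A small sketch of that flow, assuming a local root-capable shell (the ensureNetfilter wrapper is hypothetical, not minikube code):

// Illustrative fallback flow for the netfilter step logged above.
package main

import (
	"fmt"
	"os/exec"
)

func ensureNetfilter() error {
	// The sysctl key only exists once br_netfilter has been loaded.
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %w", err)
		}
	}
	// Enable IPv4 forwarding, matching the echo 1 > .../ip_forward step above.
	return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
}

func main() {
	if err := ensureNetfilter(); err != nil {
		fmt.Println("netfilter setup failed:", err)
	}
}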
	I0920 18:17:36.296624   75086 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:17:36.414196   75086 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 18:17:36.511654   75086 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 18:17:36.511720   75086 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 18:17:36.516593   75086 start.go:563] Will wait 60s for crictl version
	I0920 18:17:36.516640   75086 ssh_runner.go:195] Run: which crictl
	I0920 18:17:36.520360   75086 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 18:17:36.556846   75086 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 18:17:36.556949   75086 ssh_runner.go:195] Run: crio --version
	I0920 18:17:36.584699   75086 ssh_runner.go:195] Run: crio --version
	I0920 18:17:36.615179   75086 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 18:17:35.206839   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .Start
	I0920 18:17:35.207055   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Ensuring networks are active...
	I0920 18:17:35.207928   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Ensuring network default is active
	I0920 18:17:35.208260   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Ensuring network mk-default-k8s-diff-port-553719 is active
	I0920 18:17:35.208679   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Getting domain xml...
	I0920 18:17:35.209488   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Creating domain...
	I0920 18:17:36.500751   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting to get IP...
	I0920 18:17:36.501630   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:36.502033   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | unable to find current IP address of domain default-k8s-diff-port-553719 in network mk-default-k8s-diff-port-553719
	I0920 18:17:36.502137   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | I0920 18:17:36.502044   76346 retry.go:31] will retry after 216.050114ms: waiting for machine to come up
	I0920 18:17:36.719559   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:36.720002   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | unable to find current IP address of domain default-k8s-diff-port-553719 in network mk-default-k8s-diff-port-553719
	I0920 18:17:36.720026   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | I0920 18:17:36.719977   76346 retry.go:31] will retry after 303.832728ms: waiting for machine to come up
	I0920 18:17:37.025606   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:37.026096   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | unable to find current IP address of domain default-k8s-diff-port-553719 in network mk-default-k8s-diff-port-553719
	I0920 18:17:37.026137   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | I0920 18:17:37.026050   76346 retry.go:31] will retry after 316.827461ms: waiting for machine to come up
	I0920 18:17:36.616640   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetIP
	I0920 18:17:36.619927   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:36.620377   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:17:36.620407   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:36.620642   75086 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0920 18:17:36.627189   75086 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 18:17:36.639842   75086 kubeadm.go:883] updating cluster {Name:embed-certs-768431 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-768431 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.202 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 18:17:36.639953   75086 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 18:17:36.640019   75086 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:17:36.676519   75086 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0920 18:17:36.676599   75086 ssh_runner.go:195] Run: which lz4
	I0920 18:17:36.680472   75086 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 18:17:36.684650   75086 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 18:17:36.684683   75086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0920 18:17:38.090996   75086 crio.go:462] duration metric: took 1.410558742s to copy over tarball
	I0920 18:17:38.091068   75086 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0920 18:17:37.344880   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:37.345393   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | unable to find current IP address of domain default-k8s-diff-port-553719 in network mk-default-k8s-diff-port-553719
	I0920 18:17:37.345419   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | I0920 18:17:37.345323   76346 retry.go:31] will retry after 456.966571ms: waiting for machine to come up
	I0920 18:17:37.804436   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:37.804919   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | unable to find current IP address of domain default-k8s-diff-port-553719 in network mk-default-k8s-diff-port-553719
	I0920 18:17:37.804954   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | I0920 18:17:37.804891   76346 retry.go:31] will retry after 582.103589ms: waiting for machine to come up
	I0920 18:17:38.388738   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:38.389266   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | unable to find current IP address of domain default-k8s-diff-port-553719 in network mk-default-k8s-diff-port-553719
	I0920 18:17:38.389297   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | I0920 18:17:38.389217   76346 retry.go:31] will retry after 884.882678ms: waiting for machine to come up
	I0920 18:17:39.276048   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:39.276478   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | unable to find current IP address of domain default-k8s-diff-port-553719 in network mk-default-k8s-diff-port-553719
	I0920 18:17:39.276504   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | I0920 18:17:39.276453   76346 retry.go:31] will retry after 807.001285ms: waiting for machine to come up
	I0920 18:17:40.085749   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:40.086228   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | unable to find current IP address of domain default-k8s-diff-port-553719 in network mk-default-k8s-diff-port-553719
	I0920 18:17:40.086256   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | I0920 18:17:40.086190   76346 retry.go:31] will retry after 1.283354255s: waiting for machine to come up
	I0920 18:17:41.370861   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:41.371314   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | unable to find current IP address of domain default-k8s-diff-port-553719 in network mk-default-k8s-diff-port-553719
	I0920 18:17:41.371343   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | I0920 18:17:41.371265   76346 retry.go:31] will retry after 1.756084886s: waiting for machine to come up
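The repeated "will retry after ..." lines above come from a backoff loop that polls libvirt for the restarted VM's DHCP lease; each failed lookup schedules a longer, jittered delay. A hypothetical equivalent of that loop (not minikube's retry.go):

// Sketch of a grow-on-failure retry loop like the one logged above.
package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForIP polls lookup until it returns an address or the attempts run out,
// roughly doubling the delay between tries.
func waitForIP(lookup func() (string, error), attempts int, delay time.Duration) (string, error) {
	for i := 0; i < attempts; i++ {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		time.Sleep(delay)
		delay *= 2
	}
	return "", errors.New("machine did not come up")
}

func main() {
	ip, err := waitForIP(func() (string, error) {
		return "", errors.New("unable to find current IP address")
	}, 3, 200*time.Millisecond)
	fmt.Println(ip, err)
}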
	I0920 18:17:40.301535   75086 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.210437306s)
	I0920 18:17:40.301567   75086 crio.go:469] duration metric: took 2.210539553s to extract the tarball
	I0920 18:17:40.301578   75086 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0920 18:17:40.338638   75086 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:17:40.381753   75086 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 18:17:40.381780   75086 cache_images.go:84] Images are preloaded, skipping loading
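The preload logic above runs in two passes: the first "crictl images --output json" finds no registry.k8s.io/kube-apiserver:v1.31.1, so the preloaded-images tarball is copied over and extracted into /var, and the second pass confirms the images are present. A minimal sketch of that presence check, assuming crictl is on the node's PATH (the hasImage helper is hypothetical):

// Check whether a given image tag already shows up in crictl's JSON output.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func hasImage(want string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		return false, err
	}
	for _, img := range imgs.Images {
		for _, tag := range img.RepoTags {
			if strings.Contains(tag, want) {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.31.1")
	fmt.Println(ok, err)
}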
	I0920 18:17:40.381788   75086 kubeadm.go:934] updating node { 192.168.61.202 8443 v1.31.1 crio true true} ...
	I0920 18:17:40.381914   75086 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-768431 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.202
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-768431 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 18:17:40.381998   75086 ssh_runner.go:195] Run: crio config
	I0920 18:17:40.428457   75086 cni.go:84] Creating CNI manager for ""
	I0920 18:17:40.428482   75086 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 18:17:40.428491   75086 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 18:17:40.428512   75086 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.202 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-768431 NodeName:embed-certs-768431 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.202"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.202 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 18:17:40.428681   75086 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.202
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-768431"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.202
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.202"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 18:17:40.428800   75086 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 18:17:40.439049   75086 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 18:17:40.439123   75086 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 18:17:40.449220   75086 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0920 18:17:40.466012   75086 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 18:17:40.482158   75086 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
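The 2162-byte kubeadm.yaml.new written above bundles the InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration documents printed earlier. A hedged sanity check, assuming gopkg.in/yaml.v3 is available, that decodes the multi-document file and prints the kubelet's cgroupDriver, which should match the "cgroupfs" value configured for CRI-O (illustrative only, not part of the test suite):

// Decode the multi-document kubeadm config and print the kubelet cgroup driver.
package main

import (
	"bytes"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	dec := yaml.NewDecoder(bytes.NewReader(data))
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		if doc["kind"] == "KubeletConfiguration" {
			fmt.Println("cgroupDriver:", doc["cgroupDriver"]) // expected: cgroupfs
		}
	}
}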
	I0920 18:17:40.499225   75086 ssh_runner.go:195] Run: grep 192.168.61.202	control-plane.minikube.internal$ /etc/hosts
	I0920 18:17:40.502817   75086 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.202	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 18:17:40.515241   75086 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:17:40.638615   75086 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:17:40.655790   75086 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/embed-certs-768431 for IP: 192.168.61.202
	I0920 18:17:40.655814   75086 certs.go:194] generating shared ca certs ...
	I0920 18:17:40.655834   75086 certs.go:226] acquiring lock for ca certs: {Name:mkc7ef6c737c6bdc3fdd9dcff8f57029c020d8f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:17:40.656006   75086 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key
	I0920 18:17:40.656070   75086 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key
	I0920 18:17:40.656084   75086 certs.go:256] generating profile certs ...
	I0920 18:17:40.656193   75086 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/embed-certs-768431/client.key
	I0920 18:17:40.656281   75086 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/embed-certs-768431/apiserver.key.a3dbf377
	I0920 18:17:40.656354   75086 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/embed-certs-768431/proxy-client.key
	I0920 18:17:40.656508   75086 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973.pem (1338 bytes)
	W0920 18:17:40.656560   75086 certs.go:480] ignoring /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973_empty.pem, impossibly tiny 0 bytes
	I0920 18:17:40.656573   75086 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 18:17:40.656620   75086 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem (1082 bytes)
	I0920 18:17:40.656654   75086 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem (1123 bytes)
	I0920 18:17:40.656679   75086 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem (1675 bytes)
	I0920 18:17:40.656733   75086 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem (1708 bytes)
	I0920 18:17:40.657567   75086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 18:17:40.693111   75086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 18:17:40.733664   75086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 18:17:40.774367   75086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0920 18:17:40.800930   75086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/embed-certs-768431/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0920 18:17:40.827793   75086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/embed-certs-768431/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 18:17:40.851189   75086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/embed-certs-768431/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 18:17:40.875092   75086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/embed-certs-768431/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0920 18:17:40.898639   75086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973.pem --> /usr/share/ca-certificates/15973.pem (1338 bytes)
	I0920 18:17:40.921569   75086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem --> /usr/share/ca-certificates/159732.pem (1708 bytes)
	I0920 18:17:40.945739   75086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 18:17:40.969942   75086 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 18:17:40.987563   75086 ssh_runner.go:195] Run: openssl version
	I0920 18:17:40.993363   75086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15973.pem && ln -fs /usr/share/ca-certificates/15973.pem /etc/ssl/certs/15973.pem"
	I0920 18:17:41.004673   75086 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15973.pem
	I0920 18:17:41.008985   75086 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 17:04 /usr/share/ca-certificates/15973.pem
	I0920 18:17:41.009032   75086 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15973.pem
	I0920 18:17:41.014950   75086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15973.pem /etc/ssl/certs/51391683.0"
	I0920 18:17:41.027944   75086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/159732.pem && ln -fs /usr/share/ca-certificates/159732.pem /etc/ssl/certs/159732.pem"
	I0920 18:17:41.040711   75086 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/159732.pem
	I0920 18:17:41.045304   75086 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 17:04 /usr/share/ca-certificates/159732.pem
	I0920 18:17:41.045361   75086 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/159732.pem
	I0920 18:17:41.050891   75086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/159732.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 18:17:41.061542   75086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 18:17:41.072622   75086 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:17:41.076916   75086 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:17:41.076965   75086 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:17:41.082644   75086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 18:17:41.093136   75086 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 18:17:41.097496   75086 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 18:17:41.103393   75086 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 18:17:41.109444   75086 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 18:17:41.115844   75086 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 18:17:41.121578   75086 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 18:17:41.127292   75086 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
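The run of openssl x509 -checkend 86400 commands above verifies that each control-plane certificate will still be valid 24 hours from now before the existing cluster is reused. The same check expressed in Go, as a sketch with an example path (validFor is a hypothetical helper, not minikube code):

// Report whether a PEM certificate remains valid for the given duration.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func validFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(ok, err)
}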
	I0920 18:17:41.133310   75086 kubeadm.go:392] StartCluster: {Name:embed-certs-768431 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-768431 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.202 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:17:41.133393   75086 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 18:17:41.133439   75086 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 18:17:41.177284   75086 cri.go:89] found id: ""
	I0920 18:17:41.177380   75086 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 18:17:41.187289   75086 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0920 18:17:41.187311   75086 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0920 18:17:41.187362   75086 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0920 18:17:41.196445   75086 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0920 18:17:41.197939   75086 kubeconfig.go:125] found "embed-certs-768431" server: "https://192.168.61.202:8443"
	I0920 18:17:41.201030   75086 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0920 18:17:41.210356   75086 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.202
	I0920 18:17:41.210394   75086 kubeadm.go:1160] stopping kube-system containers ...
	I0920 18:17:41.210422   75086 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0920 18:17:41.210499   75086 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 18:17:41.244659   75086 cri.go:89] found id: ""
	I0920 18:17:41.244721   75086 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0920 18:17:41.264341   75086 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 18:17:41.275229   75086 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 18:17:41.275252   75086 kubeadm.go:157] found existing configuration files:
	
	I0920 18:17:41.275314   75086 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 18:17:41.285420   75086 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 18:17:41.285502   75086 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 18:17:41.295902   75086 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 18:17:41.305951   75086 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 18:17:41.306015   75086 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 18:17:41.316567   75086 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 18:17:41.325623   75086 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 18:17:41.325691   75086 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 18:17:41.336405   75086 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 18:17:41.346501   75086 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 18:17:41.346574   75086 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 18:17:41.357340   75086 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 18:17:41.367956   75086 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:17:41.479022   75086 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:17:42.796122   75086 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.317061067s)
	I0920 18:17:42.796155   75086 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:17:43.053135   75086 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:17:43.139770   75086 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:17:43.244066   75086 api_server.go:52] waiting for apiserver process to appear ...
	I0920 18:17:43.244166   75086 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:17:43.744622   75086 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:17:43.129383   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:43.129827   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | unable to find current IP address of domain default-k8s-diff-port-553719 in network mk-default-k8s-diff-port-553719
	I0920 18:17:43.129864   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | I0920 18:17:43.129798   76346 retry.go:31] will retry after 2.259775905s: waiting for machine to come up
	I0920 18:17:45.391508   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:45.391915   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | unable to find current IP address of domain default-k8s-diff-port-553719 in network mk-default-k8s-diff-port-553719
	I0920 18:17:45.391941   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | I0920 18:17:45.391885   76346 retry.go:31] will retry after 1.767770692s: waiting for machine to come up
	I0920 18:17:44.244723   75086 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:17:44.265463   75086 api_server.go:72] duration metric: took 1.02139562s to wait for apiserver process to appear ...
	I0920 18:17:44.265500   75086 api_server.go:88] waiting for apiserver healthz status ...
	I0920 18:17:44.265528   75086 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0920 18:17:46.935510   75086 api_server.go:279] https://192.168.61.202:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 18:17:46.935571   75086 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 18:17:46.935589   75086 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0920 18:17:46.947553   75086 api_server.go:279] https://192.168.61.202:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 18:17:46.947586   75086 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 18:17:47.265919   75086 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0920 18:17:47.272986   75086 api_server.go:279] https://192.168.61.202:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 18:17:47.273020   75086 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 18:17:47.766662   75086 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0920 18:17:47.783700   75086 api_server.go:279] https://192.168.61.202:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 18:17:47.783749   75086 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 18:17:48.266012   75086 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0920 18:17:48.270573   75086 api_server.go:279] https://192.168.61.202:8443/healthz returned 200:
	ok
	I0920 18:17:48.277290   75086 api_server.go:141] control plane version: v1.31.1
	I0920 18:17:48.277331   75086 api_server.go:131] duration metric: took 4.011813655s to wait for apiserver health ...
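The 403 and 500 responses above are expected while the restarted apiserver finishes its post-start hooks; minikube simply keeps polling /healthz until it returns 200 ("ok"), which here took about 4 seconds. A minimal sketch of that polling pattern in Go, for readers following the log (illustrative only, not minikube's actual api_server.go; the URL, poll interval, and timeout below are assumptions):

// healthzwait.go: poll an apiserver /healthz endpoint until it returns 200 OK
// or a deadline expires. Illustrative sketch only, not minikube code.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	// The endpoint is HTTPS with a cluster-issued certificate; for this
	// sketch we skip verification and only look at the status code.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz reported "ok"
			}
		}
		// 403/500 (or a connection error) just means "not ready yet": retry.
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s did not report healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.202:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}

Against a live endpoint this returns as soon as /healthz serves 200, mirroring the roughly 4s wait recorded above.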
	I0920 18:17:48.277342   75086 cni.go:84] Creating CNI manager for ""
	I0920 18:17:48.277351   75086 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 18:17:48.278863   75086 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 18:17:48.279934   75086 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 18:17:48.304037   75086 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0920 18:17:48.337082   75086 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 18:17:48.349485   75086 system_pods.go:59] 8 kube-system pods found
	I0920 18:17:48.349541   75086 system_pods.go:61] "coredns-7c65d6cfc9-cskt4" [2ce6fa43-bdf9-4625-a198-8f59ee9fcf0b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0920 18:17:48.349554   75086 system_pods.go:61] "etcd-embed-certs-768431" [656af9fd-b380-4934-a0b5-5d39a755de44] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0920 18:17:48.349565   75086 system_pods.go:61] "kube-apiserver-embed-certs-768431" [e128f4ff-3432-4d27-83de-ed76b686403d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0920 18:17:48.349573   75086 system_pods.go:61] "kube-controller-manager-embed-certs-768431" [a2c7e6d7-e351-4e6a-ab95-638aecdaab28] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0920 18:17:48.349579   75086 system_pods.go:61] "kube-proxy-cshjm" [86a82f1b-d8d5-4ab7-89fb-84b836dc0470] Running
	I0920 18:17:48.349586   75086 system_pods.go:61] "kube-scheduler-embed-certs-768431" [64bc50a6-2e99-4992-b249-9ffdf463539a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0920 18:17:48.349601   75086 system_pods.go:61] "metrics-server-6867b74b74-dwnt6" [bb0a120f-ca9e-4b54-826b-55aa20b2f6a4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 18:17:48.349606   75086 system_pods.go:61] "storage-provisioner" [083c6964-a774-4b85-8e6d-acd04ededb3c] Running
	I0920 18:17:48.349617   75086 system_pods.go:74] duration metric: took 12.507925ms to wait for pod list to return data ...
	I0920 18:17:48.349630   75086 node_conditions.go:102] verifying NodePressure condition ...
	I0920 18:17:48.353604   75086 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 18:17:48.353635   75086 node_conditions.go:123] node cpu capacity is 2
	I0920 18:17:48.353648   75086 node_conditions.go:105] duration metric: took 4.012436ms to run NodePressure ...
	I0920 18:17:48.353668   75086 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:17:48.623666   75086 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0920 18:17:48.634113   75086 kubeadm.go:739] kubelet initialised
	I0920 18:17:48.634184   75086 kubeadm.go:740] duration metric: took 10.458173ms waiting for restarted kubelet to initialise ...
	I0920 18:17:48.634205   75086 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:17:48.642334   75086 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-cskt4" in "kube-system" namespace to be "Ready" ...
	I0920 18:17:47.161670   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:47.162064   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | unable to find current IP address of domain default-k8s-diff-port-553719 in network mk-default-k8s-diff-port-553719
	I0920 18:17:47.162094   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | I0920 18:17:47.162020   76346 retry.go:31] will retry after 2.299554771s: waiting for machine to come up
	I0920 18:17:49.463022   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:49.463483   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | unable to find current IP address of domain default-k8s-diff-port-553719 in network mk-default-k8s-diff-port-553719
	I0920 18:17:49.463515   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | I0920 18:17:49.463442   76346 retry.go:31] will retry after 4.372569793s: waiting for machine to come up
	I0920 18:17:50.650781   75086 pod_ready.go:103] pod "coredns-7c65d6cfc9-cskt4" in "kube-system" namespace has status "Ready":"False"
	I0920 18:17:53.149156   75086 pod_ready.go:103] pod "coredns-7c65d6cfc9-cskt4" in "kube-system" namespace has status "Ready":"False"
	I0920 18:17:55.087078   75577 start.go:364] duration metric: took 3m42.245259516s to acquireMachinesLock for "old-k8s-version-744025"
	I0920 18:17:55.087155   75577 start.go:96] Skipping create...Using existing machine configuration
	I0920 18:17:55.087166   75577 fix.go:54] fixHost starting: 
	I0920 18:17:55.087618   75577 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:17:55.087671   75577 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:17:55.107336   75577 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34989
	I0920 18:17:55.107839   75577 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:17:55.108462   75577 main.go:141] libmachine: Using API Version  1
	I0920 18:17:55.108496   75577 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:17:55.108855   75577 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:17:55.109072   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .DriverName
	I0920 18:17:55.109222   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetState
	I0920 18:17:55.110831   75577 fix.go:112] recreateIfNeeded on old-k8s-version-744025: state=Stopped err=<nil>
	I0920 18:17:55.110857   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .DriverName
	W0920 18:17:55.110973   75577 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 18:17:55.113408   75577 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-744025" ...
	I0920 18:17:53.840215   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:53.840817   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has current primary IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:53.840844   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Found IP for machine: 192.168.72.190
	I0920 18:17:53.840859   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Reserving static IP address...
	I0920 18:17:53.841438   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Reserved static IP address: 192.168.72.190
	I0920 18:17:53.841460   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for SSH to be available...
	I0920 18:17:53.841475   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-553719", mac: "52:54:00:dd:93:60", ip: "192.168.72.190"} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:17:53.841503   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | skip adding static IP to network mk-default-k8s-diff-port-553719 - found existing host DHCP lease matching {name: "default-k8s-diff-port-553719", mac: "52:54:00:dd:93:60", ip: "192.168.72.190"}
	I0920 18:17:53.841583   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | Getting to WaitForSSH function...
	I0920 18:17:53.843772   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:53.844082   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:17:53.844115   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:53.844201   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | Using SSH client type: external
	I0920 18:17:53.844238   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | Using SSH private key: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/default-k8s-diff-port-553719/id_rsa (-rw-------)
	I0920 18:17:53.844282   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.190 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19672-8777/.minikube/machines/default-k8s-diff-port-553719/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 18:17:53.844296   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | About to run SSH command:
	I0920 18:17:53.844309   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | exit 0
	I0920 18:17:53.965938   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | SSH cmd err, output: <nil>: 
	I0920 18:17:53.966354   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetConfigRaw
	I0920 18:17:53.967105   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetIP
	I0920 18:17:53.969715   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:53.970112   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:17:53.970140   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:53.970429   75264 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/default-k8s-diff-port-553719/config.json ...
	I0920 18:17:53.970686   75264 machine.go:93] provisionDockerMachine start ...
	I0920 18:17:53.970707   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .DriverName
	I0920 18:17:53.970924   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHHostname
	I0920 18:17:53.973207   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:53.973541   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:17:53.973571   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:53.973716   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHPort
	I0920 18:17:53.973901   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHKeyPath
	I0920 18:17:53.974041   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHKeyPath
	I0920 18:17:53.974155   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHUsername
	I0920 18:17:53.974298   75264 main.go:141] libmachine: Using SSH client type: native
	I0920 18:17:53.974518   75264 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.190 22 <nil> <nil>}
	I0920 18:17:53.974533   75264 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 18:17:54.078095   75264 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0920 18:17:54.078121   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetMachineName
	I0920 18:17:54.078377   75264 buildroot.go:166] provisioning hostname "default-k8s-diff-port-553719"
	I0920 18:17:54.078397   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetMachineName
	I0920 18:17:54.078540   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHHostname
	I0920 18:17:54.081589   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:54.081998   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:17:54.082032   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:54.082179   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHPort
	I0920 18:17:54.082375   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHKeyPath
	I0920 18:17:54.082539   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHKeyPath
	I0920 18:17:54.082743   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHUsername
	I0920 18:17:54.082949   75264 main.go:141] libmachine: Using SSH client type: native
	I0920 18:17:54.083153   75264 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.190 22 <nil> <nil>}
	I0920 18:17:54.083173   75264 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-553719 && echo "default-k8s-diff-port-553719" | sudo tee /etc/hostname
	I0920 18:17:54.199155   75264 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-553719
	
	I0920 18:17:54.199192   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHHostname
	I0920 18:17:54.202231   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:54.202650   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:17:54.202685   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:54.202835   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHPort
	I0920 18:17:54.203112   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHKeyPath
	I0920 18:17:54.203289   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHKeyPath
	I0920 18:17:54.203458   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHUsername
	I0920 18:17:54.203696   75264 main.go:141] libmachine: Using SSH client type: native
	I0920 18:17:54.203944   75264 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.190 22 <nil> <nil>}
	I0920 18:17:54.203969   75264 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-553719' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-553719/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-553719' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 18:17:54.312356   75264 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 18:17:54.312386   75264 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19672-8777/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-8777/.minikube}
	I0920 18:17:54.312432   75264 buildroot.go:174] setting up certificates
	I0920 18:17:54.312450   75264 provision.go:84] configureAuth start
	I0920 18:17:54.312464   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetMachineName
	I0920 18:17:54.312758   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetIP
	I0920 18:17:54.315327   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:54.315697   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:17:54.315725   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:54.315870   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHHostname
	I0920 18:17:54.318203   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:54.318653   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:17:54.318679   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:54.318824   75264 provision.go:143] copyHostCerts
	I0920 18:17:54.318885   75264 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem, removing ...
	I0920 18:17:54.318906   75264 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem
	I0920 18:17:54.318978   75264 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem (1675 bytes)
	I0920 18:17:54.319096   75264 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem, removing ...
	I0920 18:17:54.319107   75264 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem
	I0920 18:17:54.319141   75264 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem (1082 bytes)
	I0920 18:17:54.319384   75264 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem, removing ...
	I0920 18:17:54.319401   75264 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem
	I0920 18:17:54.319465   75264 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem (1123 bytes)
	I0920 18:17:54.319551   75264 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-553719 san=[127.0.0.1 192.168.72.190 default-k8s-diff-port-553719 localhost minikube]
	I0920 18:17:54.453062   75264 provision.go:177] copyRemoteCerts
	I0920 18:17:54.453131   75264 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 18:17:54.453160   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHHostname
	I0920 18:17:54.456047   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:54.456402   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:17:54.456431   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:54.456598   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHPort
	I0920 18:17:54.456796   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHKeyPath
	I0920 18:17:54.456970   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHUsername
	I0920 18:17:54.457092   75264 sshutil.go:53] new ssh client: &{IP:192.168.72.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/default-k8s-diff-port-553719/id_rsa Username:docker}
	I0920 18:17:54.544567   75264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0920 18:17:54.570009   75264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0920 18:17:54.594249   75264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0920 18:17:54.617652   75264 provision.go:87] duration metric: took 305.186554ms to configureAuth
	I0920 18:17:54.617686   75264 buildroot.go:189] setting minikube options for container-runtime
	I0920 18:17:54.617971   75264 config.go:182] Loaded profile config "default-k8s-diff-port-553719": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:17:54.618064   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHHostname
	I0920 18:17:54.621408   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:54.621882   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:17:54.621915   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:54.622140   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHPort
	I0920 18:17:54.622368   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHKeyPath
	I0920 18:17:54.622592   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHKeyPath
	I0920 18:17:54.622833   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHUsername
	I0920 18:17:54.623047   75264 main.go:141] libmachine: Using SSH client type: native
	I0920 18:17:54.623287   75264 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.190 22 <nil> <nil>}
	I0920 18:17:54.623305   75264 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 18:17:54.859786   75264 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 18:17:54.859810   75264 machine.go:96] duration metric: took 889.110491ms to provisionDockerMachine
	I0920 18:17:54.859820   75264 start.go:293] postStartSetup for "default-k8s-diff-port-553719" (driver="kvm2")
	I0920 18:17:54.859831   75264 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 18:17:54.859850   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .DriverName
	I0920 18:17:54.860209   75264 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 18:17:54.860258   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHHostname
	I0920 18:17:54.863576   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:54.863933   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:17:54.863966   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:54.864168   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHPort
	I0920 18:17:54.864345   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHKeyPath
	I0920 18:17:54.864494   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHUsername
	I0920 18:17:54.864640   75264 sshutil.go:53] new ssh client: &{IP:192.168.72.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/default-k8s-diff-port-553719/id_rsa Username:docker}
	I0920 18:17:54.944717   75264 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 18:17:54.948836   75264 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 18:17:54.948870   75264 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8777/.minikube/addons for local assets ...
	I0920 18:17:54.948937   75264 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8777/.minikube/files for local assets ...
	I0920 18:17:54.949014   75264 filesync.go:149] local asset: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem -> 159732.pem in /etc/ssl/certs
	I0920 18:17:54.949116   75264 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 18:17:54.958726   75264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem --> /etc/ssl/certs/159732.pem (1708 bytes)
	I0920 18:17:54.982792   75264 start.go:296] duration metric: took 122.958621ms for postStartSetup
	I0920 18:17:54.982831   75264 fix.go:56] duration metric: took 19.803863913s for fixHost
	I0920 18:17:54.982856   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHHostname
	I0920 18:17:54.985588   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:54.985924   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:17:54.985966   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:54.986145   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHPort
	I0920 18:17:54.986407   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHKeyPath
	I0920 18:17:54.986586   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHKeyPath
	I0920 18:17:54.986784   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHUsername
	I0920 18:17:54.986982   75264 main.go:141] libmachine: Using SSH client type: native
	I0920 18:17:54.987233   75264 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.190 22 <nil> <nil>}
	I0920 18:17:54.987248   75264 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 18:17:55.086859   75264 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726856275.038902431
	
	I0920 18:17:55.086888   75264 fix.go:216] guest clock: 1726856275.038902431
	I0920 18:17:55.086900   75264 fix.go:229] Guest: 2024-09-20 18:17:55.038902431 +0000 UTC Remote: 2024-09-20 18:17:54.98283641 +0000 UTC m=+257.985357778 (delta=56.066021ms)
	I0920 18:17:55.086959   75264 fix.go:200] guest clock delta is within tolerance: 56.066021ms
	I0920 18:17:55.086968   75264 start.go:83] releasing machines lock for "default-k8s-diff-port-553719", held for 19.908037967s
	I0920 18:17:55.087009   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .DriverName
	I0920 18:17:55.087303   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetIP
	I0920 18:17:55.090396   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:55.090777   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:17:55.090805   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:55.090973   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .DriverName
	I0920 18:17:55.091481   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .DriverName
	I0920 18:17:55.091684   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .DriverName
	I0920 18:17:55.091772   75264 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 18:17:55.091827   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHHostname
	I0920 18:17:55.091951   75264 ssh_runner.go:195] Run: cat /version.json
	I0920 18:17:55.091976   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHHostname
	I0920 18:17:55.094742   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:55.094878   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:55.095102   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:17:55.095133   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:55.095265   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHPort
	I0920 18:17:55.095362   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:17:55.095400   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:55.095443   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHKeyPath
	I0920 18:17:55.095550   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHPort
	I0920 18:17:55.095619   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHUsername
	I0920 18:17:55.095689   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHKeyPath
	I0920 18:17:55.095763   75264 sshutil.go:53] new ssh client: &{IP:192.168.72.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/default-k8s-diff-port-553719/id_rsa Username:docker}
	I0920 18:17:55.095795   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHUsername
	I0920 18:17:55.095950   75264 sshutil.go:53] new ssh client: &{IP:192.168.72.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/default-k8s-diff-port-553719/id_rsa Username:docker}
	I0920 18:17:55.213223   75264 ssh_runner.go:195] Run: systemctl --version
	I0920 18:17:55.220952   75264 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 18:17:55.370747   75264 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 18:17:55.377509   75264 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 18:17:55.377595   75264 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 18:17:55.395830   75264 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 18:17:55.395854   75264 start.go:495] detecting cgroup driver to use...
	I0920 18:17:55.395920   75264 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 18:17:55.412885   75264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 18:17:55.428380   75264 docker.go:217] disabling cri-docker service (if available) ...
	I0920 18:17:55.428433   75264 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 18:17:55.444371   75264 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 18:17:55.459485   75264 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 18:17:55.583649   75264 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 18:17:55.768003   75264 docker.go:233] disabling docker service ...
	I0920 18:17:55.768065   75264 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 18:17:55.787062   75264 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 18:17:55.802662   75264 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 18:17:55.967892   75264 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 18:17:56.105744   75264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 18:17:56.120499   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 18:17:56.140527   75264 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 18:17:56.140613   75264 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:17:56.151282   75264 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 18:17:56.151355   75264 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:17:56.164680   75264 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:17:56.176142   75264 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:17:56.187120   75264 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 18:17:56.198384   75264 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:17:56.209298   75264 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:17:56.226714   75264 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:17:56.237886   75264 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 18:17:56.253664   75264 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 18:17:56.253778   75264 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 18:17:56.267429   75264 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 18:17:56.279118   75264 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:17:56.412692   75264 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 18:17:56.513349   75264 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 18:17:56.513438   75264 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 18:17:56.517974   75264 start.go:563] Will wait 60s for crictl version
	I0920 18:17:56.518042   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:17:56.521966   75264 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 18:17:56.561446   75264 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 18:17:56.561520   75264 ssh_runner.go:195] Run: crio --version
	I0920 18:17:56.592009   75264 ssh_runner.go:195] Run: crio --version
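
The lines above restart cri-o and then wait up to 60s for the /var/run/crio/crio.sock socket to appear before probing crictl. A minimal Go sketch of that wait loop (a hypothetical helper for illustration, not minikube's actual code):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists or the timeout elapses.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil // socket file is present
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("crio socket is ready")
}
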
	I0920 18:17:56.626555   75264 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 18:17:56.627668   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetIP
	I0920 18:17:56.630995   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:56.631450   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:17:56.631480   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:56.631751   75264 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0920 18:17:56.636473   75264 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 18:17:56.648824   75264 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-553719 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.1 ClusterName:default-k8s-diff-port-553719 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.190 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 18:17:56.648937   75264 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 18:17:56.648980   75264 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:17:56.691029   75264 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0920 18:17:56.691104   75264 ssh_runner.go:195] Run: which lz4
	I0920 18:17:56.695538   75264 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 18:17:56.699703   75264 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 18:17:56.699735   75264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0920 18:17:55.114625   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .Start
	I0920 18:17:55.114814   75577 main.go:141] libmachine: (old-k8s-version-744025) Ensuring networks are active...
	I0920 18:17:55.115808   75577 main.go:141] libmachine: (old-k8s-version-744025) Ensuring network default is active
	I0920 18:17:55.116209   75577 main.go:141] libmachine: (old-k8s-version-744025) Ensuring network mk-old-k8s-version-744025 is active
	I0920 18:17:55.116615   75577 main.go:141] libmachine: (old-k8s-version-744025) Getting domain xml...
	I0920 18:17:55.117403   75577 main.go:141] libmachine: (old-k8s-version-744025) Creating domain...
	I0920 18:17:56.411359   75577 main.go:141] libmachine: (old-k8s-version-744025) Waiting to get IP...
	I0920 18:17:56.412507   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:17:56.413010   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:17:56.413094   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:17:56.412996   76990 retry.go:31] will retry after 300.110477ms: waiting for machine to come up
	I0920 18:17:56.714648   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:17:56.715163   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:17:56.715183   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:17:56.715114   76990 retry.go:31] will retry after 352.948637ms: waiting for machine to come up
	I0920 18:17:57.069760   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:17:57.070449   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:17:57.070474   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:17:57.070404   76990 retry.go:31] will retry after 335.58281ms: waiting for machine to come up
	I0920 18:17:57.408023   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:17:57.408521   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:17:57.408546   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:17:57.408469   76990 retry.go:31] will retry after 498.040148ms: waiting for machine to come up
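
The repeated retry.go lines above poll libvirt for the VM's DHCP lease, sleeping a growing, jittered interval between attempts. A rough Go sketch of that retry pattern (the check function here is a placeholder, not the real libvirt lookup):

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryUntil calls check with a jittered, growing delay until it succeeds
// or maxAttempts is reached.
func retryUntil(check func() (bool, error), maxAttempts int) error {
	delay := 300 * time.Millisecond
	for i := 0; i < maxAttempts; i++ {
		ok, err := check()
		if err != nil {
			return err
		}
		if ok {
			return nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %s: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay += delay / 2 // grow the base delay between attempts
	}
	return fmt.Errorf("machine did not come up after %d attempts", maxAttempts)
}

func main() {
	attempts := 0
	_ = retryUntil(func() (bool, error) {
		attempts++
		return attempts >= 3, nil // placeholder for "does the domain have an IP yet?"
	}, 10)
}
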
	I0920 18:17:54.653346   75086 pod_ready.go:93] pod "coredns-7c65d6cfc9-cskt4" in "kube-system" namespace has status "Ready":"True"
	I0920 18:17:54.653373   75086 pod_ready.go:82] duration metric: took 6.011015799s for pod "coredns-7c65d6cfc9-cskt4" in "kube-system" namespace to be "Ready" ...
	I0920 18:17:54.653383   75086 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-768431" in "kube-system" namespace to be "Ready" ...
	I0920 18:17:56.660606   75086 pod_ready.go:103] pod "etcd-embed-certs-768431" in "kube-system" namespace has status "Ready":"False"
	I0920 18:17:58.162253   75086 pod_ready.go:93] pod "etcd-embed-certs-768431" in "kube-system" namespace has status "Ready":"True"
	I0920 18:17:58.162282   75086 pod_ready.go:82] duration metric: took 3.508893207s for pod "etcd-embed-certs-768431" in "kube-system" namespace to be "Ready" ...
	I0920 18:17:58.162292   75086 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-768431" in "kube-system" namespace to be "Ready" ...
	I0920 18:17:58.109419   75264 crio.go:462] duration metric: took 1.413913626s to copy over tarball
	I0920 18:17:58.109504   75264 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0920 18:18:00.360623   75264 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.25108327s)
	I0920 18:18:00.360655   75264 crio.go:469] duration metric: took 2.251201466s to extract the tarball
	I0920 18:18:00.360703   75264 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0920 18:18:00.400822   75264 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:18:00.451366   75264 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 18:18:00.451402   75264 cache_images.go:84] Images are preloaded, skipping loading
	I0920 18:18:00.451413   75264 kubeadm.go:934] updating node { 192.168.72.190 8444 v1.31.1 crio true true} ...
	I0920 18:18:00.451590   75264 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-553719 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.190
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-553719 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 18:18:00.451688   75264 ssh_runner.go:195] Run: crio config
	I0920 18:18:00.502703   75264 cni.go:84] Creating CNI manager for ""
	I0920 18:18:00.502729   75264 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 18:18:00.502740   75264 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 18:18:00.502778   75264 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.190 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-553719 NodeName:default-k8s-diff-port-553719 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.190"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.190 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 18:18:00.502927   75264 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.190
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-553719"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.190
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.190"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 18:18:00.502996   75264 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 18:18:00.513768   75264 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 18:18:00.513870   75264 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 18:18:00.523631   75264 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0920 18:18:00.540428   75264 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 18:18:00.556858   75264 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0920 18:18:00.574954   75264 ssh_runner.go:195] Run: grep 192.168.72.190	control-plane.minikube.internal$ /etc/hosts
	I0920 18:18:00.578951   75264 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.190	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 18:18:00.592592   75264 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:18:00.712035   75264 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:18:00.728657   75264 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/default-k8s-diff-port-553719 for IP: 192.168.72.190
	I0920 18:18:00.728684   75264 certs.go:194] generating shared ca certs ...
	I0920 18:18:00.728706   75264 certs.go:226] acquiring lock for ca certs: {Name:mkc7ef6c737c6bdc3fdd9dcff8f57029c020d8f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:18:00.728877   75264 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key
	I0920 18:18:00.728937   75264 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key
	I0920 18:18:00.728948   75264 certs.go:256] generating profile certs ...
	I0920 18:18:00.729055   75264 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/default-k8s-diff-port-553719/client.key
	I0920 18:18:00.729151   75264 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/default-k8s-diff-port-553719/apiserver.key.cc1f66e7
	I0920 18:18:00.729205   75264 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/default-k8s-diff-port-553719/proxy-client.key
	I0920 18:18:00.729368   75264 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973.pem (1338 bytes)
	W0920 18:18:00.729415   75264 certs.go:480] ignoring /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973_empty.pem, impossibly tiny 0 bytes
	I0920 18:18:00.729425   75264 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 18:18:00.729453   75264 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem (1082 bytes)
	I0920 18:18:00.729474   75264 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem (1123 bytes)
	I0920 18:18:00.729501   75264 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem (1675 bytes)
	I0920 18:18:00.729538   75264 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem (1708 bytes)
	I0920 18:18:00.730192   75264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 18:18:00.775634   75264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 18:18:00.812006   75264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 18:18:00.846904   75264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0920 18:18:00.877908   75264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/default-k8s-diff-port-553719/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0920 18:18:00.910057   75264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/default-k8s-diff-port-553719/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0920 18:18:00.935377   75264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/default-k8s-diff-port-553719/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 18:18:00.960143   75264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/default-k8s-diff-port-553719/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0920 18:18:00.983906   75264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 18:18:01.007105   75264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973.pem --> /usr/share/ca-certificates/15973.pem (1338 bytes)
	I0920 18:18:01.030331   75264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem --> /usr/share/ca-certificates/159732.pem (1708 bytes)
	I0920 18:18:01.055976   75264 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 18:18:01.073892   75264 ssh_runner.go:195] Run: openssl version
	I0920 18:18:01.081246   75264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 18:18:01.092174   75264 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:18:01.096541   75264 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:18:01.096595   75264 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:18:01.103564   75264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 18:18:01.117697   75264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15973.pem && ln -fs /usr/share/ca-certificates/15973.pem /etc/ssl/certs/15973.pem"
	I0920 18:18:01.130349   75264 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15973.pem
	I0920 18:18:01.135397   75264 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 17:04 /usr/share/ca-certificates/15973.pem
	I0920 18:18:01.135471   75264 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15973.pem
	I0920 18:18:01.141399   75264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15973.pem /etc/ssl/certs/51391683.0"
	I0920 18:18:01.153123   75264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/159732.pem && ln -fs /usr/share/ca-certificates/159732.pem /etc/ssl/certs/159732.pem"
	I0920 18:18:01.165157   75264 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/159732.pem
	I0920 18:18:01.170596   75264 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 17:04 /usr/share/ca-certificates/159732.pem
	I0920 18:18:01.170678   75264 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/159732.pem
	I0920 18:18:01.176534   75264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/159732.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 18:18:01.188576   75264 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 18:18:01.193401   75264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 18:18:01.199557   75264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 18:18:01.205410   75264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 18:18:01.211296   75264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 18:18:01.216953   75264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 18:18:01.223417   75264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
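
The openssl -checkend 86400 calls above verify that each certificate remains valid for at least the next 24 hours. An equivalent check in Go, as a sketch that reads a single PEM file rather than the full set minikube inspects:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if soon {
		fmt.Println("certificate expires within 24h")
	}
}
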
	I0920 18:18:01.229172   75264 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-553719 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.1 ClusterName:default-k8s-diff-port-553719 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.190 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:18:01.229428   75264 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 18:18:01.229488   75264 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 18:18:01.270537   75264 cri.go:89] found id: ""
	I0920 18:18:01.270610   75264 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 18:18:01.284566   75264 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0920 18:18:01.284588   75264 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0920 18:18:01.284638   75264 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0920 18:18:01.297884   75264 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0920 18:18:01.298879   75264 kubeconfig.go:125] found "default-k8s-diff-port-553719" server: "https://192.168.72.190:8444"
	I0920 18:18:01.300996   75264 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0920 18:18:01.312183   75264 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.190
	I0920 18:18:01.312251   75264 kubeadm.go:1160] stopping kube-system containers ...
	I0920 18:18:01.312266   75264 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0920 18:18:01.312324   75264 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 18:18:01.358873   75264 cri.go:89] found id: ""
	I0920 18:18:01.358959   75264 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0920 18:18:01.377027   75264 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 18:18:01.386872   75264 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 18:18:01.386890   75264 kubeadm.go:157] found existing configuration files:
	
	I0920 18:18:01.386931   75264 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0920 18:18:01.396044   75264 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 18:18:01.396118   75264 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 18:18:01.405783   75264 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0920 18:18:01.415296   75264 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 18:18:01.415367   75264 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 18:18:01.427782   75264 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0920 18:18:01.438838   75264 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 18:18:01.438897   75264 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 18:18:01.450255   75264 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0920 18:18:01.461237   75264 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 18:18:01.461313   75264 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 18:18:01.472543   75264 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 18:18:01.483456   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:18:01.607155   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:17:57.908137   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:17:57.908561   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:17:57.908589   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:17:57.908522   76990 retry.go:31] will retry after 749.044696ms: waiting for machine to come up
	I0920 18:17:58.658869   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:17:58.659393   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:17:58.659424   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:17:58.659342   76990 retry.go:31] will retry after 949.936088ms: waiting for machine to come up
	I0920 18:17:59.610936   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:17:59.611467   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:17:59.611513   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:17:59.611430   76990 retry.go:31] will retry after 762.437104ms: waiting for machine to come up
	I0920 18:18:00.375768   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:00.376207   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:18:00.376235   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:18:00.376152   76990 retry.go:31] will retry after 1.228102027s: waiting for machine to come up
	I0920 18:18:01.606490   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:01.606958   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:18:01.606982   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:18:01.606907   76990 retry.go:31] will retry after 1.383186524s: waiting for machine to come up
	I0920 18:18:00.169862   75086 pod_ready.go:103] pod "kube-apiserver-embed-certs-768431" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:01.669351   75086 pod_ready.go:93] pod "kube-apiserver-embed-certs-768431" in "kube-system" namespace has status "Ready":"True"
	I0920 18:18:01.669378   75086 pod_ready.go:82] duration metric: took 3.507078668s for pod "kube-apiserver-embed-certs-768431" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:01.669392   75086 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-768431" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:01.674981   75086 pod_ready.go:93] pod "kube-controller-manager-embed-certs-768431" in "kube-system" namespace has status "Ready":"True"
	I0920 18:18:01.675006   75086 pod_ready.go:82] duration metric: took 5.604682ms for pod "kube-controller-manager-embed-certs-768431" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:01.675018   75086 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-cshjm" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:01.680522   75086 pod_ready.go:93] pod "kube-proxy-cshjm" in "kube-system" namespace has status "Ready":"True"
	I0920 18:18:01.680547   75086 pod_ready.go:82] duration metric: took 5.521295ms for pod "kube-proxy-cshjm" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:01.680559   75086 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-768431" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:01.685097   75086 pod_ready.go:93] pod "kube-scheduler-embed-certs-768431" in "kube-system" namespace has status "Ready":"True"
	I0920 18:18:01.685120   75086 pod_ready.go:82] duration metric: took 4.551724ms for pod "kube-scheduler-embed-certs-768431" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:01.685131   75086 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:03.693234   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
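
The pod_ready lines above poll each control-plane pod until its Ready condition turns True or the 4m0s timeout expires. A condensed client-go sketch of that check (illustrative only; the real logic lives in minikube's pod_ready.go):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady returns true once the pod's Ready condition is True.
func isPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	for {
		ready, err := isPodReady(ctx, cs, "kube-system", "etcd-embed-certs-768431")
		if err == nil && ready {
			fmt.Println("pod is Ready")
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for pod to become Ready")
			return
		case <-time.After(2 * time.Second):
		}
	}
}
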
	I0920 18:18:02.268120   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:18:02.471363   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:18:02.530979   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:18:02.580339   75264 api_server.go:52] waiting for apiserver process to appear ...
	I0920 18:18:02.580455   75264 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:03.081156   75264 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:03.580564   75264 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:04.080991   75264 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:04.095108   75264 api_server.go:72] duration metric: took 1.514770034s to wait for apiserver process to appear ...
	I0920 18:18:04.095139   75264 api_server.go:88] waiting for apiserver healthz status ...
	I0920 18:18:04.095164   75264 api_server.go:253] Checking apiserver healthz at https://192.168.72.190:8444/healthz ...
	I0920 18:18:04.095704   75264 api_server.go:269] stopped: https://192.168.72.190:8444/healthz: Get "https://192.168.72.190:8444/healthz": dial tcp 192.168.72.190:8444: connect: connection refused
	I0920 18:18:04.595270   75264 api_server.go:253] Checking apiserver healthz at https://192.168.72.190:8444/healthz ...
	I0920 18:18:06.378496   75264 api_server.go:279] https://192.168.72.190:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 18:18:06.378532   75264 api_server.go:103] status: https://192.168.72.190:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 18:18:06.378550   75264 api_server.go:253] Checking apiserver healthz at https://192.168.72.190:8444/healthz ...
	I0920 18:18:06.425353   75264 api_server.go:279] https://192.168.72.190:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 18:18:06.425386   75264 api_server.go:103] status: https://192.168.72.190:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 18:18:06.595786   75264 api_server.go:253] Checking apiserver healthz at https://192.168.72.190:8444/healthz ...
	I0920 18:18:06.602114   75264 api_server.go:279] https://192.168.72.190:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 18:18:06.602148   75264 api_server.go:103] status: https://192.168.72.190:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 18:18:07.096095   75264 api_server.go:253] Checking apiserver healthz at https://192.168.72.190:8444/healthz ...
	I0920 18:18:07.102320   75264 api_server.go:279] https://192.168.72.190:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 18:18:07.102348   75264 api_server.go:103] status: https://192.168.72.190:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 18:18:07.595804   75264 api_server.go:253] Checking apiserver healthz at https://192.168.72.190:8444/healthz ...
	I0920 18:18:07.600025   75264 api_server.go:279] https://192.168.72.190:8444/healthz returned 200:
	ok
	I0920 18:18:07.606598   75264 api_server.go:141] control plane version: v1.31.1
	I0920 18:18:07.606626   75264 api_server.go:131] duration metric: took 3.511479158s to wait for apiserver health ...
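
Above, minikube polls https://192.168.72.190:8444/healthz and tolerates the 403 and 500 responses until the bootstrap post-start hooks finish and the endpoint returns 200 ok. A stripped-down Go sketch of that probe (TLS verification is skipped here only because the apiserver presents a self-signed certificate; this is not the real api_server.go implementation):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns 200 or the timeout elapses.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Self-signed apiserver certificate, so skip verification for the probe.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			// 403 before RBAC bootstrap and 500 while post-start hooks run are expected.
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.72.190:8444/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
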
	I0920 18:18:07.606637   75264 cni.go:84] Creating CNI manager for ""
	I0920 18:18:07.606645   75264 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 18:18:07.608423   75264 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 18:18:02.992412   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:02.992887   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:18:02.992919   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:18:02.992822   76990 retry.go:31] will retry after 2.100326569s: waiting for machine to come up
	I0920 18:18:05.095088   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:05.095546   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:18:05.095569   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:18:05.095507   76990 retry.go:31] will retry after 1.758181729s: waiting for machine to come up
	I0920 18:18:06.855172   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:06.855654   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:18:06.855678   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:18:06.855606   76990 retry.go:31] will retry after 2.478116743s: waiting for machine to come up
	I0920 18:18:05.694350   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:08.191644   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:07.609444   75264 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 18:18:07.620420   75264 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0920 18:18:07.637796   75264 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 18:18:07.652676   75264 system_pods.go:59] 8 kube-system pods found
	I0920 18:18:07.652710   75264 system_pods.go:61] "coredns-7c65d6cfc9-dmdfb" [8e4ffdb3-37e0-4498-bdf5-d6e7dbcad020] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0920 18:18:07.652721   75264 system_pods.go:61] "etcd-default-k8s-diff-port-553719" [e8de0e96-5f3a-4f3f-ae69-c5da7d3c2eb7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0920 18:18:07.652727   75264 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-553719" [5164e26c-7fcd-41dc-865f-429046a9ad61] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0920 18:18:07.652735   75264 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-553719" [b2c00a76-9645-4c63-9812-43e6ed9f1d4f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0920 18:18:07.652741   75264 system_pods.go:61] "kube-proxy-p9crq" [83e0f53d-6960-42c4-904d-ea85ba9160f4] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0920 18:18:07.652746   75264 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-553719" [42155f7b-95bb-467b-acf5-89eb4f80cf76] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0920 18:18:07.652751   75264 system_pods.go:61] "metrics-server-6867b74b74-vtl79" [29e0b6eb-22a9-4e37-97f9-83b48cc38193] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 18:18:07.652760   75264 system_pods.go:61] "storage-provisioner" [6fad2d07-f99e-45ac-9657-bce6d73d7fce] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0920 18:18:07.652769   75264 system_pods.go:74] duration metric: took 14.950839ms to wait for pod list to return data ...
	I0920 18:18:07.652780   75264 node_conditions.go:102] verifying NodePressure condition ...
	I0920 18:18:07.658890   75264 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 18:18:07.658927   75264 node_conditions.go:123] node cpu capacity is 2
	I0920 18:18:07.658941   75264 node_conditions.go:105] duration metric: took 6.15496ms to run NodePressure ...
	I0920 18:18:07.658964   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:18:07.932231   75264 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0920 18:18:07.936474   75264 kubeadm.go:739] kubelet initialised
	I0920 18:18:07.936500   75264 kubeadm.go:740] duration metric: took 4.236071ms waiting for restarted kubelet to initialise ...
	I0920 18:18:07.936507   75264 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:18:07.941918   75264 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-dmdfb" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:07.947609   75264 pod_ready.go:98] node "default-k8s-diff-port-553719" hosting pod "coredns-7c65d6cfc9-dmdfb" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-553719" has status "Ready":"False"
	I0920 18:18:07.947642   75264 pod_ready.go:82] duration metric: took 5.693968ms for pod "coredns-7c65d6cfc9-dmdfb" in "kube-system" namespace to be "Ready" ...
	E0920 18:18:07.947653   75264 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-553719" hosting pod "coredns-7c65d6cfc9-dmdfb" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-553719" has status "Ready":"False"
	I0920 18:18:07.947660   75264 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-553719" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:07.954317   75264 pod_ready.go:98] node "default-k8s-diff-port-553719" hosting pod "etcd-default-k8s-diff-port-553719" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-553719" has status "Ready":"False"
	I0920 18:18:07.954358   75264 pod_ready.go:82] duration metric: took 6.689603ms for pod "etcd-default-k8s-diff-port-553719" in "kube-system" namespace to be "Ready" ...
	E0920 18:18:07.954373   75264 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-553719" hosting pod "etcd-default-k8s-diff-port-553719" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-553719" has status "Ready":"False"
	I0920 18:18:07.954382   75264 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-553719" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:07.966099   75264 pod_ready.go:98] node "default-k8s-diff-port-553719" hosting pod "kube-apiserver-default-k8s-diff-port-553719" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-553719" has status "Ready":"False"
	I0920 18:18:07.966126   75264 pod_ready.go:82] duration metric: took 11.735727ms for pod "kube-apiserver-default-k8s-diff-port-553719" in "kube-system" namespace to be "Ready" ...
	E0920 18:18:07.966141   75264 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-553719" hosting pod "kube-apiserver-default-k8s-diff-port-553719" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-553719" has status "Ready":"False"
	I0920 18:18:07.966154   75264 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-553719" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:08.041768   75264 pod_ready.go:98] node "default-k8s-diff-port-553719" hosting pod "kube-controller-manager-default-k8s-diff-port-553719" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-553719" has status "Ready":"False"
	I0920 18:18:08.041794   75264 pod_ready.go:82] duration metric: took 75.630916ms for pod "kube-controller-manager-default-k8s-diff-port-553719" in "kube-system" namespace to be "Ready" ...
	E0920 18:18:08.041805   75264 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-553719" hosting pod "kube-controller-manager-default-k8s-diff-port-553719" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-553719" has status "Ready":"False"
	I0920 18:18:08.041816   75264 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-p9crq" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:08.440423   75264 pod_ready.go:98] node "default-k8s-diff-port-553719" hosting pod "kube-proxy-p9crq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-553719" has status "Ready":"False"
	I0920 18:18:08.440459   75264 pod_ready.go:82] duration metric: took 398.635469ms for pod "kube-proxy-p9crq" in "kube-system" namespace to be "Ready" ...
	E0920 18:18:08.440471   75264 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-553719" hosting pod "kube-proxy-p9crq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-553719" has status "Ready":"False"
	I0920 18:18:08.440480   75264 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-553719" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:08.841917   75264 pod_ready.go:98] node "default-k8s-diff-port-553719" hosting pod "kube-scheduler-default-k8s-diff-port-553719" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-553719" has status "Ready":"False"
	I0920 18:18:08.841953   75264 pod_ready.go:82] duration metric: took 401.459059ms for pod "kube-scheduler-default-k8s-diff-port-553719" in "kube-system" namespace to be "Ready" ...
	E0920 18:18:08.841968   75264 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-553719" hosting pod "kube-scheduler-default-k8s-diff-port-553719" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-553719" has status "Ready":"False"
	I0920 18:18:08.841977   75264 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:09.241198   75264 pod_ready.go:98] node "default-k8s-diff-port-553719" hosting pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-553719" has status "Ready":"False"
	I0920 18:18:09.241225   75264 pod_ready.go:82] duration metric: took 399.238784ms for pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace to be "Ready" ...
	E0920 18:18:09.241237   75264 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-553719" hosting pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-553719" has status "Ready":"False"
	I0920 18:18:09.241245   75264 pod_ready.go:39] duration metric: took 1.304729447s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:18:09.241259   75264 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 18:18:09.253519   75264 ops.go:34] apiserver oom_adj: -16
	I0920 18:18:09.253546   75264 kubeadm.go:597] duration metric: took 7.96895197s to restartPrimaryControlPlane
	I0920 18:18:09.253558   75264 kubeadm.go:394] duration metric: took 8.024395552s to StartCluster
	I0920 18:18:09.253586   75264 settings.go:142] acquiring lock: {Name:mk2a13c58cdc0faf8cddca5d6716175d45db9bfd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:18:09.253682   75264 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19672-8777/kubeconfig
	I0920 18:18:09.255907   75264 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/kubeconfig: {Name:mkf32a4c736808e023459b2f0e40188618a38db1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:18:09.256208   75264 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.190 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 18:18:09.256322   75264 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0920 18:18:09.256394   75264 config.go:182] Loaded profile config "default-k8s-diff-port-553719": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:18:09.256420   75264 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-553719"
	I0920 18:18:09.256428   75264 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-553719"
	I0920 18:18:09.256440   75264 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-553719"
	I0920 18:18:09.256448   75264 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-553719"
	I0920 18:18:09.256457   75264 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-553719"
	W0920 18:18:09.256468   75264 addons.go:243] addon metrics-server should already be in state true
	I0920 18:18:09.256496   75264 host.go:66] Checking if "default-k8s-diff-port-553719" exists ...
	I0920 18:18:09.256441   75264 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-553719"
	W0920 18:18:09.256569   75264 addons.go:243] addon storage-provisioner should already be in state true
	I0920 18:18:09.256602   75264 host.go:66] Checking if "default-k8s-diff-port-553719" exists ...
	I0920 18:18:09.256814   75264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:18:09.256844   75264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:18:09.256882   75264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:18:09.256926   75264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:18:09.256930   75264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:18:09.256957   75264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:18:09.258738   75264 out.go:177] * Verifying Kubernetes components...
	I0920 18:18:09.259893   75264 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:18:09.272294   75264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45635
	I0920 18:18:09.272357   75264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39993
	I0920 18:18:09.272304   75264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32923
	I0920 18:18:09.272766   75264 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:18:09.272889   75264 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:18:09.272918   75264 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:18:09.273279   75264 main.go:141] libmachine: Using API Version  1
	I0920 18:18:09.273294   75264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:18:09.273416   75264 main.go:141] libmachine: Using API Version  1
	I0920 18:18:09.273438   75264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:18:09.273457   75264 main.go:141] libmachine: Using API Version  1
	I0920 18:18:09.273478   75264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:18:09.273657   75264 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:18:09.273850   75264 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:18:09.273922   75264 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:18:09.274017   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetState
	I0920 18:18:09.274244   75264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:18:09.274287   75264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:18:09.274399   75264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:18:09.274430   75264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:18:09.277535   75264 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-553719"
	W0920 18:18:09.277556   75264 addons.go:243] addon default-storageclass should already be in state true
	I0920 18:18:09.277586   75264 host.go:66] Checking if "default-k8s-diff-port-553719" exists ...
	I0920 18:18:09.277990   75264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:18:09.278041   75264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:18:09.290955   75264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39845
	I0920 18:18:09.291487   75264 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:18:09.292058   75264 main.go:141] libmachine: Using API Version  1
	I0920 18:18:09.292087   75264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:18:09.292409   75264 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:18:09.292607   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetState
	I0920 18:18:09.293351   75264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44575
	I0920 18:18:09.293790   75264 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:18:09.294056   75264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44517
	I0920 18:18:09.294412   75264 main.go:141] libmachine: Using API Version  1
	I0920 18:18:09.294438   75264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:18:09.294540   75264 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:18:09.294854   75264 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:18:09.294902   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .DriverName
	I0920 18:18:09.295362   75264 main.go:141] libmachine: Using API Version  1
	I0920 18:18:09.295387   75264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:18:09.295521   75264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:18:09.295569   75264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:18:09.295783   75264 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:18:09.295993   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetState
	I0920 18:18:09.297231   75264 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0920 18:18:09.298214   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .DriverName
	I0920 18:18:09.298825   75264 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0920 18:18:09.298849   75264 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0920 18:18:09.298870   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHHostname
	I0920 18:18:09.299849   75264 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:18:09.301018   75264 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 18:18:09.301034   75264 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 18:18:09.301048   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHHostname
	I0920 18:18:09.302841   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:18:09.303141   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:18:09.303165   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:18:09.303335   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHPort
	I0920 18:18:09.303491   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHKeyPath
	I0920 18:18:09.303627   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHUsername
	I0920 18:18:09.303772   75264 sshutil.go:53] new ssh client: &{IP:192.168.72.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/default-k8s-diff-port-553719/id_rsa Username:docker}
	I0920 18:18:09.304104   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:18:09.304507   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:18:09.304552   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:18:09.304623   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHPort
	I0920 18:18:09.304786   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHKeyPath
	I0920 18:18:09.304946   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHUsername
	I0920 18:18:09.305085   75264 sshutil.go:53] new ssh client: &{IP:192.168.72.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/default-k8s-diff-port-553719/id_rsa Username:docker}
	I0920 18:18:09.312912   75264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35085
	I0920 18:18:09.313277   75264 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:18:09.313780   75264 main.go:141] libmachine: Using API Version  1
	I0920 18:18:09.313801   75264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:18:09.314355   75264 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:18:09.314525   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetState
	I0920 18:18:09.316088   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .DriverName
	I0920 18:18:09.316415   75264 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 18:18:09.316429   75264 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 18:18:09.316448   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHHostname
	I0920 18:18:09.319116   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:18:09.319484   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:18:09.319512   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:18:09.319664   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHPort
	I0920 18:18:09.319832   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHKeyPath
	I0920 18:18:09.319984   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHUsername
	I0920 18:18:09.320098   75264 sshutil.go:53] new ssh client: &{IP:192.168.72.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/default-k8s-diff-port-553719/id_rsa Username:docker}
	I0920 18:18:09.457570   75264 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:18:09.476315   75264 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-553719" to be "Ready" ...
	I0920 18:18:09.599420   75264 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0920 18:18:09.599442   75264 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0920 18:18:09.600777   75264 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 18:18:09.621891   75264 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 18:18:09.626217   75264 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0920 18:18:09.626251   75264 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0920 18:18:09.675537   75264 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 18:18:09.675569   75264 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0920 18:18:09.723167   75264 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 18:18:10.813006   75264 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.191083617s)
	I0920 18:18:10.813080   75264 main.go:141] libmachine: Making call to close driver server
	I0920 18:18:10.813095   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .Close
	I0920 18:18:10.813403   75264 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:18:10.813452   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | Closing plugin on server side
	I0920 18:18:10.813455   75264 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:18:10.813497   75264 main.go:141] libmachine: Making call to close driver server
	I0920 18:18:10.813518   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .Close
	I0920 18:18:10.813748   75264 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:18:10.813764   75264 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:18:10.813783   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | Closing plugin on server side
	I0920 18:18:10.813975   75264 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.213167075s)
	I0920 18:18:10.814018   75264 main.go:141] libmachine: Making call to close driver server
	I0920 18:18:10.814034   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .Close
	I0920 18:18:10.814323   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | Closing plugin on server side
	I0920 18:18:10.814325   75264 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:18:10.814345   75264 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:18:10.814358   75264 main.go:141] libmachine: Making call to close driver server
	I0920 18:18:10.814368   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .Close
	I0920 18:18:10.814653   75264 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:18:10.814673   75264 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:18:10.821768   75264 main.go:141] libmachine: Making call to close driver server
	I0920 18:18:10.821785   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .Close
	I0920 18:18:10.822016   75264 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:18:10.822034   75264 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:18:10.846062   75264 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.122849108s)
	I0920 18:18:10.846122   75264 main.go:141] libmachine: Making call to close driver server
	I0920 18:18:10.846139   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .Close
	I0920 18:18:10.846441   75264 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:18:10.846469   75264 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:18:10.846469   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | Closing plugin on server side
	I0920 18:18:10.846479   75264 main.go:141] libmachine: Making call to close driver server
	I0920 18:18:10.846488   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .Close
	I0920 18:18:10.846715   75264 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:18:10.846737   75264 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:18:10.846748   75264 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-553719"
	I0920 18:18:10.846749   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | Closing plugin on server side
	I0920 18:18:10.848702   75264 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0920 18:18:10.849990   75264 addons.go:510] duration metric: took 1.593667614s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0920 18:18:11.480490   75264 node_ready.go:53] node "default-k8s-diff-port-553719" has status "Ready":"False"
	I0920 18:18:09.334928   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:09.335367   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:18:09.335402   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:18:09.335314   76990 retry.go:31] will retry after 4.194120768s: waiting for machine to come up
	I0920 18:18:10.192078   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:12.192186   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:14.778926   74753 start.go:364] duration metric: took 54.584395971s to acquireMachinesLock for "no-preload-956403"
	I0920 18:18:14.778979   74753 start.go:96] Skipping create...Using existing machine configuration
	I0920 18:18:14.778987   74753 fix.go:54] fixHost starting: 
	I0920 18:18:14.779392   74753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:18:14.779430   74753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:18:14.796004   74753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45761
	I0920 18:18:14.796509   74753 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:18:14.797006   74753 main.go:141] libmachine: Using API Version  1
	I0920 18:18:14.797028   74753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:18:14.797330   74753 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:18:14.797499   74753 main.go:141] libmachine: (no-preload-956403) Calling .DriverName
	I0920 18:18:14.797650   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetState
	I0920 18:18:14.799376   74753 fix.go:112] recreateIfNeeded on no-preload-956403: state=Stopped err=<nil>
	I0920 18:18:14.799400   74753 main.go:141] libmachine: (no-preload-956403) Calling .DriverName
	W0920 18:18:14.799564   74753 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 18:18:14.802451   74753 out.go:177] * Restarting existing kvm2 VM for "no-preload-956403" ...
	I0920 18:18:13.533702   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:13.534281   75577 main.go:141] libmachine: (old-k8s-version-744025) Found IP for machine: 192.168.39.207
	I0920 18:18:13.534309   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has current primary IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:13.534314   75577 main.go:141] libmachine: (old-k8s-version-744025) Reserving static IP address...
	I0920 18:18:13.534725   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "old-k8s-version-744025", mac: "52:54:00:e5:57:41", ip: "192.168.39.207"} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:13.534761   75577 main.go:141] libmachine: (old-k8s-version-744025) Reserved static IP address: 192.168.39.207
	I0920 18:18:13.534783   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | skip adding static IP to network mk-old-k8s-version-744025 - found existing host DHCP lease matching {name: "old-k8s-version-744025", mac: "52:54:00:e5:57:41", ip: "192.168.39.207"}
	I0920 18:18:13.534800   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | Getting to WaitForSSH function...
	I0920 18:18:13.534816   75577 main.go:141] libmachine: (old-k8s-version-744025) Waiting for SSH to be available...
	I0920 18:18:13.536879   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:13.537271   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:13.537301   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:13.537391   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | Using SSH client type: external
	I0920 18:18:13.537439   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | Using SSH private key: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/old-k8s-version-744025/id_rsa (-rw-------)
	I0920 18:18:13.537476   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.207 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19672-8777/.minikube/machines/old-k8s-version-744025/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 18:18:13.537486   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | About to run SSH command:
	I0920 18:18:13.537498   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | exit 0
	I0920 18:18:13.662214   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | SSH cmd err, output: <nil>: 
	I0920 18:18:13.662565   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetConfigRaw
	I0920 18:18:13.663269   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetIP
	I0920 18:18:13.666068   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:13.666530   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:13.666561   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:13.666899   75577 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/old-k8s-version-744025/config.json ...
	I0920 18:18:13.667111   75577 machine.go:93] provisionDockerMachine start ...
	I0920 18:18:13.667129   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .DriverName
	I0920 18:18:13.667331   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHHostname
	I0920 18:18:13.669347   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:13.669717   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:13.669743   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:13.669908   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHPort
	I0920 18:18:13.670167   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:18:13.670334   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:18:13.670453   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHUsername
	I0920 18:18:13.670583   75577 main.go:141] libmachine: Using SSH client type: native
	I0920 18:18:13.670759   75577 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.207 22 <nil> <nil>}
	I0920 18:18:13.670770   75577 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 18:18:13.774059   75577 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0920 18:18:13.774093   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetMachineName
	I0920 18:18:13.774406   75577 buildroot.go:166] provisioning hostname "old-k8s-version-744025"
	I0920 18:18:13.774434   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetMachineName
	I0920 18:18:13.774618   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHHostname
	I0920 18:18:13.777175   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:13.777482   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:13.777513   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:13.777633   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHPort
	I0920 18:18:13.777803   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:18:13.777966   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:18:13.778082   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHUsername
	I0920 18:18:13.778235   75577 main.go:141] libmachine: Using SSH client type: native
	I0920 18:18:13.778404   75577 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.207 22 <nil> <nil>}
	I0920 18:18:13.778417   75577 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-744025 && echo "old-k8s-version-744025" | sudo tee /etc/hostname
	I0920 18:18:13.900180   75577 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-744025
	
	I0920 18:18:13.900224   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHHostname
	I0920 18:18:13.902958   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:13.903309   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:13.903340   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:13.903543   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHPort
	I0920 18:18:13.903762   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:18:13.903931   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:18:13.904051   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHUsername
	I0920 18:18:13.904214   75577 main.go:141] libmachine: Using SSH client type: native
	I0920 18:18:13.904393   75577 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.207 22 <nil> <nil>}
	I0920 18:18:13.904409   75577 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-744025' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-744025/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-744025' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 18:18:14.023748   75577 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 18:18:14.023781   75577 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19672-8777/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-8777/.minikube}
	I0920 18:18:14.023834   75577 buildroot.go:174] setting up certificates
	I0920 18:18:14.023851   75577 provision.go:84] configureAuth start
	I0920 18:18:14.023866   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetMachineName
	I0920 18:18:14.024154   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetIP
	I0920 18:18:14.027240   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.027640   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:14.027778   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.027867   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHHostname
	I0920 18:18:14.030383   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.030741   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:14.030765   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.030915   75577 provision.go:143] copyHostCerts
	I0920 18:18:14.030979   75577 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem, removing ...
	I0920 18:18:14.030999   75577 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem
	I0920 18:18:14.031072   75577 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem (1082 bytes)
	I0920 18:18:14.031188   75577 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem, removing ...
	I0920 18:18:14.031205   75577 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem
	I0920 18:18:14.031240   75577 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem (1123 bytes)
	I0920 18:18:14.031340   75577 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem, removing ...
	I0920 18:18:14.031351   75577 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem
	I0920 18:18:14.031378   75577 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem (1675 bytes)
	I0920 18:18:14.031455   75577 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-744025 san=[127.0.0.1 192.168.39.207 localhost minikube old-k8s-version-744025]
	I0920 18:18:14.140775   75577 provision.go:177] copyRemoteCerts
	I0920 18:18:14.140847   75577 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 18:18:14.140883   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHHostname
	I0920 18:18:14.143599   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.144016   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:14.144062   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.144199   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHPort
	I0920 18:18:14.144377   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:18:14.144526   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHUsername
	I0920 18:18:14.144656   75577 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/old-k8s-version-744025/id_rsa Username:docker}
	I0920 18:18:14.228286   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 18:18:14.257293   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0920 18:18:14.281335   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0920 18:18:14.305736   75577 provision.go:87] duration metric: took 281.853458ms to configureAuth
	I0920 18:18:14.305762   75577 buildroot.go:189] setting minikube options for container-runtime
	I0920 18:18:14.305983   75577 config.go:182] Loaded profile config "old-k8s-version-744025": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0920 18:18:14.306076   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHHostname
	I0920 18:18:14.308833   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.309220   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:14.309244   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.309535   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHPort
	I0920 18:18:14.309779   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:18:14.309974   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:18:14.310124   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHUsername
	I0920 18:18:14.310316   75577 main.go:141] libmachine: Using SSH client type: native
	I0920 18:18:14.310535   75577 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.207 22 <nil> <nil>}
	I0920 18:18:14.310551   75577 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 18:18:14.536416   75577 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 18:18:14.536442   75577 machine.go:96] duration metric: took 869.318772ms to provisionDockerMachine
	I0920 18:18:14.536456   75577 start.go:293] postStartSetup for "old-k8s-version-744025" (driver="kvm2")
	I0920 18:18:14.536468   75577 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 18:18:14.536510   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .DriverName
	I0920 18:18:14.536803   75577 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 18:18:14.536831   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHHostname
	I0920 18:18:14.539503   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.539923   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:14.539951   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.540126   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHPort
	I0920 18:18:14.540303   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:18:14.540463   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHUsername
	I0920 18:18:14.540649   75577 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/old-k8s-version-744025/id_rsa Username:docker}
	I0920 18:18:14.625866   75577 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 18:18:14.630527   75577 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 18:18:14.630551   75577 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8777/.minikube/addons for local assets ...
	I0920 18:18:14.630637   75577 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8777/.minikube/files for local assets ...
	I0920 18:18:14.630727   75577 filesync.go:149] local asset: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem -> 159732.pem in /etc/ssl/certs
	I0920 18:18:14.630840   75577 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 18:18:14.640965   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem --> /etc/ssl/certs/159732.pem (1708 bytes)
	I0920 18:18:14.668227   75577 start.go:296] duration metric: took 131.756559ms for postStartSetup
	I0920 18:18:14.668268   75577 fix.go:56] duration metric: took 19.581104117s for fixHost
	I0920 18:18:14.668295   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHHostname
	I0920 18:18:14.671138   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.671520   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:14.671549   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.671777   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHPort
	I0920 18:18:14.671981   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:18:14.672141   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:18:14.672280   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHUsername
	I0920 18:18:14.672436   75577 main.go:141] libmachine: Using SSH client type: native
	I0920 18:18:14.672606   75577 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.207 22 <nil> <nil>}
	I0920 18:18:14.672616   75577 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 18:18:14.778752   75577 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726856294.753230182
	
	I0920 18:18:14.778785   75577 fix.go:216] guest clock: 1726856294.753230182
	I0920 18:18:14.778796   75577 fix.go:229] Guest: 2024-09-20 18:18:14.753230182 +0000 UTC Remote: 2024-09-20 18:18:14.668273351 +0000 UTC m=+241.967312055 (delta=84.956831ms)
	I0920 18:18:14.778827   75577 fix.go:200] guest clock delta is within tolerance: 84.956831ms
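(Annotation) The fix.go lines above read the guest's `date +%s.%N` output, compare it with the host clock, and only treat the clock as needing attention when the delta exceeds a tolerance. Below is a minimal Go sketch of that kind of check; the parsing helper and the one-second tolerance are illustrative assumptions, not minikube's actual implementation.

```go
package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// parseUnixSeconds turns the output of `date +%s.%N` (e.g. "1726856294.753230182")
// into a time.Time. It assumes the nine-digit %N nanosecond field; this is an
// illustrative parser, not minikube's.
func parseUnixSeconds(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseUnixSeconds("1726856294.753230182") // value taken from the log above
	if err != nil {
		panic(err)
	}
	host := time.Now()
	delta := host.Sub(guest)
	// Hypothetical tolerance: anything under one second is "close enough",
	// so the guest clock would not be rewritten.
	const tolerance = time.Second
	if math.Abs(float64(delta)) < float64(tolerance) {
		fmt.Printf("guest clock delta %v is within tolerance %v\n", delta, tolerance)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance %v, would resync\n", delta, tolerance)
	}
}
```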
	I0920 18:18:14.778836   75577 start.go:83] releasing machines lock for "old-k8s-version-744025", held for 19.691716285s
	I0920 18:18:14.778874   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .DriverName
	I0920 18:18:14.779135   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetIP
	I0920 18:18:14.781932   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.782357   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:14.782386   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.782572   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .DriverName
	I0920 18:18:14.783124   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .DriverName
	I0920 18:18:14.783328   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .DriverName
	I0920 18:18:14.783401   75577 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 18:18:14.783465   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHHostname
	I0920 18:18:14.783519   75577 ssh_runner.go:195] Run: cat /version.json
	I0920 18:18:14.783552   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHHostname
	I0920 18:18:14.786645   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.786817   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.786994   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:14.787023   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.787207   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:14.787259   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.787264   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHPort
	I0920 18:18:14.787404   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHPort
	I0920 18:18:14.787469   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:18:14.787561   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:18:14.787612   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHUsername
	I0920 18:18:14.787711   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHUsername
	I0920 18:18:14.787845   75577 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/old-k8s-version-744025/id_rsa Username:docker}
	I0920 18:18:14.787845   75577 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/old-k8s-version-744025/id_rsa Username:docker}
	I0920 18:18:14.872183   75577 ssh_runner.go:195] Run: systemctl --version
	I0920 18:18:14.913275   75577 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 18:18:15.071299   75577 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 18:18:15.078725   75577 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 18:18:15.078806   75577 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 18:18:15.101497   75577 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 18:18:15.101522   75577 start.go:495] detecting cgroup driver to use...
	I0920 18:18:15.101579   75577 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 18:18:15.120626   75577 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 18:18:15.137297   75577 docker.go:217] disabling cri-docker service (if available) ...
	I0920 18:18:15.137401   75577 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 18:18:15.152152   75577 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 18:18:15.167359   75577 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 18:18:15.288763   75577 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 18:18:15.475429   75577 docker.go:233] disabling docker service ...
	I0920 18:18:15.475512   75577 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 18:18:15.493331   75577 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 18:18:15.510326   75577 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 18:18:15.629119   75577 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 18:18:15.758923   75577 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 18:18:15.776778   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 18:18:15.798980   75577 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0920 18:18:15.799042   75577 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:18:15.810264   75577 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 18:18:15.810322   75577 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:18:15.821060   75577 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:18:15.832685   75577 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:18:15.843026   75577 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 18:18:15.853996   75577 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 18:18:15.864126   75577 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 18:18:15.864183   75577 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 18:18:15.878216   75577 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 18:18:15.889820   75577 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:18:16.015555   75577 ssh_runner.go:195] Run: sudo systemctl restart crio
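(Annotation) The sed commands above pin the pause image to registry.k8s.io/pause:3.2 and switch CRI-O to the cgroupfs cgroup manager before the service is restarted. The sketch below performs the equivalent rewrite in memory with Go regexps; the sample 02-crio.conf fragment is hypothetical, and only the two replacement values come from the log.

```go
package main

import (
	"fmt"
	"regexp"
)

// rewriteCrioConf mirrors the two sed edits in the log: pin the pause image and
// force the cgroupfs cgroup manager.
func rewriteCrioConf(conf string) string {
	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.2"`)
	conf = cgroup.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	return conf
}

func main() {
	// Hypothetical config fragment, not the file from the test VM.
	sample := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "systemd"
`
	fmt.Print(rewriteCrioConf(sample))
}
```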
	I0920 18:18:16.118797   75577 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 18:18:16.118880   75577 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 18:18:16.124430   75577 start.go:563] Will wait 60s for crictl version
	I0920 18:18:16.124485   75577 ssh_runner.go:195] Run: which crictl
	I0920 18:18:16.128632   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 18:18:16.165596   75577 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 18:18:16.165707   75577 ssh_runner.go:195] Run: crio --version
	I0920 18:18:16.198772   75577 ssh_runner.go:195] Run: crio --version
	I0920 18:18:16.238511   75577 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0920 18:18:13.980331   75264 node_ready.go:53] node "default-k8s-diff-port-553719" has status "Ready":"False"
	I0920 18:18:15.981435   75264 node_ready.go:53] node "default-k8s-diff-port-553719" has status "Ready":"False"
	I0920 18:18:16.982412   75264 node_ready.go:49] node "default-k8s-diff-port-553719" has status "Ready":"True"
	I0920 18:18:16.982441   75264 node_ready.go:38] duration metric: took 7.506086526s for node "default-k8s-diff-port-553719" to be "Ready" ...
	I0920 18:18:16.982457   75264 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:18:16.989870   75264 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-dmdfb" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:14.803646   74753 main.go:141] libmachine: (no-preload-956403) Calling .Start
	I0920 18:18:14.803856   74753 main.go:141] libmachine: (no-preload-956403) Ensuring networks are active...
	I0920 18:18:14.804594   74753 main.go:141] libmachine: (no-preload-956403) Ensuring network default is active
	I0920 18:18:14.804920   74753 main.go:141] libmachine: (no-preload-956403) Ensuring network mk-no-preload-956403 is active
	I0920 18:18:14.805341   74753 main.go:141] libmachine: (no-preload-956403) Getting domain xml...
	I0920 18:18:14.806068   74753 main.go:141] libmachine: (no-preload-956403) Creating domain...
	I0920 18:18:16.196663   74753 main.go:141] libmachine: (no-preload-956403) Waiting to get IP...
	I0920 18:18:16.197381   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:16.197762   74753 main.go:141] libmachine: (no-preload-956403) DBG | unable to find current IP address of domain no-preload-956403 in network mk-no-preload-956403
	I0920 18:18:16.197885   74753 main.go:141] libmachine: (no-preload-956403) DBG | I0920 18:18:16.197764   77223 retry.go:31] will retry after 214.087819ms: waiting for machine to come up
	I0920 18:18:16.413365   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:16.413951   74753 main.go:141] libmachine: (no-preload-956403) DBG | unable to find current IP address of domain no-preload-956403 in network mk-no-preload-956403
	I0920 18:18:16.413977   74753 main.go:141] libmachine: (no-preload-956403) DBG | I0920 18:18:16.413916   77223 retry.go:31] will retry after 249.35647ms: waiting for machine to come up
	I0920 18:18:16.665587   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:16.666168   74753 main.go:141] libmachine: (no-preload-956403) DBG | unable to find current IP address of domain no-preload-956403 in network mk-no-preload-956403
	I0920 18:18:16.666203   74753 main.go:141] libmachine: (no-preload-956403) DBG | I0920 18:18:16.666132   77223 retry.go:31] will retry after 374.598012ms: waiting for machine to come up
	I0920 18:18:17.042911   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:17.043594   74753 main.go:141] libmachine: (no-preload-956403) DBG | unable to find current IP address of domain no-preload-956403 in network mk-no-preload-956403
	I0920 18:18:17.043618   74753 main.go:141] libmachine: (no-preload-956403) DBG | I0920 18:18:17.043550   77223 retry.go:31] will retry after 536.252353ms: waiting for machine to come up
	I0920 18:18:17.581141   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:17.581582   74753 main.go:141] libmachine: (no-preload-956403) DBG | unable to find current IP address of domain no-preload-956403 in network mk-no-preload-956403
	I0920 18:18:17.581616   74753 main.go:141] libmachine: (no-preload-956403) DBG | I0920 18:18:17.581528   77223 retry.go:31] will retry after 459.241867ms: waiting for machine to come up
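(Annotation) The retry.go lines above show libmachine polling the libvirt network for the new domain's DHCP lease, sleeping a growing, jittered interval between attempts ("will retry after 214ms ... 459ms ..."). A minimal Go sketch of such a wait-for-IP loop follows; the backoff constants, the lookup callback, and the placeholder address are assumptions for illustration, not minikube's code.

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookupIP until it returns an address, backing off with jitter
// between attempts. lookupIP stands in for the libvirt DHCP-lease query.
func waitForIP(lookupIP func() (string, error), attempts int) (string, error) {
	delay := 200 * time.Millisecond
	for i := 0; i < attempts; i++ {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		jittered := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", jittered)
		time.Sleep(jittered)
		delay *= 2 // back off so a slow boot is not hammered
	}
	return "", errors.New("machine never acquired an IP address")
}

func main() {
	calls := 0
	ip, err := waitForIP(func() (string, error) {
		calls++
		if calls < 3 {
			return "", errors.New("unable to find current IP address")
		}
		return "192.0.2.10", nil // placeholder (TEST-NET) address, not the VM's
	}, 10)
	fmt.Println(ip, err)
}
```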
	I0920 18:18:16.239946   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetIP
	I0920 18:18:16.242727   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:16.243147   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:16.243184   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:16.243561   75577 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0920 18:18:16.247928   75577 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 18:18:16.262165   75577 kubeadm.go:883] updating cluster {Name:old-k8s-version-744025 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-744025 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.207 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 18:18:16.262310   75577 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0920 18:18:16.262358   75577 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:18:16.313771   75577 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0920 18:18:16.313854   75577 ssh_runner.go:195] Run: which lz4
	I0920 18:18:16.318361   75577 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 18:18:16.322529   75577 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 18:18:16.322570   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0920 18:18:14.192319   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:16.194161   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:18.699498   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:18.999985   75264 pod_ready.go:103] pod "coredns-7c65d6cfc9-dmdfb" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:21.497825   75264 pod_ready.go:103] pod "coredns-7c65d6cfc9-dmdfb" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:18.042075   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:18.042566   74753 main.go:141] libmachine: (no-preload-956403) DBG | unable to find current IP address of domain no-preload-956403 in network mk-no-preload-956403
	I0920 18:18:18.042603   74753 main.go:141] libmachine: (no-preload-956403) DBG | I0920 18:18:18.042533   77223 retry.go:31] will retry after 833.585895ms: waiting for machine to come up
	I0920 18:18:18.877534   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:18.878023   74753 main.go:141] libmachine: (no-preload-956403) DBG | unable to find current IP address of domain no-preload-956403 in network mk-no-preload-956403
	I0920 18:18:18.878047   74753 main.go:141] libmachine: (no-preload-956403) DBG | I0920 18:18:18.877988   77223 retry.go:31] will retry after 1.035805905s: waiting for machine to come up
	I0920 18:18:19.915316   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:19.915735   74753 main.go:141] libmachine: (no-preload-956403) DBG | unable to find current IP address of domain no-preload-956403 in network mk-no-preload-956403
	I0920 18:18:19.915859   74753 main.go:141] libmachine: (no-preload-956403) DBG | I0920 18:18:19.915787   77223 retry.go:31] will retry after 978.827371ms: waiting for machine to come up
	I0920 18:18:20.896532   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:20.897185   74753 main.go:141] libmachine: (no-preload-956403) DBG | unable to find current IP address of domain no-preload-956403 in network mk-no-preload-956403
	I0920 18:18:20.897216   74753 main.go:141] libmachine: (no-preload-956403) DBG | I0920 18:18:20.897122   77223 retry.go:31] will retry after 1.808853939s: waiting for machine to come up
	I0920 18:18:17.953626   75577 crio.go:462] duration metric: took 1.635321078s to copy over tarball
	I0920 18:18:17.953717   75577 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0920 18:18:21.049561   75577 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.095809102s)
	I0920 18:18:21.049588   75577 crio.go:469] duration metric: took 3.095926273s to extract the tarball
	I0920 18:18:21.049596   75577 ssh_runner.go:146] rm: /preloaded.tar.lz4
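(Annotation) Because no preloaded images were found in the runtime, the ~473 MB preload tarball was copied over and unpacked with tar's lz4 filter while preserving extended attributes, then removed. The Go sketch below shows an equivalent local invocation; on the VM minikube runs this via sudo over SSH, and the paths here are placeholders.

```go
package main

import (
	"fmt"
	"os/exec"
)

// extractPreload mirrors the tar invocation in the log: unpack an lz4-compressed
// preload tarball into a destination directory, preserving security.capability
// extended attributes.
func extractPreload(tarball, dest string) error {
	cmd := exec.Command("tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", // delegate decompression to the lz4 binary
		"-C", dest, "-xf", tarball)
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("tar failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	// Placeholder paths; on the guest the tarball is /preloaded.tar.lz4 and the
	// destination is /var.
	if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
		fmt.Println(err)
	}
}
```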
	I0920 18:18:21.093521   75577 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:18:21.132318   75577 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0920 18:18:21.132361   75577 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0920 18:18:21.132426   75577 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:18:21.132449   75577 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0920 18:18:21.132432   75577 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0920 18:18:21.132551   75577 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 18:18:21.132587   75577 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0920 18:18:21.132708   75577 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0920 18:18:21.132819   75577 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0920 18:18:21.132557   75577 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0920 18:18:21.134217   75577 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0920 18:18:21.134256   75577 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0920 18:18:21.134277   75577 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0920 18:18:21.134313   75577 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0920 18:18:21.134327   75577 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 18:18:21.134341   75577 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0920 18:18:21.134266   75577 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0920 18:18:21.134356   75577 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:18:21.421546   75577 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0920 18:18:21.467311   75577 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0920 18:18:21.467349   75577 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0920 18:18:21.467400   75577 ssh_runner.go:195] Run: which crictl
	I0920 18:18:21.471158   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0920 18:18:21.474543   75577 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0920 18:18:21.480501   75577 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 18:18:21.499826   75577 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0920 18:18:21.499835   75577 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0920 18:18:21.505712   75577 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0920 18:18:21.514377   75577 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0920 18:18:21.514897   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0920 18:18:21.601531   75577 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0920 18:18:21.601582   75577 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0920 18:18:21.601635   75577 ssh_runner.go:195] Run: which crictl
	I0920 18:18:21.660417   75577 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0920 18:18:21.660457   75577 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 18:18:21.660504   75577 ssh_runner.go:195] Run: which crictl
	I0920 18:18:21.660528   75577 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0920 18:18:21.660571   75577 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0920 18:18:21.660625   75577 ssh_runner.go:195] Run: which crictl
	I0920 18:18:21.667538   75577 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0920 18:18:21.667558   75577 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0920 18:18:21.667583   75577 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0920 18:18:21.667595   75577 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0920 18:18:21.667633   75577 ssh_runner.go:195] Run: which crictl
	I0920 18:18:21.667652   75577 ssh_runner.go:195] Run: which crictl
	I0920 18:18:21.676529   75577 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0920 18:18:21.676580   75577 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0920 18:18:21.676602   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0920 18:18:21.676633   75577 ssh_runner.go:195] Run: which crictl
	I0920 18:18:21.676643   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0920 18:18:21.680041   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 18:18:21.681078   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0920 18:18:21.681180   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0920 18:18:21.683409   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0920 18:18:21.804439   75577 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0920 18:18:21.804492   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0920 18:18:21.804530   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 18:18:21.804585   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0920 18:18:21.823434   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0920 18:18:21.823497   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0920 18:18:21.823539   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0920 18:18:21.932568   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 18:18:21.932619   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0920 18:18:21.932639   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0920 18:18:21.958001   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0920 18:18:21.971089   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0920 18:18:21.975643   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0920 18:18:22.100118   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0920 18:18:22.100173   75577 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0920 18:18:22.100197   75577 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0920 18:18:22.100276   75577 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0920 18:18:22.100333   75577 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0920 18:18:22.109895   75577 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0920 18:18:22.137798   75577 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0920 18:18:22.331884   75577 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:18:22.476470   75577 cache_images.go:92] duration metric: took 1.344086626s to LoadCachedImages
	W0920 18:18:22.476575   75577 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
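(Annotation) The cache_images lines above decide, per image, whether the runtime already holds the expected content: anything missing or present under a different ID "needs transfer", is removed via crictl, and is then loaded from the local image cache (the warning shows that fallback itself failing here because the cached pause_3.2 file is absent). Below is a toy Go sketch of that needs-transfer decision; the inventory and the truncated image IDs are hypothetical.

```go
package main

import "fmt"

// needsTransfer reports whether an image must be (re)loaded from the local
// cache: the runtime either does not have it or has it under a different ID.
func needsTransfer(runtimeID, wantID string) bool {
	return runtimeID == "" || runtimeID != wantID
}

func main() {
	// Hypothetical results of `podman image inspect --format {{.Id}}` per image.
	runtime := map[string]string{
		"registry.k8s.io/pause:3.2":                  "",             // not present at all
		"registry.k8s.io/etcd:3.4.13-0":              "deadbeef0123", // present under a stale ID
		"gcr.io/k8s-minikube/storage-provisioner:v5": "6e38f40d628d",
	}
	want := map[string]string{
		"registry.k8s.io/pause:3.2":                  "80d28bedfe5d",
		"registry.k8s.io/etcd:3.4.13-0":              "0369cf4303ff",
		"gcr.io/k8s-minikube/storage-provisioner:v5": "6e38f40d628d",
	}
	for img, wantID := range want {
		if needsTransfer(runtime[img], wantID) {
			fmt.Printf("%q needs transfer, loading from cache\n", img)
		} else {
			fmt.Printf("%q already present, skipping\n", img)
		}
	}
}
```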
	I0920 18:18:22.476595   75577 kubeadm.go:934] updating node { 192.168.39.207 8443 v1.20.0 crio true true} ...
	I0920 18:18:22.476742   75577 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-744025 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.207
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-744025 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 18:18:22.476833   75577 ssh_runner.go:195] Run: crio config
	I0920 18:18:22.528964   75577 cni.go:84] Creating CNI manager for ""
	I0920 18:18:22.528990   75577 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 18:18:22.528999   75577 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 18:18:22.529016   75577 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.207 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-744025 NodeName:old-k8s-version-744025 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.207"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.207 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0920 18:18:22.529173   75577 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.207
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-744025"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.207
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.207"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 18:18:22.529233   75577 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0920 18:18:22.540849   75577 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 18:18:22.540935   75577 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 18:18:22.551148   75577 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0920 18:18:22.569199   75577 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 18:18:22.585909   75577 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0920 18:18:22.604987   75577 ssh_runner.go:195] Run: grep 192.168.39.207	control-plane.minikube.internal$ /etc/hosts
	I0920 18:18:22.609152   75577 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.207	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 18:18:22.628042   75577 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:18:21.191366   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:23.193152   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:23.914443   75264 pod_ready.go:103] pod "coredns-7c65d6cfc9-dmdfb" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:25.002152   75264 pod_ready.go:93] pod "coredns-7c65d6cfc9-dmdfb" in "kube-system" namespace has status "Ready":"True"
	I0920 18:18:25.002185   75264 pod_ready.go:82] duration metric: took 8.012280973s for pod "coredns-7c65d6cfc9-dmdfb" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:25.002198   75264 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-553719" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:25.010541   75264 pod_ready.go:93] pod "etcd-default-k8s-diff-port-553719" in "kube-system" namespace has status "Ready":"True"
	I0920 18:18:25.010570   75264 pod_ready.go:82] duration metric: took 8.362504ms for pod "etcd-default-k8s-diff-port-553719" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:25.010591   75264 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-553719" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:25.023701   75264 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-553719" in "kube-system" namespace has status "Ready":"True"
	I0920 18:18:25.023741   75264 pod_ready.go:82] duration metric: took 13.139423ms for pod "kube-apiserver-default-k8s-diff-port-553719" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:25.023758   75264 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-553719" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:25.031247   75264 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-553719" in "kube-system" namespace has status "Ready":"True"
	I0920 18:18:25.031282   75264 pod_ready.go:82] duration metric: took 7.515474ms for pod "kube-controller-manager-default-k8s-diff-port-553719" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:25.031307   75264 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-p9crq" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:25.119242   75264 pod_ready.go:93] pod "kube-proxy-p9crq" in "kube-system" namespace has status "Ready":"True"
	I0920 18:18:25.119267   75264 pod_ready.go:82] duration metric: took 87.951791ms for pod "kube-proxy-p9crq" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:25.119280   75264 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-553719" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:25.518243   75264 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-553719" in "kube-system" namespace has status "Ready":"True"
	I0920 18:18:25.518267   75264 pod_ready.go:82] duration metric: took 398.979314ms for pod "kube-scheduler-default-k8s-diff-port-553719" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:25.518281   75264 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:22.707778   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:22.708271   74753 main.go:141] libmachine: (no-preload-956403) DBG | unable to find current IP address of domain no-preload-956403 in network mk-no-preload-956403
	I0920 18:18:22.708333   74753 main.go:141] libmachine: (no-preload-956403) DBG | I0920 18:18:22.708161   77223 retry.go:31] will retry after 1.516042611s: waiting for machine to come up
	I0920 18:18:24.225560   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:24.225987   74753 main.go:141] libmachine: (no-preload-956403) DBG | unable to find current IP address of domain no-preload-956403 in network mk-no-preload-956403
	I0920 18:18:24.226041   74753 main.go:141] libmachine: (no-preload-956403) DBG | I0920 18:18:24.225968   77223 retry.go:31] will retry after 2.135874415s: waiting for machine to come up
	I0920 18:18:26.363371   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:26.363861   74753 main.go:141] libmachine: (no-preload-956403) DBG | unable to find current IP address of domain no-preload-956403 in network mk-no-preload-956403
	I0920 18:18:26.363895   74753 main.go:141] libmachine: (no-preload-956403) DBG | I0920 18:18:26.363777   77223 retry.go:31] will retry after 3.29383193s: waiting for machine to come up
	I0920 18:18:22.796651   75577 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:18:22.813789   75577 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/old-k8s-version-744025 for IP: 192.168.39.207
	I0920 18:18:22.813817   75577 certs.go:194] generating shared ca certs ...
	I0920 18:18:22.813848   75577 certs.go:226] acquiring lock for ca certs: {Name:mkc7ef6c737c6bdc3fdd9dcff8f57029c020d8f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:18:22.814045   75577 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key
	I0920 18:18:22.814101   75577 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key
	I0920 18:18:22.814118   75577 certs.go:256] generating profile certs ...
	I0920 18:18:22.814290   75577 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/old-k8s-version-744025/client.key
	I0920 18:18:22.814383   75577 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/old-k8s-version-744025/apiserver.key.3105b99d
	I0920 18:18:22.814445   75577 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/old-k8s-version-744025/proxy-client.key
	I0920 18:18:22.814626   75577 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973.pem (1338 bytes)
	W0920 18:18:22.814660   75577 certs.go:480] ignoring /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973_empty.pem, impossibly tiny 0 bytes
	I0920 18:18:22.814666   75577 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 18:18:22.814691   75577 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem (1082 bytes)
	I0920 18:18:22.814717   75577 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem (1123 bytes)
	I0920 18:18:22.814749   75577 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem (1675 bytes)
	I0920 18:18:22.814813   75577 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem (1708 bytes)
	I0920 18:18:22.815736   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 18:18:22.863832   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 18:18:22.921747   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 18:18:22.959556   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0920 18:18:22.992097   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/old-k8s-version-744025/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0920 18:18:23.027565   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/old-k8s-version-744025/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0920 18:18:23.057374   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/old-k8s-version-744025/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 18:18:23.094290   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/old-k8s-version-744025/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 18:18:23.120095   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973.pem --> /usr/share/ca-certificates/15973.pem (1338 bytes)
	I0920 18:18:23.144374   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem --> /usr/share/ca-certificates/159732.pem (1708 bytes)
	I0920 18:18:23.169431   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 18:18:23.195779   75577 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 18:18:23.212820   75577 ssh_runner.go:195] Run: openssl version
	I0920 18:18:23.218876   75577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 18:18:23.229684   75577 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:18:23.234533   75577 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:18:23.234603   75577 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:18:23.240460   75577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 18:18:23.251940   75577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15973.pem && ln -fs /usr/share/ca-certificates/15973.pem /etc/ssl/certs/15973.pem"
	I0920 18:18:23.263308   75577 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15973.pem
	I0920 18:18:23.268059   75577 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 17:04 /usr/share/ca-certificates/15973.pem
	I0920 18:18:23.268128   75577 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15973.pem
	I0920 18:18:23.274199   75577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15973.pem /etc/ssl/certs/51391683.0"
	I0920 18:18:23.286362   75577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/159732.pem && ln -fs /usr/share/ca-certificates/159732.pem /etc/ssl/certs/159732.pem"
	I0920 18:18:23.303962   75577 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/159732.pem
	I0920 18:18:23.310907   75577 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 17:04 /usr/share/ca-certificates/159732.pem
	I0920 18:18:23.310981   75577 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/159732.pem
	I0920 18:18:23.317881   75577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/159732.pem /etc/ssl/certs/3ec20f2e.0"
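(Annotation) The openssl/ln commands above install each CA certificate under /usr/share/ca-certificates and create an OpenSSL subject-hash symlink (e.g. b5213941.0) in /etc/ssl/certs so the tools on the guest trust it. A rough Go equivalent follows; the paths are placeholders and this is not the ssh_runner-driven code minikube actually uses.

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert computes the OpenSSL subject hash of a certificate and creates a
// <hash>.0 symlink to it in certsDir, mirroring the `openssl x509 -hash` plus
// `ln -fs` pattern in the log.
func linkCACert(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join(certsDir, hash+".0")
	// ln -fs equivalent: drop any stale link, then point it at the cert.
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	// Placeholder paths matching the names seen in the log.
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println(err)
	}
}
```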
	I0920 18:18:23.329247   75577 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 18:18:23.334223   75577 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 18:18:23.340565   75577 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 18:18:23.346929   75577 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 18:18:23.353681   75577 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 18:18:23.359699   75577 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 18:18:23.365749   75577 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0920 18:18:23.371899   75577 kubeadm.go:392] StartCluster: {Name:old-k8s-version-744025 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-744025 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.207 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:18:23.371981   75577 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 18:18:23.372027   75577 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 18:18:23.419904   75577 cri.go:89] found id: ""
	I0920 18:18:23.419982   75577 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 18:18:23.431761   75577 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0920 18:18:23.431782   75577 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0920 18:18:23.431833   75577 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0920 18:18:23.444358   75577 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0920 18:18:23.445545   75577 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-744025" does not appear in /home/jenkins/minikube-integration/19672-8777/kubeconfig
	I0920 18:18:23.446531   75577 kubeconfig.go:62] /home/jenkins/minikube-integration/19672-8777/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-744025" cluster setting kubeconfig missing "old-k8s-version-744025" context setting]
	I0920 18:18:23.447808   75577 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/kubeconfig: {Name:mkf32a4c736808e023459b2f0e40188618a38db1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:18:23.568927   75577 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0920 18:18:23.579991   75577 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.207
	I0920 18:18:23.580025   75577 kubeadm.go:1160] stopping kube-system containers ...
	I0920 18:18:23.580038   75577 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0920 18:18:23.580097   75577 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 18:18:23.625568   75577 cri.go:89] found id: ""
	I0920 18:18:23.625648   75577 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0920 18:18:23.643938   75577 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 18:18:23.654375   75577 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 18:18:23.654398   75577 kubeadm.go:157] found existing configuration files:
	
	I0920 18:18:23.654453   75577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 18:18:23.664335   75577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 18:18:23.664409   75577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 18:18:23.674996   75577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 18:18:23.685310   75577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 18:18:23.685401   75577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 18:18:23.696241   75577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 18:18:23.706386   75577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 18:18:23.706465   75577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 18:18:23.716491   75577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 18:18:23.726566   75577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 18:18:23.726626   75577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
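	The grep/rm sequence above is minikube's stale-kubeconfig cleanup: each file under /etc/kubernetes is kept only if it already references https://control-plane.minikube.internal:8443 and is removed otherwise. Below is a minimal Go sketch of that check, assuming the same file paths and endpoint as in the log; it is illustrative only, not minikube's kubeadm.go.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// cleanIfStale removes path unless it already points at the expected endpoint.
	// grep exits non-zero when the endpoint is absent or the file does not exist.
	func cleanIfStale(path string) {
		endpoint := "https://control-plane.minikube.internal:8443"
		if err := exec.Command("sudo", "grep", endpoint, path).Run(); err != nil {
			_ = exec.Command("sudo", "rm", "-f", path).Run()
			fmt.Println("removed stale config:", path)
		}
	}

	func main() {
		for _, f := range []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		} {
			cleanIfStale(f)
		}
	}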
	I0920 18:18:23.738576   75577 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 18:18:23.749510   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:18:23.877503   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:18:24.789322   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:18:25.054969   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:18:25.152117   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
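	For context, the five Run lines above are the kubeadm phase replay used on restart: "kubeadm init phase" is invoked for certs, kubeconfig, kubelet-start, control-plane and etcd, in that order, against the generated /var/tmp/minikube/kubeadm.yaml. A hedged Go sketch of that sequence follows; the binary path and config path are copied from the log, the code itself is an illustration rather than minikube's implementation.

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		phases := []string{
			"certs all",
			"kubeconfig all",
			"kubelet-start",
			"control-plane all",
			"etcd local",
		}
		for _, phase := range phases {
			cmd := "sudo env PATH=/var/lib/minikube/binaries/v1.20.0:$PATH " +
				"kubeadm init phase " + phase + " --config /var/tmp/minikube/kubeadm.yaml"
			if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
				fmt.Printf("phase %s failed: %v\n%s\n", phase, err, out)
				return
			}
		}
		fmt.Println("all phases completed")
	}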
	I0920 18:18:25.249140   75577 api_server.go:52] waiting for apiserver process to appear ...
	I0920 18:18:25.249245   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:25.749427   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:26.249360   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:26.749895   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:27.249636   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
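	The repeated pgrep lines above and below are a poll loop: the same process check is re-run roughly every 500ms until a kube-apiserver process appears or a deadline passes. A minimal illustration of that pattern, assuming a 5-minute deadline (the actual timeout is not visible in this excerpt):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(5 * time.Minute)
		for time.Now().Before(deadline) {
			// pgrep exits 0 only when a matching process exists.
			if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
				fmt.Println("apiserver process appeared")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for apiserver process")
	}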
	I0920 18:18:25.194678   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:27.693680   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:27.524378   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:29.524998   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:32.025310   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:29.661278   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:29.661743   74753 main.go:141] libmachine: (no-preload-956403) DBG | unable to find current IP address of domain no-preload-956403 in network mk-no-preload-956403
	I0920 18:18:29.661762   74753 main.go:141] libmachine: (no-preload-956403) DBG | I0920 18:18:29.661714   77223 retry.go:31] will retry after 3.154777794s: waiting for machine to come up
	I0920 18:18:27.749629   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:28.250139   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:28.749691   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:29.249709   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:29.749550   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:30.250186   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:30.749562   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:31.249622   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:31.749925   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:32.250324   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:30.191419   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:32.191873   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:32.820331   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:32.820870   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has current primary IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:32.820895   74753 main.go:141] libmachine: (no-preload-956403) Found IP for machine: 192.168.50.47
	I0920 18:18:32.820908   74753 main.go:141] libmachine: (no-preload-956403) Reserving static IP address...
	I0920 18:18:32.821313   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "no-preload-956403", mac: "52:54:00:b6:13:30", ip: "192.168.50.47"} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:32.821339   74753 main.go:141] libmachine: (no-preload-956403) Reserved static IP address: 192.168.50.47
	I0920 18:18:32.821355   74753 main.go:141] libmachine: (no-preload-956403) DBG | skip adding static IP to network mk-no-preload-956403 - found existing host DHCP lease matching {name: "no-preload-956403", mac: "52:54:00:b6:13:30", ip: "192.168.50.47"}
	I0920 18:18:32.821372   74753 main.go:141] libmachine: (no-preload-956403) DBG | Getting to WaitForSSH function...
	I0920 18:18:32.821389   74753 main.go:141] libmachine: (no-preload-956403) Waiting for SSH to be available...
	I0920 18:18:32.823590   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:32.823894   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:32.823928   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:32.824019   74753 main.go:141] libmachine: (no-preload-956403) DBG | Using SSH client type: external
	I0920 18:18:32.824046   74753 main.go:141] libmachine: (no-preload-956403) DBG | Using SSH private key: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/no-preload-956403/id_rsa (-rw-------)
	I0920 18:18:32.824077   74753 main.go:141] libmachine: (no-preload-956403) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.47 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19672-8777/.minikube/machines/no-preload-956403/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 18:18:32.824090   74753 main.go:141] libmachine: (no-preload-956403) DBG | About to run SSH command:
	I0920 18:18:32.824102   74753 main.go:141] libmachine: (no-preload-956403) DBG | exit 0
	I0920 18:18:32.950122   74753 main.go:141] libmachine: (no-preload-956403) DBG | SSH cmd err, output: <nil>: 
	I0920 18:18:32.950537   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetConfigRaw
	I0920 18:18:32.951187   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetIP
	I0920 18:18:32.953731   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:32.954073   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:32.954102   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:32.954365   74753 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/no-preload-956403/config.json ...
	I0920 18:18:32.954603   74753 machine.go:93] provisionDockerMachine start ...
	I0920 18:18:32.954621   74753 main.go:141] libmachine: (no-preload-956403) Calling .DriverName
	I0920 18:18:32.954814   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHHostname
	I0920 18:18:32.957049   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:32.957379   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:32.957425   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:32.957538   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHPort
	I0920 18:18:32.957717   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHKeyPath
	I0920 18:18:32.957920   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHKeyPath
	I0920 18:18:32.958094   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHUsername
	I0920 18:18:32.958282   74753 main.go:141] libmachine: Using SSH client type: native
	I0920 18:18:32.958494   74753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.47 22 <nil> <nil>}
	I0920 18:18:32.958506   74753 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 18:18:33.058187   74753 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0920 18:18:33.058222   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetMachineName
	I0920 18:18:33.058477   74753 buildroot.go:166] provisioning hostname "no-preload-956403"
	I0920 18:18:33.058508   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetMachineName
	I0920 18:18:33.058668   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHHostname
	I0920 18:18:33.061310   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.061616   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:33.061649   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.061783   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHPort
	I0920 18:18:33.061961   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHKeyPath
	I0920 18:18:33.062089   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHKeyPath
	I0920 18:18:33.062197   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHUsername
	I0920 18:18:33.062340   74753 main.go:141] libmachine: Using SSH client type: native
	I0920 18:18:33.062536   74753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.47 22 <nil> <nil>}
	I0920 18:18:33.062553   74753 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-956403 && echo "no-preload-956403" | sudo tee /etc/hostname
	I0920 18:18:33.175825   74753 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-956403
	
	I0920 18:18:33.175860   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHHostname
	I0920 18:18:33.178483   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.178749   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:33.178769   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.178904   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHPort
	I0920 18:18:33.179077   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHKeyPath
	I0920 18:18:33.179226   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHKeyPath
	I0920 18:18:33.179366   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHUsername
	I0920 18:18:33.179545   74753 main.go:141] libmachine: Using SSH client type: native
	I0920 18:18:33.179710   74753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.47 22 <nil> <nil>}
	I0920 18:18:33.179726   74753 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-956403' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-956403/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-956403' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 18:18:33.290675   74753 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 18:18:33.290701   74753 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19672-8777/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-8777/.minikube}
	I0920 18:18:33.290722   74753 buildroot.go:174] setting up certificates
	I0920 18:18:33.290735   74753 provision.go:84] configureAuth start
	I0920 18:18:33.290747   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetMachineName
	I0920 18:18:33.291015   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetIP
	I0920 18:18:33.293810   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.294244   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:33.294282   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.294337   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHHostname
	I0920 18:18:33.296376   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.296749   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:33.296776   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.296930   74753 provision.go:143] copyHostCerts
	I0920 18:18:33.296985   74753 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem, removing ...
	I0920 18:18:33.296995   74753 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem
	I0920 18:18:33.297048   74753 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem (1082 bytes)
	I0920 18:18:33.297166   74753 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem, removing ...
	I0920 18:18:33.297177   74753 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem
	I0920 18:18:33.297211   74753 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem (1123 bytes)
	I0920 18:18:33.297287   74753 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem, removing ...
	I0920 18:18:33.297296   74753 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem
	I0920 18:18:33.297320   74753 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem (1675 bytes)
	I0920 18:18:33.297387   74753 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem org=jenkins.no-preload-956403 san=[127.0.0.1 192.168.50.47 localhost minikube no-preload-956403]
	I0920 18:18:33.366768   74753 provision.go:177] copyRemoteCerts
	I0920 18:18:33.366830   74753 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 18:18:33.366852   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHHostname
	I0920 18:18:33.369441   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.369755   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:33.369787   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.369958   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHPort
	I0920 18:18:33.370127   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHKeyPath
	I0920 18:18:33.370293   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHUsername
	I0920 18:18:33.370490   74753 sshutil.go:53] new ssh client: &{IP:192.168.50.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/no-preload-956403/id_rsa Username:docker}
	I0920 18:18:33.452070   74753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0920 18:18:33.477564   74753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0920 18:18:33.501002   74753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 18:18:33.526725   74753 provision.go:87] duration metric: took 235.975552ms to configureAuth
	I0920 18:18:33.526755   74753 buildroot.go:189] setting minikube options for container-runtime
	I0920 18:18:33.526943   74753 config.go:182] Loaded profile config "no-preload-956403": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:18:33.527011   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHHostname
	I0920 18:18:33.529870   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.530338   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:33.530371   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.530485   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHPort
	I0920 18:18:33.530708   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHKeyPath
	I0920 18:18:33.530897   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHKeyPath
	I0920 18:18:33.531057   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHUsername
	I0920 18:18:33.531276   74753 main.go:141] libmachine: Using SSH client type: native
	I0920 18:18:33.531497   74753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.47 22 <nil> <nil>}
	I0920 18:18:33.531519   74753 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 18:18:33.758711   74753 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 18:18:33.758739   74753 machine.go:96] duration metric: took 804.122904ms to provisionDockerMachine
	I0920 18:18:33.758753   74753 start.go:293] postStartSetup for "no-preload-956403" (driver="kvm2")
	I0920 18:18:33.758771   74753 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 18:18:33.758796   74753 main.go:141] libmachine: (no-preload-956403) Calling .DriverName
	I0920 18:18:33.759207   74753 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 18:18:33.759262   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHHostname
	I0920 18:18:33.762524   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.762991   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:33.763027   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.763207   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHPort
	I0920 18:18:33.763471   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHKeyPath
	I0920 18:18:33.763643   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHUsername
	I0920 18:18:33.763861   74753 sshutil.go:53] new ssh client: &{IP:192.168.50.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/no-preload-956403/id_rsa Username:docker}
	I0920 18:18:33.844910   74753 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 18:18:33.849111   74753 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 18:18:33.849137   74753 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8777/.minikube/addons for local assets ...
	I0920 18:18:33.849205   74753 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8777/.minikube/files for local assets ...
	I0920 18:18:33.849280   74753 filesync.go:149] local asset: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem -> 159732.pem in /etc/ssl/certs
	I0920 18:18:33.849367   74753 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 18:18:33.858757   74753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem --> /etc/ssl/certs/159732.pem (1708 bytes)
	I0920 18:18:33.883430   74753 start.go:296] duration metric: took 124.663757ms for postStartSetup
	I0920 18:18:33.883468   74753 fix.go:56] duration metric: took 19.104481875s for fixHost
	I0920 18:18:33.883488   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHHostname
	I0920 18:18:33.886703   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.887047   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:33.887076   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.887276   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHPort
	I0920 18:18:33.887502   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHKeyPath
	I0920 18:18:33.887685   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHKeyPath
	I0920 18:18:33.887816   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHUsername
	I0920 18:18:33.887962   74753 main.go:141] libmachine: Using SSH client type: native
	I0920 18:18:33.888137   74753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.47 22 <nil> <nil>}
	I0920 18:18:33.888148   74753 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 18:18:33.994635   74753 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726856313.964367484
	
	I0920 18:18:33.994660   74753 fix.go:216] guest clock: 1726856313.964367484
	I0920 18:18:33.994670   74753 fix.go:229] Guest: 2024-09-20 18:18:33.964367484 +0000 UTC Remote: 2024-09-20 18:18:33.883472234 +0000 UTC m=+356.278282007 (delta=80.89525ms)
	I0920 18:18:33.994695   74753 fix.go:200] guest clock delta is within tolerance: 80.89525ms
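	The fix.go lines above compare the guest's "date +%s.%N" output with the host clock and accept the start because the ~81ms delta is small. A toy version of that comparison, with a 2s tolerance chosen purely for illustration (the real threshold is not part of this log):

	package main

	import (
		"fmt"
		"time"
	)

	func withinTolerance(guest, host time.Time, tol time.Duration) bool {
		d := guest.Sub(host)
		if d < 0 {
			d = -d
		}
		return d <= tol
	}

	func main() {
		host := time.Now()
		guest := host.Add(80 * time.Millisecond) // similar to the delta in the log
		fmt.Println(withinTolerance(guest, host, 2*time.Second))
	}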
	I0920 18:18:33.994701   74753 start.go:83] releasing machines lock for "no-preload-956403", held for 19.215743841s
	I0920 18:18:33.994726   74753 main.go:141] libmachine: (no-preload-956403) Calling .DriverName
	I0920 18:18:33.994976   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetIP
	I0920 18:18:33.997685   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.998089   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:33.998114   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.998291   74753 main.go:141] libmachine: (no-preload-956403) Calling .DriverName
	I0920 18:18:33.998765   74753 main.go:141] libmachine: (no-preload-956403) Calling .DriverName
	I0920 18:18:33.998923   74753 main.go:141] libmachine: (no-preload-956403) Calling .DriverName
	I0920 18:18:33.999007   74753 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 18:18:33.999054   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHHostname
	I0920 18:18:33.999142   74753 ssh_runner.go:195] Run: cat /version.json
	I0920 18:18:33.999168   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHHostname
	I0920 18:18:34.001725   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:34.001891   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:34.002197   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:34.002225   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:34.002369   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHPort
	I0920 18:18:34.002424   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:34.002452   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:34.002533   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHKeyPath
	I0920 18:18:34.002625   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHPort
	I0920 18:18:34.002695   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHUsername
	I0920 18:18:34.002744   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHKeyPath
	I0920 18:18:34.002813   74753 sshutil.go:53] new ssh client: &{IP:192.168.50.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/no-preload-956403/id_rsa Username:docker}
	I0920 18:18:34.002888   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHUsername
	I0920 18:18:34.003032   74753 sshutil.go:53] new ssh client: &{IP:192.168.50.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/no-preload-956403/id_rsa Username:docker}
	I0920 18:18:34.079092   74753 ssh_runner.go:195] Run: systemctl --version
	I0920 18:18:34.112066   74753 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 18:18:34.257205   74753 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 18:18:34.265774   74753 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 18:18:34.265871   74753 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 18:18:34.285222   74753 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 18:18:34.285247   74753 start.go:495] detecting cgroup driver to use...
	I0920 18:18:34.285320   74753 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 18:18:34.302192   74753 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 18:18:34.316624   74753 docker.go:217] disabling cri-docker service (if available) ...
	I0920 18:18:34.316695   74753 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 18:18:34.331098   74753 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 18:18:34.345433   74753 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 18:18:34.481886   74753 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 18:18:34.644442   74753 docker.go:233] disabling docker service ...
	I0920 18:18:34.644530   74753 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 18:18:34.658714   74753 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 18:18:34.671506   74753 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 18:18:34.809548   74753 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 18:18:34.957438   74753 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 18:18:34.972102   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 18:18:34.993129   74753 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 18:18:34.993199   74753 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:18:35.004196   74753 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 18:18:35.004273   74753 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:18:35.015258   74753 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:18:35.026240   74753 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:18:35.037658   74753 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 18:18:35.048637   74753 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:18:35.059494   74753 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:18:35.077264   74753 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
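	The sed invocations above rewrite the CRI-O drop-in at /etc/crio/crio.conf.d/02-crio.conf so that the pause image and cgroup manager match what minikube expects. A hypothetical Go equivalent of the first two edits is sketched below; the file path, keys and values are taken from the log, and the code is illustrative rather than minikube's crio.go.

	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	func main() {
		const path = "/etc/crio/crio.conf.d/02-crio.conf"
		data, err := os.ReadFile(path)
		if err != nil {
			fmt.Println(err)
			return
		}
		// Replace whatever pause_image / cgroup_manager lines exist with the desired values.
		out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
		out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
		if err := os.WriteFile(path, out, 0o644); err != nil {
			fmt.Println(err)
		}
	}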
	I0920 18:18:35.087777   74753 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 18:18:35.097812   74753 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 18:18:35.097947   74753 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 18:18:35.112200   74753 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 18:18:35.123381   74753 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:18:35.262386   74753 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 18:18:35.361152   74753 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 18:18:35.361257   74753 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 18:18:35.366321   74753 start.go:563] Will wait 60s for crictl version
	I0920 18:18:35.366390   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:18:35.370379   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 18:18:35.415080   74753 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 18:18:35.415182   74753 ssh_runner.go:195] Run: crio --version
	I0920 18:18:35.446934   74753 ssh_runner.go:195] Run: crio --version
	I0920 18:18:35.478823   74753 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 18:18:34.026138   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:36.525133   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:35.479956   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetIP
	I0920 18:18:35.483428   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:35.483820   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:35.483848   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:35.484079   74753 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0920 18:18:35.488686   74753 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 18:18:35.501184   74753 kubeadm.go:883] updating cluster {Name:no-preload-956403 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-956403 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.47 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 18:18:35.501348   74753 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 18:18:35.501391   74753 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:18:35.535377   74753 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0920 18:18:35.535400   74753 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0920 18:18:35.535466   74753 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 18:18:35.535499   74753 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0920 18:18:35.535510   74753 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0920 18:18:35.535534   74753 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0920 18:18:35.535538   74753 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0920 18:18:35.535513   74753 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0920 18:18:35.535625   74753 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0920 18:18:35.535483   74753 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:18:35.537100   74753 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 18:18:35.537104   74753 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0920 18:18:35.537126   74753 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0920 18:18:35.537091   74753 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0920 18:18:35.537164   74753 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0920 18:18:35.537180   74753 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0920 18:18:35.537191   74753 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0920 18:18:35.537105   74753 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:18:35.770046   74753 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0920 18:18:35.796364   74753 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I0920 18:18:35.800776   74753 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I0920 18:18:35.802291   74753 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I0920 18:18:35.845969   74753 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0920 18:18:35.860263   74753 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I0920 18:18:35.892349   74753 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 18:18:35.919269   74753 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I0920 18:18:35.919323   74753 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I0920 18:18:35.919375   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:18:35.929045   74753 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I0920 18:18:35.929096   74753 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I0920 18:18:35.929143   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:18:35.929181   74753 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I0920 18:18:35.929235   74753 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I0920 18:18:35.929299   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:18:35.968513   74753 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0920 18:18:35.968557   74753 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0920 18:18:35.968553   74753 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I0920 18:18:35.968589   74753 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I0920 18:18:35.968612   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:18:35.968636   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:18:35.984335   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0920 18:18:35.984366   74753 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I0920 18:18:35.984379   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0920 18:18:35.984403   74753 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 18:18:35.984433   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0920 18:18:35.984450   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:18:35.984474   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0920 18:18:35.984505   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0920 18:18:36.106947   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0920 18:18:36.106964   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 18:18:36.106954   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0920 18:18:36.107025   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0920 18:18:36.107112   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0920 18:18:36.107142   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0920 18:18:36.231892   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0920 18:18:36.231989   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0920 18:18:36.232026   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0920 18:18:36.232143   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0920 18:18:36.232193   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0920 18:18:36.232281   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 18:18:36.355534   74753 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I0920 18:18:36.355632   74753 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0920 18:18:36.355650   74753 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I0920 18:18:36.355730   74753 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I0920 18:18:36.358584   74753 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I0920 18:18:36.358653   74753 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I0920 18:18:36.358678   74753 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0920 18:18:36.358677   74753 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0920 18:18:36.358723   74753 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0920 18:18:36.358751   74753 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0920 18:18:36.367463   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 18:18:36.372389   74753 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I0920 18:18:36.372412   74753 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I0920 18:18:36.372457   74753 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I0920 18:18:36.372580   74753 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I0920 18:18:36.374496   74753 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0920 18:18:36.374538   74753 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I0920 18:18:36.374638   74753 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I0920 18:18:36.410860   74753 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0920 18:18:36.410961   74753 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0920 18:18:36.738003   74753 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:18:32.749877   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:33.249391   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:33.749510   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:34.250003   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:34.749967   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:35.249953   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:35.749992   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:36.250161   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:36.750339   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:37.249600   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:34.192782   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:36.692485   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:38.692626   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:39.024337   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:41.025364   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:38.480788   74753 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.108302901s)
	I0920 18:18:38.480819   74753 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I0920 18:18:38.480839   74753 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I0920 18:18:38.480869   74753 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1: (2.069867717s)
	I0920 18:18:38.480893   74753 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I0920 18:18:38.480896   74753 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I0920 18:18:38.480922   74753 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.742864605s)
	I0920 18:18:38.480970   74753 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0920 18:18:38.480993   74753 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:18:38.481031   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:18:40.340748   74753 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (1.85982838s)
	I0920 18:18:40.340782   74753 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I0920 18:18:40.340790   74753 ssh_runner.go:235] Completed: which crictl: (1.859743083s)
	I0920 18:18:40.340812   74753 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0920 18:18:40.340850   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:18:40.340869   74753 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0920 18:18:37.750269   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:38.249855   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:38.749845   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:39.249342   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:39.750306   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:40.250103   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:40.750346   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:41.250134   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:41.750136   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:42.249945   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:41.191300   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:43.193536   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:43.025860   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:45.527119   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:44.427309   74753 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (4.086436233s)
	I0920 18:18:44.427365   74753 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (4.086473186s)
	I0920 18:18:44.427395   74753 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0920 18:18:44.427400   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:18:44.427430   74753 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0920 18:18:44.427510   74753 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0920 18:18:46.393094   74753 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (1.965557326s)
	I0920 18:18:46.393133   74753 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I0920 18:18:46.393163   74753 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0920 18:18:46.393170   74753 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.965750119s)
	I0920 18:18:46.393216   74753 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0920 18:18:46.393241   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:18:42.749980   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:43.249416   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:43.750087   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:44.249972   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:44.749649   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:45.250128   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:45.750346   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:46.249611   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:46.749566   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:47.249814   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:45.193713   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:47.691882   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:48.027047   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:50.527076   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:47.771559   74753 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (1.37829872s)
	I0920 18:18:47.771595   74753 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I0920 18:18:47.771607   74753 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.378351302s)
	I0920 18:18:47.771618   74753 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0920 18:18:47.771645   74753 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0920 18:18:47.771668   74753 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0920 18:18:47.771726   74753 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0920 18:18:49.544117   74753 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.772423068s)
	I0920 18:18:49.544147   74753 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I0920 18:18:49.544159   74753 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.772401848s)
	I0920 18:18:49.544187   74753 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0920 18:18:49.544199   74753 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0920 18:18:49.544275   74753 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0920 18:18:50.198691   74753 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0920 18:18:50.198744   74753 cache_images.go:123] Successfully loaded all cached images
	I0920 18:18:50.198752   74753 cache_images.go:92] duration metric: took 14.66333409s to LoadCachedImages
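
The lines above walk through the cached-image load path: for each tarball under /var/lib/minikube/images the runner stats the remote copy, skips the transfer when it already exists ("copy: skipping ... (exists)"), loads it into the CRI-O store with `podman load`, and removes any stale tag with `crictl rmi`. A minimal, hypothetical Go sketch of that flow follows; the helper name loadCachedImage is illustrative, not minikube's real API, and the paths are taken from the log.

    // Sketch of the load-cached-images flow, assuming podman is installed and CRI-O shares
    // its image store with podman (so a loaded image becomes visible to crictl).
    package main

    import (
        "fmt"
        "os/exec"
    )

    func loadCachedImage(tarball string) error {
        // The "copy: skipping ... (exists)" lines above correspond to this stat check.
        if err := exec.Command("stat", tarball).Run(); err != nil {
            return fmt.Errorf("tarball %s not present: %w", tarball, err)
        }
        if out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput(); err != nil {
            return fmt.Errorf("podman load %s: %v: %s", tarball, err, out)
        }
        return nil
    }

    func main() {
        for _, img := range []string{
            "/var/lib/minikube/images/coredns_v1.11.3",
            "/var/lib/minikube/images/kube-proxy_v1.31.1",
            "/var/lib/minikube/images/etcd_3.5.15-0",
        } {
            if err := loadCachedImage(img); err != nil {
                fmt.Println("load failed:", err)
            }
        }
    }
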
	I0920 18:18:50.198766   74753 kubeadm.go:934] updating node { 192.168.50.47 8443 v1.31.1 crio true true} ...
	I0920 18:18:50.198900   74753 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-956403 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.47
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-956403 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 18:18:50.198988   74753 ssh_runner.go:195] Run: crio config
	I0920 18:18:50.249876   74753 cni.go:84] Creating CNI manager for ""
	I0920 18:18:50.249901   74753 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 18:18:50.249915   74753 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 18:18:50.249942   74753 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.47 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-956403 NodeName:no-preload-956403 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.47"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.47 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 18:18:50.250150   74753 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.47
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-956403"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.47
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.47"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
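
The block above is the fully rendered kubeadm config (kubeadm.go:187) generated from the options dumped at kubeadm.go:181. As a minimal sketch, assuming nothing beyond Go's standard text/template, the following shows how such a ClusterConfiguration fragment could be rendered from those options; the params struct and template are illustrative only, with values copied from the log.

    // Hypothetical sketch: rendering a kubeadm ClusterConfiguration fragment from a few of
    // the options visible in the log. Not minikube's actual template.
    package main

    import (
        "os"
        "text/template"
    )

    type params struct {
        AdvertiseAddress  string
        BindPort          int
        KubernetesVersion string
        PodSubnet         string
        ServiceSubnet     string
        ClusterName       string
    }

    const tmpl = "apiVersion: kubeadm.k8s.io/v1beta3\n" +
        "kind: ClusterConfiguration\n" +
        "clusterName: {{.ClusterName}}\n" +
        "kubernetesVersion: {{.KubernetesVersion}}\n" +
        "controlPlaneEndpoint: control-plane.minikube.internal:{{.BindPort}}\n" +
        "apiServer:\n" +
        "  certSANs: [\"127.0.0.1\", \"localhost\", \"{{.AdvertiseAddress}}\"]\n" +
        "networking:\n" +
        "  podSubnet: \"{{.PodSubnet}}\"\n" +
        "  serviceSubnet: {{.ServiceSubnet}}\n"

    func main() {
        t := template.Must(template.New("kubeadm").Parse(tmpl))
        // Values taken from the log lines above; minikube writes the result to
        // /var/tmp/minikube/kubeadm.yaml.new, here it just goes to stdout.
        p := params{
            AdvertiseAddress:  "192.168.50.47",
            BindPort:          8443,
            KubernetesVersion: "v1.31.1",
            PodSubnet:         "10.244.0.0/16",
            ServiceSubnet:     "10.96.0.0/12",
            ClusterName:       "mk",
        }
        if err := t.Execute(os.Stdout, p); err != nil {
            panic(err)
        }
    }
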
	I0920 18:18:50.250264   74753 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 18:18:50.262805   74753 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 18:18:50.262886   74753 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 18:18:50.272958   74753 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0920 18:18:50.290269   74753 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 18:18:50.306981   74753 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0920 18:18:50.324360   74753 ssh_runner.go:195] Run: grep 192.168.50.47	control-plane.minikube.internal$ /etc/hosts
	I0920 18:18:50.328382   74753 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.47	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 18:18:50.341021   74753 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:18:50.462105   74753 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:18:50.478780   74753 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/no-preload-956403 for IP: 192.168.50.47
	I0920 18:18:50.478804   74753 certs.go:194] generating shared ca certs ...
	I0920 18:18:50.478824   74753 certs.go:226] acquiring lock for ca certs: {Name:mkc7ef6c737c6bdc3fdd9dcff8f57029c020d8f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:18:50.479010   74753 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key
	I0920 18:18:50.479069   74753 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key
	I0920 18:18:50.479082   74753 certs.go:256] generating profile certs ...
	I0920 18:18:50.479188   74753 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/no-preload-956403/client.key
	I0920 18:18:50.479270   74753 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/no-preload-956403/apiserver.key.859cf278
	I0920 18:18:50.479335   74753 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/no-preload-956403/proxy-client.key
	I0920 18:18:50.479491   74753 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973.pem (1338 bytes)
	W0920 18:18:50.479534   74753 certs.go:480] ignoring /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973_empty.pem, impossibly tiny 0 bytes
	I0920 18:18:50.479549   74753 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 18:18:50.479596   74753 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem (1082 bytes)
	I0920 18:18:50.479633   74753 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem (1123 bytes)
	I0920 18:18:50.479668   74753 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem (1675 bytes)
	I0920 18:18:50.479771   74753 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem (1708 bytes)
	I0920 18:18:50.480696   74753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 18:18:50.514559   74753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 18:18:50.548790   74753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 18:18:50.581384   74753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0920 18:18:50.609138   74753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/no-preload-956403/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0920 18:18:50.641098   74753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/no-preload-956403/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 18:18:50.680479   74753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/no-preload-956403/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 18:18:50.705168   74753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/no-preload-956403/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0920 18:18:50.727603   74753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem --> /usr/share/ca-certificates/159732.pem (1708 bytes)
	I0920 18:18:50.750272   74753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 18:18:50.776117   74753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973.pem --> /usr/share/ca-certificates/15973.pem (1338 bytes)
	I0920 18:18:50.799799   74753 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 18:18:50.816250   74753 ssh_runner.go:195] Run: openssl version
	I0920 18:18:50.821680   74753 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/159732.pem && ln -fs /usr/share/ca-certificates/159732.pem /etc/ssl/certs/159732.pem"
	I0920 18:18:50.832295   74753 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/159732.pem
	I0920 18:18:50.836833   74753 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 17:04 /usr/share/ca-certificates/159732.pem
	I0920 18:18:50.836896   74753 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/159732.pem
	I0920 18:18:50.842626   74753 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/159732.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 18:18:50.853297   74753 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 18:18:50.864400   74753 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:18:50.868951   74753 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:18:50.869011   74753 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:18:50.874547   74753 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 18:18:50.885615   74753 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15973.pem && ln -fs /usr/share/ca-certificates/15973.pem /etc/ssl/certs/15973.pem"
	I0920 18:18:50.896960   74753 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15973.pem
	I0920 18:18:50.901798   74753 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 17:04 /usr/share/ca-certificates/15973.pem
	I0920 18:18:50.901879   74753 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15973.pem
	I0920 18:18:50.907920   74753 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15973.pem /etc/ssl/certs/51391683.0"
	I0920 18:18:50.919345   74753 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 18:18:50.923671   74753 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 18:18:50.929701   74753 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 18:18:50.935649   74753 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 18:18:50.942162   74753 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 18:18:50.948012   74753 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 18:18:50.954097   74753 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
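
The openssl invocations above do two things: `x509 -hash -noout` prints the subject hash used to name the /etc/ssl/certs/<hash>.0 symlinks (3ec20f2e.0, b5213941.0 and 51391683.0 in this run), and `-checkend 86400` asks whether a certificate expires within the next 24 hours. A small Go sketch of both checks, shelling out to openssl; paths follow the log and error handling is minimal.

    // Hypothetical sketch of the certificate hash and expiry checks seen above.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func subjectHash(cert string) (string, error) {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func expiresWithinADay(cert string) bool {
        // openssl exits non-zero when the certificate will expire inside the given window.
        return exec.Command("openssl", "x509", "-noout", "-in", cert, "-checkend", "86400").Run() != nil
    }

    func main() {
        if h, err := subjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err == nil {
            // The log links this cert as /etc/ssl/certs/b5213941.0.
            fmt.Printf("symlink target: /etc/ssl/certs/%s.0\n", h)
        }
        fmt.Println("expires within 24h:",
            expiresWithinADay("/var/lib/minikube/certs/apiserver-kubelet-client.crt"))
    }
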
	I0920 18:18:50.960442   74753 kubeadm.go:392] StartCluster: {Name:no-preload-956403 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-956403 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.47 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:18:50.960535   74753 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 18:18:50.960600   74753 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 18:18:50.998528   74753 cri.go:89] found id: ""
	I0920 18:18:50.998608   74753 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 18:18:51.008964   74753 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0920 18:18:51.008985   74753 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0920 18:18:51.009043   74753 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0920 18:18:51.018457   74753 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0920 18:18:51.019553   74753 kubeconfig.go:125] found "no-preload-956403" server: "https://192.168.50.47:8443"
	I0920 18:18:51.021712   74753 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0920 18:18:51.033439   74753 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.47
	I0920 18:18:51.033470   74753 kubeadm.go:1160] stopping kube-system containers ...
	I0920 18:18:51.033481   74753 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0920 18:18:51.033538   74753 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 18:18:51.070049   74753 cri.go:89] found id: ""
	I0920 18:18:51.070137   74753 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0920 18:18:51.087472   74753 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 18:18:51.098582   74753 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 18:18:51.098606   74753 kubeadm.go:157] found existing configuration files:
	
	I0920 18:18:51.098654   74753 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 18:18:51.107201   74753 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 18:18:51.107276   74753 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 18:18:51.116058   74753 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 18:18:51.124563   74753 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 18:18:51.124630   74753 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 18:18:51.134174   74753 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 18:18:51.142880   74753 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 18:18:51.142944   74753 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 18:18:51.152181   74753 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 18:18:51.161942   74753 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 18:18:51.162012   74753 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 18:18:51.171615   74753 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 18:18:51.180728   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:18:51.292140   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:18:52.019018   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:18:52.239327   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:18:52.317900   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:18:52.433910   74753 api_server.go:52] waiting for apiserver process to appear ...
	I0920 18:18:52.433991   74753 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
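
Both log streams here (PIDs 74753 and 75577) wait for the apiserver process by re-running `sudo pgrep -xnf kube-apiserver.*minikube.*` roughly twice per second until it matches (api_server.go:52/72). A minimal Go sketch of that wait loop; the two-minute budget is illustrative.

    // Sketch of the "waiting for apiserver process to appear" poll: pgrep exits 0 only when
    // a matching process exists.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(2 * time.Minute) // illustrative budget
        for time.Now().Before(deadline) {
            if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
                fmt.Println("kube-apiserver process is up")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for kube-apiserver process")
    }
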
	I0920 18:18:47.749487   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:48.249612   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:48.750324   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:49.250006   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:49.749667   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:50.249802   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:50.749597   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:51.250236   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:51.750203   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:52.250132   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:49.693075   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:52.192547   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:53.024979   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:55.025225   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:52.934956   74753 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:53.434681   74753 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:53.451797   74753 api_server.go:72] duration metric: took 1.017867426s to wait for apiserver process to appear ...
	I0920 18:18:53.451828   74753 api_server.go:88] waiting for apiserver healthz status ...
	I0920 18:18:53.451851   74753 api_server.go:253] Checking apiserver healthz at https://192.168.50.47:8443/healthz ...
	I0920 18:18:53.452366   74753 api_server.go:269] stopped: https://192.168.50.47:8443/healthz: Get "https://192.168.50.47:8443/healthz": dial tcp 192.168.50.47:8443: connect: connection refused
	I0920 18:18:53.952175   74753 api_server.go:253] Checking apiserver healthz at https://192.168.50.47:8443/healthz ...
	I0920 18:18:55.972801   74753 api_server.go:279] https://192.168.50.47:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 18:18:55.972835   74753 api_server.go:103] status: https://192.168.50.47:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 18:18:55.972854   74753 api_server.go:253] Checking apiserver healthz at https://192.168.50.47:8443/healthz ...
	I0920 18:18:56.018532   74753 api_server.go:279] https://192.168.50.47:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 18:18:56.018563   74753 api_server.go:103] status: https://192.168.50.47:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 18:18:56.452127   74753 api_server.go:253] Checking apiserver healthz at https://192.168.50.47:8443/healthz ...
	I0920 18:18:56.459752   74753 api_server.go:279] https://192.168.50.47:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 18:18:56.459796   74753 api_server.go:103] status: https://192.168.50.47:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 18:18:56.952284   74753 api_server.go:253] Checking apiserver healthz at https://192.168.50.47:8443/healthz ...
	I0920 18:18:56.959496   74753 api_server.go:279] https://192.168.50.47:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 18:18:56.959553   74753 api_server.go:103] status: https://192.168.50.47:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 18:18:57.452049   74753 api_server.go:253] Checking apiserver healthz at https://192.168.50.47:8443/healthz ...
	I0920 18:18:57.456586   74753 api_server.go:279] https://192.168.50.47:8443/healthz returned 200:
	ok
	I0920 18:18:57.463754   74753 api_server.go:141] control plane version: v1.31.1
	I0920 18:18:57.463785   74753 api_server.go:131] duration metric: took 4.011948835s to wait for apiserver health ...
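
The preceding exchange is the usual healthz ramp-up: 403 while the anonymous probe is still forbidden, 500 while the rbac/bootstrap-roles and bootstrap-system-priority-classes post-start hooks finish, then 200 "ok". A sketch of such a poll loop in Go; TLS verification is skipped here purely for illustration (minikube itself authenticates with the cluster's client certificates), and the endpoint is the one from the log.

    // Sketch of polling https://<node>:8443/healthz every 500ms until it returns 200 "ok".
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout:   2 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Printf("healthz: %s\n", body) // "ok"
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver did not become healthy within %s", timeout)
    }

    func main() {
        _ = waitForHealthz("https://192.168.50.47:8443/healthz", 4*time.Minute)
    }
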
	I0920 18:18:57.463795   74753 cni.go:84] Creating CNI manager for ""
	I0920 18:18:57.463804   74753 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 18:18:57.465916   74753 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 18:18:57.467416   74753 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 18:18:57.485487   74753 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
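
The 496-byte 1-k8s.conflist copied above configures the bridge CNI chosen at cni.go:146 for the 10.244.0.0/16 pod CIDR. Its exact contents are not reproduced in the log; the Go sketch below only illustrates the general shape of a bridge + portmap conflist under those assumptions.

    // Hypothetical bridge CNI config of roughly the kind minikube writes to
    // /etc/cni/net.d/1-k8s.conflist; printed to stdout here instead of copied over SSH.
    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        conflist := map[string]any{
            "cniVersion": "0.4.0",
            "name":       "k8s",
            "plugins": []map[string]any{
                {
                    "type":        "bridge",
                    "bridge":      "bridge",
                    "isGateway":   true,
                    "ipMasq":      true,
                    "hairpinMode": true,
                    "ipam": map[string]any{
                        "type":   "host-local",
                        "subnet": "10.244.0.0/16",
                    },
                },
                {"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
            },
        }
        out, _ := json.MarshalIndent(conflist, "", "  ")
        fmt.Println(string(out))
    }
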
	I0920 18:18:57.529426   74753 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 18:18:57.540046   74753 system_pods.go:59] 8 kube-system pods found
	I0920 18:18:57.540100   74753 system_pods.go:61] "coredns-7c65d6cfc9-j2t5h" [dd2636f1-3200-4f22-957c-046277c9be8c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0920 18:18:57.540113   74753 system_pods.go:61] "etcd-no-preload-956403" [daf84be0-e673-4685-a998-47ec64664484] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0920 18:18:57.540127   74753 system_pods.go:61] "kube-apiserver-no-preload-956403" [416217e6-3c91-49ec-bd69-812a49b4afd9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0920 18:18:57.540139   74753 system_pods.go:61] "kube-controller-manager-no-preload-956403" [30fae740-b073-4619-bca5-010ca90b2667] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0920 18:18:57.540148   74753 system_pods.go:61] "kube-proxy-sz4bm" [269600fb-ef65-4b17-8c07-76c79e35f5a8] Running
	I0920 18:18:57.540160   74753 system_pods.go:61] "kube-scheduler-no-preload-956403" [46432927-e506-4665-8825-c5f6a2ed0458] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0920 18:18:57.540174   74753 system_pods.go:61] "metrics-server-6867b74b74-tfsff" [599ba06a-6d4d-483b-b390-a3595a814757] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 18:18:57.540184   74753 system_pods.go:61] "storage-provisioner" [df661627-7d32-4467-9805-1ae65d4fa35c] Running
	I0920 18:18:57.540196   74753 system_pods.go:74] duration metric: took 10.749595ms to wait for pod list to return data ...
	I0920 18:18:57.540208   74753 node_conditions.go:102] verifying NodePressure condition ...
	I0920 18:18:57.544401   74753 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 18:18:57.544438   74753 node_conditions.go:123] node cpu capacity is 2
	I0920 18:18:57.544451   74753 node_conditions.go:105] duration metric: took 4.237618ms to run NodePressure ...
	I0920 18:18:57.544471   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:18:52.749468   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:53.249519   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:53.750056   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:54.250286   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:54.750240   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:55.249980   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:55.750192   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:56.249940   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:56.749289   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:57.249615   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:54.193519   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:56.693152   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:57.818736   74753 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0920 18:18:57.823058   74753 kubeadm.go:739] kubelet initialised
	I0920 18:18:57.823082   74753 kubeadm.go:740] duration metric: took 4.316691ms waiting for restarted kubelet to initialise ...
	I0920 18:18:57.823091   74753 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:18:57.827594   74753 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-j2t5h" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:57.833207   74753 pod_ready.go:98] node "no-preload-956403" hosting pod "coredns-7c65d6cfc9-j2t5h" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956403" has status "Ready":"False"
	I0920 18:18:57.833240   74753 pod_ready.go:82] duration metric: took 5.620335ms for pod "coredns-7c65d6cfc9-j2t5h" in "kube-system" namespace to be "Ready" ...
	E0920 18:18:57.833252   74753 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-956403" hosting pod "coredns-7c65d6cfc9-j2t5h" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956403" has status "Ready":"False"
	I0920 18:18:57.833269   74753 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-956403" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:57.838687   74753 pod_ready.go:98] node "no-preload-956403" hosting pod "etcd-no-preload-956403" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956403" has status "Ready":"False"
	I0920 18:18:57.838723   74753 pod_ready.go:82] duration metric: took 5.441795ms for pod "etcd-no-preload-956403" in "kube-system" namespace to be "Ready" ...
	E0920 18:18:57.838737   74753 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-956403" hosting pod "etcd-no-preload-956403" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956403" has status "Ready":"False"
	I0920 18:18:57.838747   74753 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-956403" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:57.848481   74753 pod_ready.go:98] node "no-preload-956403" hosting pod "kube-apiserver-no-preload-956403" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956403" has status "Ready":"False"
	I0920 18:18:57.848510   74753 pod_ready.go:82] duration metric: took 9.754053ms for pod "kube-apiserver-no-preload-956403" in "kube-system" namespace to be "Ready" ...
	E0920 18:18:57.848520   74753 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-956403" hosting pod "kube-apiserver-no-preload-956403" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956403" has status "Ready":"False"
	I0920 18:18:57.848530   74753 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-956403" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:57.934356   74753 pod_ready.go:98] node "no-preload-956403" hosting pod "kube-controller-manager-no-preload-956403" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956403" has status "Ready":"False"
	I0920 18:18:57.934383   74753 pod_ready.go:82] duration metric: took 85.842265ms for pod "kube-controller-manager-no-preload-956403" in "kube-system" namespace to be "Ready" ...
	E0920 18:18:57.934392   74753 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-956403" hosting pod "kube-controller-manager-no-preload-956403" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956403" has status "Ready":"False"
	I0920 18:18:57.934397   74753 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-sz4bm" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:58.332467   74753 pod_ready.go:98] node "no-preload-956403" hosting pod "kube-proxy-sz4bm" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956403" has status "Ready":"False"
	I0920 18:18:58.332495   74753 pod_ready.go:82] duration metric: took 398.088821ms for pod "kube-proxy-sz4bm" in "kube-system" namespace to be "Ready" ...
	E0920 18:18:58.332504   74753 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-956403" hosting pod "kube-proxy-sz4bm" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956403" has status "Ready":"False"
	I0920 18:18:58.332510   74753 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-956403" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:58.732836   74753 pod_ready.go:98] node "no-preload-956403" hosting pod "kube-scheduler-no-preload-956403" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956403" has status "Ready":"False"
	I0920 18:18:58.732859   74753 pod_ready.go:82] duration metric: took 400.343274ms for pod "kube-scheduler-no-preload-956403" in "kube-system" namespace to be "Ready" ...
	E0920 18:18:58.732868   74753 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-956403" hosting pod "kube-scheduler-no-preload-956403" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956403" has status "Ready":"False"
	I0920 18:18:58.732874   74753 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:59.132465   74753 pod_ready.go:98] node "no-preload-956403" hosting pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956403" has status "Ready":"False"
	I0920 18:18:59.132489   74753 pod_ready.go:82] duration metric: took 399.606625ms for pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace to be "Ready" ...
	E0920 18:18:59.132503   74753 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-956403" hosting pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956403" has status "Ready":"False"
	I0920 18:18:59.132510   74753 pod_ready.go:39] duration metric: took 1.309409155s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
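
The pod_ready.go waits above treat a pod as "Ready" only when its PodReady condition is True, and they skip the check entirely while the hosting node itself is not Ready, which is why every pod is skipped in this pass. A hedged client-go sketch of the underlying condition check; the kubeconfig path and pod name are taken from this run, and the 4m budget mirrors the log.

    // Sketch of a PodReady wait using client-go, assuming the kubeconfig written by the test.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func podReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19672-8777/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx := context.Background()
        for i := 0; i < 480; i++ { // up to 4m0s, matching the wait budget in the log
            pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-7c65d6cfc9-j2t5h", metav1.GetOptions{})
            if err == nil && podReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for pod to become Ready")
    }
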
	I0920 18:18:59.132526   74753 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 18:18:59.147163   74753 ops.go:34] apiserver oom_adj: -16
	I0920 18:18:59.147190   74753 kubeadm.go:597] duration metric: took 8.138198351s to restartPrimaryControlPlane
	I0920 18:18:59.147200   74753 kubeadm.go:394] duration metric: took 8.186768244s to StartCluster
	I0920 18:18:59.147214   74753 settings.go:142] acquiring lock: {Name:mk2a13c58cdc0faf8cddca5d6716175d45db9bfd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:18:59.147287   74753 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19672-8777/kubeconfig
	I0920 18:18:59.149036   74753 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/kubeconfig: {Name:mkf32a4c736808e023459b2f0e40188618a38db1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:18:59.149303   74753 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.47 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 18:18:59.149381   74753 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0920 18:18:59.149479   74753 addons.go:69] Setting storage-provisioner=true in profile "no-preload-956403"
	I0920 18:18:59.149503   74753 addons.go:234] Setting addon storage-provisioner=true in "no-preload-956403"
	I0920 18:18:59.149501   74753 addons.go:69] Setting default-storageclass=true in profile "no-preload-956403"
	W0920 18:18:59.149515   74753 addons.go:243] addon storage-provisioner should already be in state true
	I0920 18:18:59.149526   74753 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-956403"
	I0920 18:18:59.149520   74753 addons.go:69] Setting metrics-server=true in profile "no-preload-956403"
	I0920 18:18:59.149546   74753 host.go:66] Checking if "no-preload-956403" exists ...
	I0920 18:18:59.149558   74753 addons.go:234] Setting addon metrics-server=true in "no-preload-956403"
	W0920 18:18:59.149575   74753 addons.go:243] addon metrics-server should already be in state true
	I0920 18:18:59.149588   74753 config.go:182] Loaded profile config "no-preload-956403": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:18:59.149610   74753 host.go:66] Checking if "no-preload-956403" exists ...
	I0920 18:18:59.149847   74753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:18:59.149889   74753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:18:59.149944   74753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:18:59.149954   74753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:18:59.150003   74753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:18:59.150075   74753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:18:59.151276   74753 out.go:177] * Verifying Kubernetes components...
	I0920 18:18:59.152577   74753 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:18:59.178294   74753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42565
	I0920 18:18:59.178917   74753 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:18:59.179414   74753 main.go:141] libmachine: Using API Version  1
	I0920 18:18:59.179435   74753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:18:59.179869   74753 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:18:59.180059   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetState
	I0920 18:18:59.183556   74753 addons.go:234] Setting addon default-storageclass=true in "no-preload-956403"
	W0920 18:18:59.183575   74753 addons.go:243] addon default-storageclass should already be in state true
	I0920 18:18:59.183606   74753 host.go:66] Checking if "no-preload-956403" exists ...
	I0920 18:18:59.183970   74753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:18:59.184011   74753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:18:59.195338   74753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43759
	I0920 18:18:59.195739   74753 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:18:59.196290   74753 main.go:141] libmachine: Using API Version  1
	I0920 18:18:59.196316   74753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:18:59.196742   74753 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:18:59.197211   74753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33531
	I0920 18:18:59.197327   74753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:18:59.197375   74753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:18:59.197552   74753 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:18:59.198055   74753 main.go:141] libmachine: Using API Version  1
	I0920 18:18:59.198080   74753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:18:59.198420   74753 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:18:59.198997   74753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:18:59.199037   74753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:18:59.203164   74753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40055
	I0920 18:18:59.203700   74753 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:18:59.204320   74753 main.go:141] libmachine: Using API Version  1
	I0920 18:18:59.204341   74753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:18:59.204745   74753 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:18:59.205399   74753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:18:59.205440   74753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:18:59.215457   74753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37991
	I0920 18:18:59.215953   74753 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:18:59.216312   74753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35461
	I0920 18:18:59.216521   74753 main.go:141] libmachine: Using API Version  1
	I0920 18:18:59.216723   74753 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:18:59.216826   74753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:18:59.217221   74753 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:18:59.217403   74753 main.go:141] libmachine: Using API Version  1
	I0920 18:18:59.217414   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetState
	I0920 18:18:59.217452   74753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:18:59.217942   74753 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:18:59.218403   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetState
	I0920 18:18:59.219340   74753 main.go:141] libmachine: (no-preload-956403) Calling .DriverName
	I0920 18:18:59.220536   74753 main.go:141] libmachine: (no-preload-956403) Calling .DriverName
	I0920 18:18:59.221934   74753 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0920 18:18:59.222788   74753 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:18:59.223744   74753 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0920 18:18:59.223766   74753 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0920 18:18:59.223788   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHHostname
	I0920 18:18:59.224567   74753 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 18:18:59.224588   74753 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 18:18:59.224607   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHHostname
	I0920 18:18:59.225333   74753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45207
	I0920 18:18:59.225992   74753 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:18:59.226975   74753 main.go:141] libmachine: Using API Version  1
	I0920 18:18:59.226991   74753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:18:59.227472   74753 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:18:59.227788   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetState
	I0920 18:18:59.227818   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:59.228234   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:59.228282   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:59.228441   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHPort
	I0920 18:18:59.228636   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHKeyPath
	I0920 18:18:59.228671   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:59.228821   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHUsername
	I0920 18:18:59.228960   74753 sshutil.go:53] new ssh client: &{IP:192.168.50.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/no-preload-956403/id_rsa Username:docker}
	I0920 18:18:59.229280   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:59.229297   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:59.229478   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHPort
	I0920 18:18:59.229651   74753 main.go:141] libmachine: (no-preload-956403) Calling .DriverName
	I0920 18:18:59.229807   74753 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 18:18:59.229817   74753 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 18:18:59.229843   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHHostname
	I0920 18:18:59.230195   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHKeyPath
	I0920 18:18:59.230729   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHUsername
	I0920 18:18:59.230899   74753 sshutil.go:53] new ssh client: &{IP:192.168.50.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/no-preload-956403/id_rsa Username:docker}
	I0920 18:18:59.232419   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:59.232844   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:59.232877   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:59.233020   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHPort
	I0920 18:18:59.233201   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHKeyPath
	I0920 18:18:59.233311   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHUsername
	I0920 18:18:59.233424   74753 sshutil.go:53] new ssh client: &{IP:192.168.50.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/no-preload-956403/id_rsa Username:docker}
	I0920 18:18:59.373284   74753 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:18:59.391114   74753 node_ready.go:35] waiting up to 6m0s for node "no-preload-956403" to be "Ready" ...
	I0920 18:18:59.475607   74753 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 18:18:59.530301   74753 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0920 18:18:59.530320   74753 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0920 18:18:59.530804   74753 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 18:18:59.607246   74753 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0920 18:18:59.607272   74753 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0920 18:18:59.685234   74753 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 18:18:59.685273   74753 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0920 18:18:59.753902   74753 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 18:19:00.029605   74753 main.go:141] libmachine: Making call to close driver server
	I0920 18:19:00.029630   74753 main.go:141] libmachine: (no-preload-956403) Calling .Close
	I0920 18:19:00.029968   74753 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:19:00.030016   74753 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:19:00.030032   74753 main.go:141] libmachine: Making call to close driver server
	I0920 18:19:00.030039   74753 main.go:141] libmachine: (no-preload-956403) Calling .Close
	I0920 18:19:00.030285   74753 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:19:00.030299   74753 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:19:00.049472   74753 main.go:141] libmachine: Making call to close driver server
	I0920 18:19:00.049500   74753 main.go:141] libmachine: (no-preload-956403) Calling .Close
	I0920 18:19:00.049765   74753 main.go:141] libmachine: (no-preload-956403) DBG | Closing plugin on server side
	I0920 18:19:00.049781   74753 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:19:00.049797   74753 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:19:00.790635   74753 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.259791892s)
	I0920 18:19:00.790694   74753 main.go:141] libmachine: Making call to close driver server
	I0920 18:19:00.790703   74753 main.go:141] libmachine: (no-preload-956403) Calling .Close
	I0920 18:19:00.791004   74753 main.go:141] libmachine: (no-preload-956403) DBG | Closing plugin on server side
	I0920 18:19:00.791007   74753 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:19:00.791075   74753 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:19:00.791100   74753 main.go:141] libmachine: Making call to close driver server
	I0920 18:19:00.791110   74753 main.go:141] libmachine: (no-preload-956403) Calling .Close
	I0920 18:19:00.791339   74753 main.go:141] libmachine: (no-preload-956403) DBG | Closing plugin on server side
	I0920 18:19:00.791347   74753 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:19:00.791358   74753 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:19:00.829250   74753 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.075301587s)
	I0920 18:19:00.829297   74753 main.go:141] libmachine: Making call to close driver server
	I0920 18:19:00.829311   74753 main.go:141] libmachine: (no-preload-956403) Calling .Close
	I0920 18:19:00.829607   74753 main.go:141] libmachine: (no-preload-956403) DBG | Closing plugin on server side
	I0920 18:19:00.829612   74753 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:19:00.829632   74753 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:19:00.829641   74753 main.go:141] libmachine: Making call to close driver server
	I0920 18:19:00.829647   74753 main.go:141] libmachine: (no-preload-956403) Calling .Close
	I0920 18:19:00.829905   74753 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:19:00.829927   74753 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:19:00.829926   74753 main.go:141] libmachine: (no-preload-956403) DBG | Closing plugin on server side
	I0920 18:19:00.829938   74753 addons.go:475] Verifying addon metrics-server=true in "no-preload-956403"
	I0920 18:19:00.831897   74753 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0920 18:18:57.524979   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:59.526500   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:02.024579   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:00.833093   74753 addons.go:510] duration metric: took 1.683722999s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0920 18:19:01.394507   74753 node_ready.go:53] node "no-preload-956403" has status "Ready":"False"
	I0920 18:18:57.750113   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:58.250100   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:58.750133   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:59.249427   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:59.749350   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:00.250267   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:00.749723   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:01.249549   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:01.749698   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:02.250043   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:59.195097   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:01.692391   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:03.692510   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:04.024713   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:06.525591   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:03.395035   74753 node_ready.go:53] node "no-preload-956403" has status "Ready":"False"
	I0920 18:19:05.895477   74753 node_ready.go:53] node "no-preload-956403" has status "Ready":"False"
	I0920 18:19:06.395177   74753 node_ready.go:49] node "no-preload-956403" has status "Ready":"True"
	I0920 18:19:06.395211   74753 node_ready.go:38] duration metric: took 7.0040677s for node "no-preload-956403" to be "Ready" ...
	I0920 18:19:06.395224   74753 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:19:06.400929   74753 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-j2t5h" in "kube-system" namespace to be "Ready" ...
	I0920 18:19:06.406052   74753 pod_ready.go:93] pod "coredns-7c65d6cfc9-j2t5h" in "kube-system" namespace has status "Ready":"True"
	I0920 18:19:06.406076   74753 pod_ready.go:82] duration metric: took 5.118178ms for pod "coredns-7c65d6cfc9-j2t5h" in "kube-system" namespace to be "Ready" ...
	I0920 18:19:06.406088   74753 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-956403" in "kube-system" namespace to be "Ready" ...
	I0920 18:19:07.412930   74753 pod_ready.go:93] pod "etcd-no-preload-956403" in "kube-system" namespace has status "Ready":"True"
	I0920 18:19:07.412946   74753 pod_ready.go:82] duration metric: took 1.006851075s for pod "etcd-no-preload-956403" in "kube-system" namespace to be "Ready" ...
	I0920 18:19:07.412955   74753 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-956403" in "kube-system" namespace to be "Ready" ...
	I0920 18:19:07.420312   74753 pod_ready.go:93] pod "kube-apiserver-no-preload-956403" in "kube-system" namespace has status "Ready":"True"
	I0920 18:19:07.420342   74753 pod_ready.go:82] duration metric: took 7.380815ms for pod "kube-apiserver-no-preload-956403" in "kube-system" namespace to be "Ready" ...
	I0920 18:19:07.420354   74753 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-956403" in "kube-system" namespace to be "Ready" ...
	I0920 18:19:02.750082   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:03.249861   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:03.749400   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:04.250213   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:04.749806   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:05.249569   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:05.750115   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:06.250182   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:06.750188   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:07.250041   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:06.191328   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:08.192222   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:09.024510   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:11.025023   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:08.426149   74753 pod_ready.go:93] pod "kube-controller-manager-no-preload-956403" in "kube-system" namespace has status "Ready":"True"
	I0920 18:19:08.426173   74753 pod_ready.go:82] duration metric: took 1.005811855s for pod "kube-controller-manager-no-preload-956403" in "kube-system" namespace to be "Ready" ...
	I0920 18:19:08.426182   74753 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-sz4bm" in "kube-system" namespace to be "Ready" ...
	I0920 18:19:08.431446   74753 pod_ready.go:93] pod "kube-proxy-sz4bm" in "kube-system" namespace has status "Ready":"True"
	I0920 18:19:08.431474   74753 pod_ready.go:82] duration metric: took 5.284488ms for pod "kube-proxy-sz4bm" in "kube-system" namespace to be "Ready" ...
	I0920 18:19:08.431486   74753 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-956403" in "kube-system" namespace to be "Ready" ...
	I0920 18:19:08.794973   74753 pod_ready.go:93] pod "kube-scheduler-no-preload-956403" in "kube-system" namespace has status "Ready":"True"
	I0920 18:19:08.794997   74753 pod_ready.go:82] duration metric: took 363.504269ms for pod "kube-scheduler-no-preload-956403" in "kube-system" namespace to be "Ready" ...
	I0920 18:19:08.795005   74753 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace to be "Ready" ...
	I0920 18:19:10.802181   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:07.749479   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:08.249641   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:08.749637   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:09.249803   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:09.749534   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:10.249365   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:10.749366   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:11.250045   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:11.750298   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:12.249766   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:10.192631   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:12.692470   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:13.525217   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:16.025516   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:13.300755   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:15.301345   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:12.749447   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:13.249356   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:13.749514   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:14.249942   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:14.749988   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:15.249405   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:15.749620   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:16.249962   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:16.750325   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:17.250198   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:15.191379   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:17.692235   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:18.525176   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:21.023910   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:17.801315   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:20.302369   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:22.302578   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:17.749509   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:18.250076   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:18.749518   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:19.249440   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:19.750156   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:20.249686   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:20.750315   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:21.249755   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:21.749370   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:22.249767   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:20.192027   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:22.192925   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:23.024677   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:25.025216   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:24.801558   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:26.802796   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:22.749289   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:23.249386   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:23.749870   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:24.249424   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:24.750349   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:25.249582   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:19:25.249657   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:19:25.287604   75577 cri.go:89] found id: ""
	I0920 18:19:25.287633   75577 logs.go:276] 0 containers: []
	W0920 18:19:25.287641   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:19:25.287647   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:19:25.287692   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:19:25.322093   75577 cri.go:89] found id: ""
	I0920 18:19:25.322127   75577 logs.go:276] 0 containers: []
	W0920 18:19:25.322137   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:19:25.322144   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:19:25.322194   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:19:25.354812   75577 cri.go:89] found id: ""
	I0920 18:19:25.354840   75577 logs.go:276] 0 containers: []
	W0920 18:19:25.354851   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:19:25.354863   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:19:25.354928   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:19:25.387777   75577 cri.go:89] found id: ""
	I0920 18:19:25.387804   75577 logs.go:276] 0 containers: []
	W0920 18:19:25.387813   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:19:25.387819   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:19:25.387867   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:19:25.420325   75577 cri.go:89] found id: ""
	I0920 18:19:25.420354   75577 logs.go:276] 0 containers: []
	W0920 18:19:25.420364   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:19:25.420372   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:19:25.420437   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:19:25.454824   75577 cri.go:89] found id: ""
	I0920 18:19:25.454853   75577 logs.go:276] 0 containers: []
	W0920 18:19:25.454864   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:19:25.454871   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:19:25.454933   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:19:25.493520   75577 cri.go:89] found id: ""
	I0920 18:19:25.493548   75577 logs.go:276] 0 containers: []
	W0920 18:19:25.493559   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:19:25.493566   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:19:25.493639   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:19:25.528795   75577 cri.go:89] found id: ""
	I0920 18:19:25.528820   75577 logs.go:276] 0 containers: []
	W0920 18:19:25.528828   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:19:25.528836   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:19:25.528847   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:19:25.571411   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:19:25.571442   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:19:25.625648   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:19:25.625693   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:19:25.639891   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:19:25.639920   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:19:25.769263   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:19:25.769289   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:19:25.769303   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:19:24.691757   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:26.695197   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:27.524327   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:29.525192   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:32.024450   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:29.301400   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:31.301988   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:28.346894   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:28.359891   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:19:28.359967   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:19:28.396117   75577 cri.go:89] found id: ""
	I0920 18:19:28.396144   75577 logs.go:276] 0 containers: []
	W0920 18:19:28.396152   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:19:28.396158   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:19:28.396206   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:19:28.447888   75577 cri.go:89] found id: ""
	I0920 18:19:28.447923   75577 logs.go:276] 0 containers: []
	W0920 18:19:28.447933   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:19:28.447941   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:19:28.447998   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:19:28.485018   75577 cri.go:89] found id: ""
	I0920 18:19:28.485046   75577 logs.go:276] 0 containers: []
	W0920 18:19:28.485054   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:19:28.485060   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:19:28.485108   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:19:28.520904   75577 cri.go:89] found id: ""
	I0920 18:19:28.520934   75577 logs.go:276] 0 containers: []
	W0920 18:19:28.520942   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:19:28.520948   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:19:28.520996   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:19:28.561375   75577 cri.go:89] found id: ""
	I0920 18:19:28.561407   75577 logs.go:276] 0 containers: []
	W0920 18:19:28.561415   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:19:28.561421   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:19:28.561467   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:19:28.597643   75577 cri.go:89] found id: ""
	I0920 18:19:28.597671   75577 logs.go:276] 0 containers: []
	W0920 18:19:28.597679   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:19:28.597685   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:19:28.597747   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:19:28.632885   75577 cri.go:89] found id: ""
	I0920 18:19:28.632912   75577 logs.go:276] 0 containers: []
	W0920 18:19:28.632921   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:19:28.632926   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:19:28.632973   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:19:28.670639   75577 cri.go:89] found id: ""
	I0920 18:19:28.670665   75577 logs.go:276] 0 containers: []
	W0920 18:19:28.670675   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:19:28.670699   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:19:28.670714   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:19:28.749623   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:19:28.749659   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:19:28.790808   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:19:28.790831   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:19:28.842027   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:19:28.842070   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:19:28.855927   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:19:28.855954   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:19:28.935807   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:19:31.436321   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:31.450159   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:19:31.450221   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:19:31.485444   75577 cri.go:89] found id: ""
	I0920 18:19:31.485483   75577 logs.go:276] 0 containers: []
	W0920 18:19:31.485494   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:19:31.485502   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:19:31.485562   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:19:31.520888   75577 cri.go:89] found id: ""
	I0920 18:19:31.520918   75577 logs.go:276] 0 containers: []
	W0920 18:19:31.520927   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:19:31.520941   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:19:31.521007   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:19:31.555001   75577 cri.go:89] found id: ""
	I0920 18:19:31.555029   75577 logs.go:276] 0 containers: []
	W0920 18:19:31.555040   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:19:31.555047   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:19:31.555111   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:19:31.588766   75577 cri.go:89] found id: ""
	I0920 18:19:31.588794   75577 logs.go:276] 0 containers: []
	W0920 18:19:31.588802   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:19:31.588808   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:19:31.588872   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:19:31.622006   75577 cri.go:89] found id: ""
	I0920 18:19:31.622037   75577 logs.go:276] 0 containers: []
	W0920 18:19:31.622048   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:19:31.622056   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:19:31.622110   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:19:31.659475   75577 cri.go:89] found id: ""
	I0920 18:19:31.659508   75577 logs.go:276] 0 containers: []
	W0920 18:19:31.659519   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:19:31.659527   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:19:31.659589   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:19:31.695380   75577 cri.go:89] found id: ""
	I0920 18:19:31.695415   75577 logs.go:276] 0 containers: []
	W0920 18:19:31.695426   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:19:31.695436   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:19:31.695521   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:19:31.729507   75577 cri.go:89] found id: ""
	I0920 18:19:31.729540   75577 logs.go:276] 0 containers: []
	W0920 18:19:31.729550   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:19:31.729561   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:19:31.729574   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:19:31.781857   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:19:31.781908   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:19:31.795325   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:19:31.795356   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:19:31.868684   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:19:31.868708   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:19:31.868722   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:19:31.945334   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:19:31.945371   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:19:29.191780   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:31.192712   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:33.693996   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:34.024591   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:36.024999   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:33.801443   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:36.300715   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:34.485723   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:34.499039   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:19:34.499106   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:19:34.534664   75577 cri.go:89] found id: ""
	I0920 18:19:34.534697   75577 logs.go:276] 0 containers: []
	W0920 18:19:34.534709   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:19:34.534717   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:19:34.534777   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:19:34.567515   75577 cri.go:89] found id: ""
	I0920 18:19:34.567545   75577 logs.go:276] 0 containers: []
	W0920 18:19:34.567556   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:19:34.567564   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:19:34.567624   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:19:34.601653   75577 cri.go:89] found id: ""
	I0920 18:19:34.601693   75577 logs.go:276] 0 containers: []
	W0920 18:19:34.601704   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:19:34.601712   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:19:34.601775   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:19:34.635238   75577 cri.go:89] found id: ""
	I0920 18:19:34.635271   75577 logs.go:276] 0 containers: []
	W0920 18:19:34.635282   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:19:34.635291   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:19:34.635361   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:19:34.669665   75577 cri.go:89] found id: ""
	I0920 18:19:34.669689   75577 logs.go:276] 0 containers: []
	W0920 18:19:34.669697   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:19:34.669703   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:19:34.669751   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:19:34.705984   75577 cri.go:89] found id: ""
	I0920 18:19:34.706012   75577 logs.go:276] 0 containers: []
	W0920 18:19:34.706022   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:19:34.706032   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:19:34.706110   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:19:34.740052   75577 cri.go:89] found id: ""
	I0920 18:19:34.740079   75577 logs.go:276] 0 containers: []
	W0920 18:19:34.740087   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:19:34.740092   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:19:34.740139   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:19:34.774930   75577 cri.go:89] found id: ""
	I0920 18:19:34.774962   75577 logs.go:276] 0 containers: []
	W0920 18:19:34.774973   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:19:34.774984   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:19:34.775081   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:19:34.791674   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:19:34.791714   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:19:34.894988   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:19:34.895019   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:19:34.895035   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:19:34.973785   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:19:34.973823   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:19:35.010928   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:19:35.010964   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:19:37.561543   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:37.574865   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:19:37.574925   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:19:37.609145   75577 cri.go:89] found id: ""
	I0920 18:19:37.609170   75577 logs.go:276] 0 containers: []
	W0920 18:19:37.609178   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:19:37.609183   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:19:37.609247   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:19:37.645080   75577 cri.go:89] found id: ""
	I0920 18:19:37.645107   75577 logs.go:276] 0 containers: []
	W0920 18:19:37.645115   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:19:37.645121   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:19:37.645168   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:19:37.680060   75577 cri.go:89] found id: ""
	I0920 18:19:37.680094   75577 logs.go:276] 0 containers: []
	W0920 18:19:37.680105   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:19:37.680113   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:19:37.680173   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:19:37.713586   75577 cri.go:89] found id: ""
	I0920 18:19:37.713618   75577 logs.go:276] 0 containers: []
	W0920 18:19:37.713629   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:19:37.713636   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:19:37.713694   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:19:36.192483   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:38.692017   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:38.025974   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:40.524671   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:38.302403   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:40.802748   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:37.750249   75577 cri.go:89] found id: ""
	I0920 18:19:37.750274   75577 logs.go:276] 0 containers: []
	W0920 18:19:37.750282   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:19:37.750289   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:19:37.750351   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:19:37.785613   75577 cri.go:89] found id: ""
	I0920 18:19:37.785642   75577 logs.go:276] 0 containers: []
	W0920 18:19:37.785650   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:19:37.785656   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:19:37.785705   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:19:37.824240   75577 cri.go:89] found id: ""
	I0920 18:19:37.824267   75577 logs.go:276] 0 containers: []
	W0920 18:19:37.824278   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:19:37.824286   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:19:37.824348   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:19:37.858522   75577 cri.go:89] found id: ""
	I0920 18:19:37.858547   75577 logs.go:276] 0 containers: []
	W0920 18:19:37.858556   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:19:37.858564   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:19:37.858575   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:19:37.939852   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:19:37.939891   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:19:37.981029   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:19:37.981062   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:19:38.030606   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:19:38.030633   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:19:38.043914   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:19:38.043953   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:19:38.124846   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:19:40.625196   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:40.640638   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:19:40.640708   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:19:40.687107   75577 cri.go:89] found id: ""
	I0920 18:19:40.687131   75577 logs.go:276] 0 containers: []
	W0920 18:19:40.687140   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:19:40.687148   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:19:40.687206   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:19:40.725727   75577 cri.go:89] found id: ""
	I0920 18:19:40.725858   75577 logs.go:276] 0 containers: []
	W0920 18:19:40.725875   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:19:40.725889   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:19:40.725967   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:19:40.762965   75577 cri.go:89] found id: ""
	I0920 18:19:40.762988   75577 logs.go:276] 0 containers: []
	W0920 18:19:40.762996   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:19:40.763002   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:19:40.763049   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:19:40.797387   75577 cri.go:89] found id: ""
	I0920 18:19:40.797415   75577 logs.go:276] 0 containers: []
	W0920 18:19:40.797427   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:19:40.797439   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:19:40.797498   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:19:40.834482   75577 cri.go:89] found id: ""
	I0920 18:19:40.834513   75577 logs.go:276] 0 containers: []
	W0920 18:19:40.834521   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:19:40.834528   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:19:40.834578   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:19:40.870062   75577 cri.go:89] found id: ""
	I0920 18:19:40.870090   75577 logs.go:276] 0 containers: []
	W0920 18:19:40.870099   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:19:40.870106   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:19:40.870164   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:19:40.906613   75577 cri.go:89] found id: ""
	I0920 18:19:40.906642   75577 logs.go:276] 0 containers: []
	W0920 18:19:40.906653   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:19:40.906662   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:19:40.906721   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:19:40.953033   75577 cri.go:89] found id: ""
	I0920 18:19:40.953056   75577 logs.go:276] 0 containers: []
	W0920 18:19:40.953065   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:19:40.953073   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:19:40.953083   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:19:40.998490   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:19:40.998523   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:19:41.051488   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:19:41.051525   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:19:41.067908   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:19:41.067937   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:19:41.144854   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:19:41.144878   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:19:41.144890   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:19:40.692456   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:43.192532   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:43.024238   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:45.024998   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:47.025427   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:43.302334   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:45.801415   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:43.723613   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:43.736857   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:19:43.736924   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:19:43.772588   75577 cri.go:89] found id: ""
	I0920 18:19:43.772624   75577 logs.go:276] 0 containers: []
	W0920 18:19:43.772635   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:19:43.772643   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:19:43.772712   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:19:43.808580   75577 cri.go:89] found id: ""
	I0920 18:19:43.808611   75577 logs.go:276] 0 containers: []
	W0920 18:19:43.808622   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:19:43.808629   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:19:43.808695   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:19:43.847167   75577 cri.go:89] found id: ""
	I0920 18:19:43.847207   75577 logs.go:276] 0 containers: []
	W0920 18:19:43.847218   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:19:43.847243   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:19:43.847302   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:19:43.883614   75577 cri.go:89] found id: ""
	I0920 18:19:43.883646   75577 logs.go:276] 0 containers: []
	W0920 18:19:43.883658   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:19:43.883667   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:19:43.883738   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:19:43.917602   75577 cri.go:89] found id: ""
	I0920 18:19:43.917627   75577 logs.go:276] 0 containers: []
	W0920 18:19:43.917635   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:19:43.917641   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:19:43.917694   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:19:43.953232   75577 cri.go:89] found id: ""
	I0920 18:19:43.953259   75577 logs.go:276] 0 containers: []
	W0920 18:19:43.953268   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:19:43.953273   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:19:43.953325   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:19:43.991204   75577 cri.go:89] found id: ""
	I0920 18:19:43.991234   75577 logs.go:276] 0 containers: []
	W0920 18:19:43.991246   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:19:43.991253   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:19:43.991334   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:19:44.028117   75577 cri.go:89] found id: ""
	I0920 18:19:44.028140   75577 logs.go:276] 0 containers: []
	W0920 18:19:44.028147   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:19:44.028154   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:19:44.028164   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:19:44.068175   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:19:44.068203   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:19:44.119953   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:19:44.119993   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:19:44.134127   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:19:44.134154   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:19:44.211563   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:19:44.211592   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:19:44.211604   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:19:46.787328   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:46.803429   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:19:46.803516   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:19:46.844813   75577 cri.go:89] found id: ""
	I0920 18:19:46.844839   75577 logs.go:276] 0 containers: []
	W0920 18:19:46.844850   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:19:46.844856   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:19:46.844914   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:19:46.878441   75577 cri.go:89] found id: ""
	I0920 18:19:46.878483   75577 logs.go:276] 0 containers: []
	W0920 18:19:46.878497   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:19:46.878506   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:19:46.878580   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:19:46.913933   75577 cri.go:89] found id: ""
	I0920 18:19:46.913976   75577 logs.go:276] 0 containers: []
	W0920 18:19:46.913986   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:19:46.913993   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:19:46.914066   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:19:46.948577   75577 cri.go:89] found id: ""
	I0920 18:19:46.948609   75577 logs.go:276] 0 containers: []
	W0920 18:19:46.948618   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:19:46.948625   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:19:46.948689   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:19:46.982742   75577 cri.go:89] found id: ""
	I0920 18:19:46.982770   75577 logs.go:276] 0 containers: []
	W0920 18:19:46.982778   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:19:46.982785   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:19:46.982836   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:19:47.017075   75577 cri.go:89] found id: ""
	I0920 18:19:47.017107   75577 logs.go:276] 0 containers: []
	W0920 18:19:47.017120   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:19:47.017128   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:19:47.017190   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:19:47.055494   75577 cri.go:89] found id: ""
	I0920 18:19:47.055520   75577 logs.go:276] 0 containers: []
	W0920 18:19:47.055528   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:19:47.055534   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:19:47.055586   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:19:47.091966   75577 cri.go:89] found id: ""
	I0920 18:19:47.091998   75577 logs.go:276] 0 containers: []
	W0920 18:19:47.092006   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:19:47.092019   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:19:47.092035   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:19:47.142916   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:19:47.142955   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:19:47.158471   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:19:47.158502   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:19:47.241530   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:19:47.241553   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:19:47.241567   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:19:47.316958   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:19:47.316992   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:19:45.192724   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:47.692046   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:49.524915   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:52.026403   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:48.302289   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:50.303276   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:49.854403   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:49.867317   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:19:49.867431   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:19:49.903627   75577 cri.go:89] found id: ""
	I0920 18:19:49.903661   75577 logs.go:276] 0 containers: []
	W0920 18:19:49.903674   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:19:49.903682   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:19:49.903745   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:19:49.936857   75577 cri.go:89] found id: ""
	I0920 18:19:49.936892   75577 logs.go:276] 0 containers: []
	W0920 18:19:49.936902   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:19:49.936909   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:19:49.936975   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:19:49.974403   75577 cri.go:89] found id: ""
	I0920 18:19:49.974433   75577 logs.go:276] 0 containers: []
	W0920 18:19:49.974441   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:19:49.974447   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:19:49.974505   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:19:50.011810   75577 cri.go:89] found id: ""
	I0920 18:19:50.011839   75577 logs.go:276] 0 containers: []
	W0920 18:19:50.011850   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:19:50.011857   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:19:50.011921   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:19:50.050491   75577 cri.go:89] found id: ""
	I0920 18:19:50.050527   75577 logs.go:276] 0 containers: []
	W0920 18:19:50.050538   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:19:50.050546   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:19:50.050610   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:19:50.088270   75577 cri.go:89] found id: ""
	I0920 18:19:50.088299   75577 logs.go:276] 0 containers: []
	W0920 18:19:50.088308   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:19:50.088314   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:19:50.088375   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:19:50.126355   75577 cri.go:89] found id: ""
	I0920 18:19:50.126382   75577 logs.go:276] 0 containers: []
	W0920 18:19:50.126392   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:19:50.126399   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:19:50.126460   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:19:50.160770   75577 cri.go:89] found id: ""
	I0920 18:19:50.160798   75577 logs.go:276] 0 containers: []
	W0920 18:19:50.160808   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:19:50.160819   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:19:50.160834   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:19:50.212866   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:19:50.212914   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:19:50.227269   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:19:50.227295   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:19:50.301363   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:19:50.301392   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:19:50.301406   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:19:50.376293   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:19:50.376330   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:19:50.192002   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:52.192488   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:54.524436   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:56.526032   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:52.801396   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:54.802393   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:57.301066   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:52.916567   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:52.930445   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:19:52.930525   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:19:52.965360   75577 cri.go:89] found id: ""
	I0920 18:19:52.965392   75577 logs.go:276] 0 containers: []
	W0920 18:19:52.965403   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:19:52.965411   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:19:52.965468   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:19:53.000439   75577 cri.go:89] found id: ""
	I0920 18:19:53.000480   75577 logs.go:276] 0 containers: []
	W0920 18:19:53.000495   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:19:53.000503   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:19:53.000577   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:19:53.034598   75577 cri.go:89] found id: ""
	I0920 18:19:53.034630   75577 logs.go:276] 0 containers: []
	W0920 18:19:53.034640   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:19:53.034647   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:19:53.034744   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:19:53.068636   75577 cri.go:89] found id: ""
	I0920 18:19:53.068664   75577 logs.go:276] 0 containers: []
	W0920 18:19:53.068673   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:19:53.068678   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:19:53.068750   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:19:53.104381   75577 cri.go:89] found id: ""
	I0920 18:19:53.104408   75577 logs.go:276] 0 containers: []
	W0920 18:19:53.104416   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:19:53.104421   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:19:53.104474   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:19:53.137865   75577 cri.go:89] found id: ""
	I0920 18:19:53.137897   75577 logs.go:276] 0 containers: []
	W0920 18:19:53.137909   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:19:53.137922   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:19:53.137983   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:19:53.171825   75577 cri.go:89] found id: ""
	I0920 18:19:53.171861   75577 logs.go:276] 0 containers: []
	W0920 18:19:53.171874   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:19:53.171883   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:19:53.171952   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:19:53.210742   75577 cri.go:89] found id: ""
	I0920 18:19:53.210774   75577 logs.go:276] 0 containers: []
	W0920 18:19:53.210784   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:19:53.210796   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:19:53.210811   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:19:53.285597   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:19:53.285637   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:19:53.329821   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:19:53.329871   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:19:53.381362   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:19:53.381415   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:19:53.396044   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:19:53.396074   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:19:53.471582   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:19:55.972369   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:55.987205   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:19:55.987281   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:19:56.031322   75577 cri.go:89] found id: ""
	I0920 18:19:56.031355   75577 logs.go:276] 0 containers: []
	W0920 18:19:56.031363   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:19:56.031368   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:19:56.031420   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:19:56.066442   75577 cri.go:89] found id: ""
	I0920 18:19:56.066487   75577 logs.go:276] 0 containers: []
	W0920 18:19:56.066576   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:19:56.066617   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:19:56.066695   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:19:56.101818   75577 cri.go:89] found id: ""
	I0920 18:19:56.101864   75577 logs.go:276] 0 containers: []
	W0920 18:19:56.101876   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:19:56.101883   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:19:56.101947   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:19:56.139513   75577 cri.go:89] found id: ""
	I0920 18:19:56.139545   75577 logs.go:276] 0 containers: []
	W0920 18:19:56.139557   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:19:56.139565   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:19:56.139618   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:19:56.173638   75577 cri.go:89] found id: ""
	I0920 18:19:56.173669   75577 logs.go:276] 0 containers: []
	W0920 18:19:56.173680   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:19:56.173688   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:19:56.173752   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:19:56.210657   75577 cri.go:89] found id: ""
	I0920 18:19:56.210689   75577 logs.go:276] 0 containers: []
	W0920 18:19:56.210700   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:19:56.210709   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:19:56.210768   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:19:56.246035   75577 cri.go:89] found id: ""
	I0920 18:19:56.246063   75577 logs.go:276] 0 containers: []
	W0920 18:19:56.246071   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:19:56.246077   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:19:56.246123   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:19:56.280766   75577 cri.go:89] found id: ""
	I0920 18:19:56.280796   75577 logs.go:276] 0 containers: []
	W0920 18:19:56.280807   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:19:56.280818   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:19:56.280834   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:19:56.320511   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:19:56.320540   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:19:56.373746   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:19:56.373785   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:19:56.389294   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:19:56.389322   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:19:56.460079   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:19:56.460100   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:19:56.460112   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:19:54.692781   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:57.192706   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:59.025015   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:01.025196   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:59.302190   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:01.801923   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:59.044541   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:59.058395   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:19:59.058464   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:19:59.097442   75577 cri.go:89] found id: ""
	I0920 18:19:59.097482   75577 logs.go:276] 0 containers: []
	W0920 18:19:59.097495   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:19:59.097512   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:19:59.097593   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:19:59.133091   75577 cri.go:89] found id: ""
	I0920 18:19:59.133116   75577 logs.go:276] 0 containers: []
	W0920 18:19:59.133128   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:19:59.133135   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:19:59.133264   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:19:59.166902   75577 cri.go:89] found id: ""
	I0920 18:19:59.166927   75577 logs.go:276] 0 containers: []
	W0920 18:19:59.166938   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:19:59.166945   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:19:59.167008   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:19:59.202551   75577 cri.go:89] found id: ""
	I0920 18:19:59.202573   75577 logs.go:276] 0 containers: []
	W0920 18:19:59.202581   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:19:59.202586   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:19:59.202633   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:19:59.235197   75577 cri.go:89] found id: ""
	I0920 18:19:59.235222   75577 logs.go:276] 0 containers: []
	W0920 18:19:59.235230   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:19:59.235236   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:19:59.235286   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:19:59.269148   75577 cri.go:89] found id: ""
	I0920 18:19:59.269176   75577 logs.go:276] 0 containers: []
	W0920 18:19:59.269187   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:19:59.269196   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:19:59.269262   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:19:59.307670   75577 cri.go:89] found id: ""
	I0920 18:19:59.307692   75577 logs.go:276] 0 containers: []
	W0920 18:19:59.307699   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:19:59.307705   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:19:59.307753   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:19:59.340554   75577 cri.go:89] found id: ""
	I0920 18:19:59.340589   75577 logs.go:276] 0 containers: []
	W0920 18:19:59.340599   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:19:59.340610   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:19:59.340623   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:19:59.382339   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:19:59.382470   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:19:59.432176   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:19:59.432211   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:19:59.445889   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:19:59.445922   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:19:59.516564   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:19:59.516590   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:19:59.516609   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:02.101918   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:02.115064   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:02.115128   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:02.148718   75577 cri.go:89] found id: ""
	I0920 18:20:02.148749   75577 logs.go:276] 0 containers: []
	W0920 18:20:02.148757   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:02.148764   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:02.148822   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:02.182673   75577 cri.go:89] found id: ""
	I0920 18:20:02.182711   75577 logs.go:276] 0 containers: []
	W0920 18:20:02.182722   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:02.182729   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:02.182797   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:02.218734   75577 cri.go:89] found id: ""
	I0920 18:20:02.218771   75577 logs.go:276] 0 containers: []
	W0920 18:20:02.218782   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:02.218791   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:02.218856   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:02.258956   75577 cri.go:89] found id: ""
	I0920 18:20:02.258981   75577 logs.go:276] 0 containers: []
	W0920 18:20:02.258989   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:02.258995   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:02.259066   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:02.294879   75577 cri.go:89] found id: ""
	I0920 18:20:02.294910   75577 logs.go:276] 0 containers: []
	W0920 18:20:02.294919   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:02.294925   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:02.294971   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:02.331010   75577 cri.go:89] found id: ""
	I0920 18:20:02.331039   75577 logs.go:276] 0 containers: []
	W0920 18:20:02.331049   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:02.331057   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:02.331122   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:02.365023   75577 cri.go:89] found id: ""
	I0920 18:20:02.365056   75577 logs.go:276] 0 containers: []
	W0920 18:20:02.365066   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:02.365072   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:02.365122   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:02.400501   75577 cri.go:89] found id: ""
	I0920 18:20:02.400528   75577 logs.go:276] 0 containers: []
	W0920 18:20:02.400537   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:02.400545   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:02.400556   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:02.450597   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:02.450636   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:02.467637   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:02.467671   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:02.540668   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:02.540690   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:02.540706   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:02.629706   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:02.629752   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:19:59.192799   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:01.691537   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:03.692613   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:03.524517   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:05.525418   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:04.300845   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:06.301250   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:05.168119   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:05.182954   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:05.183043   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:05.221938   75577 cri.go:89] found id: ""
	I0920 18:20:05.221971   75577 logs.go:276] 0 containers: []
	W0920 18:20:05.221980   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:05.221990   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:05.222051   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:05.258454   75577 cri.go:89] found id: ""
	I0920 18:20:05.258479   75577 logs.go:276] 0 containers: []
	W0920 18:20:05.258487   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:05.258492   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:05.258540   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:05.293091   75577 cri.go:89] found id: ""
	I0920 18:20:05.293125   75577 logs.go:276] 0 containers: []
	W0920 18:20:05.293138   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:05.293146   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:05.293196   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:05.332992   75577 cri.go:89] found id: ""
	I0920 18:20:05.333025   75577 logs.go:276] 0 containers: []
	W0920 18:20:05.333034   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:05.333040   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:05.333088   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:05.368743   75577 cri.go:89] found id: ""
	I0920 18:20:05.368778   75577 logs.go:276] 0 containers: []
	W0920 18:20:05.368790   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:05.368798   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:05.368859   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:05.404913   75577 cri.go:89] found id: ""
	I0920 18:20:05.404941   75577 logs.go:276] 0 containers: []
	W0920 18:20:05.404948   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:05.404954   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:05.405003   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:05.442111   75577 cri.go:89] found id: ""
	I0920 18:20:05.442143   75577 logs.go:276] 0 containers: []
	W0920 18:20:05.442154   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:05.442163   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:05.442228   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:05.478808   75577 cri.go:89] found id: ""
	I0920 18:20:05.478842   75577 logs.go:276] 0 containers: []
	W0920 18:20:05.478853   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:05.478865   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:05.478879   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:05.531653   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:05.531691   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:05.545181   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:05.545210   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:05.615009   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:05.615041   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:05.615059   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:05.690842   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:05.690871   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:06.193177   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:08.691596   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:08.034009   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:10.524419   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:08.301457   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:10.801788   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:08.230851   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:08.244539   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:08.244609   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:08.281123   75577 cri.go:89] found id: ""
	I0920 18:20:08.281155   75577 logs.go:276] 0 containers: []
	W0920 18:20:08.281167   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:08.281174   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:08.281226   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:08.319704   75577 cri.go:89] found id: ""
	I0920 18:20:08.319740   75577 logs.go:276] 0 containers: []
	W0920 18:20:08.319754   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:08.319763   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:08.319828   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:08.354589   75577 cri.go:89] found id: ""
	I0920 18:20:08.354619   75577 logs.go:276] 0 containers: []
	W0920 18:20:08.354631   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:08.354638   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:08.354703   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:08.391580   75577 cri.go:89] found id: ""
	I0920 18:20:08.391603   75577 logs.go:276] 0 containers: []
	W0920 18:20:08.391612   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:08.391617   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:08.391666   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:08.425596   75577 cri.go:89] found id: ""
	I0920 18:20:08.425622   75577 logs.go:276] 0 containers: []
	W0920 18:20:08.425630   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:08.425638   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:08.425704   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:08.458720   75577 cri.go:89] found id: ""
	I0920 18:20:08.458747   75577 logs.go:276] 0 containers: []
	W0920 18:20:08.458758   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:08.458764   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:08.458812   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:08.496104   75577 cri.go:89] found id: ""
	I0920 18:20:08.496137   75577 logs.go:276] 0 containers: []
	W0920 18:20:08.496148   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:08.496155   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:08.496210   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:08.530961   75577 cri.go:89] found id: ""
	I0920 18:20:08.530989   75577 logs.go:276] 0 containers: []
	W0920 18:20:08.531000   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:08.531010   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:08.531023   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:08.568512   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:08.568541   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:08.619716   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:08.619754   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:08.634358   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:08.634390   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:08.721465   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:08.721488   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:08.721501   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:11.303942   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:11.316686   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:11.316759   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:11.353137   75577 cri.go:89] found id: ""
	I0920 18:20:11.353161   75577 logs.go:276] 0 containers: []
	W0920 18:20:11.353169   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:11.353176   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:11.353229   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:11.388271   75577 cri.go:89] found id: ""
	I0920 18:20:11.388298   75577 logs.go:276] 0 containers: []
	W0920 18:20:11.388315   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:11.388322   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:11.388388   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:11.422673   75577 cri.go:89] found id: ""
	I0920 18:20:11.422700   75577 logs.go:276] 0 containers: []
	W0920 18:20:11.422708   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:11.422714   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:11.422768   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:11.458869   75577 cri.go:89] found id: ""
	I0920 18:20:11.458900   75577 logs.go:276] 0 containers: []
	W0920 18:20:11.458910   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:11.458917   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:11.459068   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:11.494128   75577 cri.go:89] found id: ""
	I0920 18:20:11.494161   75577 logs.go:276] 0 containers: []
	W0920 18:20:11.494172   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:11.494180   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:11.494246   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:11.529111   75577 cri.go:89] found id: ""
	I0920 18:20:11.529135   75577 logs.go:276] 0 containers: []
	W0920 18:20:11.529150   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:11.529157   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:11.529223   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:11.562287   75577 cri.go:89] found id: ""
	I0920 18:20:11.562314   75577 logs.go:276] 0 containers: []
	W0920 18:20:11.562323   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:11.562329   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:11.562381   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:11.600066   75577 cri.go:89] found id: ""
	I0920 18:20:11.600106   75577 logs.go:276] 0 containers: []
	W0920 18:20:11.600117   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:11.600128   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:11.600143   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:11.681628   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:11.681665   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:11.722173   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:11.722205   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:11.773132   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:11.773171   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:11.787183   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:11.787215   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:11.856304   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:10.695174   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:13.191743   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:13.025069   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:15.525020   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:12.803677   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:15.301431   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:14.356881   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:14.371658   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:14.371729   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:14.406983   75577 cri.go:89] found id: ""
	I0920 18:20:14.407009   75577 logs.go:276] 0 containers: []
	W0920 18:20:14.407017   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:14.407022   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:14.407075   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:14.481048   75577 cri.go:89] found id: ""
	I0920 18:20:14.481075   75577 logs.go:276] 0 containers: []
	W0920 18:20:14.481086   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:14.481094   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:14.481156   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:14.513683   75577 cri.go:89] found id: ""
	I0920 18:20:14.513711   75577 logs.go:276] 0 containers: []
	W0920 18:20:14.513719   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:14.513725   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:14.513797   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:14.553330   75577 cri.go:89] found id: ""
	I0920 18:20:14.553363   75577 logs.go:276] 0 containers: []
	W0920 18:20:14.553375   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:14.553381   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:14.553446   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:14.590803   75577 cri.go:89] found id: ""
	I0920 18:20:14.590837   75577 logs.go:276] 0 containers: []
	W0920 18:20:14.590848   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:14.590856   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:14.590927   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:14.625100   75577 cri.go:89] found id: ""
	I0920 18:20:14.625130   75577 logs.go:276] 0 containers: []
	W0920 18:20:14.625141   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:14.625151   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:14.625219   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:14.659313   75577 cri.go:89] found id: ""
	I0920 18:20:14.659342   75577 logs.go:276] 0 containers: []
	W0920 18:20:14.659351   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:14.659357   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:14.659418   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:14.694901   75577 cri.go:89] found id: ""
	I0920 18:20:14.694931   75577 logs.go:276] 0 containers: []
	W0920 18:20:14.694939   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:14.694951   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:14.694966   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:14.708406   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:14.708437   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:14.785174   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:14.785200   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:14.785214   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:14.873622   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:14.873666   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:14.919130   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:14.919166   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:17.468595   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:17.483397   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:17.483473   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:17.522415   75577 cri.go:89] found id: ""
	I0920 18:20:17.522445   75577 logs.go:276] 0 containers: []
	W0920 18:20:17.522455   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:17.522463   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:17.522523   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:17.558957   75577 cri.go:89] found id: ""
	I0920 18:20:17.558991   75577 logs.go:276] 0 containers: []
	W0920 18:20:17.559002   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:17.559010   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:17.559174   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:17.595183   75577 cri.go:89] found id: ""
	I0920 18:20:17.595217   75577 logs.go:276] 0 containers: []
	W0920 18:20:17.595229   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:17.595237   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:17.595302   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:17.631727   75577 cri.go:89] found id: ""
	I0920 18:20:17.631757   75577 logs.go:276] 0 containers: []
	W0920 18:20:17.631768   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:17.631775   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:17.631894   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:17.666374   75577 cri.go:89] found id: ""
	I0920 18:20:17.666409   75577 logs.go:276] 0 containers: []
	W0920 18:20:17.666420   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:17.666427   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:17.666488   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:17.702256   75577 cri.go:89] found id: ""
	I0920 18:20:17.702281   75577 logs.go:276] 0 containers: []
	W0920 18:20:17.702291   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:17.702299   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:17.702370   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:15.191971   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:17.192990   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:17.525552   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:20.025229   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:17.802045   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:20.302538   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:17.742133   75577 cri.go:89] found id: ""
	I0920 18:20:17.742161   75577 logs.go:276] 0 containers: []
	W0920 18:20:17.742172   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:17.742179   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:17.742249   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:17.781269   75577 cri.go:89] found id: ""
	I0920 18:20:17.781300   75577 logs.go:276] 0 containers: []
	W0920 18:20:17.781311   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:17.781321   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:17.781336   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:17.835886   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:17.835923   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:17.851365   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:17.851396   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:17.932682   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:17.932711   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:17.932726   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:18.013680   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:18.013731   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:20.555074   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:20.568007   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:20.568071   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:20.602250   75577 cri.go:89] found id: ""
	I0920 18:20:20.602278   75577 logs.go:276] 0 containers: []
	W0920 18:20:20.602287   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:20.602293   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:20.602353   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:20.636711   75577 cri.go:89] found id: ""
	I0920 18:20:20.636739   75577 logs.go:276] 0 containers: []
	W0920 18:20:20.636748   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:20.636753   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:20.636807   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:20.669529   75577 cri.go:89] found id: ""
	I0920 18:20:20.669570   75577 logs.go:276] 0 containers: []
	W0920 18:20:20.669583   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:20.669593   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:20.669651   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:20.710330   75577 cri.go:89] found id: ""
	I0920 18:20:20.710364   75577 logs.go:276] 0 containers: []
	W0920 18:20:20.710372   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:20.710378   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:20.710427   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:20.747664   75577 cri.go:89] found id: ""
	I0920 18:20:20.747690   75577 logs.go:276] 0 containers: []
	W0920 18:20:20.747699   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:20.747705   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:20.747760   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:20.785680   75577 cri.go:89] found id: ""
	I0920 18:20:20.785717   75577 logs.go:276] 0 containers: []
	W0920 18:20:20.785726   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:20.785732   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:20.785794   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:20.822570   75577 cri.go:89] found id: ""
	I0920 18:20:20.822599   75577 logs.go:276] 0 containers: []
	W0920 18:20:20.822608   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:20.822613   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:20.822659   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:20.856739   75577 cri.go:89] found id: ""
	I0920 18:20:20.856772   75577 logs.go:276] 0 containers: []
	W0920 18:20:20.856786   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:20.856806   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:20.856822   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:20.896606   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:20.896643   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:20.948275   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:20.948313   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:20.962576   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:20.962606   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:21.033511   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:21.033534   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:21.033547   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:19.691723   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:21.694099   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:22.524982   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:24.527065   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:27.026374   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:22.803300   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:25.302416   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:23.615731   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:23.628619   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:23.628698   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:23.664344   75577 cri.go:89] found id: ""
	I0920 18:20:23.664367   75577 logs.go:276] 0 containers: []
	W0920 18:20:23.664375   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:23.664381   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:23.664431   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:23.698791   75577 cri.go:89] found id: ""
	I0920 18:20:23.698823   75577 logs.go:276] 0 containers: []
	W0920 18:20:23.698832   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:23.698839   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:23.698889   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:23.732552   75577 cri.go:89] found id: ""
	I0920 18:20:23.732590   75577 logs.go:276] 0 containers: []
	W0920 18:20:23.732602   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:23.732610   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:23.732676   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:23.769463   75577 cri.go:89] found id: ""
	I0920 18:20:23.769490   75577 logs.go:276] 0 containers: []
	W0920 18:20:23.769501   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:23.769508   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:23.769567   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:23.807329   75577 cri.go:89] found id: ""
	I0920 18:20:23.807361   75577 logs.go:276] 0 containers: []
	W0920 18:20:23.807374   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:23.807382   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:23.807442   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:23.847889   75577 cri.go:89] found id: ""
	I0920 18:20:23.847913   75577 logs.go:276] 0 containers: []
	W0920 18:20:23.847920   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:23.847927   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:23.847985   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:23.883287   75577 cri.go:89] found id: ""
	I0920 18:20:23.883314   75577 logs.go:276] 0 containers: []
	W0920 18:20:23.883322   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:23.883335   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:23.883389   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:23.920006   75577 cri.go:89] found id: ""
	I0920 18:20:23.920034   75577 logs.go:276] 0 containers: []
	W0920 18:20:23.920045   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:23.920057   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:23.920070   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:23.995572   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:23.995608   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:24.035953   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:24.035983   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:24.085803   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:24.085860   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:24.100226   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:24.100255   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:24.173555   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:26.673726   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:26.687372   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:26.687449   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:26.726544   75577 cri.go:89] found id: ""
	I0920 18:20:26.726574   75577 logs.go:276] 0 containers: []
	W0920 18:20:26.726583   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:26.726590   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:26.726651   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:26.761542   75577 cri.go:89] found id: ""
	I0920 18:20:26.761571   75577 logs.go:276] 0 containers: []
	W0920 18:20:26.761580   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:26.761587   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:26.761639   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:26.803869   75577 cri.go:89] found id: ""
	I0920 18:20:26.803896   75577 logs.go:276] 0 containers: []
	W0920 18:20:26.803904   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:26.803910   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:26.803970   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:26.854835   75577 cri.go:89] found id: ""
	I0920 18:20:26.854876   75577 logs.go:276] 0 containers: []
	W0920 18:20:26.854888   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:26.854896   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:26.854958   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:26.896113   75577 cri.go:89] found id: ""
	I0920 18:20:26.896151   75577 logs.go:276] 0 containers: []
	W0920 18:20:26.896162   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:26.896170   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:26.896255   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:26.942828   75577 cri.go:89] found id: ""
	I0920 18:20:26.942853   75577 logs.go:276] 0 containers: []
	W0920 18:20:26.942863   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:26.942930   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:26.943007   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:26.977246   75577 cri.go:89] found id: ""
	I0920 18:20:26.977278   75577 logs.go:276] 0 containers: []
	W0920 18:20:26.977289   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:26.977296   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:26.977367   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:27.012408   75577 cri.go:89] found id: ""
	I0920 18:20:27.012440   75577 logs.go:276] 0 containers: []
	W0920 18:20:27.012451   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:27.012462   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:27.012477   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:27.063970   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:27.064017   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:27.078082   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:27.078119   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:27.148050   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:27.148079   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:27.148094   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:27.230836   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:27.230880   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:24.192842   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:26.196350   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:28.692682   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:29.525508   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:32.025275   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:27.802268   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:30.301519   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:29.771845   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:29.785479   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:29.785553   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:29.823101   75577 cri.go:89] found id: ""
	I0920 18:20:29.823132   75577 logs.go:276] 0 containers: []
	W0920 18:20:29.823143   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:29.823150   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:29.823228   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:29.858679   75577 cri.go:89] found id: ""
	I0920 18:20:29.858713   75577 logs.go:276] 0 containers: []
	W0920 18:20:29.858724   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:29.858732   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:29.858796   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:29.901044   75577 cri.go:89] found id: ""
	I0920 18:20:29.901073   75577 logs.go:276] 0 containers: []
	W0920 18:20:29.901083   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:29.901091   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:29.901160   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:29.937743   75577 cri.go:89] found id: ""
	I0920 18:20:29.937775   75577 logs.go:276] 0 containers: []
	W0920 18:20:29.937792   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:29.937800   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:29.937884   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:29.972813   75577 cri.go:89] found id: ""
	I0920 18:20:29.972838   75577 logs.go:276] 0 containers: []
	W0920 18:20:29.972846   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:29.972852   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:29.972901   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:30.008024   75577 cri.go:89] found id: ""
	I0920 18:20:30.008053   75577 logs.go:276] 0 containers: []
	W0920 18:20:30.008062   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:30.008068   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:30.008117   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:30.048544   75577 cri.go:89] found id: ""
	I0920 18:20:30.048577   75577 logs.go:276] 0 containers: []
	W0920 18:20:30.048585   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:30.048591   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:30.048643   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:30.084501   75577 cri.go:89] found id: ""
	I0920 18:20:30.084525   75577 logs.go:276] 0 containers: []
	W0920 18:20:30.084534   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:30.084545   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:30.084559   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:30.136234   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:30.136279   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:30.149557   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:30.149591   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:30.229765   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:30.229792   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:30.229806   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:30.307786   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:30.307825   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:31.191475   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:33.192864   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:34.025515   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:36.027276   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:32.801952   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:34.802033   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:37.301059   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:32.845220   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:32.859734   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:32.859810   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:32.896390   75577 cri.go:89] found id: ""
	I0920 18:20:32.896434   75577 logs.go:276] 0 containers: []
	W0920 18:20:32.896446   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:32.896464   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:32.896538   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:32.934548   75577 cri.go:89] found id: ""
	I0920 18:20:32.934572   75577 logs.go:276] 0 containers: []
	W0920 18:20:32.934580   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:32.934587   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:32.934640   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:32.982987   75577 cri.go:89] found id: ""
	I0920 18:20:32.983013   75577 logs.go:276] 0 containers: []
	W0920 18:20:32.983020   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:32.983026   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:32.983079   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:33.019059   75577 cri.go:89] found id: ""
	I0920 18:20:33.019085   75577 logs.go:276] 0 containers: []
	W0920 18:20:33.019093   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:33.019099   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:33.019148   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:33.057704   75577 cri.go:89] found id: ""
	I0920 18:20:33.057738   75577 logs.go:276] 0 containers: []
	W0920 18:20:33.057750   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:33.057759   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:33.057821   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:33.093702   75577 cri.go:89] found id: ""
	I0920 18:20:33.093732   75577 logs.go:276] 0 containers: []
	W0920 18:20:33.093743   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:33.093751   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:33.093809   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:33.128476   75577 cri.go:89] found id: ""
	I0920 18:20:33.128504   75577 logs.go:276] 0 containers: []
	W0920 18:20:33.128516   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:33.128523   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:33.128591   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:33.165513   75577 cri.go:89] found id: ""
	I0920 18:20:33.165542   75577 logs.go:276] 0 containers: []
	W0920 18:20:33.165550   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:33.165559   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:33.165569   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:33.219613   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:33.219650   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:33.232291   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:33.232317   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:33.304172   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:33.304197   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:33.304212   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:33.380057   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:33.380094   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:35.920842   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:35.935500   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:35.935593   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:35.974445   75577 cri.go:89] found id: ""
	I0920 18:20:35.974471   75577 logs.go:276] 0 containers: []
	W0920 18:20:35.974479   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:35.974485   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:35.974548   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:36.009487   75577 cri.go:89] found id: ""
	I0920 18:20:36.009518   75577 logs.go:276] 0 containers: []
	W0920 18:20:36.009530   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:36.009538   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:36.009706   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:36.049428   75577 cri.go:89] found id: ""
	I0920 18:20:36.049451   75577 logs.go:276] 0 containers: []
	W0920 18:20:36.049463   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:36.049469   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:36.049518   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:36.083973   75577 cri.go:89] found id: ""
	I0920 18:20:36.084011   75577 logs.go:276] 0 containers: []
	W0920 18:20:36.084022   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:36.084030   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:36.084119   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:36.117952   75577 cri.go:89] found id: ""
	I0920 18:20:36.117985   75577 logs.go:276] 0 containers: []
	W0920 18:20:36.117993   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:36.117998   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:36.118052   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:36.151222   75577 cri.go:89] found id: ""
	I0920 18:20:36.151256   75577 logs.go:276] 0 containers: []
	W0920 18:20:36.151265   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:36.151271   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:36.151319   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:36.184665   75577 cri.go:89] found id: ""
	I0920 18:20:36.184697   75577 logs.go:276] 0 containers: []
	W0920 18:20:36.184708   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:36.184716   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:36.184771   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:36.222974   75577 cri.go:89] found id: ""
	I0920 18:20:36.223003   75577 logs.go:276] 0 containers: []
	W0920 18:20:36.223012   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:36.223021   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:36.223033   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:36.300403   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:36.300424   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:36.300437   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:36.381220   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:36.381260   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:36.419010   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:36.419042   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:36.471758   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:36.471799   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:35.192949   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:37.693384   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:38.526219   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:41.024309   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:39.301511   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:41.801558   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:38.985359   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:38.998817   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:38.998898   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:39.036705   75577 cri.go:89] found id: ""
	I0920 18:20:39.036737   75577 logs.go:276] 0 containers: []
	W0920 18:20:39.036747   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:39.036755   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:39.036815   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:39.074254   75577 cri.go:89] found id: ""
	I0920 18:20:39.074285   75577 logs.go:276] 0 containers: []
	W0920 18:20:39.074294   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:39.074300   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:39.074367   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:39.108385   75577 cri.go:89] found id: ""
	I0920 18:20:39.108420   75577 logs.go:276] 0 containers: []
	W0920 18:20:39.108493   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:39.108506   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:39.108561   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:39.142283   75577 cri.go:89] found id: ""
	I0920 18:20:39.142313   75577 logs.go:276] 0 containers: []
	W0920 18:20:39.142325   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:39.142332   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:39.142396   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:39.176881   75577 cri.go:89] found id: ""
	I0920 18:20:39.176918   75577 logs.go:276] 0 containers: []
	W0920 18:20:39.176933   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:39.176941   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:39.177002   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:39.217734   75577 cri.go:89] found id: ""
	I0920 18:20:39.217759   75577 logs.go:276] 0 containers: []
	W0920 18:20:39.217767   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:39.217773   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:39.217852   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:39.250823   75577 cri.go:89] found id: ""
	I0920 18:20:39.250852   75577 logs.go:276] 0 containers: []
	W0920 18:20:39.250860   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:39.250868   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:39.250936   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:39.287494   75577 cri.go:89] found id: ""
	I0920 18:20:39.287519   75577 logs.go:276] 0 containers: []
	W0920 18:20:39.287528   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:39.287540   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:39.287552   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:39.343091   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:39.343126   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:39.359028   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:39.359063   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:39.425293   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:39.425321   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:39.425336   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:39.503303   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:39.503350   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:42.044784   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:42.057764   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:42.057876   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:42.091798   75577 cri.go:89] found id: ""
	I0920 18:20:42.091825   75577 logs.go:276] 0 containers: []
	W0920 18:20:42.091833   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:42.091839   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:42.091888   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:42.125325   75577 cri.go:89] found id: ""
	I0920 18:20:42.125352   75577 logs.go:276] 0 containers: []
	W0920 18:20:42.125362   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:42.125370   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:42.125423   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:42.158641   75577 cri.go:89] found id: ""
	I0920 18:20:42.158680   75577 logs.go:276] 0 containers: []
	W0920 18:20:42.158692   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:42.158701   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:42.158771   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:42.194065   75577 cri.go:89] found id: ""
	I0920 18:20:42.194090   75577 logs.go:276] 0 containers: []
	W0920 18:20:42.194101   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:42.194109   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:42.194174   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:42.227237   75577 cri.go:89] found id: ""
	I0920 18:20:42.227262   75577 logs.go:276] 0 containers: []
	W0920 18:20:42.227270   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:42.227275   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:42.227324   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:42.260205   75577 cri.go:89] found id: ""
	I0920 18:20:42.260240   75577 logs.go:276] 0 containers: []
	W0920 18:20:42.260251   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:42.260258   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:42.260325   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:42.300124   75577 cri.go:89] found id: ""
	I0920 18:20:42.300159   75577 logs.go:276] 0 containers: []
	W0920 18:20:42.300169   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:42.300175   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:42.300273   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:42.335638   75577 cri.go:89] found id: ""
	I0920 18:20:42.335674   75577 logs.go:276] 0 containers: []
	W0920 18:20:42.335685   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:42.335695   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:42.335710   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:42.386176   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:42.386214   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:42.400113   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:42.400147   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:42.479877   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:42.479900   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:42.479912   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:42.554654   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:42.554701   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:40.191251   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:42.192910   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:43.026185   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:45.525362   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:43.801927   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:45.802309   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:45.093962   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:45.107747   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:45.107811   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:45.147091   75577 cri.go:89] found id: ""
	I0920 18:20:45.147120   75577 logs.go:276] 0 containers: []
	W0920 18:20:45.147132   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:45.147140   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:45.147205   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:45.185332   75577 cri.go:89] found id: ""
	I0920 18:20:45.185396   75577 logs.go:276] 0 containers: []
	W0920 18:20:45.185431   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:45.185446   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:45.185523   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:45.224496   75577 cri.go:89] found id: ""
	I0920 18:20:45.224525   75577 logs.go:276] 0 containers: []
	W0920 18:20:45.224535   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:45.224542   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:45.224612   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:45.260686   75577 cri.go:89] found id: ""
	I0920 18:20:45.260718   75577 logs.go:276] 0 containers: []
	W0920 18:20:45.260729   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:45.260737   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:45.260801   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:45.301231   75577 cri.go:89] found id: ""
	I0920 18:20:45.301259   75577 logs.go:276] 0 containers: []
	W0920 18:20:45.301269   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:45.301277   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:45.301343   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:45.346441   75577 cri.go:89] found id: ""
	I0920 18:20:45.346468   75577 logs.go:276] 0 containers: []
	W0920 18:20:45.346476   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:45.346482   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:45.346537   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:45.385042   75577 cri.go:89] found id: ""
	I0920 18:20:45.385071   75577 logs.go:276] 0 containers: []
	W0920 18:20:45.385082   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:45.385090   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:45.385149   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:45.422208   75577 cri.go:89] found id: ""
	I0920 18:20:45.422238   75577 logs.go:276] 0 containers: []
	W0920 18:20:45.422249   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:45.422260   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:45.422275   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:45.472692   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:45.472742   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:45.487981   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:45.488011   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:45.563012   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:45.563035   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:45.563051   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:45.640750   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:45.640786   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
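[Note] The "container status" step in each cycle does not assume a particular runtime CLI. The command recorded in the log,

    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a

first resolves crictl with `which` (falling back to the bare name crictl if it is not on root's PATH), and only if that whole invocation fails does it fall back to `sudo docker ps -a`. On this CRI-O job the crictl branch is expected to succeed, so the docker fallback should not run; this is an inference from the runtime in use, not something visible in the captured output.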
	I0920 18:20:44.691791   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:46.692359   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:48.693165   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:48.024959   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:50.025944   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:48.302699   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:50.801740   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:48.181093   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:48.194433   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:48.194516   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:48.234349   75577 cri.go:89] found id: ""
	I0920 18:20:48.234383   75577 logs.go:276] 0 containers: []
	W0920 18:20:48.234394   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:48.234403   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:48.234467   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:48.269424   75577 cri.go:89] found id: ""
	I0920 18:20:48.269450   75577 logs.go:276] 0 containers: []
	W0920 18:20:48.269457   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:48.269462   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:48.269514   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:48.303968   75577 cri.go:89] found id: ""
	I0920 18:20:48.303990   75577 logs.go:276] 0 containers: []
	W0920 18:20:48.303997   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:48.304002   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:48.304061   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:48.339149   75577 cri.go:89] found id: ""
	I0920 18:20:48.339180   75577 logs.go:276] 0 containers: []
	W0920 18:20:48.339191   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:48.339198   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:48.339259   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:48.374529   75577 cri.go:89] found id: ""
	I0920 18:20:48.374559   75577 logs.go:276] 0 containers: []
	W0920 18:20:48.374571   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:48.374578   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:48.374644   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:48.409170   75577 cri.go:89] found id: ""
	I0920 18:20:48.409203   75577 logs.go:276] 0 containers: []
	W0920 18:20:48.409211   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:48.409217   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:48.409292   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:48.443960   75577 cri.go:89] found id: ""
	I0920 18:20:48.443990   75577 logs.go:276] 0 containers: []
	W0920 18:20:48.444009   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:48.444017   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:48.444074   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:48.480934   75577 cri.go:89] found id: ""
	I0920 18:20:48.480965   75577 logs.go:276] 0 containers: []
	W0920 18:20:48.480978   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:48.480990   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:48.481006   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:48.533261   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:48.533295   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:48.546460   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:48.546488   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:48.615183   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:48.615206   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:48.615219   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:48.695299   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:48.695337   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:51.240343   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:51.255327   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:51.255411   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:51.294611   75577 cri.go:89] found id: ""
	I0920 18:20:51.294642   75577 logs.go:276] 0 containers: []
	W0920 18:20:51.294650   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:51.294656   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:51.294710   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:51.328569   75577 cri.go:89] found id: ""
	I0920 18:20:51.328598   75577 logs.go:276] 0 containers: []
	W0920 18:20:51.328613   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:51.328621   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:51.328675   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:51.364255   75577 cri.go:89] found id: ""
	I0920 18:20:51.364283   75577 logs.go:276] 0 containers: []
	W0920 18:20:51.364291   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:51.364297   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:51.364347   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:51.406178   75577 cri.go:89] found id: ""
	I0920 18:20:51.406204   75577 logs.go:276] 0 containers: []
	W0920 18:20:51.406215   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:51.406223   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:51.406284   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:51.449491   75577 cri.go:89] found id: ""
	I0920 18:20:51.449519   75577 logs.go:276] 0 containers: []
	W0920 18:20:51.449529   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:51.449536   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:51.449600   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:51.483243   75577 cri.go:89] found id: ""
	I0920 18:20:51.483269   75577 logs.go:276] 0 containers: []
	W0920 18:20:51.483278   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:51.483284   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:51.483334   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:51.517280   75577 cri.go:89] found id: ""
	I0920 18:20:51.517304   75577 logs.go:276] 0 containers: []
	W0920 18:20:51.517311   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:51.517316   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:51.517378   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:51.553517   75577 cri.go:89] found id: ""
	I0920 18:20:51.553545   75577 logs.go:276] 0 containers: []
	W0920 18:20:51.553556   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:51.553565   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:51.553576   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:51.607330   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:51.607369   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:51.620628   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:51.620662   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:51.687586   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:51.687618   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:51.687638   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:51.764882   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:51.764924   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:51.191732   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:53.193078   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:52.524529   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:54.526138   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:57.025365   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:52.802328   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:54.804955   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:57.301987   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:54.307276   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:54.321777   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:54.321860   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:54.355565   75577 cri.go:89] found id: ""
	I0920 18:20:54.355598   75577 logs.go:276] 0 containers: []
	W0920 18:20:54.355609   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:54.355618   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:54.355680   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:54.391184   75577 cri.go:89] found id: ""
	I0920 18:20:54.391216   75577 logs.go:276] 0 containers: []
	W0920 18:20:54.391227   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:54.391236   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:54.391301   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:54.425680   75577 cri.go:89] found id: ""
	I0920 18:20:54.425709   75577 logs.go:276] 0 containers: []
	W0920 18:20:54.425718   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:54.425725   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:54.425775   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:54.461514   75577 cri.go:89] found id: ""
	I0920 18:20:54.461541   75577 logs.go:276] 0 containers: []
	W0920 18:20:54.461552   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:54.461559   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:54.461625   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:54.495290   75577 cri.go:89] found id: ""
	I0920 18:20:54.495319   75577 logs.go:276] 0 containers: []
	W0920 18:20:54.495327   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:54.495333   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:54.495384   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:54.530014   75577 cri.go:89] found id: ""
	I0920 18:20:54.530038   75577 logs.go:276] 0 containers: []
	W0920 18:20:54.530046   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:54.530052   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:54.530103   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:54.562559   75577 cri.go:89] found id: ""
	I0920 18:20:54.562597   75577 logs.go:276] 0 containers: []
	W0920 18:20:54.562611   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:54.562621   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:54.562694   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:54.599894   75577 cri.go:89] found id: ""
	I0920 18:20:54.599920   75577 logs.go:276] 0 containers: []
	W0920 18:20:54.599928   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:54.599946   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:54.599982   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:54.636853   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:54.636880   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:54.687932   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:54.687978   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:54.701642   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:54.701673   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:54.769649   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:54.769678   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:54.769695   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:57.356015   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:57.368860   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:57.368923   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:57.409339   75577 cri.go:89] found id: ""
	I0920 18:20:57.409365   75577 logs.go:276] 0 containers: []
	W0920 18:20:57.409375   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:57.409382   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:57.409444   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:57.443055   75577 cri.go:89] found id: ""
	I0920 18:20:57.443085   75577 logs.go:276] 0 containers: []
	W0920 18:20:57.443095   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:57.443102   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:57.443158   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:57.481816   75577 cri.go:89] found id: ""
	I0920 18:20:57.481859   75577 logs.go:276] 0 containers: []
	W0920 18:20:57.481871   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:57.481879   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:57.481942   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:57.517327   75577 cri.go:89] found id: ""
	I0920 18:20:57.517361   75577 logs.go:276] 0 containers: []
	W0920 18:20:57.517372   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:57.517379   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:57.517442   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:57.555121   75577 cri.go:89] found id: ""
	I0920 18:20:57.555151   75577 logs.go:276] 0 containers: []
	W0920 18:20:57.555159   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:57.555164   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:57.555222   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:57.592632   75577 cri.go:89] found id: ""
	I0920 18:20:57.592666   75577 logs.go:276] 0 containers: []
	W0920 18:20:57.592679   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:57.592685   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:57.592734   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:57.627519   75577 cri.go:89] found id: ""
	I0920 18:20:57.627556   75577 logs.go:276] 0 containers: []
	W0920 18:20:57.627567   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:57.627574   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:57.627636   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:57.663817   75577 cri.go:89] found id: ""
	I0920 18:20:57.663844   75577 logs.go:276] 0 containers: []
	W0920 18:20:57.663853   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:57.663862   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:57.663877   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:57.704896   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:57.704924   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:55.692746   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:58.191973   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:59.524732   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:01.525346   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:59.801537   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:01.802088   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:57.757933   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:57.757972   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:57.772646   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:57.772673   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:57.850634   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:57.850665   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:57.850681   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:00.431504   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:00.445175   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:00.445241   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:00.482050   75577 cri.go:89] found id: ""
	I0920 18:21:00.482076   75577 logs.go:276] 0 containers: []
	W0920 18:21:00.482088   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:00.482095   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:00.482150   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:00.515793   75577 cri.go:89] found id: ""
	I0920 18:21:00.515822   75577 logs.go:276] 0 containers: []
	W0920 18:21:00.515833   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:00.515841   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:00.515903   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:00.550250   75577 cri.go:89] found id: ""
	I0920 18:21:00.550278   75577 logs.go:276] 0 containers: []
	W0920 18:21:00.550288   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:00.550296   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:00.550374   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:00.585977   75577 cri.go:89] found id: ""
	I0920 18:21:00.586011   75577 logs.go:276] 0 containers: []
	W0920 18:21:00.586034   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:00.586051   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:00.586118   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:00.621828   75577 cri.go:89] found id: ""
	I0920 18:21:00.621869   75577 logs.go:276] 0 containers: []
	W0920 18:21:00.621879   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:00.621886   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:00.621937   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:00.655492   75577 cri.go:89] found id: ""
	I0920 18:21:00.655528   75577 logs.go:276] 0 containers: []
	W0920 18:21:00.655540   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:00.655600   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:00.655673   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:00.709445   75577 cri.go:89] found id: ""
	I0920 18:21:00.709476   75577 logs.go:276] 0 containers: []
	W0920 18:21:00.709488   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:00.709496   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:00.709566   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:00.744098   75577 cri.go:89] found id: ""
	I0920 18:21:00.744123   75577 logs.go:276] 0 containers: []
	W0920 18:21:00.744134   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:00.744144   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:00.744164   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:00.792576   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:00.792612   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:00.807766   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:00.807794   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:00.883626   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:00.883649   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:00.883663   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:00.966233   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:00.966269   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:00.692181   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:03.192227   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:04.024582   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:06.029376   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:04.301078   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:06.301727   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:03.505502   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:03.520407   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:03.520534   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:03.557891   75577 cri.go:89] found id: ""
	I0920 18:21:03.557926   75577 logs.go:276] 0 containers: []
	W0920 18:21:03.557936   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:03.557944   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:03.558006   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:03.593869   75577 cri.go:89] found id: ""
	I0920 18:21:03.593898   75577 logs.go:276] 0 containers: []
	W0920 18:21:03.593908   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:03.593916   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:03.593982   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:03.629270   75577 cri.go:89] found id: ""
	I0920 18:21:03.629302   75577 logs.go:276] 0 containers: []
	W0920 18:21:03.629311   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:03.629317   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:03.629366   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:03.664658   75577 cri.go:89] found id: ""
	I0920 18:21:03.664688   75577 logs.go:276] 0 containers: []
	W0920 18:21:03.664699   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:03.664706   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:03.664769   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:03.700846   75577 cri.go:89] found id: ""
	I0920 18:21:03.700868   75577 logs.go:276] 0 containers: []
	W0920 18:21:03.700875   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:03.700882   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:03.700941   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:03.736311   75577 cri.go:89] found id: ""
	I0920 18:21:03.736345   75577 logs.go:276] 0 containers: []
	W0920 18:21:03.736355   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:03.736363   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:03.736421   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:03.770760   75577 cri.go:89] found id: ""
	I0920 18:21:03.770788   75577 logs.go:276] 0 containers: []
	W0920 18:21:03.770800   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:03.770808   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:03.770868   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:03.808724   75577 cri.go:89] found id: ""
	I0920 18:21:03.808749   75577 logs.go:276] 0 containers: []
	W0920 18:21:03.808756   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:03.808764   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:03.808775   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:03.851231   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:03.851265   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:03.899607   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:03.899641   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:03.915051   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:03.915079   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:03.984016   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:03.984038   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:03.984053   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:06.564776   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:06.578524   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:06.578604   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:06.614305   75577 cri.go:89] found id: ""
	I0920 18:21:06.614340   75577 logs.go:276] 0 containers: []
	W0920 18:21:06.614351   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:06.614365   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:06.614427   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:06.648929   75577 cri.go:89] found id: ""
	I0920 18:21:06.648958   75577 logs.go:276] 0 containers: []
	W0920 18:21:06.648968   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:06.648976   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:06.649036   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:06.689975   75577 cri.go:89] found id: ""
	I0920 18:21:06.690016   75577 logs.go:276] 0 containers: []
	W0920 18:21:06.690027   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:06.690034   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:06.690092   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:06.729721   75577 cri.go:89] found id: ""
	I0920 18:21:06.729747   75577 logs.go:276] 0 containers: []
	W0920 18:21:06.729755   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:06.729762   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:06.729808   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:06.766310   75577 cri.go:89] found id: ""
	I0920 18:21:06.766343   75577 logs.go:276] 0 containers: []
	W0920 18:21:06.766354   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:06.766361   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:06.766437   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:06.800801   75577 cri.go:89] found id: ""
	I0920 18:21:06.800829   75577 logs.go:276] 0 containers: []
	W0920 18:21:06.800839   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:06.800847   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:06.800909   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:06.836391   75577 cri.go:89] found id: ""
	I0920 18:21:06.836429   75577 logs.go:276] 0 containers: []
	W0920 18:21:06.836447   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:06.836455   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:06.836521   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:06.873014   75577 cri.go:89] found id: ""
	I0920 18:21:06.873041   75577 logs.go:276] 0 containers: []
	W0920 18:21:06.873049   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:06.873057   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:06.873070   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:06.953084   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:06.953110   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:06.953122   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:07.032398   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:07.032434   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:07.082196   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:07.082224   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:07.159604   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:07.159640   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:05.691350   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:07.692127   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:08.526757   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:10.526990   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:08.801736   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:11.301001   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:09.675022   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:09.687924   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:09.687999   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:09.722963   75577 cri.go:89] found id: ""
	I0920 18:21:09.722994   75577 logs.go:276] 0 containers: []
	W0920 18:21:09.723005   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:09.723017   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:09.723084   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:09.756433   75577 cri.go:89] found id: ""
	I0920 18:21:09.756458   75577 logs.go:276] 0 containers: []
	W0920 18:21:09.756472   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:09.756486   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:09.756549   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:09.792200   75577 cri.go:89] found id: ""
	I0920 18:21:09.792232   75577 logs.go:276] 0 containers: []
	W0920 18:21:09.792248   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:09.792256   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:09.792323   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:09.829058   75577 cri.go:89] found id: ""
	I0920 18:21:09.829081   75577 logs.go:276] 0 containers: []
	W0920 18:21:09.829098   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:09.829104   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:09.829150   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:09.863225   75577 cri.go:89] found id: ""
	I0920 18:21:09.863252   75577 logs.go:276] 0 containers: []
	W0920 18:21:09.863259   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:09.863265   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:09.863312   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:09.897682   75577 cri.go:89] found id: ""
	I0920 18:21:09.897708   75577 logs.go:276] 0 containers: []
	W0920 18:21:09.897725   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:09.897731   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:09.897789   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:09.936799   75577 cri.go:89] found id: ""
	I0920 18:21:09.936826   75577 logs.go:276] 0 containers: []
	W0920 18:21:09.936836   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:09.936843   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:09.936904   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:09.970502   75577 cri.go:89] found id: ""
	I0920 18:21:09.970539   75577 logs.go:276] 0 containers: []
	W0920 18:21:09.970550   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:09.970560   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:09.970574   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:09.983676   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:09.983703   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:10.058844   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:10.058865   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:10.058874   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:10.136998   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:10.137040   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:10.178902   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:10.178933   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:12.729619   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:09.692361   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:12.192257   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:13.025403   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:15.025667   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:13.302087   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:15.802019   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:12.743816   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:12.743876   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:12.779407   75577 cri.go:89] found id: ""
	I0920 18:21:12.779431   75577 logs.go:276] 0 containers: []
	W0920 18:21:12.779439   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:12.779446   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:12.779503   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:12.820700   75577 cri.go:89] found id: ""
	I0920 18:21:12.820730   75577 logs.go:276] 0 containers: []
	W0920 18:21:12.820740   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:12.820748   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:12.820812   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:12.862179   75577 cri.go:89] found id: ""
	I0920 18:21:12.862210   75577 logs.go:276] 0 containers: []
	W0920 18:21:12.862221   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:12.862228   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:12.862284   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:12.908976   75577 cri.go:89] found id: ""
	I0920 18:21:12.908999   75577 logs.go:276] 0 containers: []
	W0920 18:21:12.909007   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:12.909013   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:12.909072   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:12.953653   75577 cri.go:89] found id: ""
	I0920 18:21:12.953688   75577 logs.go:276] 0 containers: []
	W0920 18:21:12.953696   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:12.953702   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:12.953757   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:12.998519   75577 cri.go:89] found id: ""
	I0920 18:21:12.998546   75577 logs.go:276] 0 containers: []
	W0920 18:21:12.998557   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:12.998564   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:12.998640   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:13.035518   75577 cri.go:89] found id: ""
	I0920 18:21:13.035541   75577 logs.go:276] 0 containers: []
	W0920 18:21:13.035549   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:13.035555   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:13.035609   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:13.073551   75577 cri.go:89] found id: ""
	I0920 18:21:13.073591   75577 logs.go:276] 0 containers: []
	W0920 18:21:13.073605   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:13.073617   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:13.073638   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:13.125118   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:13.125155   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:13.139384   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:13.139415   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:13.204980   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:13.205006   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:13.205021   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:13.289400   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:13.289446   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:15.829405   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:15.842291   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:15.842354   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:15.875886   75577 cri.go:89] found id: ""
	I0920 18:21:15.875921   75577 logs.go:276] 0 containers: []
	W0920 18:21:15.875930   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:15.875937   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:15.875990   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:15.911706   75577 cri.go:89] found id: ""
	I0920 18:21:15.911746   75577 logs.go:276] 0 containers: []
	W0920 18:21:15.911760   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:15.911768   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:15.911831   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:15.950117   75577 cri.go:89] found id: ""
	I0920 18:21:15.950150   75577 logs.go:276] 0 containers: []
	W0920 18:21:15.950161   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:15.950170   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:15.950243   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:15.986029   75577 cri.go:89] found id: ""
	I0920 18:21:15.986066   75577 logs.go:276] 0 containers: []
	W0920 18:21:15.986083   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:15.986091   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:15.986159   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:16.020307   75577 cri.go:89] found id: ""
	I0920 18:21:16.020335   75577 logs.go:276] 0 containers: []
	W0920 18:21:16.020346   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:16.020354   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:16.020412   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:16.054744   75577 cri.go:89] found id: ""
	I0920 18:21:16.054769   75577 logs.go:276] 0 containers: []
	W0920 18:21:16.054777   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:16.054782   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:16.054831   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:16.091756   75577 cri.go:89] found id: ""
	I0920 18:21:16.091789   75577 logs.go:276] 0 containers: []
	W0920 18:21:16.091800   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:16.091807   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:16.091868   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:16.127818   75577 cri.go:89] found id: ""
	I0920 18:21:16.127843   75577 logs.go:276] 0 containers: []
	W0920 18:21:16.127851   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:16.127861   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:16.127877   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:16.200114   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:16.200138   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:16.200149   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:16.279473   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:16.279508   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:16.319139   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:16.319171   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:16.370721   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:16.370769   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:14.192782   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:16.192866   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:18.691599   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:17.524026   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:19.524385   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:21.525244   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:18.301696   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:20.301993   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:18.884966   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:18.900270   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:18.900355   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:18.938541   75577 cri.go:89] found id: ""
	I0920 18:21:18.938582   75577 logs.go:276] 0 containers: []
	W0920 18:21:18.938594   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:18.938602   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:18.938673   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:18.972343   75577 cri.go:89] found id: ""
	I0920 18:21:18.972380   75577 logs.go:276] 0 containers: []
	W0920 18:21:18.972391   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:18.972400   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:18.972458   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:19.011996   75577 cri.go:89] found id: ""
	I0920 18:21:19.012037   75577 logs.go:276] 0 containers: []
	W0920 18:21:19.012048   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:19.012054   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:19.012105   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:19.053789   75577 cri.go:89] found id: ""
	I0920 18:21:19.053818   75577 logs.go:276] 0 containers: []
	W0920 18:21:19.053839   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:19.053849   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:19.053907   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:19.094830   75577 cri.go:89] found id: ""
	I0920 18:21:19.094862   75577 logs.go:276] 0 containers: []
	W0920 18:21:19.094872   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:19.094881   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:19.094941   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:19.133894   75577 cri.go:89] found id: ""
	I0920 18:21:19.133923   75577 logs.go:276] 0 containers: []
	W0920 18:21:19.133934   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:19.133943   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:19.134001   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:19.171640   75577 cri.go:89] found id: ""
	I0920 18:21:19.171662   75577 logs.go:276] 0 containers: []
	W0920 18:21:19.171670   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:19.171676   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:19.171730   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:19.207821   75577 cri.go:89] found id: ""
	I0920 18:21:19.207852   75577 logs.go:276] 0 containers: []
	W0920 18:21:19.207861   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:19.207869   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:19.207880   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:19.292486   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:19.292530   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:19.332664   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:19.332695   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:19.383793   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:19.383828   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:19.399234   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:19.399269   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:19.470636   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:21.971772   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:21.985558   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:21.985630   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:22.018745   75577 cri.go:89] found id: ""
	I0920 18:21:22.018775   75577 logs.go:276] 0 containers: []
	W0920 18:21:22.018785   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:22.018795   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:22.018854   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:22.053586   75577 cri.go:89] found id: ""
	I0920 18:21:22.053617   75577 logs.go:276] 0 containers: []
	W0920 18:21:22.053627   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:22.053635   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:22.053697   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:22.088290   75577 cri.go:89] found id: ""
	I0920 18:21:22.088320   75577 logs.go:276] 0 containers: []
	W0920 18:21:22.088337   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:22.088344   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:22.088394   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:22.121026   75577 cri.go:89] found id: ""
	I0920 18:21:22.121050   75577 logs.go:276] 0 containers: []
	W0920 18:21:22.121057   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:22.121063   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:22.121117   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:22.153669   75577 cri.go:89] found id: ""
	I0920 18:21:22.153702   75577 logs.go:276] 0 containers: []
	W0920 18:21:22.153715   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:22.153725   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:22.153793   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:22.192179   75577 cri.go:89] found id: ""
	I0920 18:21:22.192208   75577 logs.go:276] 0 containers: []
	W0920 18:21:22.192218   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:22.192226   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:22.192294   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:22.226115   75577 cri.go:89] found id: ""
	I0920 18:21:22.226142   75577 logs.go:276] 0 containers: []
	W0920 18:21:22.226153   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:22.226161   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:22.226231   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:22.259032   75577 cri.go:89] found id: ""
	I0920 18:21:22.259059   75577 logs.go:276] 0 containers: []
	W0920 18:21:22.259070   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:22.259080   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:22.259094   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:22.308989   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:22.309020   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:22.322084   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:22.322113   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:22.397567   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:22.397587   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:22.397598   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:22.476551   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:22.476596   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:20.691837   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:23.191939   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:24.024785   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:26.024824   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:22.801102   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:24.801697   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:26.801789   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:25.017659   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:25.032637   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:25.032698   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:25.067671   75577 cri.go:89] found id: ""
	I0920 18:21:25.067701   75577 logs.go:276] 0 containers: []
	W0920 18:21:25.067711   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:25.067718   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:25.067774   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:25.109354   75577 cri.go:89] found id: ""
	I0920 18:21:25.109385   75577 logs.go:276] 0 containers: []
	W0920 18:21:25.109396   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:25.109403   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:25.109463   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:25.142888   75577 cri.go:89] found id: ""
	I0920 18:21:25.142924   75577 logs.go:276] 0 containers: []
	W0920 18:21:25.142935   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:25.142942   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:25.143004   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:25.179005   75577 cri.go:89] found id: ""
	I0920 18:21:25.179032   75577 logs.go:276] 0 containers: []
	W0920 18:21:25.179043   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:25.179050   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:25.179114   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:25.211639   75577 cri.go:89] found id: ""
	I0920 18:21:25.211662   75577 logs.go:276] 0 containers: []
	W0920 18:21:25.211669   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:25.211676   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:25.211729   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:25.246678   75577 cri.go:89] found id: ""
	I0920 18:21:25.246709   75577 logs.go:276] 0 containers: []
	W0920 18:21:25.246718   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:25.246724   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:25.246780   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:25.284187   75577 cri.go:89] found id: ""
	I0920 18:21:25.284216   75577 logs.go:276] 0 containers: []
	W0920 18:21:25.284240   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:25.284247   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:25.284309   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:25.318882   75577 cri.go:89] found id: ""
	I0920 18:21:25.318917   75577 logs.go:276] 0 containers: []
	W0920 18:21:25.318929   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:25.318940   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:25.318954   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:25.357017   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:25.357046   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:25.407584   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:25.407627   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:25.421405   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:25.421437   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:25.492849   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:25.492870   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:25.492881   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:25.192551   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:27.691730   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:28.025042   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:30.025806   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:29.301637   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:31.302080   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:28.076101   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:28.088741   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:28.088805   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:28.124889   75577 cri.go:89] found id: ""
	I0920 18:21:28.124925   75577 logs.go:276] 0 containers: []
	W0920 18:21:28.124944   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:28.124952   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:28.125013   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:28.158590   75577 cri.go:89] found id: ""
	I0920 18:21:28.158615   75577 logs.go:276] 0 containers: []
	W0920 18:21:28.158623   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:28.158629   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:28.158677   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:28.195636   75577 cri.go:89] found id: ""
	I0920 18:21:28.195670   75577 logs.go:276] 0 containers: []
	W0920 18:21:28.195683   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:28.195692   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:28.195762   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:28.228714   75577 cri.go:89] found id: ""
	I0920 18:21:28.228759   75577 logs.go:276] 0 containers: []
	W0920 18:21:28.228771   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:28.228780   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:28.228840   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:28.261488   75577 cri.go:89] found id: ""
	I0920 18:21:28.261519   75577 logs.go:276] 0 containers: []
	W0920 18:21:28.261531   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:28.261538   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:28.261606   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:28.293534   75577 cri.go:89] found id: ""
	I0920 18:21:28.293560   75577 logs.go:276] 0 containers: []
	W0920 18:21:28.293568   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:28.293573   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:28.293629   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:28.330095   75577 cri.go:89] found id: ""
	I0920 18:21:28.330120   75577 logs.go:276] 0 containers: []
	W0920 18:21:28.330128   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:28.330134   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:28.330191   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:28.365643   75577 cri.go:89] found id: ""
	I0920 18:21:28.365675   75577 logs.go:276] 0 containers: []
	W0920 18:21:28.365684   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:28.365693   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:28.365712   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:28.379982   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:28.380007   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:28.456355   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:28.456386   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:28.456402   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:28.538725   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:28.538759   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:28.576067   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:28.576104   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:31.129705   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:31.145814   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:31.145919   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:31.181469   75577 cri.go:89] found id: ""
	I0920 18:21:31.181497   75577 logs.go:276] 0 containers: []
	W0920 18:21:31.181507   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:31.181514   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:31.181562   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:31.216391   75577 cri.go:89] found id: ""
	I0920 18:21:31.216416   75577 logs.go:276] 0 containers: []
	W0920 18:21:31.216426   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:31.216433   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:31.216492   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:31.250236   75577 cri.go:89] found id: ""
	I0920 18:21:31.250266   75577 logs.go:276] 0 containers: []
	W0920 18:21:31.250277   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:31.250285   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:31.250357   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:31.283406   75577 cri.go:89] found id: ""
	I0920 18:21:31.283430   75577 logs.go:276] 0 containers: []
	W0920 18:21:31.283439   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:31.283446   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:31.283519   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:31.317276   75577 cri.go:89] found id: ""
	I0920 18:21:31.317308   75577 logs.go:276] 0 containers: []
	W0920 18:21:31.317327   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:31.317335   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:31.317400   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:31.351563   75577 cri.go:89] found id: ""
	I0920 18:21:31.351588   75577 logs.go:276] 0 containers: []
	W0920 18:21:31.351595   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:31.351600   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:31.351652   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:31.385932   75577 cri.go:89] found id: ""
	I0920 18:21:31.385972   75577 logs.go:276] 0 containers: []
	W0920 18:21:31.385984   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:31.385992   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:31.386056   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:31.422679   75577 cri.go:89] found id: ""
	I0920 18:21:31.422718   75577 logs.go:276] 0 containers: []
	W0920 18:21:31.422729   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:31.422740   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:31.422755   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:31.475827   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:31.475867   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:31.489324   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:31.489354   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:31.560357   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:31.560377   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:31.560390   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:31.642137   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:31.642171   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:30.191820   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:32.692178   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:32.524953   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:35.025736   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:33.801586   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:36.301145   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:34.179063   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:34.193077   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:34.193157   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:34.227451   75577 cri.go:89] found id: ""
	I0920 18:21:34.227484   75577 logs.go:276] 0 containers: []
	W0920 18:21:34.227495   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:34.227503   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:34.227571   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:34.263740   75577 cri.go:89] found id: ""
	I0920 18:21:34.263766   75577 logs.go:276] 0 containers: []
	W0920 18:21:34.263774   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:34.263779   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:34.263838   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:34.298094   75577 cri.go:89] found id: ""
	I0920 18:21:34.298121   75577 logs.go:276] 0 containers: []
	W0920 18:21:34.298132   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:34.298139   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:34.298202   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:34.334253   75577 cri.go:89] found id: ""
	I0920 18:21:34.334281   75577 logs.go:276] 0 containers: []
	W0920 18:21:34.334290   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:34.334297   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:34.334361   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:34.370656   75577 cri.go:89] found id: ""
	I0920 18:21:34.370688   75577 logs.go:276] 0 containers: []
	W0920 18:21:34.370699   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:34.370706   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:34.370759   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:34.405658   75577 cri.go:89] found id: ""
	I0920 18:21:34.405681   75577 logs.go:276] 0 containers: []
	W0920 18:21:34.405689   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:34.405695   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:34.405748   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:34.442284   75577 cri.go:89] found id: ""
	I0920 18:21:34.442311   75577 logs.go:276] 0 containers: []
	W0920 18:21:34.442319   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:34.442325   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:34.442394   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:34.477895   75577 cri.go:89] found id: ""
	I0920 18:21:34.477927   75577 logs.go:276] 0 containers: []
	W0920 18:21:34.477939   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:34.477950   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:34.477964   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:34.531225   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:34.531256   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:34.544325   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:34.544359   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:34.615914   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:34.615933   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:34.615948   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:34.692119   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:34.692160   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:37.228930   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:37.242070   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:37.242151   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:37.276489   75577 cri.go:89] found id: ""
	I0920 18:21:37.276531   75577 logs.go:276] 0 containers: []
	W0920 18:21:37.276543   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:37.276551   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:37.276617   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:37.311106   75577 cri.go:89] found id: ""
	I0920 18:21:37.311140   75577 logs.go:276] 0 containers: []
	W0920 18:21:37.311148   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:37.311156   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:37.311217   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:37.343147   75577 cri.go:89] found id: ""
	I0920 18:21:37.343173   75577 logs.go:276] 0 containers: []
	W0920 18:21:37.343181   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:37.343188   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:37.343237   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:37.378858   75577 cri.go:89] found id: ""
	I0920 18:21:37.378934   75577 logs.go:276] 0 containers: []
	W0920 18:21:37.378947   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:37.378955   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:37.379005   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:37.412321   75577 cri.go:89] found id: ""
	I0920 18:21:37.412355   75577 logs.go:276] 0 containers: []
	W0920 18:21:37.412366   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:37.412374   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:37.412443   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:37.447478   75577 cri.go:89] found id: ""
	I0920 18:21:37.447510   75577 logs.go:276] 0 containers: []
	W0920 18:21:37.447520   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:37.447526   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:37.447580   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:37.481172   75577 cri.go:89] found id: ""
	I0920 18:21:37.481201   75577 logs.go:276] 0 containers: []
	W0920 18:21:37.481209   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:37.481216   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:37.481269   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:37.518001   75577 cri.go:89] found id: ""
	I0920 18:21:37.518032   75577 logs.go:276] 0 containers: []
	W0920 18:21:37.518041   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:37.518050   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:37.518062   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:37.567675   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:37.567707   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:37.582279   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:37.582308   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:37.654514   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:37.654546   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:37.654563   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:37.735895   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:37.735929   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:35.192406   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:37.692460   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:37.524744   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:39.526068   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:42.024627   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:38.301432   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:40.302489   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:40.276416   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:40.291651   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:40.291713   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:40.328309   75577 cri.go:89] found id: ""
	I0920 18:21:40.328346   75577 logs.go:276] 0 containers: []
	W0920 18:21:40.328359   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:40.328378   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:40.328441   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:40.368126   75577 cri.go:89] found id: ""
	I0920 18:21:40.368154   75577 logs.go:276] 0 containers: []
	W0920 18:21:40.368162   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:40.368167   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:40.368217   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:40.408324   75577 cri.go:89] found id: ""
	I0920 18:21:40.408359   75577 logs.go:276] 0 containers: []
	W0920 18:21:40.408371   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:40.408380   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:40.408448   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:40.453867   75577 cri.go:89] found id: ""
	I0920 18:21:40.453892   75577 logs.go:276] 0 containers: []
	W0920 18:21:40.453900   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:40.453906   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:40.453969   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:40.500625   75577 cri.go:89] found id: ""
	I0920 18:21:40.500660   75577 logs.go:276] 0 containers: []
	W0920 18:21:40.500670   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:40.500678   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:40.500750   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:40.533998   75577 cri.go:89] found id: ""
	I0920 18:21:40.534028   75577 logs.go:276] 0 containers: []
	W0920 18:21:40.534039   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:40.534048   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:40.534111   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:40.566279   75577 cri.go:89] found id: ""
	I0920 18:21:40.566308   75577 logs.go:276] 0 containers: []
	W0920 18:21:40.566319   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:40.566326   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:40.566392   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:40.599154   75577 cri.go:89] found id: ""
	I0920 18:21:40.599179   75577 logs.go:276] 0 containers: []
	W0920 18:21:40.599186   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:40.599194   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:40.599210   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:40.668568   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:40.668596   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:40.668608   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:40.747895   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:40.747969   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:40.789568   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:40.789604   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:40.839816   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:40.839852   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:39.693537   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:42.192617   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:44.025059   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:46.524700   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:42.801135   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:44.802083   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:46.802537   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:43.354847   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:43.369077   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:43.369167   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:43.404667   75577 cri.go:89] found id: ""
	I0920 18:21:43.404698   75577 logs.go:276] 0 containers: []
	W0920 18:21:43.404710   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:43.404718   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:43.404778   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:43.440968   75577 cri.go:89] found id: ""
	I0920 18:21:43.441001   75577 logs.go:276] 0 containers: []
	W0920 18:21:43.441012   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:43.441021   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:43.441088   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:43.474481   75577 cri.go:89] found id: ""
	I0920 18:21:43.474511   75577 logs.go:276] 0 containers: []
	W0920 18:21:43.474520   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:43.474529   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:43.474592   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:43.508196   75577 cri.go:89] found id: ""
	I0920 18:21:43.508228   75577 logs.go:276] 0 containers: []
	W0920 18:21:43.508241   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:43.508248   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:43.508307   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:43.543679   75577 cri.go:89] found id: ""
	I0920 18:21:43.543710   75577 logs.go:276] 0 containers: []
	W0920 18:21:43.543721   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:43.543728   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:43.543788   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:43.577115   75577 cri.go:89] found id: ""
	I0920 18:21:43.577138   75577 logs.go:276] 0 containers: []
	W0920 18:21:43.577145   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:43.577152   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:43.577198   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:43.611552   75577 cri.go:89] found id: ""
	I0920 18:21:43.611588   75577 logs.go:276] 0 containers: []
	W0920 18:21:43.611602   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:43.611616   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:43.611685   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:43.647913   75577 cri.go:89] found id: ""
	I0920 18:21:43.647946   75577 logs.go:276] 0 containers: []
	W0920 18:21:43.647957   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:43.647969   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:43.647983   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:43.661014   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:43.661043   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:43.736584   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:43.736607   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:43.736620   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:43.814340   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:43.814380   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:43.855968   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:43.855996   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:46.410577   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:46.423733   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:46.423800   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:46.456819   75577 cri.go:89] found id: ""
	I0920 18:21:46.456847   75577 logs.go:276] 0 containers: []
	W0920 18:21:46.456855   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:46.456861   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:46.456927   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:46.493259   75577 cri.go:89] found id: ""
	I0920 18:21:46.493291   75577 logs.go:276] 0 containers: []
	W0920 18:21:46.493301   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:46.493307   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:46.493372   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:46.531200   75577 cri.go:89] found id: ""
	I0920 18:21:46.531233   75577 logs.go:276] 0 containers: []
	W0920 18:21:46.531241   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:46.531247   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:46.531294   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:46.567508   75577 cri.go:89] found id: ""
	I0920 18:21:46.567530   75577 logs.go:276] 0 containers: []
	W0920 18:21:46.567538   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:46.567544   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:46.567601   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:46.600257   75577 cri.go:89] found id: ""
	I0920 18:21:46.600290   75577 logs.go:276] 0 containers: []
	W0920 18:21:46.600303   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:46.600311   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:46.600375   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:46.635568   75577 cri.go:89] found id: ""
	I0920 18:21:46.635598   75577 logs.go:276] 0 containers: []
	W0920 18:21:46.635606   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:46.635612   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:46.635668   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:46.668257   75577 cri.go:89] found id: ""
	I0920 18:21:46.668304   75577 logs.go:276] 0 containers: []
	W0920 18:21:46.668316   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:46.668324   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:46.668392   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:46.703631   75577 cri.go:89] found id: ""
	I0920 18:21:46.703654   75577 logs.go:276] 0 containers: []
	W0920 18:21:46.703662   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:46.703671   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:46.703680   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:46.780232   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:46.780295   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:46.816623   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:46.816647   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:46.868911   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:46.868954   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:46.882716   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:46.882743   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:46.965166   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:44.692655   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:47.192969   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:48.524828   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:50.525045   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:49.301263   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:51.802343   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:49.466238   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:49.480167   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:49.480228   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:49.513507   75577 cri.go:89] found id: ""
	I0920 18:21:49.513537   75577 logs.go:276] 0 containers: []
	W0920 18:21:49.513549   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:49.513558   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:49.513620   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:49.548708   75577 cri.go:89] found id: ""
	I0920 18:21:49.548739   75577 logs.go:276] 0 containers: []
	W0920 18:21:49.548750   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:49.548758   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:49.548810   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:49.583586   75577 cri.go:89] found id: ""
	I0920 18:21:49.583614   75577 logs.go:276] 0 containers: []
	W0920 18:21:49.583625   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:49.583633   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:49.583695   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:49.617574   75577 cri.go:89] found id: ""
	I0920 18:21:49.617605   75577 logs.go:276] 0 containers: []
	W0920 18:21:49.617616   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:49.617623   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:49.617684   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:49.656517   75577 cri.go:89] found id: ""
	I0920 18:21:49.656552   75577 logs.go:276] 0 containers: []
	W0920 18:21:49.656563   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:49.656571   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:49.656634   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:49.692848   75577 cri.go:89] found id: ""
	I0920 18:21:49.692870   75577 logs.go:276] 0 containers: []
	W0920 18:21:49.692877   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:49.692883   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:49.692950   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:49.728593   75577 cri.go:89] found id: ""
	I0920 18:21:49.728620   75577 logs.go:276] 0 containers: []
	W0920 18:21:49.728630   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:49.728643   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:49.728705   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:49.764187   75577 cri.go:89] found id: ""
	I0920 18:21:49.764218   75577 logs.go:276] 0 containers: []
	W0920 18:21:49.764229   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:49.764242   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:49.764260   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:49.837741   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:49.837764   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:49.837777   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:49.914941   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:49.914978   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:49.957609   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:49.957639   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:50.012075   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:50.012115   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:52.527722   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:52.542125   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:52.542206   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:52.579884   75577 cri.go:89] found id: ""
	I0920 18:21:52.579910   75577 logs.go:276] 0 containers: []
	W0920 18:21:52.579919   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:52.579924   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:52.579982   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:52.619138   75577 cri.go:89] found id: ""
	I0920 18:21:52.619167   75577 logs.go:276] 0 containers: []
	W0920 18:21:52.619180   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:52.619188   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:52.619246   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:52.654460   75577 cri.go:89] found id: ""
	I0920 18:21:52.654498   75577 logs.go:276] 0 containers: []
	W0920 18:21:52.654508   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:52.654515   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:52.654578   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:52.708015   75577 cri.go:89] found id: ""
	I0920 18:21:52.708042   75577 logs.go:276] 0 containers: []
	W0920 18:21:52.708051   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:52.708057   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:52.708127   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:49.692309   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:52.192466   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:53.025700   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:55.026088   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:53.802620   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:55.802856   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:52.743917   75577 cri.go:89] found id: ""
	I0920 18:21:52.743952   75577 logs.go:276] 0 containers: []
	W0920 18:21:52.743964   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:52.743972   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:52.744025   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:52.783458   75577 cri.go:89] found id: ""
	I0920 18:21:52.783481   75577 logs.go:276] 0 containers: []
	W0920 18:21:52.783488   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:52.783495   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:52.783552   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:52.817716   75577 cri.go:89] found id: ""
	I0920 18:21:52.817749   75577 logs.go:276] 0 containers: []
	W0920 18:21:52.817762   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:52.817771   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:52.817882   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:52.857141   75577 cri.go:89] found id: ""
	I0920 18:21:52.857169   75577 logs.go:276] 0 containers: []
	W0920 18:21:52.857180   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:52.857190   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:52.857204   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:52.910555   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:52.910597   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:52.923843   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:52.923873   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:52.994263   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:52.994296   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:52.994313   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:53.079782   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:53.079829   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:55.619418   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:55.633854   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:55.633922   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:55.669778   75577 cri.go:89] found id: ""
	I0920 18:21:55.669813   75577 logs.go:276] 0 containers: []
	W0920 18:21:55.669824   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:55.669853   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:55.669920   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:55.705687   75577 cri.go:89] found id: ""
	I0920 18:21:55.705724   75577 logs.go:276] 0 containers: []
	W0920 18:21:55.705736   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:55.705746   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:55.705811   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:55.739249   75577 cri.go:89] found id: ""
	I0920 18:21:55.739288   75577 logs.go:276] 0 containers: []
	W0920 18:21:55.739296   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:55.739302   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:55.739438   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:55.775077   75577 cri.go:89] found id: ""
	I0920 18:21:55.775109   75577 logs.go:276] 0 containers: []
	W0920 18:21:55.775121   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:55.775130   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:55.775181   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:55.810291   75577 cri.go:89] found id: ""
	I0920 18:21:55.810329   75577 logs.go:276] 0 containers: []
	W0920 18:21:55.810340   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:55.810349   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:55.810415   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:55.844440   75577 cri.go:89] found id: ""
	I0920 18:21:55.844468   75577 logs.go:276] 0 containers: []
	W0920 18:21:55.844476   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:55.844482   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:55.844551   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:55.878611   75577 cri.go:89] found id: ""
	I0920 18:21:55.878647   75577 logs.go:276] 0 containers: []
	W0920 18:21:55.878659   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:55.878667   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:55.878733   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:55.922064   75577 cri.go:89] found id: ""
	I0920 18:21:55.922101   75577 logs.go:276] 0 containers: []
	W0920 18:21:55.922114   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:55.922127   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:55.922147   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:55.979406   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:55.979445   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:55.993082   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:55.993111   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:56.065650   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:56.065701   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:56.065717   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:56.142640   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:56.142684   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:54.691482   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:56.691912   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:57.525280   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:00.025224   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:02.025722   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:58.300359   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:00.301252   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:02.302209   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:58.683524   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:58.698775   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:58.698833   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:58.737369   75577 cri.go:89] found id: ""
	I0920 18:21:58.737398   75577 logs.go:276] 0 containers: []
	W0920 18:21:58.737409   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:58.737417   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:58.737476   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:58.779504   75577 cri.go:89] found id: ""
	I0920 18:21:58.779533   75577 logs.go:276] 0 containers: []
	W0920 18:21:58.779544   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:58.779552   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:58.779610   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:58.814363   75577 cri.go:89] found id: ""
	I0920 18:21:58.814393   75577 logs.go:276] 0 containers: []
	W0920 18:21:58.814401   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:58.814407   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:58.814454   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:58.846219   75577 cri.go:89] found id: ""
	I0920 18:21:58.846242   75577 logs.go:276] 0 containers: []
	W0920 18:21:58.846251   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:58.846256   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:58.846307   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:58.882387   75577 cri.go:89] found id: ""
	I0920 18:21:58.882423   75577 logs.go:276] 0 containers: []
	W0920 18:21:58.882431   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:58.882437   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:58.882497   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:58.918632   75577 cri.go:89] found id: ""
	I0920 18:21:58.918668   75577 logs.go:276] 0 containers: []
	W0920 18:21:58.918679   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:58.918686   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:58.918751   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:58.953521   75577 cri.go:89] found id: ""
	I0920 18:21:58.953547   75577 logs.go:276] 0 containers: []
	W0920 18:21:58.953557   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:58.953564   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:58.953624   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:58.987412   75577 cri.go:89] found id: ""
	I0920 18:21:58.987439   75577 logs.go:276] 0 containers: []
	W0920 18:21:58.987447   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:58.987457   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:58.987471   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:59.077108   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:59.077169   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:59.141723   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:59.141758   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:59.208035   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:59.208081   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:59.221380   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:59.221404   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:59.297151   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:22:01.797684   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:22:01.812275   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:22:01.812338   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:22:01.846431   75577 cri.go:89] found id: ""
	I0920 18:22:01.846459   75577 logs.go:276] 0 containers: []
	W0920 18:22:01.846477   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:22:01.846483   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:22:01.846535   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:22:01.889016   75577 cri.go:89] found id: ""
	I0920 18:22:01.889049   75577 logs.go:276] 0 containers: []
	W0920 18:22:01.889061   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:22:01.889069   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:22:01.889144   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:22:01.926048   75577 cri.go:89] found id: ""
	I0920 18:22:01.926078   75577 logs.go:276] 0 containers: []
	W0920 18:22:01.926090   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:22:01.926108   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:22:01.926185   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:22:01.964510   75577 cri.go:89] found id: ""
	I0920 18:22:01.964536   75577 logs.go:276] 0 containers: []
	W0920 18:22:01.964547   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:22:01.964555   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:22:01.964624   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:22:02.005549   75577 cri.go:89] found id: ""
	I0920 18:22:02.005575   75577 logs.go:276] 0 containers: []
	W0920 18:22:02.005583   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:22:02.005588   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:22:02.005642   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:22:02.047359   75577 cri.go:89] found id: ""
	I0920 18:22:02.047385   75577 logs.go:276] 0 containers: []
	W0920 18:22:02.047393   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:22:02.047399   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:22:02.047455   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:22:02.082961   75577 cri.go:89] found id: ""
	I0920 18:22:02.082999   75577 logs.go:276] 0 containers: []
	W0920 18:22:02.083008   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:22:02.083015   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:22:02.083062   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:22:02.117696   75577 cri.go:89] found id: ""
	I0920 18:22:02.117732   75577 logs.go:276] 0 containers: []
	W0920 18:22:02.117741   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:22:02.117753   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:22:02.117769   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:22:02.131282   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:22:02.131311   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:22:02.200738   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:22:02.200760   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:22:02.200776   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:22:02.281162   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:22:02.281200   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:22:02.321854   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:22:02.321888   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:59.191434   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:01.192475   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:01.685900   75086 pod_ready.go:82] duration metric: took 4m0.000752648s for pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace to be "Ready" ...
	E0920 18:22:01.685945   75086 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace to be "Ready" (will not retry!)
	I0920 18:22:01.685972   75086 pod_ready.go:39] duration metric: took 4m13.051752581s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:22:01.686033   75086 kubeadm.go:597] duration metric: took 4m20.498687791s to restartPrimaryControlPlane
	W0920 18:22:01.686114   75086 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0920 18:22:01.686150   75086 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0920 18:22:04.026729   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:06.524404   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:04.803249   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:07.301696   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:04.874353   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:22:04.889710   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:22:04.889773   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:22:04.945719   75577 cri.go:89] found id: ""
	I0920 18:22:04.945745   75577 logs.go:276] 0 containers: []
	W0920 18:22:04.945753   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:22:04.945759   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:22:04.945802   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:22:04.995492   75577 cri.go:89] found id: ""
	I0920 18:22:04.995535   75577 logs.go:276] 0 containers: []
	W0920 18:22:04.995547   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:22:04.995555   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:22:04.995615   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:22:05.037827   75577 cri.go:89] found id: ""
	I0920 18:22:05.037871   75577 logs.go:276] 0 containers: []
	W0920 18:22:05.037882   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:22:05.037888   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:22:05.037935   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:22:05.077652   75577 cri.go:89] found id: ""
	I0920 18:22:05.077691   75577 logs.go:276] 0 containers: []
	W0920 18:22:05.077704   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:22:05.077712   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:22:05.077772   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:22:05.114472   75577 cri.go:89] found id: ""
	I0920 18:22:05.114509   75577 logs.go:276] 0 containers: []
	W0920 18:22:05.114520   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:22:05.114527   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:22:05.114590   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:22:05.150809   75577 cri.go:89] found id: ""
	I0920 18:22:05.150841   75577 logs.go:276] 0 containers: []
	W0920 18:22:05.150853   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:22:05.150860   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:22:05.150908   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:22:05.188880   75577 cri.go:89] found id: ""
	I0920 18:22:05.188910   75577 logs.go:276] 0 containers: []
	W0920 18:22:05.188921   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:22:05.188929   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:22:05.188990   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:22:05.224990   75577 cri.go:89] found id: ""
	I0920 18:22:05.225015   75577 logs.go:276] 0 containers: []
	W0920 18:22:05.225023   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:22:05.225032   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:22:05.225043   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:22:05.298000   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:22:05.298023   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:22:05.298037   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:22:05.382969   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:22:05.383008   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:22:05.424911   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:22:05.424940   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:22:05.475988   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:22:05.476024   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:22:08.525098   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:10.525321   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:09.801936   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:12.301073   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:07.990660   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:22:08.005572   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:22:08.005637   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:22:08.044354   75577 cri.go:89] found id: ""
	I0920 18:22:08.044381   75577 logs.go:276] 0 containers: []
	W0920 18:22:08.044390   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:22:08.044401   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:22:08.044449   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:22:08.082899   75577 cri.go:89] found id: ""
	I0920 18:22:08.082928   75577 logs.go:276] 0 containers: []
	W0920 18:22:08.082939   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:22:08.082948   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:22:08.083009   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:22:08.129297   75577 cri.go:89] found id: ""
	I0920 18:22:08.129325   75577 logs.go:276] 0 containers: []
	W0920 18:22:08.129335   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:22:08.129343   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:22:08.129404   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:22:08.168744   75577 cri.go:89] found id: ""
	I0920 18:22:08.168774   75577 logs.go:276] 0 containers: []
	W0920 18:22:08.168787   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:22:08.168792   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:22:08.168849   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:22:08.203830   75577 cri.go:89] found id: ""
	I0920 18:22:08.203860   75577 logs.go:276] 0 containers: []
	W0920 18:22:08.203871   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:22:08.203879   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:22:08.203940   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:22:08.238226   75577 cri.go:89] found id: ""
	I0920 18:22:08.238250   75577 logs.go:276] 0 containers: []
	W0920 18:22:08.238258   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:22:08.238263   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:22:08.238331   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:22:08.271457   75577 cri.go:89] found id: ""
	I0920 18:22:08.271485   75577 logs.go:276] 0 containers: []
	W0920 18:22:08.271495   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:22:08.271502   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:22:08.271563   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:22:08.306661   75577 cri.go:89] found id: ""
	I0920 18:22:08.306692   75577 logs.go:276] 0 containers: []
	W0920 18:22:08.306703   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:22:08.306712   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:22:08.306730   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:22:08.357760   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:22:08.357793   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:22:08.372625   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:22:08.372660   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:22:08.442477   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:22:08.442501   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:22:08.442517   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:22:08.528381   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:22:08.528412   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:22:11.064842   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:22:11.078086   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:22:11.078158   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:22:11.113454   75577 cri.go:89] found id: ""
	I0920 18:22:11.113485   75577 logs.go:276] 0 containers: []
	W0920 18:22:11.113497   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:22:11.113511   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:22:11.113575   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:22:11.148039   75577 cri.go:89] found id: ""
	I0920 18:22:11.148064   75577 logs.go:276] 0 containers: []
	W0920 18:22:11.148072   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:22:11.148078   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:22:11.148127   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:22:11.182667   75577 cri.go:89] found id: ""
	I0920 18:22:11.182697   75577 logs.go:276] 0 containers: []
	W0920 18:22:11.182708   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:22:11.182715   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:22:11.182776   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:22:11.217022   75577 cri.go:89] found id: ""
	I0920 18:22:11.217055   75577 logs.go:276] 0 containers: []
	W0920 18:22:11.217067   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:22:11.217075   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:22:11.217141   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:22:11.253970   75577 cri.go:89] found id: ""
	I0920 18:22:11.254001   75577 logs.go:276] 0 containers: []
	W0920 18:22:11.254012   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:22:11.254019   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:22:11.254085   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:22:11.288070   75577 cri.go:89] found id: ""
	I0920 18:22:11.288103   75577 logs.go:276] 0 containers: []
	W0920 18:22:11.288114   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:22:11.288121   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:22:11.288189   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:22:11.324215   75577 cri.go:89] found id: ""
	I0920 18:22:11.324240   75577 logs.go:276] 0 containers: []
	W0920 18:22:11.324254   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:22:11.324261   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:22:11.324319   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:22:11.359377   75577 cri.go:89] found id: ""
	I0920 18:22:11.359406   75577 logs.go:276] 0 containers: []
	W0920 18:22:11.359414   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:22:11.359423   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:22:11.359433   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:22:11.416479   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:22:11.416520   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:22:11.430240   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:22:11.430269   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:22:11.501531   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:22:11.501553   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:22:11.501565   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:22:11.580748   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:22:11.580787   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:22:13.024330   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:15.025065   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:17.026642   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:14.301652   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:16.802017   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:14.119084   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:22:14.132428   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:22:14.132505   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:22:14.167674   75577 cri.go:89] found id: ""
	I0920 18:22:14.167699   75577 logs.go:276] 0 containers: []
	W0920 18:22:14.167710   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:22:14.167717   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:22:14.167772   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:22:14.204570   75577 cri.go:89] found id: ""
	I0920 18:22:14.204595   75577 logs.go:276] 0 containers: []
	W0920 18:22:14.204603   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:22:14.204608   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:22:14.204655   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:22:14.241687   75577 cri.go:89] found id: ""
	I0920 18:22:14.241716   75577 logs.go:276] 0 containers: []
	W0920 18:22:14.241724   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:22:14.241731   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:22:14.241789   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:22:14.275776   75577 cri.go:89] found id: ""
	I0920 18:22:14.275802   75577 logs.go:276] 0 containers: []
	W0920 18:22:14.275810   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:22:14.275816   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:22:14.275872   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:22:14.309565   75577 cri.go:89] found id: ""
	I0920 18:22:14.309589   75577 logs.go:276] 0 containers: []
	W0920 18:22:14.309596   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:22:14.309602   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:22:14.309662   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:22:14.341858   75577 cri.go:89] found id: ""
	I0920 18:22:14.341884   75577 logs.go:276] 0 containers: []
	W0920 18:22:14.341892   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:22:14.341898   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:22:14.341963   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:22:14.377863   75577 cri.go:89] found id: ""
	I0920 18:22:14.377896   75577 logs.go:276] 0 containers: []
	W0920 18:22:14.377906   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:22:14.377912   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:22:14.377988   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:22:14.413182   75577 cri.go:89] found id: ""
	I0920 18:22:14.413207   75577 logs.go:276] 0 containers: []
	W0920 18:22:14.413214   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:22:14.413236   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:22:14.413253   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:22:14.466782   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:22:14.466820   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:22:14.480627   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:22:14.480668   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:22:14.552071   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:22:14.552104   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:22:14.552121   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:22:14.626481   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:22:14.626518   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:22:17.163264   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:22:17.181609   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:22:17.181684   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:22:17.219871   75577 cri.go:89] found id: ""
	I0920 18:22:17.219899   75577 logs.go:276] 0 containers: []
	W0920 18:22:17.219923   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:22:17.219932   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:22:17.220013   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:22:17.256315   75577 cri.go:89] found id: ""
	I0920 18:22:17.256346   75577 logs.go:276] 0 containers: []
	W0920 18:22:17.256356   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:22:17.256364   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:22:17.256434   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:22:17.294326   75577 cri.go:89] found id: ""
	I0920 18:22:17.294352   75577 logs.go:276] 0 containers: []
	W0920 18:22:17.294360   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:22:17.294366   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:22:17.294421   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:22:17.336461   75577 cri.go:89] found id: ""
	I0920 18:22:17.336491   75577 logs.go:276] 0 containers: []
	W0920 18:22:17.336500   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:22:17.336506   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:22:17.336568   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:22:17.376242   75577 cri.go:89] found id: ""
	I0920 18:22:17.376283   75577 logs.go:276] 0 containers: []
	W0920 18:22:17.376295   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:22:17.376302   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:22:17.376363   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:22:17.409147   75577 cri.go:89] found id: ""
	I0920 18:22:17.409180   75577 logs.go:276] 0 containers: []
	W0920 18:22:17.409190   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:22:17.409198   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:22:17.409269   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:22:17.444698   75577 cri.go:89] found id: ""
	I0920 18:22:17.444725   75577 logs.go:276] 0 containers: []
	W0920 18:22:17.444734   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:22:17.444738   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:22:17.444791   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:22:17.485914   75577 cri.go:89] found id: ""
	I0920 18:22:17.485948   75577 logs.go:276] 0 containers: []
	W0920 18:22:17.485959   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:22:17.485970   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:22:17.485983   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:22:17.523567   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:22:17.523601   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:22:17.588864   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:22:17.588905   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:22:17.605052   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:22:17.605092   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:22:17.677012   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:22:17.677039   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:22:17.677056   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:22:19.525154   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:22.025422   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:18.802052   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:20.804024   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:20.252258   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:22:20.267607   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:22:20.267698   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:22:20.302260   75577 cri.go:89] found id: ""
	I0920 18:22:20.302291   75577 logs.go:276] 0 containers: []
	W0920 18:22:20.302301   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:22:20.302309   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:22:20.302373   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:22:20.335343   75577 cri.go:89] found id: ""
	I0920 18:22:20.335377   75577 logs.go:276] 0 containers: []
	W0920 18:22:20.335389   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:22:20.335397   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:22:20.335460   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:22:20.369612   75577 cri.go:89] found id: ""
	I0920 18:22:20.369641   75577 logs.go:276] 0 containers: []
	W0920 18:22:20.369649   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:22:20.369655   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:22:20.369703   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:22:20.403703   75577 cri.go:89] found id: ""
	I0920 18:22:20.403732   75577 logs.go:276] 0 containers: []
	W0920 18:22:20.403740   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:22:20.403746   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:22:20.403804   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:22:20.436288   75577 cri.go:89] found id: ""
	I0920 18:22:20.436316   75577 logs.go:276] 0 containers: []
	W0920 18:22:20.436328   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:22:20.436336   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:22:20.436399   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:22:20.471536   75577 cri.go:89] found id: ""
	I0920 18:22:20.471572   75577 logs.go:276] 0 containers: []
	W0920 18:22:20.471584   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:22:20.471593   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:22:20.471657   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:22:20.506012   75577 cri.go:89] found id: ""
	I0920 18:22:20.506107   75577 logs.go:276] 0 containers: []
	W0920 18:22:20.506134   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:22:20.506143   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:22:20.506199   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:22:20.542614   75577 cri.go:89] found id: ""
	I0920 18:22:20.542650   75577 logs.go:276] 0 containers: []
	W0920 18:22:20.542660   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:22:20.542671   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:22:20.542687   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:22:20.596316   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:22:20.596357   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:22:20.610438   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:22:20.610465   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:22:20.688061   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:22:20.688079   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:22:20.688091   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:22:20.768249   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:22:20.768296   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:22:24.525259   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:25.524857   75264 pod_ready.go:82] duration metric: took 4m0.006563805s for pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace to be "Ready" ...
	E0920 18:22:25.524883   75264 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0920 18:22:25.524891   75264 pod_ready.go:39] duration metric: took 4m8.542422056s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:22:25.524906   75264 api_server.go:52] waiting for apiserver process to appear ...
	I0920 18:22:25.524977   75264 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:22:25.525029   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:22:25.579894   75264 cri.go:89] found id: "0ea0cfbd9902ae8e3797c77592bfe05a5f8162f75a391f5c2dc992eb5154c4ad"
	I0920 18:22:25.579914   75264 cri.go:89] found id: ""
	I0920 18:22:25.579923   75264 logs.go:276] 1 containers: [0ea0cfbd9902ae8e3797c77592bfe05a5f8162f75a391f5c2dc992eb5154c4ad]
	I0920 18:22:25.579979   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:25.585095   75264 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:22:25.585176   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:22:25.621769   75264 cri.go:89] found id: "65da0bae1c849a395cc241962bb6cbdf9f3edf11cdc95f4495ac2e427f9602d4"
	I0920 18:22:25.621796   75264 cri.go:89] found id: ""
	I0920 18:22:25.621805   75264 logs.go:276] 1 containers: [65da0bae1c849a395cc241962bb6cbdf9f3edf11cdc95f4495ac2e427f9602d4]
	I0920 18:22:25.621881   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:25.626318   75264 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:22:25.626400   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:22:25.662732   75264 cri.go:89] found id: "606f7c8a9a095f499ae422de76f76105607db72e57b9b0a6db3e3c5a81656c5c"
	I0920 18:22:25.662753   75264 cri.go:89] found id: ""
	I0920 18:22:25.662760   75264 logs.go:276] 1 containers: [606f7c8a9a095f499ae422de76f76105607db72e57b9b0a6db3e3c5a81656c5c]
	I0920 18:22:25.662818   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:25.667212   75264 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:22:25.667299   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:22:25.703639   75264 cri.go:89] found id: "6ba313deffc6185e59dac50051e933f6b2ea70d630cf36b2b8c2c07042f793ba"
	I0920 18:22:25.703663   75264 cri.go:89] found id: ""
	I0920 18:22:25.703670   75264 logs.go:276] 1 containers: [6ba313deffc6185e59dac50051e933f6b2ea70d630cf36b2b8c2c07042f793ba]
	I0920 18:22:25.703721   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:25.708034   75264 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:22:25.708115   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:22:25.745358   75264 cri.go:89] found id: "702f7f440eb6016c2c758a6be4e1072274ab6dcd169d7a081e5464869a7c299c"
	I0920 18:22:25.745387   75264 cri.go:89] found id: ""
	I0920 18:22:25.745397   75264 logs.go:276] 1 containers: [702f7f440eb6016c2c758a6be4e1072274ab6dcd169d7a081e5464869a7c299c]
	I0920 18:22:25.745455   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:25.749702   75264 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:22:25.749787   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:22:25.790592   75264 cri.go:89] found id: "4ca303b795795bd920f261c88630f30fd51a91b9694a9a6e67af8164bab8242b"
	I0920 18:22:25.790615   75264 cri.go:89] found id: ""
	I0920 18:22:25.790623   75264 logs.go:276] 1 containers: [4ca303b795795bd920f261c88630f30fd51a91b9694a9a6e67af8164bab8242b]
	I0920 18:22:25.790686   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:25.794993   75264 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:22:25.795062   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:22:25.836621   75264 cri.go:89] found id: ""
	I0920 18:22:25.836646   75264 logs.go:276] 0 containers: []
	W0920 18:22:25.836654   75264 logs.go:278] No container was found matching "kindnet"
	I0920 18:22:25.836661   75264 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0920 18:22:25.836723   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0920 18:22:25.874191   75264 cri.go:89] found id: "001bdc98537f0917f021c14a5d0733bd31cf19be1b4862502458a7a90e4300da"
	I0920 18:22:25.874215   75264 cri.go:89] found id: "c42201b6e3d55b6fd8beaf04d416658b228bf627e519ddb022dad6d17219cc1c"
	I0920 18:22:25.874220   75264 cri.go:89] found id: ""
	I0920 18:22:25.874229   75264 logs.go:276] 2 containers: [001bdc98537f0917f021c14a5d0733bd31cf19be1b4862502458a7a90e4300da c42201b6e3d55b6fd8beaf04d416658b228bf627e519ddb022dad6d17219cc1c]
	I0920 18:22:25.874316   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:25.878788   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:25.882840   75264 logs.go:123] Gathering logs for kube-proxy [702f7f440eb6016c2c758a6be4e1072274ab6dcd169d7a081e5464869a7c299c] ...
	I0920 18:22:25.882869   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 702f7f440eb6016c2c758a6be4e1072274ab6dcd169d7a081e5464869a7c299c"
	I0920 18:22:25.921609   75264 logs.go:123] Gathering logs for kube-controller-manager [4ca303b795795bd920f261c88630f30fd51a91b9694a9a6e67af8164bab8242b] ...
	I0920 18:22:25.921640   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ca303b795795bd920f261c88630f30fd51a91b9694a9a6e67af8164bab8242b"
	I0920 18:22:25.987199   75264 logs.go:123] Gathering logs for kubelet ...
	I0920 18:22:25.987236   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:22:26.061857   75264 logs.go:123] Gathering logs for kube-apiserver [0ea0cfbd9902ae8e3797c77592bfe05a5f8162f75a391f5c2dc992eb5154c4ad] ...
	I0920 18:22:26.061897   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ea0cfbd9902ae8e3797c77592bfe05a5f8162f75a391f5c2dc992eb5154c4ad"
	I0920 18:22:26.104901   75264 logs.go:123] Gathering logs for etcd [65da0bae1c849a395cc241962bb6cbdf9f3edf11cdc95f4495ac2e427f9602d4] ...
	I0920 18:22:26.104935   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 65da0bae1c849a395cc241962bb6cbdf9f3edf11cdc95f4495ac2e427f9602d4"
	I0920 18:22:26.160795   75264 logs.go:123] Gathering logs for kube-scheduler [6ba313deffc6185e59dac50051e933f6b2ea70d630cf36b2b8c2c07042f793ba] ...
	I0920 18:22:26.160833   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6ba313deffc6185e59dac50051e933f6b2ea70d630cf36b2b8c2c07042f793ba"
	I0920 18:22:26.199286   75264 logs.go:123] Gathering logs for storage-provisioner [001bdc98537f0917f021c14a5d0733bd31cf19be1b4862502458a7a90e4300da] ...
	I0920 18:22:26.199316   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 001bdc98537f0917f021c14a5d0733bd31cf19be1b4862502458a7a90e4300da"
	I0920 18:22:26.239560   75264 logs.go:123] Gathering logs for storage-provisioner [c42201b6e3d55b6fd8beaf04d416658b228bf627e519ddb022dad6d17219cc1c] ...
	I0920 18:22:26.239591   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c42201b6e3d55b6fd8beaf04d416658b228bf627e519ddb022dad6d17219cc1c"
	I0920 18:22:26.275424   75264 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:22:26.275450   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:22:26.786849   75264 logs.go:123] Gathering logs for container status ...
	I0920 18:22:26.786895   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:22:26.848751   75264 logs.go:123] Gathering logs for dmesg ...
	I0920 18:22:26.848783   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:22:26.867329   75264 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:22:26.867375   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 18:22:27.017385   75264 logs.go:123] Gathering logs for coredns [606f7c8a9a095f499ae422de76f76105607db72e57b9b0a6db3e3c5a81656c5c] ...
	I0920 18:22:27.017419   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 606f7c8a9a095f499ae422de76f76105607db72e57b9b0a6db3e3c5a81656c5c"
	I0920 18:22:23.300658   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:25.302131   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:27.303929   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:23.307149   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:22:23.319889   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:22:23.319972   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:22:23.356613   75577 cri.go:89] found id: ""
	I0920 18:22:23.356645   75577 logs.go:276] 0 containers: []
	W0920 18:22:23.356656   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:22:23.356663   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:22:23.356728   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:22:23.391632   75577 cri.go:89] found id: ""
	I0920 18:22:23.391663   75577 logs.go:276] 0 containers: []
	W0920 18:22:23.391675   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:22:23.391683   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:22:23.391748   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:22:23.426908   75577 cri.go:89] found id: ""
	I0920 18:22:23.426936   75577 logs.go:276] 0 containers: []
	W0920 18:22:23.426946   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:22:23.426952   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:22:23.427013   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:22:23.461890   75577 cri.go:89] found id: ""
	I0920 18:22:23.461925   75577 logs.go:276] 0 containers: []
	W0920 18:22:23.461938   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:22:23.461947   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:22:23.462014   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:22:23.503517   75577 cri.go:89] found id: ""
	I0920 18:22:23.503549   75577 logs.go:276] 0 containers: []
	W0920 18:22:23.503560   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:22:23.503566   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:22:23.503615   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:22:23.540699   75577 cri.go:89] found id: ""
	I0920 18:22:23.540722   75577 logs.go:276] 0 containers: []
	W0920 18:22:23.540731   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:22:23.540736   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:22:23.540783   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:22:23.575485   75577 cri.go:89] found id: ""
	I0920 18:22:23.575509   75577 logs.go:276] 0 containers: []
	W0920 18:22:23.575517   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:22:23.575523   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:22:23.575576   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:22:23.610998   75577 cri.go:89] found id: ""
	I0920 18:22:23.611028   75577 logs.go:276] 0 containers: []
	W0920 18:22:23.611039   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:22:23.611051   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:22:23.611065   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:22:23.687072   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:22:23.687109   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:22:23.724790   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:22:23.724824   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:22:23.778905   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:22:23.778945   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:22:23.794298   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:22:23.794329   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:22:23.873341   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:22:26.374541   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:22:26.388151   75577 kubeadm.go:597] duration metric: took 4m2.956358154s to restartPrimaryControlPlane
	W0920 18:22:26.388229   75577 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0920 18:22:26.388260   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0920 18:22:27.454521   75577 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.066231187s)
	I0920 18:22:27.454602   75577 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:22:27.469409   75577 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 18:22:27.480314   75577 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 18:22:27.491389   75577 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 18:22:27.491416   75577 kubeadm.go:157] found existing configuration files:
	
	I0920 18:22:27.491468   75577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 18:22:27.501448   75577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 18:22:27.501534   75577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 18:22:27.511370   75577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 18:22:27.521767   75577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 18:22:27.521855   75577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 18:22:27.531717   75577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 18:22:27.540517   75577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 18:22:27.540575   75577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 18:22:27.549644   75577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 18:22:27.558412   75577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 18:22:27.558480   75577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 18:22:27.567702   75577 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 18:22:28.070939   75086 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.384759009s)
	I0920 18:22:28.071025   75086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:22:28.087712   75086 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 18:22:28.097709   75086 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 18:22:28.107217   75086 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 18:22:28.107243   75086 kubeadm.go:157] found existing configuration files:
	
	I0920 18:22:28.107297   75086 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 18:22:28.116947   75086 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 18:22:28.117013   75086 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 18:22:28.126559   75086 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 18:22:28.135261   75086 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 18:22:28.135338   75086 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 18:22:28.144456   75086 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 18:22:28.153412   75086 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 18:22:28.153465   75086 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 18:22:28.165825   75086 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 18:22:28.175003   75086 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 18:22:28.175068   75086 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 18:22:28.184917   75086 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 18:22:28.236825   75086 kubeadm.go:310] W0920 18:22:28.216092    2540 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 18:22:28.237654   75086 kubeadm.go:310] W0920 18:22:28.216986    2540 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 18:22:28.352695   75086 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 18:22:29.557496   75264 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:22:29.578981   75264 api_server.go:72] duration metric: took 4m20.322735753s to wait for apiserver process to appear ...
	I0920 18:22:29.579009   75264 api_server.go:88] waiting for apiserver healthz status ...
	I0920 18:22:29.579044   75264 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:22:29.579093   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:22:29.616252   75264 cri.go:89] found id: "0ea0cfbd9902ae8e3797c77592bfe05a5f8162f75a391f5c2dc992eb5154c4ad"
	I0920 18:22:29.616281   75264 cri.go:89] found id: ""
	I0920 18:22:29.616292   75264 logs.go:276] 1 containers: [0ea0cfbd9902ae8e3797c77592bfe05a5f8162f75a391f5c2dc992eb5154c4ad]
	I0920 18:22:29.616345   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:29.620605   75264 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:22:29.620678   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:22:29.658077   75264 cri.go:89] found id: "65da0bae1c849a395cc241962bb6cbdf9f3edf11cdc95f4495ac2e427f9602d4"
	I0920 18:22:29.658106   75264 cri.go:89] found id: ""
	I0920 18:22:29.658114   75264 logs.go:276] 1 containers: [65da0bae1c849a395cc241962bb6cbdf9f3edf11cdc95f4495ac2e427f9602d4]
	I0920 18:22:29.658170   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:29.662227   75264 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:22:29.662288   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:22:29.695906   75264 cri.go:89] found id: "606f7c8a9a095f499ae422de76f76105607db72e57b9b0a6db3e3c5a81656c5c"
	I0920 18:22:29.695927   75264 cri.go:89] found id: ""
	I0920 18:22:29.695934   75264 logs.go:276] 1 containers: [606f7c8a9a095f499ae422de76f76105607db72e57b9b0a6db3e3c5a81656c5c]
	I0920 18:22:29.695979   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:29.699986   75264 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:22:29.700083   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:22:29.736560   75264 cri.go:89] found id: "6ba313deffc6185e59dac50051e933f6b2ea70d630cf36b2b8c2c07042f793ba"
	I0920 18:22:29.736589   75264 cri.go:89] found id: ""
	I0920 18:22:29.736600   75264 logs.go:276] 1 containers: [6ba313deffc6185e59dac50051e933f6b2ea70d630cf36b2b8c2c07042f793ba]
	I0920 18:22:29.736660   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:29.740751   75264 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:22:29.740803   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:22:29.778930   75264 cri.go:89] found id: "702f7f440eb6016c2c758a6be4e1072274ab6dcd169d7a081e5464869a7c299c"
	I0920 18:22:29.778952   75264 cri.go:89] found id: ""
	I0920 18:22:29.778959   75264 logs.go:276] 1 containers: [702f7f440eb6016c2c758a6be4e1072274ab6dcd169d7a081e5464869a7c299c]
	I0920 18:22:29.779011   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:29.783022   75264 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:22:29.783092   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:22:29.829491   75264 cri.go:89] found id: "4ca303b795795bd920f261c88630f30fd51a91b9694a9a6e67af8164bab8242b"
	I0920 18:22:29.829521   75264 cri.go:89] found id: ""
	I0920 18:22:29.829531   75264 logs.go:276] 1 containers: [4ca303b795795bd920f261c88630f30fd51a91b9694a9a6e67af8164bab8242b]
	I0920 18:22:29.829597   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:29.833853   75264 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:22:29.833924   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:22:29.872376   75264 cri.go:89] found id: ""
	I0920 18:22:29.872404   75264 logs.go:276] 0 containers: []
	W0920 18:22:29.872412   75264 logs.go:278] No container was found matching "kindnet"
	I0920 18:22:29.872419   75264 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0920 18:22:29.872482   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0920 18:22:29.923096   75264 cri.go:89] found id: "001bdc98537f0917f021c14a5d0733bd31cf19be1b4862502458a7a90e4300da"
	I0920 18:22:29.923136   75264 cri.go:89] found id: "c42201b6e3d55b6fd8beaf04d416658b228bf627e519ddb022dad6d17219cc1c"
	I0920 18:22:29.923143   75264 cri.go:89] found id: ""
	I0920 18:22:29.923152   75264 logs.go:276] 2 containers: [001bdc98537f0917f021c14a5d0733bd31cf19be1b4862502458a7a90e4300da c42201b6e3d55b6fd8beaf04d416658b228bf627e519ddb022dad6d17219cc1c]
	I0920 18:22:29.923226   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:29.930023   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:29.935091   75264 logs.go:123] Gathering logs for coredns [606f7c8a9a095f499ae422de76f76105607db72e57b9b0a6db3e3c5a81656c5c] ...
	I0920 18:22:29.935119   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 606f7c8a9a095f499ae422de76f76105607db72e57b9b0a6db3e3c5a81656c5c"
	I0920 18:22:29.974064   75264 logs.go:123] Gathering logs for kube-scheduler [6ba313deffc6185e59dac50051e933f6b2ea70d630cf36b2b8c2c07042f793ba] ...
	I0920 18:22:29.974101   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6ba313deffc6185e59dac50051e933f6b2ea70d630cf36b2b8c2c07042f793ba"
	I0920 18:22:30.018770   75264 logs.go:123] Gathering logs for kube-controller-manager [4ca303b795795bd920f261c88630f30fd51a91b9694a9a6e67af8164bab8242b] ...
	I0920 18:22:30.018805   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ca303b795795bd920f261c88630f30fd51a91b9694a9a6e67af8164bab8242b"
	I0920 18:22:30.077578   75264 logs.go:123] Gathering logs for storage-provisioner [c42201b6e3d55b6fd8beaf04d416658b228bf627e519ddb022dad6d17219cc1c] ...
	I0920 18:22:30.077625   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c42201b6e3d55b6fd8beaf04d416658b228bf627e519ddb022dad6d17219cc1c"
	I0920 18:22:30.123874   75264 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:22:30.123910   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:22:30.553215   75264 logs.go:123] Gathering logs for kube-proxy [702f7f440eb6016c2c758a6be4e1072274ab6dcd169d7a081e5464869a7c299c] ...
	I0920 18:22:30.553261   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 702f7f440eb6016c2c758a6be4e1072274ab6dcd169d7a081e5464869a7c299c"
	I0920 18:22:30.597397   75264 logs.go:123] Gathering logs for storage-provisioner [001bdc98537f0917f021c14a5d0733bd31cf19be1b4862502458a7a90e4300da] ...
	I0920 18:22:30.597428   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 001bdc98537f0917f021c14a5d0733bd31cf19be1b4862502458a7a90e4300da"
	I0920 18:22:30.643777   75264 logs.go:123] Gathering logs for container status ...
	I0920 18:22:30.643807   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:22:30.688174   75264 logs.go:123] Gathering logs for kubelet ...
	I0920 18:22:30.688205   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:22:30.760547   75264 logs.go:123] Gathering logs for dmesg ...
	I0920 18:22:30.760591   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:22:30.776702   75264 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:22:30.776729   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 18:22:30.913200   75264 logs.go:123] Gathering logs for kube-apiserver [0ea0cfbd9902ae8e3797c77592bfe05a5f8162f75a391f5c2dc992eb5154c4ad] ...
	I0920 18:22:30.913240   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ea0cfbd9902ae8e3797c77592bfe05a5f8162f75a391f5c2dc992eb5154c4ad"
	I0920 18:22:30.957548   75264 logs.go:123] Gathering logs for etcd [65da0bae1c849a395cc241962bb6cbdf9f3edf11cdc95f4495ac2e427f9602d4] ...
	I0920 18:22:30.957589   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 65da0bae1c849a395cc241962bb6cbdf9f3edf11cdc95f4495ac2e427f9602d4"
	I0920 18:22:29.803036   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:32.301986   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:27.790386   75577 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 18:22:36.572379   75086 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0920 18:22:36.572452   75086 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 18:22:36.572558   75086 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 18:22:36.572702   75086 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 18:22:36.572826   75086 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0920 18:22:36.572926   75086 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 18:22:36.574903   75086 out.go:235]   - Generating certificates and keys ...
	I0920 18:22:36.575032   75086 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 18:22:36.575117   75086 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 18:22:36.575224   75086 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0920 18:22:36.575342   75086 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0920 18:22:36.575446   75086 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0920 18:22:36.575535   75086 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0920 18:22:36.575606   75086 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0920 18:22:36.575687   75086 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0920 18:22:36.575790   75086 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0920 18:22:36.575867   75086 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0920 18:22:36.575903   75086 kubeadm.go:310] [certs] Using the existing "sa" key
	I0920 18:22:36.575970   75086 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 18:22:36.576054   75086 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 18:22:36.576141   75086 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0920 18:22:36.576223   75086 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 18:22:36.576317   75086 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 18:22:36.576373   75086 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 18:22:36.576442   75086 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 18:22:36.576506   75086 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 18:22:36.577982   75086 out.go:235]   - Booting up control plane ...
	I0920 18:22:36.578071   75086 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 18:22:36.578147   75086 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 18:22:36.578204   75086 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 18:22:36.578312   75086 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 18:22:36.578400   75086 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 18:22:36.578438   75086 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 18:22:36.578568   75086 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0920 18:22:36.578660   75086 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0920 18:22:36.578719   75086 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.174126ms
	I0920 18:22:36.578788   75086 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0920 18:22:36.578839   75086 kubeadm.go:310] [api-check] The API server is healthy after 5.501599925s
	I0920 18:22:36.578933   75086 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0920 18:22:36.579037   75086 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0920 18:22:36.579090   75086 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0920 18:22:36.579268   75086 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-768431 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0920 18:22:36.579335   75086 kubeadm.go:310] [bootstrap-token] Using token: junoe1.zmowu95mn9yb1z1f
	I0920 18:22:36.580810   75086 out.go:235]   - Configuring RBAC rules ...
	I0920 18:22:36.580919   75086 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0920 18:22:36.580989   75086 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0920 18:22:36.581127   75086 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0920 18:22:36.581290   75086 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0920 18:22:36.581456   75086 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0920 18:22:36.581589   75086 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0920 18:22:36.581742   75086 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0920 18:22:36.581827   75086 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0920 18:22:36.581916   75086 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0920 18:22:36.581925   75086 kubeadm.go:310] 
	I0920 18:22:36.581980   75086 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0920 18:22:36.581986   75086 kubeadm.go:310] 
	I0920 18:22:36.582086   75086 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0920 18:22:36.582097   75086 kubeadm.go:310] 
	I0920 18:22:36.582137   75086 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0920 18:22:36.582204   75086 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0920 18:22:36.582287   75086 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0920 18:22:36.582297   75086 kubeadm.go:310] 
	I0920 18:22:36.582389   75086 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0920 18:22:36.582398   75086 kubeadm.go:310] 
	I0920 18:22:36.582479   75086 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0920 18:22:36.582495   75086 kubeadm.go:310] 
	I0920 18:22:36.582540   75086 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0920 18:22:36.582605   75086 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0920 18:22:36.582682   75086 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0920 18:22:36.582697   75086 kubeadm.go:310] 
	I0920 18:22:36.582796   75086 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0920 18:22:36.582892   75086 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0920 18:22:36.582901   75086 kubeadm.go:310] 
	I0920 18:22:36.583026   75086 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token junoe1.zmowu95mn9yb1z1f \
	I0920 18:22:36.583163   75086 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:736f3fb521855813decb4a7214f2b8beff5f81c3971b60decfa20a8807626a2d \
	I0920 18:22:36.583199   75086 kubeadm.go:310] 	--control-plane 
	I0920 18:22:36.583208   75086 kubeadm.go:310] 
	I0920 18:22:36.583316   75086 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0920 18:22:36.583335   75086 kubeadm.go:310] 
	I0920 18:22:36.583489   75086 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token junoe1.zmowu95mn9yb1z1f \
	I0920 18:22:36.583648   75086 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:736f3fb521855813decb4a7214f2b8beff5f81c3971b60decfa20a8807626a2d 
	I0920 18:22:36.583664   75086 cni.go:84] Creating CNI manager for ""
	I0920 18:22:36.583673   75086 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 18:22:36.585378   75086 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
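The block above closes the kubeadm bring-up for the embed-certs-768431 profile: with the kvm2 driver and the crio runtime, minikube falls back to its built-in bridge CNI and, a few lines further down, copies a small conflist to /etc/cni/net.d/1-k8s.conflist inside the VM. The log does not reproduce the file itself, so a manual check after such a run would be to read it back over the profile's SSH session (verification commands for reference, not part of the test run):

    minikube -p embed-certs-768431 ssh -- ls /etc/cni/net.d/
    minikube -p embed-certs-768431 ssh -- sudo cat /etc/cni/net.d/1-k8s.conflist
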
	I0920 18:22:33.523585   75264 api_server.go:253] Checking apiserver healthz at https://192.168.72.190:8444/healthz ...
	I0920 18:22:33.528658   75264 api_server.go:279] https://192.168.72.190:8444/healthz returned 200:
	ok
	I0920 18:22:33.529826   75264 api_server.go:141] control plane version: v1.31.1
	I0920 18:22:33.529861   75264 api_server.go:131] duration metric: took 3.95084435s to wait for apiserver health ...
	I0920 18:22:33.529870   75264 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 18:22:33.529894   75264 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:22:33.529938   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:22:33.571762   75264 cri.go:89] found id: "0ea0cfbd9902ae8e3797c77592bfe05a5f8162f75a391f5c2dc992eb5154c4ad"
	I0920 18:22:33.571783   75264 cri.go:89] found id: ""
	I0920 18:22:33.571789   75264 logs.go:276] 1 containers: [0ea0cfbd9902ae8e3797c77592bfe05a5f8162f75a391f5c2dc992eb5154c4ad]
	I0920 18:22:33.571840   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:33.576180   75264 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:22:33.576268   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:22:33.621887   75264 cri.go:89] found id: "65da0bae1c849a395cc241962bb6cbdf9f3edf11cdc95f4495ac2e427f9602d4"
	I0920 18:22:33.621915   75264 cri.go:89] found id: ""
	I0920 18:22:33.621926   75264 logs.go:276] 1 containers: [65da0bae1c849a395cc241962bb6cbdf9f3edf11cdc95f4495ac2e427f9602d4]
	I0920 18:22:33.622013   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:33.626552   75264 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:22:33.626609   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:22:33.664879   75264 cri.go:89] found id: "606f7c8a9a095f499ae422de76f76105607db72e57b9b0a6db3e3c5a81656c5c"
	I0920 18:22:33.664902   75264 cri.go:89] found id: ""
	I0920 18:22:33.664912   75264 logs.go:276] 1 containers: [606f7c8a9a095f499ae422de76f76105607db72e57b9b0a6db3e3c5a81656c5c]
	I0920 18:22:33.664980   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:33.669155   75264 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:22:33.669231   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:22:33.710930   75264 cri.go:89] found id: "6ba313deffc6185e59dac50051e933f6b2ea70d630cf36b2b8c2c07042f793ba"
	I0920 18:22:33.710957   75264 cri.go:89] found id: ""
	I0920 18:22:33.710968   75264 logs.go:276] 1 containers: [6ba313deffc6185e59dac50051e933f6b2ea70d630cf36b2b8c2c07042f793ba]
	I0920 18:22:33.711030   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:33.715618   75264 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:22:33.715689   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:22:33.751646   75264 cri.go:89] found id: "702f7f440eb6016c2c758a6be4e1072274ab6dcd169d7a081e5464869a7c299c"
	I0920 18:22:33.751672   75264 cri.go:89] found id: ""
	I0920 18:22:33.751690   75264 logs.go:276] 1 containers: [702f7f440eb6016c2c758a6be4e1072274ab6dcd169d7a081e5464869a7c299c]
	I0920 18:22:33.751758   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:33.756376   75264 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:22:33.756441   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:22:33.803246   75264 cri.go:89] found id: "4ca303b795795bd920f261c88630f30fd51a91b9694a9a6e67af8164bab8242b"
	I0920 18:22:33.803267   75264 cri.go:89] found id: ""
	I0920 18:22:33.803276   75264 logs.go:276] 1 containers: [4ca303b795795bd920f261c88630f30fd51a91b9694a9a6e67af8164bab8242b]
	I0920 18:22:33.803334   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:33.807825   75264 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:22:33.807892   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:22:33.849531   75264 cri.go:89] found id: ""
	I0920 18:22:33.849559   75264 logs.go:276] 0 containers: []
	W0920 18:22:33.849569   75264 logs.go:278] No container was found matching "kindnet"
	I0920 18:22:33.849583   75264 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0920 18:22:33.849645   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0920 18:22:33.891058   75264 cri.go:89] found id: "001bdc98537f0917f021c14a5d0733bd31cf19be1b4862502458a7a90e4300da"
	I0920 18:22:33.891084   75264 cri.go:89] found id: "c42201b6e3d55b6fd8beaf04d416658b228bf627e519ddb022dad6d17219cc1c"
	I0920 18:22:33.891090   75264 cri.go:89] found id: ""
	I0920 18:22:33.891099   75264 logs.go:276] 2 containers: [001bdc98537f0917f021c14a5d0733bd31cf19be1b4862502458a7a90e4300da c42201b6e3d55b6fd8beaf04d416658b228bf627e519ddb022dad6d17219cc1c]
	I0920 18:22:33.891157   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:33.896028   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:33.900174   75264 logs.go:123] Gathering logs for container status ...
	I0920 18:22:33.900209   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:22:33.944777   75264 logs.go:123] Gathering logs for dmesg ...
	I0920 18:22:33.944803   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:22:33.960062   75264 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:22:33.960094   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 18:22:34.087507   75264 logs.go:123] Gathering logs for etcd [65da0bae1c849a395cc241962bb6cbdf9f3edf11cdc95f4495ac2e427f9602d4] ...
	I0920 18:22:34.087556   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 65da0bae1c849a395cc241962bb6cbdf9f3edf11cdc95f4495ac2e427f9602d4"
	I0920 18:22:34.140051   75264 logs.go:123] Gathering logs for kube-proxy [702f7f440eb6016c2c758a6be4e1072274ab6dcd169d7a081e5464869a7c299c] ...
	I0920 18:22:34.140088   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 702f7f440eb6016c2c758a6be4e1072274ab6dcd169d7a081e5464869a7c299c"
	I0920 18:22:34.183540   75264 logs.go:123] Gathering logs for kube-controller-manager [4ca303b795795bd920f261c88630f30fd51a91b9694a9a6e67af8164bab8242b] ...
	I0920 18:22:34.183582   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ca303b795795bd920f261c88630f30fd51a91b9694a9a6e67af8164bab8242b"
	I0920 18:22:34.251978   75264 logs.go:123] Gathering logs for storage-provisioner [001bdc98537f0917f021c14a5d0733bd31cf19be1b4862502458a7a90e4300da] ...
	I0920 18:22:34.252025   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 001bdc98537f0917f021c14a5d0733bd31cf19be1b4862502458a7a90e4300da"
	I0920 18:22:34.296522   75264 logs.go:123] Gathering logs for storage-provisioner [c42201b6e3d55b6fd8beaf04d416658b228bf627e519ddb022dad6d17219cc1c] ...
	I0920 18:22:34.296562   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c42201b6e3d55b6fd8beaf04d416658b228bf627e519ddb022dad6d17219cc1c"
	I0920 18:22:34.338227   75264 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:22:34.338254   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:22:34.703986   75264 logs.go:123] Gathering logs for kubelet ...
	I0920 18:22:34.704037   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:22:34.788091   75264 logs.go:123] Gathering logs for kube-apiserver [0ea0cfbd9902ae8e3797c77592bfe05a5f8162f75a391f5c2dc992eb5154c4ad] ...
	I0920 18:22:34.788139   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ea0cfbd9902ae8e3797c77592bfe05a5f8162f75a391f5c2dc992eb5154c4ad"
	I0920 18:22:34.861403   75264 logs.go:123] Gathering logs for coredns [606f7c8a9a095f499ae422de76f76105607db72e57b9b0a6db3e3c5a81656c5c] ...
	I0920 18:22:34.861435   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 606f7c8a9a095f499ae422de76f76105607db72e57b9b0a6db3e3c5a81656c5c"
	I0920 18:22:34.906740   75264 logs.go:123] Gathering logs for kube-scheduler [6ba313deffc6185e59dac50051e933f6b2ea70d630cf36b2b8c2c07042f793ba] ...
	I0920 18:22:34.906773   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6ba313deffc6185e59dac50051e933f6b2ea70d630cf36b2b8c2c07042f793ba"
	I0920 18:22:37.454205   75264 system_pods.go:59] 8 kube-system pods found
	I0920 18:22:37.454243   75264 system_pods.go:61] "coredns-7c65d6cfc9-dmdfb" [8e4ffdb3-37e0-4498-bdf5-d6e7dbcad020] Running
	I0920 18:22:37.454252   75264 system_pods.go:61] "etcd-default-k8s-diff-port-553719" [e8de0e96-5f3a-4f3f-ae69-c5da7d3c2eb7] Running
	I0920 18:22:37.454257   75264 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-553719" [5164e26c-7fcd-41dc-865f-429046a9ad61] Running
	I0920 18:22:37.454263   75264 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-553719" [b2c00a76-9645-4c63-9812-43e6ed9f1d4f] Running
	I0920 18:22:37.454268   75264 system_pods.go:61] "kube-proxy-p9crq" [83e0f53d-6960-42c4-904d-ea85ba9160f4] Running
	I0920 18:22:37.454273   75264 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-553719" [42155f7b-95bb-467b-acf5-89eb4f80cf76] Running
	I0920 18:22:37.454289   75264 system_pods.go:61] "metrics-server-6867b74b74-vtl79" [29e0b6eb-22a9-4e37-97f9-83b48cc38193] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 18:22:37.454296   75264 system_pods.go:61] "storage-provisioner" [6fad2d07-f99e-45ac-9657-bce6d73d7fce] Running
	I0920 18:22:37.454310   75264 system_pods.go:74] duration metric: took 3.924431145s to wait for pod list to return data ...
	I0920 18:22:37.454324   75264 default_sa.go:34] waiting for default service account to be created ...
	I0920 18:22:37.457599   75264 default_sa.go:45] found service account: "default"
	I0920 18:22:37.457619   75264 default_sa.go:55] duration metric: took 3.289936ms for default service account to be created ...
	I0920 18:22:37.457627   75264 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 18:22:37.461977   75264 system_pods.go:86] 8 kube-system pods found
	I0920 18:22:37.462001   75264 system_pods.go:89] "coredns-7c65d6cfc9-dmdfb" [8e4ffdb3-37e0-4498-bdf5-d6e7dbcad020] Running
	I0920 18:22:37.462006   75264 system_pods.go:89] "etcd-default-k8s-diff-port-553719" [e8de0e96-5f3a-4f3f-ae69-c5da7d3c2eb7] Running
	I0920 18:22:37.462011   75264 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-553719" [5164e26c-7fcd-41dc-865f-429046a9ad61] Running
	I0920 18:22:37.462015   75264 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-553719" [b2c00a76-9645-4c63-9812-43e6ed9f1d4f] Running
	I0920 18:22:37.462019   75264 system_pods.go:89] "kube-proxy-p9crq" [83e0f53d-6960-42c4-904d-ea85ba9160f4] Running
	I0920 18:22:37.462022   75264 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-553719" [42155f7b-95bb-467b-acf5-89eb4f80cf76] Running
	I0920 18:22:37.462028   75264 system_pods.go:89] "metrics-server-6867b74b74-vtl79" [29e0b6eb-22a9-4e37-97f9-83b48cc38193] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 18:22:37.462032   75264 system_pods.go:89] "storage-provisioner" [6fad2d07-f99e-45ac-9657-bce6d73d7fce] Running
	I0920 18:22:37.462040   75264 system_pods.go:126] duration metric: took 4.409042ms to wait for k8s-apps to be running ...
	I0920 18:22:37.462047   75264 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 18:22:37.462087   75264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:22:37.479350   75264 system_svc.go:56] duration metric: took 17.294926ms WaitForService to wait for kubelet
	I0920 18:22:37.479380   75264 kubeadm.go:582] duration metric: took 4m28.223143222s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 18:22:37.479402   75264 node_conditions.go:102] verifying NodePressure condition ...
	I0920 18:22:37.482497   75264 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 18:22:37.482517   75264 node_conditions.go:123] node cpu capacity is 2
	I0920 18:22:37.482532   75264 node_conditions.go:105] duration metric: took 3.125658ms to run NodePressure ...
	I0920 18:22:37.482543   75264 start.go:241] waiting for startup goroutines ...
	I0920 18:22:37.482549   75264 start.go:246] waiting for cluster config update ...
	I0920 18:22:37.482561   75264 start.go:255] writing updated cluster config ...
	I0920 18:22:37.482817   75264 ssh_runner.go:195] Run: rm -f paused
	I0920 18:22:37.532982   75264 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 18:22:37.534936   75264 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-553719" cluster and "default" namespace by default
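At this point the default-k8s-diff-port-553719 profile has passed the full readiness sequence recorded above: the apiserver healthz probe on port 8444, the kube-system pod listing, the default service-account check, and the kubelet service check. Roughly the same checks can be repeated by hand against the kubeconfig context minikube just wrote; these are illustrative follow-up commands, not output from this run:

    kubectl --context default-k8s-diff-port-553719 get --raw '/healthz'
    kubectl --context default-k8s-diff-port-553719 -n kube-system get pods
    kubectl --context default-k8s-diff-port-553719 -n default get serviceaccount default
    minikube -p default-k8s-diff-port-553719 ssh -- sudo systemctl is-active kubelet
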
	I0920 18:22:34.302204   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:36.802991   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:36.586809   75086 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 18:22:36.599904   75086 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0920 18:22:36.620050   75086 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 18:22:36.620133   75086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:22:36.620149   75086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-768431 minikube.k8s.io/updated_at=2024_09_20T18_22_36_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=0626f22cf0d915d75e291a5bce701f94395056e1 minikube.k8s.io/name=embed-certs-768431 minikube.k8s.io/primary=true
	I0920 18:22:36.636625   75086 ops.go:34] apiserver oom_adj: -16
	I0920 18:22:36.817434   75086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:22:37.318133   75086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:22:37.818515   75086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:22:38.318005   75086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:22:38.817759   75086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:22:39.318481   75086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:22:39.817943   75086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:22:40.317957   75086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:22:40.423342   75086 kubeadm.go:1113] duration metric: took 3.803277177s to wait for elevateKubeSystemPrivileges
	I0920 18:22:40.423389   75086 kubeadm.go:394] duration metric: took 4m59.290098215s to StartCluster
	I0920 18:22:40.423413   75086 settings.go:142] acquiring lock: {Name:mk2a13c58cdc0faf8cddca5d6716175d45db9bfd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:22:40.423516   75086 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19672-8777/kubeconfig
	I0920 18:22:40.426054   75086 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/kubeconfig: {Name:mkf32a4c736808e023459b2f0e40188618a38db1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:22:40.426360   75086 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.202 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 18:22:40.426456   75086 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0920 18:22:40.426546   75086 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-768431"
	I0920 18:22:40.426566   75086 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-768431"
	W0920 18:22:40.426578   75086 addons.go:243] addon storage-provisioner should already be in state true
	I0920 18:22:40.426576   75086 addons.go:69] Setting default-storageclass=true in profile "embed-certs-768431"
	I0920 18:22:40.426608   75086 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-768431"
	I0920 18:22:40.426622   75086 host.go:66] Checking if "embed-certs-768431" exists ...
	I0920 18:22:40.426621   75086 addons.go:69] Setting metrics-server=true in profile "embed-certs-768431"
	I0920 18:22:40.426662   75086 config.go:182] Loaded profile config "embed-certs-768431": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:22:40.426669   75086 addons.go:234] Setting addon metrics-server=true in "embed-certs-768431"
	W0920 18:22:40.426699   75086 addons.go:243] addon metrics-server should already be in state true
	I0920 18:22:40.426739   75086 host.go:66] Checking if "embed-certs-768431" exists ...
	I0920 18:22:40.427075   75086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:22:40.427116   75086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:22:40.427120   75086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:22:40.427155   75086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:22:40.427161   75086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:22:40.427251   75086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:22:40.427977   75086 out.go:177] * Verifying Kubernetes components...
	I0920 18:22:40.429647   75086 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:22:40.443468   75086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40103
	I0920 18:22:40.443663   75086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37871
	I0920 18:22:40.444055   75086 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:22:40.444062   75086 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:22:40.444562   75086 main.go:141] libmachine: Using API Version  1
	I0920 18:22:40.444586   75086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:22:40.444732   75086 main.go:141] libmachine: Using API Version  1
	I0920 18:22:40.444750   75086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:22:40.445016   75086 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:22:40.445110   75086 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:22:40.445584   75086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:22:40.445634   75086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:22:40.445652   75086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:22:40.445654   75086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38771
	I0920 18:22:40.445684   75086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:22:40.446227   75086 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:22:40.446872   75086 main.go:141] libmachine: Using API Version  1
	I0920 18:22:40.446892   75086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:22:40.447280   75086 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:22:40.447692   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetState
	I0920 18:22:40.451332   75086 addons.go:234] Setting addon default-storageclass=true in "embed-certs-768431"
	W0920 18:22:40.451355   75086 addons.go:243] addon default-storageclass should already be in state true
	I0920 18:22:40.451382   75086 host.go:66] Checking if "embed-certs-768431" exists ...
	I0920 18:22:40.451753   75086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:22:40.451803   75086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:22:40.463317   75086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39353
	I0920 18:22:40.463816   75086 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:22:40.464544   75086 main.go:141] libmachine: Using API Version  1
	I0920 18:22:40.464577   75086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:22:40.464961   75086 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:22:40.467445   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetState
	I0920 18:22:40.469290   75086 main.go:141] libmachine: (embed-certs-768431) Calling .DriverName
	I0920 18:22:40.471212   75086 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0920 18:22:40.471862   75086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40689
	I0920 18:22:40.472254   75086 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:22:40.472515   75086 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0920 18:22:40.472535   75086 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0920 18:22:40.472557   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHHostname
	I0920 18:22:40.472718   75086 main.go:141] libmachine: Using API Version  1
	I0920 18:22:40.472741   75086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:22:40.473259   75086 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:22:40.473477   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetState
	I0920 18:22:40.473753   75086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36495
	I0920 18:22:40.474173   75086 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:22:40.474778   75086 main.go:141] libmachine: Using API Version  1
	I0920 18:22:40.474796   75086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:22:40.475166   75086 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:22:40.475726   75086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:22:40.475766   75086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:22:40.475984   75086 main.go:141] libmachine: (embed-certs-768431) Calling .DriverName
	I0920 18:22:40.476244   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:22:40.476583   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:22:40.476605   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:22:40.476773   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHPort
	I0920 18:22:40.476953   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHKeyPath
	I0920 18:22:40.477053   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHUsername
	I0920 18:22:40.477196   75086 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/embed-certs-768431/id_rsa Username:docker}
	I0920 18:22:40.478244   75086 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:22:40.479443   75086 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 18:22:40.479459   75086 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 18:22:40.479476   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHHostname
	I0920 18:22:40.482119   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:22:40.482400   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:22:40.482423   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:22:40.482639   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHPort
	I0920 18:22:40.482840   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHKeyPath
	I0920 18:22:40.482968   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHUsername
	I0920 18:22:40.483145   75086 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/embed-certs-768431/id_rsa Username:docker}
	I0920 18:22:40.494454   75086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35971
	I0920 18:22:40.494948   75086 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:22:40.495425   75086 main.go:141] libmachine: Using API Version  1
	I0920 18:22:40.495444   75086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:22:40.495798   75086 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:22:40.495990   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetState
	I0920 18:22:40.497591   75086 main.go:141] libmachine: (embed-certs-768431) Calling .DriverName
	I0920 18:22:40.497878   75086 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 18:22:40.497894   75086 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 18:22:40.497913   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHHostname
	I0920 18:22:40.500755   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:22:40.501130   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:22:40.501153   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:22:40.501355   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHPort
	I0920 18:22:40.501548   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHKeyPath
	I0920 18:22:40.501725   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHUsername
	I0920 18:22:40.502091   75086 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/embed-certs-768431/id_rsa Username:docker}
	I0920 18:22:40.646738   75086 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:22:40.673285   75086 node_ready.go:35] waiting up to 6m0s for node "embed-certs-768431" to be "Ready" ...
	I0920 18:22:40.684618   75086 node_ready.go:49] node "embed-certs-768431" has status "Ready":"True"
	I0920 18:22:40.684643   75086 node_ready.go:38] duration metric: took 11.298825ms for node "embed-certs-768431" to be "Ready" ...
	I0920 18:22:40.684653   75086 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:22:40.689151   75086 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-768431" in "kube-system" namespace to be "Ready" ...
	I0920 18:22:40.734882   75086 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 18:22:40.744479   75086 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 18:22:40.884534   75086 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0920 18:22:40.884556   75086 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0920 18:22:41.018287   75086 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0920 18:22:41.018313   75086 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0920 18:22:41.091216   75086 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 18:22:41.091247   75086 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0920 18:22:41.132887   75086 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 18:22:41.538445   75086 main.go:141] libmachine: Making call to close driver server
	I0920 18:22:41.538472   75086 main.go:141] libmachine: (embed-certs-768431) Calling .Close
	I0920 18:22:41.538774   75086 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:22:41.538795   75086 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:22:41.538806   75086 main.go:141] libmachine: Making call to close driver server
	I0920 18:22:41.538815   75086 main.go:141] libmachine: (embed-certs-768431) Calling .Close
	I0920 18:22:41.539010   75086 main.go:141] libmachine: Making call to close driver server
	I0920 18:22:41.539027   75086 main.go:141] libmachine: (embed-certs-768431) Calling .Close
	I0920 18:22:41.539118   75086 main.go:141] libmachine: (embed-certs-768431) DBG | Closing plugin on server side
	I0920 18:22:41.539137   75086 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:22:41.539205   75086 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:22:41.539273   75086 main.go:141] libmachine: (embed-certs-768431) DBG | Closing plugin on server side
	I0920 18:22:41.539286   75086 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:22:41.539300   75086 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:22:41.539309   75086 main.go:141] libmachine: Making call to close driver server
	I0920 18:22:41.539348   75086 main.go:141] libmachine: (embed-certs-768431) Calling .Close
	I0920 18:22:41.539629   75086 main.go:141] libmachine: (embed-certs-768431) DBG | Closing plugin on server side
	I0920 18:22:41.539693   75086 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:22:41.539712   75086 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:22:41.559164   75086 main.go:141] libmachine: Making call to close driver server
	I0920 18:22:41.559191   75086 main.go:141] libmachine: (embed-certs-768431) Calling .Close
	I0920 18:22:41.559517   75086 main.go:141] libmachine: (embed-certs-768431) DBG | Closing plugin on server side
	I0920 18:22:41.559580   75086 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:22:41.559596   75086 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:22:41.880601   75086 main.go:141] libmachine: Making call to close driver server
	I0920 18:22:41.880640   75086 main.go:141] libmachine: (embed-certs-768431) Calling .Close
	I0920 18:22:41.880998   75086 main.go:141] libmachine: (embed-certs-768431) DBG | Closing plugin on server side
	I0920 18:22:41.881026   75086 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:22:41.881039   75086 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:22:41.881047   75086 main.go:141] libmachine: Making call to close driver server
	I0920 18:22:41.881052   75086 main.go:141] libmachine: (embed-certs-768431) Calling .Close
	I0920 18:22:41.881304   75086 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:22:41.881322   75086 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:22:41.881336   75086 addons.go:475] Verifying addon metrics-server=true in "embed-certs-768431"
	I0920 18:22:41.883315   75086 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0920 18:22:39.302453   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:41.802428   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:41.885367   75086 addons.go:510] duration metric: took 1.45891561s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
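The addon phase for embed-certs-768431 logged above works by scp-ing the storage-provisioner, storageclass, and metrics-server manifests into the VM and applying them with the bundled kubectl, then confirming the result through the kvm2 driver plugin. Outside of the test harness, the equivalent operations are exposed through the minikube CLI; the commands below are shown only as the by-hand equivalent of what the log records:

    minikube -p embed-certs-768431 addons list
    minikube -p embed-certs-768431 addons enable metrics-server
    minikube -p embed-certs-768431 addons disable metrics-server
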
	I0920 18:22:42.696266   75086 pod_ready.go:103] pod "etcd-embed-certs-768431" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:44.300965   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:46.301969   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:45.195160   75086 pod_ready.go:103] pod "etcd-embed-certs-768431" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:46.200142   75086 pod_ready.go:93] pod "etcd-embed-certs-768431" in "kube-system" namespace has status "Ready":"True"
	I0920 18:22:46.200166   75086 pod_ready.go:82] duration metric: took 5.510989222s for pod "etcd-embed-certs-768431" in "kube-system" namespace to be "Ready" ...
	I0920 18:22:46.200175   75086 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-768431" in "kube-system" namespace to be "Ready" ...
	I0920 18:22:46.205619   75086 pod_ready.go:93] pod "kube-apiserver-embed-certs-768431" in "kube-system" namespace has status "Ready":"True"
	I0920 18:22:46.205647   75086 pod_ready.go:82] duration metric: took 5.464252ms for pod "kube-apiserver-embed-certs-768431" in "kube-system" namespace to be "Ready" ...
	I0920 18:22:46.205659   75086 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-768431" in "kube-system" namespace to be "Ready" ...
	I0920 18:22:46.210040   75086 pod_ready.go:93] pod "kube-controller-manager-embed-certs-768431" in "kube-system" namespace has status "Ready":"True"
	I0920 18:22:46.210066   75086 pod_ready.go:82] duration metric: took 4.397776ms for pod "kube-controller-manager-embed-certs-768431" in "kube-system" namespace to be "Ready" ...
	I0920 18:22:46.210079   75086 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-768431" in "kube-system" namespace to be "Ready" ...
	I0920 18:22:48.216587   75086 pod_ready.go:103] pod "kube-scheduler-embed-certs-768431" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:49.217996   75086 pod_ready.go:93] pod "kube-scheduler-embed-certs-768431" in "kube-system" namespace has status "Ready":"True"
	I0920 18:22:49.218019   75086 pod_ready.go:82] duration metric: took 3.007931797s for pod "kube-scheduler-embed-certs-768431" in "kube-system" namespace to be "Ready" ...
	I0920 18:22:49.218025   75086 pod_ready.go:39] duration metric: took 8.533358652s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:22:49.218039   75086 api_server.go:52] waiting for apiserver process to appear ...
	I0920 18:22:49.218100   75086 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:22:49.235456   75086 api_server.go:72] duration metric: took 8.809056301s to wait for apiserver process to appear ...
	I0920 18:22:49.235480   75086 api_server.go:88] waiting for apiserver healthz status ...
	I0920 18:22:49.235499   75086 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0920 18:22:49.239562   75086 api_server.go:279] https://192.168.61.202:8443/healthz returned 200:
	ok
	I0920 18:22:49.240999   75086 api_server.go:141] control plane version: v1.31.1
	I0920 18:22:49.241024   75086 api_server.go:131] duration metric: took 5.537177ms to wait for apiserver health ...
	I0920 18:22:49.241033   75086 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 18:22:49.246297   75086 system_pods.go:59] 9 kube-system pods found
	I0920 18:22:49.246335   75086 system_pods.go:61] "coredns-7c65d6cfc9-g5tkc" [8877c0e8-6c8f-4a62-94bd-508982faee3c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0920 18:22:49.246342   75086 system_pods.go:61] "coredns-7c65d6cfc9-jkkdn" [f64288e4-009c-4ba8-93e7-30ca5296af46] Running
	I0920 18:22:49.246348   75086 system_pods.go:61] "etcd-embed-certs-768431" [f0ba7c9a-cc8b-43d4-aa48-a8f923d3c515] Running
	I0920 18:22:49.246352   75086 system_pods.go:61] "kube-apiserver-embed-certs-768431" [dba3b6cc-95db-47a0-ba41-3aa3ed3e8adb] Running
	I0920 18:22:49.246356   75086 system_pods.go:61] "kube-controller-manager-embed-certs-768431" [ff933e33-4c5c-4a8c-8ee4-6f0cb3ea4c3a] Running
	I0920 18:22:49.246359   75086 system_pods.go:61] "kube-proxy-c4527" [2e2d5102-0c42-4b87-8a27-dd53b8eb41f9] Running
	I0920 18:22:49.246362   75086 system_pods.go:61] "kube-scheduler-embed-certs-768431" [8cca3406-a395-4dcd-97ac-d8ce3e8deac6] Running
	I0920 18:22:49.246367   75086 system_pods.go:61] "metrics-server-6867b74b74-9snmf" [5fb654f5-5e73-436e-bc9d-04ef5077deb4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 18:22:49.246373   75086 system_pods.go:61] "storage-provisioner" [09227b45-ea2a-4fcc-b082-9978e5f00a4b] Running
	I0920 18:22:49.246379   75086 system_pods.go:74] duration metric: took 5.340615ms to wait for pod list to return data ...
	I0920 18:22:49.246388   75086 default_sa.go:34] waiting for default service account to be created ...
	I0920 18:22:49.249593   75086 default_sa.go:45] found service account: "default"
	I0920 18:22:49.249617   75086 default_sa.go:55] duration metric: took 3.222486ms for default service account to be created ...
	I0920 18:22:49.249626   75086 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 18:22:49.255747   75086 system_pods.go:86] 9 kube-system pods found
	I0920 18:22:49.255780   75086 system_pods.go:89] "coredns-7c65d6cfc9-g5tkc" [8877c0e8-6c8f-4a62-94bd-508982faee3c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0920 18:22:49.255790   75086 system_pods.go:89] "coredns-7c65d6cfc9-jkkdn" [f64288e4-009c-4ba8-93e7-30ca5296af46] Running
	I0920 18:22:49.255799   75086 system_pods.go:89] "etcd-embed-certs-768431" [f0ba7c9a-cc8b-43d4-aa48-a8f923d3c515] Running
	I0920 18:22:49.255805   75086 system_pods.go:89] "kube-apiserver-embed-certs-768431" [dba3b6cc-95db-47a0-ba41-3aa3ed3e8adb] Running
	I0920 18:22:49.255811   75086 system_pods.go:89] "kube-controller-manager-embed-certs-768431" [ff933e33-4c5c-4a8c-8ee4-6f0cb3ea4c3a] Running
	I0920 18:22:49.255817   75086 system_pods.go:89] "kube-proxy-c4527" [2e2d5102-0c42-4b87-8a27-dd53b8eb41f9] Running
	I0920 18:22:49.255826   75086 system_pods.go:89] "kube-scheduler-embed-certs-768431" [8cca3406-a395-4dcd-97ac-d8ce3e8deac6] Running
	I0920 18:22:49.255834   75086 system_pods.go:89] "metrics-server-6867b74b74-9snmf" [5fb654f5-5e73-436e-bc9d-04ef5077deb4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 18:22:49.255845   75086 system_pods.go:89] "storage-provisioner" [09227b45-ea2a-4fcc-b082-9978e5f00a4b] Running
	I0920 18:22:49.255852   75086 system_pods.go:126] duration metric: took 6.220419ms to wait for k8s-apps to be running ...
	I0920 18:22:49.255863   75086 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 18:22:49.255918   75086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:22:49.271916   75086 system_svc.go:56] duration metric: took 16.046857ms WaitForService to wait for kubelet
	I0920 18:22:49.271946   75086 kubeadm.go:582] duration metric: took 8.845551531s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 18:22:49.271962   75086 node_conditions.go:102] verifying NodePressure condition ...
	I0920 18:22:49.277150   75086 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 18:22:49.277190   75086 node_conditions.go:123] node cpu capacity is 2
	I0920 18:22:49.277203   75086 node_conditions.go:105] duration metric: took 5.234938ms to run NodePressure ...
	I0920 18:22:49.277216   75086 start.go:241] waiting for startup goroutines ...
	I0920 18:22:49.277226   75086 start.go:246] waiting for cluster config update ...
	I0920 18:22:49.277240   75086 start.go:255] writing updated cluster config ...
	I0920 18:22:49.278062   75086 ssh_runner.go:195] Run: rm -f paused
	I0920 18:22:49.329596   75086 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 18:22:49.331593   75086 out.go:177] * Done! kubectl is now configured to use "embed-certs-768431" cluster and "default" namespace by default
	I0920 18:22:48.802304   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:51.301288   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:53.801957   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:56.301310   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:58.302165   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:23:00.801931   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:23:03.301289   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:23:05.801777   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:23:08.301539   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:23:08.801990   74753 pod_ready.go:82] duration metric: took 4m0.006972987s for pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace to be "Ready" ...
	E0920 18:23:08.802010   74753 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0920 18:23:08.802019   74753 pod_ready.go:39] duration metric: took 4m2.40678087s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
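The interleaved 74753 process has now polled pod "metrics-server-6867b74b74-tfsff" for just over four minutes without it reporting Ready, and the wait ends with a context-deadline error. That is consistent with the Pending / ContainersNotReady status shown for the metrics-server pods elsewhere in this log, where the addon is enabled with a placeholder image (the "- Using image fake.domain/registry.k8s.io/echoserver:1.4" line above) that cannot be pulled. When such a wait times out on a real cluster, typical first checks look like the following; the k8s-app=metrics-server label is the conventional metrics-server label and is an assumption here, not taken from this log:

    kubectl -n kube-system get pods -l k8s-app=metrics-server -o wide
    kubectl -n kube-system describe pod -l k8s-app=metrics-server
    kubectl -n kube-system get events --sort-by=.lastTimestamp | tail -n 20
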
	I0920 18:23:08.802035   74753 api_server.go:52] waiting for apiserver process to appear ...
	I0920 18:23:08.802062   74753 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:23:08.802110   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:23:08.848687   74753 cri.go:89] found id: "334e4df5baa4f8f7d58af2308fab7f78de22f7e07d6dfbcbeaa0df2a52f6b207"
	I0920 18:23:08.848709   74753 cri.go:89] found id: ""
	I0920 18:23:08.848716   74753 logs.go:276] 1 containers: [334e4df5baa4f8f7d58af2308fab7f78de22f7e07d6dfbcbeaa0df2a52f6b207]
	I0920 18:23:08.848764   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:08.853180   74753 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:23:08.853239   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:23:08.889768   74753 cri.go:89] found id: "98aa96314cf8f863121b9953ad6335628481f536fcab49b7c00505a17eb35ff2"
	I0920 18:23:08.889793   74753 cri.go:89] found id: ""
	I0920 18:23:08.889803   74753 logs.go:276] 1 containers: [98aa96314cf8f863121b9953ad6335628481f536fcab49b7c00505a17eb35ff2]
	I0920 18:23:08.889869   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:08.893865   74753 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:23:08.893937   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:23:08.936592   74753 cri.go:89] found id: "35f0d8dd053d4c376ccf21215492d8509ff701e7d09b345fdbe06d505ba2d6b4"
	I0920 18:23:08.936619   74753 cri.go:89] found id: ""
	I0920 18:23:08.936629   74753 logs.go:276] 1 containers: [35f0d8dd053d4c376ccf21215492d8509ff701e7d09b345fdbe06d505ba2d6b4]
	I0920 18:23:08.936768   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:08.941222   74753 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:23:08.941311   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:23:08.978894   74753 cri.go:89] found id: "8153479cebb05353bec31c41778746c485cb294bf41c3f98e30559ad7da64c64"
	I0920 18:23:08.978918   74753 cri.go:89] found id: ""
	I0920 18:23:08.978929   74753 logs.go:276] 1 containers: [8153479cebb05353bec31c41778746c485cb294bf41c3f98e30559ad7da64c64]
	I0920 18:23:08.978977   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:08.982783   74753 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:23:08.982845   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:23:09.018400   74753 cri.go:89] found id: "6df198ca54e8004a532de9b15cfe7a48a659ae06623c00a78cad528762944497"
	I0920 18:23:09.018422   74753 cri.go:89] found id: ""
	I0920 18:23:09.018430   74753 logs.go:276] 1 containers: [6df198ca54e8004a532de9b15cfe7a48a659ae06623c00a78cad528762944497]
	I0920 18:23:09.018485   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:09.022403   74753 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:23:09.022470   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:23:09.062315   74753 cri.go:89] found id: "3ebf4c520d684127cd0b6146d44e863090b9c2db3ac866e9ce3ef2d10d2d6da1"
	I0920 18:23:09.062385   74753 cri.go:89] found id: ""
	I0920 18:23:09.062397   74753 logs.go:276] 1 containers: [3ebf4c520d684127cd0b6146d44e863090b9c2db3ac866e9ce3ef2d10d2d6da1]
	I0920 18:23:09.062453   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:09.066976   74753 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:23:09.067049   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:23:09.107467   74753 cri.go:89] found id: ""
	I0920 18:23:09.107504   74753 logs.go:276] 0 containers: []
	W0920 18:23:09.107515   74753 logs.go:278] No container was found matching "kindnet"
	I0920 18:23:09.107523   74753 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0920 18:23:09.107583   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0920 18:23:09.150489   74753 cri.go:89] found id: "179e4a02f3459a613d3307806b477d54581555b18babca62d0c5e553b9562442"
	I0920 18:23:09.150519   74753 cri.go:89] found id: "3eb9abdf57de51979118cfa77b613c22e28122fcb1da9e72fbf6f3a946ed3531"
	I0920 18:23:09.150526   74753 cri.go:89] found id: ""
	I0920 18:23:09.150536   74753 logs.go:276] 2 containers: [179e4a02f3459a613d3307806b477d54581555b18babca62d0c5e553b9562442 3eb9abdf57de51979118cfa77b613c22e28122fcb1da9e72fbf6f3a946ed3531]
	I0920 18:23:09.150589   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:09.154618   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:09.158682   74753 logs.go:123] Gathering logs for container status ...
	I0920 18:23:09.158708   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:23:09.198393   74753 logs.go:123] Gathering logs for kubelet ...
	I0920 18:23:09.198420   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:23:09.266161   74753 logs.go:123] Gathering logs for kube-apiserver [334e4df5baa4f8f7d58af2308fab7f78de22f7e07d6dfbcbeaa0df2a52f6b207] ...
	I0920 18:23:09.266197   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 334e4df5baa4f8f7d58af2308fab7f78de22f7e07d6dfbcbeaa0df2a52f6b207"
	I0920 18:23:09.311406   74753 logs.go:123] Gathering logs for etcd [98aa96314cf8f863121b9953ad6335628481f536fcab49b7c00505a17eb35ff2] ...
	I0920 18:23:09.311438   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 98aa96314cf8f863121b9953ad6335628481f536fcab49b7c00505a17eb35ff2"
	I0920 18:23:09.353233   74753 logs.go:123] Gathering logs for kube-controller-manager [3ebf4c520d684127cd0b6146d44e863090b9c2db3ac866e9ce3ef2d10d2d6da1] ...
	I0920 18:23:09.353271   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ebf4c520d684127cd0b6146d44e863090b9c2db3ac866e9ce3ef2d10d2d6da1"
	I0920 18:23:09.415966   74753 logs.go:123] Gathering logs for storage-provisioner [3eb9abdf57de51979118cfa77b613c22e28122fcb1da9e72fbf6f3a946ed3531] ...
	I0920 18:23:09.416010   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3eb9abdf57de51979118cfa77b613c22e28122fcb1da9e72fbf6f3a946ed3531"
	I0920 18:23:09.462018   74753 logs.go:123] Gathering logs for storage-provisioner [179e4a02f3459a613d3307806b477d54581555b18babca62d0c5e553b9562442] ...
	I0920 18:23:09.462054   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 179e4a02f3459a613d3307806b477d54581555b18babca62d0c5e553b9562442"
	I0920 18:23:09.499851   74753 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:23:09.499883   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:23:10.018536   74753 logs.go:123] Gathering logs for dmesg ...
	I0920 18:23:10.018586   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:23:10.033607   74753 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:23:10.033639   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 18:23:10.167598   74753 logs.go:123] Gathering logs for coredns [35f0d8dd053d4c376ccf21215492d8509ff701e7d09b345fdbe06d505ba2d6b4] ...
	I0920 18:23:10.167645   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35f0d8dd053d4c376ccf21215492d8509ff701e7d09b345fdbe06d505ba2d6b4"
	I0920 18:23:10.201819   74753 logs.go:123] Gathering logs for kube-scheduler [8153479cebb05353bec31c41778746c485cb294bf41c3f98e30559ad7da64c64] ...
	I0920 18:23:10.201856   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8153479cebb05353bec31c41778746c485cb294bf41c3f98e30559ad7da64c64"
	I0920 18:23:10.237079   74753 logs.go:123] Gathering logs for kube-proxy [6df198ca54e8004a532de9b15cfe7a48a659ae06623c00a78cad528762944497] ...
	I0920 18:23:10.237108   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6df198ca54e8004a532de9b15cfe7a48a659ae06623c00a78cad528762944497"
	I0920 18:23:12.778360   74753 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:23:12.796411   74753 api_server.go:72] duration metric: took 4m13.647076581s to wait for apiserver process to appear ...
	I0920 18:23:12.796438   74753 api_server.go:88] waiting for apiserver healthz status ...
	I0920 18:23:12.796475   74753 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:23:12.796525   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:23:12.833259   74753 cri.go:89] found id: "334e4df5baa4f8f7d58af2308fab7f78de22f7e07d6dfbcbeaa0df2a52f6b207"
	I0920 18:23:12.833279   74753 cri.go:89] found id: ""
	I0920 18:23:12.833287   74753 logs.go:276] 1 containers: [334e4df5baa4f8f7d58af2308fab7f78de22f7e07d6dfbcbeaa0df2a52f6b207]
	I0920 18:23:12.833334   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:12.837497   74753 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:23:12.837568   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:23:12.877602   74753 cri.go:89] found id: "98aa96314cf8f863121b9953ad6335628481f536fcab49b7c00505a17eb35ff2"
	I0920 18:23:12.877628   74753 cri.go:89] found id: ""
	I0920 18:23:12.877639   74753 logs.go:276] 1 containers: [98aa96314cf8f863121b9953ad6335628481f536fcab49b7c00505a17eb35ff2]
	I0920 18:23:12.877688   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:12.881869   74753 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:23:12.881951   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:23:12.924617   74753 cri.go:89] found id: "35f0d8dd053d4c376ccf21215492d8509ff701e7d09b345fdbe06d505ba2d6b4"
	I0920 18:23:12.924641   74753 cri.go:89] found id: ""
	I0920 18:23:12.924650   74753 logs.go:276] 1 containers: [35f0d8dd053d4c376ccf21215492d8509ff701e7d09b345fdbe06d505ba2d6b4]
	I0920 18:23:12.924710   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:12.929317   74753 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:23:12.929381   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:23:12.968642   74753 cri.go:89] found id: "8153479cebb05353bec31c41778746c485cb294bf41c3f98e30559ad7da64c64"
	I0920 18:23:12.968672   74753 cri.go:89] found id: ""
	I0920 18:23:12.968682   74753 logs.go:276] 1 containers: [8153479cebb05353bec31c41778746c485cb294bf41c3f98e30559ad7da64c64]
	I0920 18:23:12.968745   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:12.974588   74753 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:23:12.974665   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:23:13.010036   74753 cri.go:89] found id: "6df198ca54e8004a532de9b15cfe7a48a659ae06623c00a78cad528762944497"
	I0920 18:23:13.010067   74753 cri.go:89] found id: ""
	I0920 18:23:13.010078   74753 logs.go:276] 1 containers: [6df198ca54e8004a532de9b15cfe7a48a659ae06623c00a78cad528762944497]
	I0920 18:23:13.010136   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:13.014300   74753 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:23:13.014358   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:23:13.055560   74753 cri.go:89] found id: "3ebf4c520d684127cd0b6146d44e863090b9c2db3ac866e9ce3ef2d10d2d6da1"
	I0920 18:23:13.055591   74753 cri.go:89] found id: ""
	I0920 18:23:13.055601   74753 logs.go:276] 1 containers: [3ebf4c520d684127cd0b6146d44e863090b9c2db3ac866e9ce3ef2d10d2d6da1]
	I0920 18:23:13.055663   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:13.059828   74753 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:23:13.059887   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:23:13.100161   74753 cri.go:89] found id: ""
	I0920 18:23:13.100189   74753 logs.go:276] 0 containers: []
	W0920 18:23:13.100199   74753 logs.go:278] No container was found matching "kindnet"
	I0920 18:23:13.100207   74753 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0920 18:23:13.100269   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0920 18:23:13.136788   74753 cri.go:89] found id: "179e4a02f3459a613d3307806b477d54581555b18babca62d0c5e553b9562442"
	I0920 18:23:13.136818   74753 cri.go:89] found id: "3eb9abdf57de51979118cfa77b613c22e28122fcb1da9e72fbf6f3a946ed3531"
	I0920 18:23:13.136824   74753 cri.go:89] found id: ""
	I0920 18:23:13.136831   74753 logs.go:276] 2 containers: [179e4a02f3459a613d3307806b477d54581555b18babca62d0c5e553b9562442 3eb9abdf57de51979118cfa77b613c22e28122fcb1da9e72fbf6f3a946ed3531]
	I0920 18:23:13.136894   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:13.140956   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:13.145991   74753 logs.go:123] Gathering logs for container status ...
	I0920 18:23:13.146023   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:23:13.187274   74753 logs.go:123] Gathering logs for kubelet ...
	I0920 18:23:13.187303   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:23:13.257111   74753 logs.go:123] Gathering logs for dmesg ...
	I0920 18:23:13.257147   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:23:13.271678   74753 logs.go:123] Gathering logs for kube-controller-manager [3ebf4c520d684127cd0b6146d44e863090b9c2db3ac866e9ce3ef2d10d2d6da1] ...
	I0920 18:23:13.271704   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ebf4c520d684127cd0b6146d44e863090b9c2db3ac866e9ce3ef2d10d2d6da1"
	I0920 18:23:13.327163   74753 logs.go:123] Gathering logs for storage-provisioner [179e4a02f3459a613d3307806b477d54581555b18babca62d0c5e553b9562442] ...
	I0920 18:23:13.327191   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 179e4a02f3459a613d3307806b477d54581555b18babca62d0c5e553b9562442"
	I0920 18:23:13.362002   74753 logs.go:123] Gathering logs for storage-provisioner [3eb9abdf57de51979118cfa77b613c22e28122fcb1da9e72fbf6f3a946ed3531] ...
	I0920 18:23:13.362039   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3eb9abdf57de51979118cfa77b613c22e28122fcb1da9e72fbf6f3a946ed3531"
	I0920 18:23:13.409907   74753 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:23:13.409938   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:23:13.840233   74753 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:23:13.840275   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 18:23:13.956466   74753 logs.go:123] Gathering logs for kube-apiserver [334e4df5baa4f8f7d58af2308fab7f78de22f7e07d6dfbcbeaa0df2a52f6b207] ...
	I0920 18:23:13.956499   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 334e4df5baa4f8f7d58af2308fab7f78de22f7e07d6dfbcbeaa0df2a52f6b207"
	I0920 18:23:14.006028   74753 logs.go:123] Gathering logs for etcd [98aa96314cf8f863121b9953ad6335628481f536fcab49b7c00505a17eb35ff2] ...
	I0920 18:23:14.006063   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 98aa96314cf8f863121b9953ad6335628481f536fcab49b7c00505a17eb35ff2"
	I0920 18:23:14.051354   74753 logs.go:123] Gathering logs for coredns [35f0d8dd053d4c376ccf21215492d8509ff701e7d09b345fdbe06d505ba2d6b4] ...
	I0920 18:23:14.051382   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35f0d8dd053d4c376ccf21215492d8509ff701e7d09b345fdbe06d505ba2d6b4"
	I0920 18:23:14.086486   74753 logs.go:123] Gathering logs for kube-scheduler [8153479cebb05353bec31c41778746c485cb294bf41c3f98e30559ad7da64c64] ...
	I0920 18:23:14.086518   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8153479cebb05353bec31c41778746c485cb294bf41c3f98e30559ad7da64c64"
	I0920 18:23:14.128119   74753 logs.go:123] Gathering logs for kube-proxy [6df198ca54e8004a532de9b15cfe7a48a659ae06623c00a78cad528762944497] ...
	I0920 18:23:14.128149   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6df198ca54e8004a532de9b15cfe7a48a659ae06623c00a78cad528762944497"
	I0920 18:23:16.668901   74753 api_server.go:253] Checking apiserver healthz at https://192.168.50.47:8443/healthz ...
	I0920 18:23:16.674235   74753 api_server.go:279] https://192.168.50.47:8443/healthz returned 200:
	ok
	I0920 18:23:16.675431   74753 api_server.go:141] control plane version: v1.31.1
	I0920 18:23:16.675449   74753 api_server.go:131] duration metric: took 3.879005012s to wait for apiserver health ...
	I0920 18:23:16.675456   74753 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 18:23:16.675481   74753 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:23:16.675527   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:23:16.711645   74753 cri.go:89] found id: "334e4df5baa4f8f7d58af2308fab7f78de22f7e07d6dfbcbeaa0df2a52f6b207"
	I0920 18:23:16.711673   74753 cri.go:89] found id: ""
	I0920 18:23:16.711683   74753 logs.go:276] 1 containers: [334e4df5baa4f8f7d58af2308fab7f78de22f7e07d6dfbcbeaa0df2a52f6b207]
	I0920 18:23:16.711743   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:16.716466   74753 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:23:16.716536   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:23:16.763061   74753 cri.go:89] found id: "98aa96314cf8f863121b9953ad6335628481f536fcab49b7c00505a17eb35ff2"
	I0920 18:23:16.763085   74753 cri.go:89] found id: ""
	I0920 18:23:16.763095   74753 logs.go:276] 1 containers: [98aa96314cf8f863121b9953ad6335628481f536fcab49b7c00505a17eb35ff2]
	I0920 18:23:16.763155   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:16.767018   74753 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:23:16.767075   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:23:16.801111   74753 cri.go:89] found id: "35f0d8dd053d4c376ccf21215492d8509ff701e7d09b345fdbe06d505ba2d6b4"
	I0920 18:23:16.801131   74753 cri.go:89] found id: ""
	I0920 18:23:16.801138   74753 logs.go:276] 1 containers: [35f0d8dd053d4c376ccf21215492d8509ff701e7d09b345fdbe06d505ba2d6b4]
	I0920 18:23:16.801192   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:16.805372   74753 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:23:16.805436   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:23:16.843368   74753 cri.go:89] found id: "8153479cebb05353bec31c41778746c485cb294bf41c3f98e30559ad7da64c64"
	I0920 18:23:16.843399   74753 cri.go:89] found id: ""
	I0920 18:23:16.843410   74753 logs.go:276] 1 containers: [8153479cebb05353bec31c41778746c485cb294bf41c3f98e30559ad7da64c64]
	I0920 18:23:16.843461   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:16.847284   74753 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:23:16.847341   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:23:16.882749   74753 cri.go:89] found id: "6df198ca54e8004a532de9b15cfe7a48a659ae06623c00a78cad528762944497"
	I0920 18:23:16.882770   74753 cri.go:89] found id: ""
	I0920 18:23:16.882777   74753 logs.go:276] 1 containers: [6df198ca54e8004a532de9b15cfe7a48a659ae06623c00a78cad528762944497]
	I0920 18:23:16.882838   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:16.887003   74753 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:23:16.887083   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:23:16.923261   74753 cri.go:89] found id: "3ebf4c520d684127cd0b6146d44e863090b9c2db3ac866e9ce3ef2d10d2d6da1"
	I0920 18:23:16.923289   74753 cri.go:89] found id: ""
	I0920 18:23:16.923303   74753 logs.go:276] 1 containers: [3ebf4c520d684127cd0b6146d44e863090b9c2db3ac866e9ce3ef2d10d2d6da1]
	I0920 18:23:16.923357   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:16.927722   74753 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:23:16.927782   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:23:16.963753   74753 cri.go:89] found id: ""
	I0920 18:23:16.963780   74753 logs.go:276] 0 containers: []
	W0920 18:23:16.963791   74753 logs.go:278] No container was found matching "kindnet"
	I0920 18:23:16.963799   74753 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0920 18:23:16.963866   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0920 18:23:17.000548   74753 cri.go:89] found id: "179e4a02f3459a613d3307806b477d54581555b18babca62d0c5e553b9562442"
	I0920 18:23:17.000566   74753 cri.go:89] found id: "3eb9abdf57de51979118cfa77b613c22e28122fcb1da9e72fbf6f3a946ed3531"
	I0920 18:23:17.000571   74753 cri.go:89] found id: ""
	I0920 18:23:17.000578   74753 logs.go:276] 2 containers: [179e4a02f3459a613d3307806b477d54581555b18babca62d0c5e553b9562442 3eb9abdf57de51979118cfa77b613c22e28122fcb1da9e72fbf6f3a946ed3531]
	I0920 18:23:17.000627   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:17.004708   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:17.008489   74753 logs.go:123] Gathering logs for coredns [35f0d8dd053d4c376ccf21215492d8509ff701e7d09b345fdbe06d505ba2d6b4] ...
	I0920 18:23:17.008518   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35f0d8dd053d4c376ccf21215492d8509ff701e7d09b345fdbe06d505ba2d6b4"
	I0920 18:23:17.045971   74753 logs.go:123] Gathering logs for kube-scheduler [8153479cebb05353bec31c41778746c485cb294bf41c3f98e30559ad7da64c64] ...
	I0920 18:23:17.046005   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8153479cebb05353bec31c41778746c485cb294bf41c3f98e30559ad7da64c64"
	I0920 18:23:17.083976   74753 logs.go:123] Gathering logs for kube-controller-manager [3ebf4c520d684127cd0b6146d44e863090b9c2db3ac866e9ce3ef2d10d2d6da1] ...
	I0920 18:23:17.084005   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ebf4c520d684127cd0b6146d44e863090b9c2db3ac866e9ce3ef2d10d2d6da1"
	I0920 18:23:17.137192   74753 logs.go:123] Gathering logs for storage-provisioner [179e4a02f3459a613d3307806b477d54581555b18babca62d0c5e553b9562442] ...
	I0920 18:23:17.137228   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 179e4a02f3459a613d3307806b477d54581555b18babca62d0c5e553b9562442"
	I0920 18:23:17.177203   74753 logs.go:123] Gathering logs for dmesg ...
	I0920 18:23:17.177234   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:23:17.192612   74753 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:23:17.192639   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 18:23:17.300681   74753 logs.go:123] Gathering logs for kube-apiserver [334e4df5baa4f8f7d58af2308fab7f78de22f7e07d6dfbcbeaa0df2a52f6b207] ...
	I0920 18:23:17.300714   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 334e4df5baa4f8f7d58af2308fab7f78de22f7e07d6dfbcbeaa0df2a52f6b207"
	I0920 18:23:17.346835   74753 logs.go:123] Gathering logs for storage-provisioner [3eb9abdf57de51979118cfa77b613c22e28122fcb1da9e72fbf6f3a946ed3531] ...
	I0920 18:23:17.346866   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3eb9abdf57de51979118cfa77b613c22e28122fcb1da9e72fbf6f3a946ed3531"
	I0920 18:23:17.382625   74753 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:23:17.382650   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:23:17.750803   74753 logs.go:123] Gathering logs for container status ...
	I0920 18:23:17.750853   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:23:17.801114   74753 logs.go:123] Gathering logs for kubelet ...
	I0920 18:23:17.801142   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:23:17.866547   74753 logs.go:123] Gathering logs for etcd [98aa96314cf8f863121b9953ad6335628481f536fcab49b7c00505a17eb35ff2] ...
	I0920 18:23:17.866589   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 98aa96314cf8f863121b9953ad6335628481f536fcab49b7c00505a17eb35ff2"
	I0920 18:23:17.909489   74753 logs.go:123] Gathering logs for kube-proxy [6df198ca54e8004a532de9b15cfe7a48a659ae06623c00a78cad528762944497] ...
	I0920 18:23:17.909519   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6df198ca54e8004a532de9b15cfe7a48a659ae06623c00a78cad528762944497"
	I0920 18:23:20.456014   74753 system_pods.go:59] 8 kube-system pods found
	I0920 18:23:20.456055   74753 system_pods.go:61] "coredns-7c65d6cfc9-j2t5h" [dd2636f1-3200-4f22-957c-046277c9be8c] Running
	I0920 18:23:20.456063   74753 system_pods.go:61] "etcd-no-preload-956403" [daf84be0-e673-4685-a998-47ec64664484] Running
	I0920 18:23:20.456068   74753 system_pods.go:61] "kube-apiserver-no-preload-956403" [416217e6-3c91-49ec-bd69-812a49b4afd9] Running
	I0920 18:23:20.456074   74753 system_pods.go:61] "kube-controller-manager-no-preload-956403" [30fae740-b073-4619-bca5-010ca90b2667] Running
	I0920 18:23:20.456079   74753 system_pods.go:61] "kube-proxy-sz4bm" [269600fb-ef65-4b17-8c07-76c79e35f5a8] Running
	I0920 18:23:20.456085   74753 system_pods.go:61] "kube-scheduler-no-preload-956403" [46432927-e506-4665-8825-c5f6a2ed0458] Running
	I0920 18:23:20.456095   74753 system_pods.go:61] "metrics-server-6867b74b74-tfsff" [599ba06a-6d4d-483b-b390-a3595a814757] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 18:23:20.456101   74753 system_pods.go:61] "storage-provisioner" [df661627-7d32-4467-9805-1ae65d4fa35c] Running
	I0920 18:23:20.456109   74753 system_pods.go:74] duration metric: took 3.780646412s to wait for pod list to return data ...
	I0920 18:23:20.456116   74753 default_sa.go:34] waiting for default service account to be created ...
	I0920 18:23:20.459571   74753 default_sa.go:45] found service account: "default"
	I0920 18:23:20.459598   74753 default_sa.go:55] duration metric: took 3.475906ms for default service account to be created ...
	I0920 18:23:20.459607   74753 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 18:23:20.464453   74753 system_pods.go:86] 8 kube-system pods found
	I0920 18:23:20.464489   74753 system_pods.go:89] "coredns-7c65d6cfc9-j2t5h" [dd2636f1-3200-4f22-957c-046277c9be8c] Running
	I0920 18:23:20.464495   74753 system_pods.go:89] "etcd-no-preload-956403" [daf84be0-e673-4685-a998-47ec64664484] Running
	I0920 18:23:20.464499   74753 system_pods.go:89] "kube-apiserver-no-preload-956403" [416217e6-3c91-49ec-bd69-812a49b4afd9] Running
	I0920 18:23:20.464544   74753 system_pods.go:89] "kube-controller-manager-no-preload-956403" [30fae740-b073-4619-bca5-010ca90b2667] Running
	I0920 18:23:20.464559   74753 system_pods.go:89] "kube-proxy-sz4bm" [269600fb-ef65-4b17-8c07-76c79e35f5a8] Running
	I0920 18:23:20.464563   74753 system_pods.go:89] "kube-scheduler-no-preload-956403" [46432927-e506-4665-8825-c5f6a2ed0458] Running
	I0920 18:23:20.464570   74753 system_pods.go:89] "metrics-server-6867b74b74-tfsff" [599ba06a-6d4d-483b-b390-a3595a814757] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 18:23:20.464577   74753 system_pods.go:89] "storage-provisioner" [df661627-7d32-4467-9805-1ae65d4fa35c] Running
	I0920 18:23:20.464583   74753 system_pods.go:126] duration metric: took 4.971732ms to wait for k8s-apps to be running ...
	I0920 18:23:20.464592   74753 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 18:23:20.464637   74753 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:23:20.483425   74753 system_svc.go:56] duration metric: took 18.826162ms WaitForService to wait for kubelet
	I0920 18:23:20.483452   74753 kubeadm.go:582] duration metric: took 4m21.334124064s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 18:23:20.483469   74753 node_conditions.go:102] verifying NodePressure condition ...
	I0920 18:23:20.486703   74753 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 18:23:20.486725   74753 node_conditions.go:123] node cpu capacity is 2
	I0920 18:23:20.486736   74753 node_conditions.go:105] duration metric: took 3.263035ms to run NodePressure ...
	I0920 18:23:20.486745   74753 start.go:241] waiting for startup goroutines ...
	I0920 18:23:20.486752   74753 start.go:246] waiting for cluster config update ...
	I0920 18:23:20.486763   74753 start.go:255] writing updated cluster config ...
	I0920 18:23:20.487023   74753 ssh_runner.go:195] Run: rm -f paused
	I0920 18:23:20.538569   74753 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 18:23:20.540783   74753 out.go:177] * Done! kubectl is now configured to use "no-preload-956403" cluster and "default" namespace by default
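Before the "no-preload-956403" cluster is declared ready above, the log records "waiting for apiserver healthz status" followed by "Checking apiserver healthz at https://192.168.50.47:8443/healthz ... returned 200". As a rough sketch only, the Go snippet below shows what polling such an endpoint until it returns 200 could look like; the URL is taken from the log, but the function name, timeout, and TLS handling are assumptions and not minikube's implementation.

// Hypothetical sketch: poll an apiserver /healthz URL until it answers 200
// or a deadline passes, corresponding to the healthz wait logged above.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption for the sketch: skip certificate verification because the
		// apiserver cert is cluster-internal; real code would load the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200: apiserver is reachable and healthy
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.50.47:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}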
	I0920 18:24:23.891635   75577 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0920 18:24:23.891735   75577 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0920 18:24:23.893591   75577 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0920 18:24:23.893681   75577 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 18:24:23.893785   75577 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 18:24:23.893913   75577 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 18:24:23.894056   75577 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0920 18:24:23.894134   75577 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 18:24:23.895929   75577 out.go:235]   - Generating certificates and keys ...
	I0920 18:24:23.896024   75577 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 18:24:23.896127   75577 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 18:24:23.896245   75577 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0920 18:24:23.896329   75577 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0920 18:24:23.896426   75577 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0920 18:24:23.896510   75577 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0920 18:24:23.896603   75577 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0920 18:24:23.896686   75577 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0920 18:24:23.896783   75577 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0920 18:24:23.896865   75577 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0920 18:24:23.896916   75577 kubeadm.go:310] [certs] Using the existing "sa" key
	I0920 18:24:23.896984   75577 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 18:24:23.897054   75577 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 18:24:23.897112   75577 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 18:24:23.897174   75577 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 18:24:23.897237   75577 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 18:24:23.897367   75577 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 18:24:23.897488   75577 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 18:24:23.897554   75577 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 18:24:23.897660   75577 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 18:24:23.899056   75577 out.go:235]   - Booting up control plane ...
	I0920 18:24:23.899173   75577 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 18:24:23.899287   75577 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 18:24:23.899371   75577 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 18:24:23.899460   75577 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 18:24:23.899639   75577 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0920 18:24:23.899719   75577 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0920 18:24:23.899823   75577 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:24:23.900047   75577 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:24:23.900124   75577 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:24:23.900322   75577 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:24:23.900392   75577 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:24:23.900556   75577 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:24:23.900634   75577 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:24:23.900826   75577 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:24:23.900884   75577 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:24:23.901044   75577 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:24:23.901052   75577 kubeadm.go:310] 
	I0920 18:24:23.901094   75577 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0920 18:24:23.901129   75577 kubeadm.go:310] 		timed out waiting for the condition
	I0920 18:24:23.901135   75577 kubeadm.go:310] 
	I0920 18:24:23.901179   75577 kubeadm.go:310] 	This error is likely caused by:
	I0920 18:24:23.901209   75577 kubeadm.go:310] 		- The kubelet is not running
	I0920 18:24:23.901314   75577 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0920 18:24:23.901335   75577 kubeadm.go:310] 
	I0920 18:24:23.901458   75577 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0920 18:24:23.901515   75577 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0920 18:24:23.901547   75577 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0920 18:24:23.901556   75577 kubeadm.go:310] 
	I0920 18:24:23.901719   75577 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0920 18:24:23.901859   75577 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0920 18:24:23.901870   75577 kubeadm.go:310] 
	I0920 18:24:23.902030   75577 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0920 18:24:23.902122   75577 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0920 18:24:23.902186   75577 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0920 18:24:23.902264   75577 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0920 18:24:23.902299   75577 kubeadm.go:310] 
	W0920 18:24:23.902388   75577 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0920 18:24:23.902435   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0920 18:24:24.370490   75577 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:24:24.385383   75577 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 18:24:24.395443   75577 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 18:24:24.395463   75577 kubeadm.go:157] found existing configuration files:
	
	I0920 18:24:24.395505   75577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 18:24:24.404947   75577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 18:24:24.405021   75577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 18:24:24.415145   75577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 18:24:24.424199   75577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 18:24:24.424279   75577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 18:24:24.435108   75577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 18:24:24.446684   75577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 18:24:24.446742   75577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 18:24:24.457199   75577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 18:24:24.467240   75577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 18:24:24.467315   75577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 18:24:24.479293   75577 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 18:24:24.704601   75577 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 18:26:21.167677   75577 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0920 18:26:21.167760   75577 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0920 18:26:21.170176   75577 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0920 18:26:21.170249   75577 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 18:26:21.170353   75577 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 18:26:21.170485   75577 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 18:26:21.170653   75577 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0920 18:26:21.170778   75577 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 18:26:21.172479   75577 out.go:235]   - Generating certificates and keys ...
	I0920 18:26:21.172559   75577 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 18:26:21.172623   75577 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 18:26:21.172734   75577 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0920 18:26:21.172820   75577 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0920 18:26:21.172901   75577 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0920 18:26:21.172948   75577 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0920 18:26:21.173054   75577 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0920 18:26:21.173145   75577 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0920 18:26:21.173238   75577 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0920 18:26:21.173325   75577 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0920 18:26:21.173381   75577 kubeadm.go:310] [certs] Using the existing "sa" key
	I0920 18:26:21.173468   75577 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 18:26:21.173517   75577 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 18:26:21.173562   75577 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 18:26:21.173657   75577 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 18:26:21.173753   75577 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 18:26:21.173931   75577 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 18:26:21.174042   75577 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 18:26:21.174120   75577 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 18:26:21.174226   75577 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 18:26:21.175785   75577 out.go:235]   - Booting up control plane ...
	I0920 18:26:21.175897   75577 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 18:26:21.176062   75577 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 18:26:21.176179   75577 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 18:26:21.176283   75577 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 18:26:21.176482   75577 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0920 18:26:21.176567   75577 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0920 18:26:21.176654   75577 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:26:21.176850   75577 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:26:21.176952   75577 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:26:21.177146   75577 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:26:21.177255   75577 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:26:21.177460   75577 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:26:21.177527   75577 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:26:21.177681   75577 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:26:21.177758   75577 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:26:21.177952   75577 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:26:21.177966   75577 kubeadm.go:310] 
	I0920 18:26:21.178001   75577 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0920 18:26:21.178035   75577 kubeadm.go:310] 		timed out waiting for the condition
	I0920 18:26:21.178041   75577 kubeadm.go:310] 
	I0920 18:26:21.178070   75577 kubeadm.go:310] 	This error is likely caused by:
	I0920 18:26:21.178099   75577 kubeadm.go:310] 		- The kubelet is not running
	I0920 18:26:21.178239   75577 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0920 18:26:21.178251   75577 kubeadm.go:310] 
	I0920 18:26:21.178348   75577 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0920 18:26:21.178378   75577 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0920 18:26:21.178415   75577 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0920 18:26:21.178421   75577 kubeadm.go:310] 
	I0920 18:26:21.178505   75577 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0920 18:26:21.178581   75577 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0920 18:26:21.178590   75577 kubeadm.go:310] 
	I0920 18:26:21.178695   75577 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0920 18:26:21.178770   75577 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0920 18:26:21.178832   75577 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0920 18:26:21.178893   75577 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0920 18:26:21.178950   75577 kubeadm.go:394] duration metric: took 7m57.80706374s to StartCluster
	I0920 18:26:21.178961   75577 kubeadm.go:310] 
	I0920 18:26:21.178989   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:26:21.179047   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:26:21.229528   75577 cri.go:89] found id: ""
	I0920 18:26:21.229554   75577 logs.go:276] 0 containers: []
	W0920 18:26:21.229564   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:26:21.229572   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:26:21.229644   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:26:21.265997   75577 cri.go:89] found id: ""
	I0920 18:26:21.266021   75577 logs.go:276] 0 containers: []
	W0920 18:26:21.266029   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:26:21.266034   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:26:21.266081   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:26:21.309358   75577 cri.go:89] found id: ""
	I0920 18:26:21.309391   75577 logs.go:276] 0 containers: []
	W0920 18:26:21.309402   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:26:21.309409   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:26:21.309469   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:26:21.349418   75577 cri.go:89] found id: ""
	I0920 18:26:21.349442   75577 logs.go:276] 0 containers: []
	W0920 18:26:21.349451   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:26:21.349457   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:26:21.349537   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:26:21.388067   75577 cri.go:89] found id: ""
	I0920 18:26:21.388092   75577 logs.go:276] 0 containers: []
	W0920 18:26:21.388105   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:26:21.388111   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:26:21.388165   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:26:21.428474   75577 cri.go:89] found id: ""
	I0920 18:26:21.428501   75577 logs.go:276] 0 containers: []
	W0920 18:26:21.428512   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:26:21.428519   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:26:21.428580   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:26:21.466188   75577 cri.go:89] found id: ""
	I0920 18:26:21.466215   75577 logs.go:276] 0 containers: []
	W0920 18:26:21.466225   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:26:21.466232   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:26:21.466291   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:26:21.504396   75577 cri.go:89] found id: ""
	I0920 18:26:21.504422   75577 logs.go:276] 0 containers: []
	W0920 18:26:21.504432   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:26:21.504443   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:26:21.504457   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:26:21.608057   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:26:21.608098   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:26:21.672536   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:26:21.672564   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:26:21.723219   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:26:21.723251   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:26:21.736436   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:26:21.736463   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:26:21.809055   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0920 18:26:21.809079   75577 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0920 18:26:21.809128   75577 out.go:270] * 
	W0920 18:26:21.809197   75577 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0920 18:26:21.809217   75577 out.go:270] * 
	W0920 18:26:21.810193   75577 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 18:26:21.814385   75577 out.go:201] 
	W0920 18:26:21.816118   75577 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0920 18:26:21.816186   75577 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0920 18:26:21.816225   75577 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0920 18:26:21.818478   75577 out.go:201] 
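
For reference, a compact version of the troubleshooting sequence suggested by the kubelet-check output above (a sketch only: it assumes shell access to the affected node, e.g. via `minikube ssh`, and the CRI-O socket path shown in the log; CONTAINERID and <profile> are placeholders):

    # Check whether the kubelet is running and inspect its recent logs
    systemctl status kubelet
    journalctl -xeu kubelet

    # List Kubernetes containers known to CRI-O, then pull logs for a failing one
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID

    # From the host, retry the start with the cgroup-driver hint from the error message
    minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd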
	
	
	==> CRI-O <==
	Sep 20 18:31:39 default-k8s-diff-port-553719 crio[709]: time="2024-09-20 18:31:39.779410445Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:ed8e8a17964af0e9308b4e677d4be559514ab91dea1b7a13613a5ccec8532128,Metadata:&PodSandboxMetadata{Name:busybox,Uid:03376c58-8368-41cb-8d71-ec5f2ff84ab5,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726856302573585693,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 03376c58-8368-41cb-8d71-ec5f2ff84ab5,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-20T18:18:06.719399239Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:03bb196d26977bda9b4de0f31d33c8af500a378ed1b27a610d9c2ce2ca86f7d4,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-dmdfb,Uid:8e4ffdb3-37e0-4498-bdf5-d6e7dbcad020,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:172685
6302571440425,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-dmdfb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e4ffdb3-37e0-4498-bdf5-d6e7dbcad020,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-20T18:18:06.719378279Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:248365a77cdfed6ba5be353b372dde0209f3506d07dae8aab7ba94cdbcad3c99,Metadata:&PodSandboxMetadata{Name:metrics-server-6867b74b74-vtl79,Uid:29e0b6eb-22a9-4e37-97f9-83b48cc38193,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726856294778345576,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-6867b74b74-vtl79,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29e0b6eb-22a9-4e37-97f9-83b48cc38193,k8s-app: metrics-server,pod-template-hash: 6867b74b74,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-20
T18:18:06.719397409Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8ea9454041ee2bbfdc6b68119211b17ce2e62e4bf161dccd83d58506e3f24b7e,Metadata:&PodSandboxMetadata{Name:kube-proxy-p9crq,Uid:83e0f53d-6960-42c4-904d-ea85ba9160f4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726856287036579593,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-p9crq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83e0f53d-6960-42c4-904d-ea85ba9160f4,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-20T18:18:06.719393411Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f599733da12b5d939c2c63ceb1397a444f3aa903f568890ba5592522225fb5ce,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:6fad2d07-f99e-45ac-9657-bce6d73d7fce,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726856287034911364,Labels:
map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fad2d07-f99e-45ac-9657-bce6d73d7fce,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,ku
bernetes.io/config.seen: 2024-09-20T18:18:06.719400944Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:63f1d4cd5ff76e0d41b21c738ea89441a068893bacd49ef216250266d36bae56,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-default-k8s-diff-port-553719,Uid:48787ec85035644941355902d7fc180b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726856283313607887,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-553719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48787ec85035644941355902d7fc180b,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 48787ec85035644941355902d7fc180b,kubernetes.io/config.seen: 2024-09-20T18:18:02.721705848Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:3783189e1d427c6344484a14cff656ae057adde4f6c123f5e345f36ce2911459,Metadata:&PodSandboxMetadata{Name:etcd-default-k8s-diff
-port-553719,Uid:3ed85fac33b111d1c67965836593508e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726856283310634140,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-default-k8s-diff-port-553719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ed85fac33b111d1c67965836593508e,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.72.190:2379,kubernetes.io/config.hash: 3ed85fac33b111d1c67965836593508e,kubernetes.io/config.seen: 2024-09-20T18:18:02.721700212Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e6416e3470b426953107537063c2a9ee93b73ef1aacf09515dcd20ea9dda51b0,Metadata:&PodSandboxMetadata{Name:kube-scheduler-default-k8s-diff-port-553719,Uid:739557da412bdc7964815ab846378cab,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726856283307265418,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name
: POD,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-553719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 739557da412bdc7964815ab846378cab,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 739557da412bdc7964815ab846378cab,kubernetes.io/config.seen: 2024-09-20T18:18:02.721706993Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:becbb7375ce81ceda866b0f82a38d38a458ad272af5fa7787c5079ea228a565a,Metadata:&PodSandboxMetadata{Name:kube-apiserver-default-k8s-diff-port-553719,Uid:58340272c9cd93d514ace52e4571f9c1,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726856283291950520,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-553719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58340272c9cd93d514ace52e4571f9c1,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-a
ddress.endpoint: 192.168.72.190:8444,kubernetes.io/config.hash: 58340272c9cd93d514ace52e4571f9c1,kubernetes.io/config.seen: 2024-09-20T18:18:02.721704350Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=b4281684-a0ac-434b-b373-805663797c5a name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 20 18:31:39 default-k8s-diff-port-553719 crio[709]: time="2024-09-20 18:31:39.780388437Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3ed16daa-b593-4cb8-a1eb-f5a881707288 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:31:39 default-k8s-diff-port-553719 crio[709]: time="2024-09-20 18:31:39.780477468Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3ed16daa-b593-4cb8-a1eb-f5a881707288 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:31:39 default-k8s-diff-port-553719 crio[709]: time="2024-09-20 18:31:39.780698256Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:001bdc98537f0917f021c14a5d0733bd31cf19be1b4862502458a7a90e4300da,PodSandboxId:f599733da12b5d939c2c63ceb1397a444f3aa903f568890ba5592522225fb5ce,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726856317980466357,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fad2d07-f99e-45ac-9657-bce6d73d7fce,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b63423dedcc6b7a32bfef5f6d1056defef2c63f67e7da30aae922692de71bdf,PodSandboxId:ed8e8a17964af0e9308b4e677d4be559514ab91dea1b7a13613a5ccec8532128,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1726856306828572138,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 03376c58-8368-41cb-8d71-ec5f2ff84ab5,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:606f7c8a9a095f499ae422de76f76105607db72e57b9b0a6db3e3c5a81656c5c,PodSandboxId:03bb196d26977bda9b4de0f31d33c8af500a378ed1b27a610d9c2ce2ca86f7d4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726856303441887270,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dmdfb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e4ffdb3-37e0-4498-bdf5-d6e7dbcad020,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:702f7f440eb6016c2c758a6be4e1072274ab6dcd169d7a081e5464869a7c299c,PodSandboxId:8ea9454041ee2bbfdc6b68119211b17ce2e62e4bf161dccd83d58506e3f24b7e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726856287158667233,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p9crq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83e0f53d-6
960-42c4-904d-ea85ba9160f4,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c42201b6e3d55b6fd8beaf04d416658b228bf627e519ddb022dad6d17219cc1c,PodSandboxId:f599733da12b5d939c2c63ceb1397a444f3aa903f568890ba5592522225fb5ce,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726856287149197810,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fad2d07-f99e-45ac-9657
-bce6d73d7fce,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65da0bae1c849a395cc241962bb6cbdf9f3edf11cdc95f4495ac2e427f9602d4,PodSandboxId:3783189e1d427c6344484a14cff656ae057adde4f6c123f5e345f36ce2911459,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726856283616687970,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-553719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ed85fac33b111d1c67965836593508e,},Annotations:map[
string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ba313deffc6185e59dac50051e933f6b2ea70d630cf36b2b8c2c07042f793ba,PodSandboxId:e6416e3470b426953107537063c2a9ee93b73ef1aacf09515dcd20ea9dda51b0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726856283624475103,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-553719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 739557da412bdc7964815ab846378cab,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ca303b795795bd920f261c88630f30fd51a91b9694a9a6e67af8164bab8242b,PodSandboxId:63f1d4cd5ff76e0d41b21c738ea89441a068893bacd49ef216250266d36bae56,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726856283617569700,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-553719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48787ec85035644941355902d7fc
180b,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ea0cfbd9902ae8e3797c77592bfe05a5f8162f75a391f5c2dc992eb5154c4ad,PodSandboxId:becbb7375ce81ceda866b0f82a38d38a458ad272af5fa7787c5079ea228a565a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726856283606463117,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-553719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58340272c9cd93d514ace52e4571f9
c1,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3ed16daa-b593-4cb8-a1eb-f5a881707288 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:31:39 default-k8s-diff-port-553719 crio[709]: time="2024-09-20 18:31:39.814381709Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=42aa2978-34c8-4769-9ef8-c719f9e4e4f4 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:31:39 default-k8s-diff-port-553719 crio[709]: time="2024-09-20 18:31:39.814507874Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=42aa2978-34c8-4769-9ef8-c719f9e4e4f4 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:31:39 default-k8s-diff-port-553719 crio[709]: time="2024-09-20 18:31:39.816508358Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d1bc91b3-2c9f-4463-9be4-802291f027b1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:31:39 default-k8s-diff-port-553719 crio[709]: time="2024-09-20 18:31:39.817130595Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857099817096424,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d1bc91b3-2c9f-4463-9be4-802291f027b1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:31:39 default-k8s-diff-port-553719 crio[709]: time="2024-09-20 18:31:39.818003117Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=306e6074-7a0d-40cb-958a-b26cc59471da name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:31:39 default-k8s-diff-port-553719 crio[709]: time="2024-09-20 18:31:39.818146147Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=306e6074-7a0d-40cb-958a-b26cc59471da name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:31:39 default-k8s-diff-port-553719 crio[709]: time="2024-09-20 18:31:39.818463094Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:001bdc98537f0917f021c14a5d0733bd31cf19be1b4862502458a7a90e4300da,PodSandboxId:f599733da12b5d939c2c63ceb1397a444f3aa903f568890ba5592522225fb5ce,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726856317980466357,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fad2d07-f99e-45ac-9657-bce6d73d7fce,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b63423dedcc6b7a32bfef5f6d1056defef2c63f67e7da30aae922692de71bdf,PodSandboxId:ed8e8a17964af0e9308b4e677d4be559514ab91dea1b7a13613a5ccec8532128,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1726856306828572138,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 03376c58-8368-41cb-8d71-ec5f2ff84ab5,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:606f7c8a9a095f499ae422de76f76105607db72e57b9b0a6db3e3c5a81656c5c,PodSandboxId:03bb196d26977bda9b4de0f31d33c8af500a378ed1b27a610d9c2ce2ca86f7d4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726856303441887270,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dmdfb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e4ffdb3-37e0-4498-bdf5-d6e7dbcad020,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:702f7f440eb6016c2c758a6be4e1072274ab6dcd169d7a081e5464869a7c299c,PodSandboxId:8ea9454041ee2bbfdc6b68119211b17ce2e62e4bf161dccd83d58506e3f24b7e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726856287158667233,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p9crq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83e0f53d-6
960-42c4-904d-ea85ba9160f4,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c42201b6e3d55b6fd8beaf04d416658b228bf627e519ddb022dad6d17219cc1c,PodSandboxId:f599733da12b5d939c2c63ceb1397a444f3aa903f568890ba5592522225fb5ce,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726856287149197810,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fad2d07-f99e-45ac-9657
-bce6d73d7fce,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65da0bae1c849a395cc241962bb6cbdf9f3edf11cdc95f4495ac2e427f9602d4,PodSandboxId:3783189e1d427c6344484a14cff656ae057adde4f6c123f5e345f36ce2911459,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726856283616687970,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-553719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ed85fac33b111d1c67965836593508e,},Annotations:map[
string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ba313deffc6185e59dac50051e933f6b2ea70d630cf36b2b8c2c07042f793ba,PodSandboxId:e6416e3470b426953107537063c2a9ee93b73ef1aacf09515dcd20ea9dda51b0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726856283624475103,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-553719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 739557da412bdc7964815ab846378cab,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ca303b795795bd920f261c88630f30fd51a91b9694a9a6e67af8164bab8242b,PodSandboxId:63f1d4cd5ff76e0d41b21c738ea89441a068893bacd49ef216250266d36bae56,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726856283617569700,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-553719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48787ec85035644941355902d7fc
180b,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ea0cfbd9902ae8e3797c77592bfe05a5f8162f75a391f5c2dc992eb5154c4ad,PodSandboxId:becbb7375ce81ceda866b0f82a38d38a458ad272af5fa7787c5079ea228a565a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726856283606463117,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-553719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58340272c9cd93d514ace52e4571f9
c1,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=306e6074-7a0d-40cb-958a-b26cc59471da name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:31:39 default-k8s-diff-port-553719 crio[709]: time="2024-09-20 18:31:39.856931201Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6c5ad4be-9c40-4064-a1be-1573c228befd name=/runtime.v1.RuntimeService/Version
	Sep 20 18:31:39 default-k8s-diff-port-553719 crio[709]: time="2024-09-20 18:31:39.857018535Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6c5ad4be-9c40-4064-a1be-1573c228befd name=/runtime.v1.RuntimeService/Version
	Sep 20 18:31:39 default-k8s-diff-port-553719 crio[709]: time="2024-09-20 18:31:39.858866701Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a16fb371-9b90-4053-8816-08fcd0476acc name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:31:39 default-k8s-diff-port-553719 crio[709]: time="2024-09-20 18:31:39.859403440Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857099859377280,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a16fb371-9b90-4053-8816-08fcd0476acc name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:31:39 default-k8s-diff-port-553719 crio[709]: time="2024-09-20 18:31:39.859976903Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ac33fc31-a79f-4b8a-ac15-b9d6376f6fcd name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:31:39 default-k8s-diff-port-553719 crio[709]: time="2024-09-20 18:31:39.860051036Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ac33fc31-a79f-4b8a-ac15-b9d6376f6fcd name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:31:39 default-k8s-diff-port-553719 crio[709]: time="2024-09-20 18:31:39.860287029Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:001bdc98537f0917f021c14a5d0733bd31cf19be1b4862502458a7a90e4300da,PodSandboxId:f599733da12b5d939c2c63ceb1397a444f3aa903f568890ba5592522225fb5ce,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726856317980466357,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fad2d07-f99e-45ac-9657-bce6d73d7fce,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b63423dedcc6b7a32bfef5f6d1056defef2c63f67e7da30aae922692de71bdf,PodSandboxId:ed8e8a17964af0e9308b4e677d4be559514ab91dea1b7a13613a5ccec8532128,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1726856306828572138,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 03376c58-8368-41cb-8d71-ec5f2ff84ab5,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:606f7c8a9a095f499ae422de76f76105607db72e57b9b0a6db3e3c5a81656c5c,PodSandboxId:03bb196d26977bda9b4de0f31d33c8af500a378ed1b27a610d9c2ce2ca86f7d4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726856303441887270,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dmdfb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e4ffdb3-37e0-4498-bdf5-d6e7dbcad020,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:702f7f440eb6016c2c758a6be4e1072274ab6dcd169d7a081e5464869a7c299c,PodSandboxId:8ea9454041ee2bbfdc6b68119211b17ce2e62e4bf161dccd83d58506e3f24b7e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726856287158667233,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p9crq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83e0f53d-6
960-42c4-904d-ea85ba9160f4,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c42201b6e3d55b6fd8beaf04d416658b228bf627e519ddb022dad6d17219cc1c,PodSandboxId:f599733da12b5d939c2c63ceb1397a444f3aa903f568890ba5592522225fb5ce,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726856287149197810,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fad2d07-f99e-45ac-9657
-bce6d73d7fce,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65da0bae1c849a395cc241962bb6cbdf9f3edf11cdc95f4495ac2e427f9602d4,PodSandboxId:3783189e1d427c6344484a14cff656ae057adde4f6c123f5e345f36ce2911459,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726856283616687970,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-553719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ed85fac33b111d1c67965836593508e,},Annotations:map[
string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ba313deffc6185e59dac50051e933f6b2ea70d630cf36b2b8c2c07042f793ba,PodSandboxId:e6416e3470b426953107537063c2a9ee93b73ef1aacf09515dcd20ea9dda51b0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726856283624475103,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-553719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 739557da412bdc7964815ab846378cab,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ca303b795795bd920f261c88630f30fd51a91b9694a9a6e67af8164bab8242b,PodSandboxId:63f1d4cd5ff76e0d41b21c738ea89441a068893bacd49ef216250266d36bae56,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726856283617569700,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-553719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48787ec85035644941355902d7fc
180b,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ea0cfbd9902ae8e3797c77592bfe05a5f8162f75a391f5c2dc992eb5154c4ad,PodSandboxId:becbb7375ce81ceda866b0f82a38d38a458ad272af5fa7787c5079ea228a565a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726856283606463117,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-553719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58340272c9cd93d514ace52e4571f9
c1,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ac33fc31-a79f-4b8a-ac15-b9d6376f6fcd name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:31:39 default-k8s-diff-port-553719 crio[709]: time="2024-09-20 18:31:39.893715784Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9137db78-1ebd-41c4-9ce1-239e0fe12f75 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:31:39 default-k8s-diff-port-553719 crio[709]: time="2024-09-20 18:31:39.893803008Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9137db78-1ebd-41c4-9ce1-239e0fe12f75 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:31:39 default-k8s-diff-port-553719 crio[709]: time="2024-09-20 18:31:39.894704813Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8358c06e-4fb9-4495-b3cf-0bb9b717782a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:31:39 default-k8s-diff-port-553719 crio[709]: time="2024-09-20 18:31:39.895412163Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857099895380599,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8358c06e-4fb9-4495-b3cf-0bb9b717782a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:31:39 default-k8s-diff-port-553719 crio[709]: time="2024-09-20 18:31:39.896108415Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=de52b19a-849e-4f30-8b30-5b5097335683 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:31:39 default-k8s-diff-port-553719 crio[709]: time="2024-09-20 18:31:39.896192613Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=de52b19a-849e-4f30-8b30-5b5097335683 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:31:39 default-k8s-diff-port-553719 crio[709]: time="2024-09-20 18:31:39.896539280Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:001bdc98537f0917f021c14a5d0733bd31cf19be1b4862502458a7a90e4300da,PodSandboxId:f599733da12b5d939c2c63ceb1397a444f3aa903f568890ba5592522225fb5ce,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726856317980466357,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fad2d07-f99e-45ac-9657-bce6d73d7fce,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b63423dedcc6b7a32bfef5f6d1056defef2c63f67e7da30aae922692de71bdf,PodSandboxId:ed8e8a17964af0e9308b4e677d4be559514ab91dea1b7a13613a5ccec8532128,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1726856306828572138,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 03376c58-8368-41cb-8d71-ec5f2ff84ab5,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:606f7c8a9a095f499ae422de76f76105607db72e57b9b0a6db3e3c5a81656c5c,PodSandboxId:03bb196d26977bda9b4de0f31d33c8af500a378ed1b27a610d9c2ce2ca86f7d4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726856303441887270,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dmdfb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e4ffdb3-37e0-4498-bdf5-d6e7dbcad020,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:702f7f440eb6016c2c758a6be4e1072274ab6dcd169d7a081e5464869a7c299c,PodSandboxId:8ea9454041ee2bbfdc6b68119211b17ce2e62e4bf161dccd83d58506e3f24b7e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726856287158667233,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p9crq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83e0f53d-6
960-42c4-904d-ea85ba9160f4,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c42201b6e3d55b6fd8beaf04d416658b228bf627e519ddb022dad6d17219cc1c,PodSandboxId:f599733da12b5d939c2c63ceb1397a444f3aa903f568890ba5592522225fb5ce,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726856287149197810,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fad2d07-f99e-45ac-9657
-bce6d73d7fce,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65da0bae1c849a395cc241962bb6cbdf9f3edf11cdc95f4495ac2e427f9602d4,PodSandboxId:3783189e1d427c6344484a14cff656ae057adde4f6c123f5e345f36ce2911459,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726856283616687970,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-553719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ed85fac33b111d1c67965836593508e,},Annotations:map[
string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ba313deffc6185e59dac50051e933f6b2ea70d630cf36b2b8c2c07042f793ba,PodSandboxId:e6416e3470b426953107537063c2a9ee93b73ef1aacf09515dcd20ea9dda51b0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726856283624475103,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-553719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 739557da412bdc7964815ab846378cab,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ca303b795795bd920f261c88630f30fd51a91b9694a9a6e67af8164bab8242b,PodSandboxId:63f1d4cd5ff76e0d41b21c738ea89441a068893bacd49ef216250266d36bae56,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726856283617569700,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-553719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48787ec85035644941355902d7fc
180b,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ea0cfbd9902ae8e3797c77592bfe05a5f8162f75a391f5c2dc992eb5154c4ad,PodSandboxId:becbb7375ce81ceda866b0f82a38d38a458ad272af5fa7787c5079ea228a565a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726856283606463117,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-553719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58340272c9cd93d514ace52e4571f9
c1,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=de52b19a-849e-4f30-8b30-5b5097335683 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	001bdc98537f0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Running             storage-provisioner       2                   f599733da12b5       storage-provisioner
	8b63423dedcc6       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   ed8e8a17964af       busybox
	606f7c8a9a095       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      13 minutes ago      Running             coredns                   1                   03bb196d26977       coredns-7c65d6cfc9-dmdfb
	702f7f440eb60       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      13 minutes ago      Running             kube-proxy                1                   8ea9454041ee2       kube-proxy-p9crq
	c42201b6e3d55       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   f599733da12b5       storage-provisioner
	6ba313deffc61       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      13 minutes ago      Running             kube-scheduler            1                   e6416e3470b42       kube-scheduler-default-k8s-diff-port-553719
	4ca303b795795       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      13 minutes ago      Running             kube-controller-manager   1                   63f1d4cd5ff76       kube-controller-manager-default-k8s-diff-port-553719
	65da0bae1c849       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      13 minutes ago      Running             etcd                      1                   3783189e1d427       etcd-default-k8s-diff-port-553719
	0ea0cfbd9902a       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      13 minutes ago      Running             kube-apiserver            1                   becbb7375ce81       kube-apiserver-default-k8s-diff-port-553719
	
	
	==> coredns [606f7c8a9a095f499ae422de76f76105607db72e57b9b0a6db3e3c5a81656c5c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:57797 - 33834 "HINFO IN 4768601586295699900.8450345759229431803. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.01441496s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-553719
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-553719
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0626f22cf0d915d75e291a5bce701f94395056e1
	                    minikube.k8s.io/name=default-k8s-diff-port-553719
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_20T18_10_14_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 18:10:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-553719
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 18:31:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 18:28:50 +0000   Fri, 20 Sep 2024 18:10:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 18:28:50 +0000   Fri, 20 Sep 2024 18:10:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 18:28:50 +0000   Fri, 20 Sep 2024 18:10:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 18:28:50 +0000   Fri, 20 Sep 2024 18:18:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.190
	  Hostname:    default-k8s-diff-port-553719
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 409942eb36b647cf8b7f3a958cd0853b
	  System UUID:                409942eb-36b6-47cf-8b7f-3a958cd0853b
	  Boot ID:                    2ad31e65-3c83-4e1f-8488-097cec36a556
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 coredns-7c65d6cfc9-dmdfb                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-default-k8s-diff-port-553719                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kube-apiserver-default-k8s-diff-port-553719             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-553719    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-p9crq                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-default-k8s-diff-port-553719             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-6867b74b74-vtl79                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         20m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientPID     21m                kubelet          Node default-k8s-diff-port-553719 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21m                kubelet          Node default-k8s-diff-port-553719 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m                kubelet          Node default-k8s-diff-port-553719 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeReady                21m                kubelet          Node default-k8s-diff-port-553719 status is now: NodeReady
	  Normal  RegisteredNode           21m                node-controller  Node default-k8s-diff-port-553719 event: Registered Node default-k8s-diff-port-553719 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-553719 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-553719 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node default-k8s-diff-port-553719 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node default-k8s-diff-port-553719 event: Registered Node default-k8s-diff-port-553719 in Controller
	
	
	==> dmesg <==
	[Sep20 18:17] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.048616] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038045] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.227511] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.104786] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.698927] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.243018] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	[  +0.067872] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.067359] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.239803] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[  +0.150943] systemd-fstab-generator[669]: Ignoring "noauto" option for root device
	[  +0.305895] systemd-fstab-generator[699]: Ignoring "noauto" option for root device
	[Sep20 18:18] systemd-fstab-generator[792]: Ignoring "noauto" option for root device
	[  +0.064154] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.684288] systemd-fstab-generator[911]: Ignoring "noauto" option for root device
	[  +4.753737] kauditd_printk_skb: 97 callbacks suppressed
	[  +2.199444] systemd-fstab-generator[1527]: Ignoring "noauto" option for root device
	[  +5.516050] kauditd_printk_skb: 64 callbacks suppressed
	[  +7.768687] kauditd_printk_skb: 11 callbacks suppressed
	[ +15.433149] kauditd_printk_skb: 28 callbacks suppressed
	
	
	==> etcd [65da0bae1c849a395cc241962bb6cbdf9f3edf11cdc95f4495ac2e427f9602d4] <==
	{"level":"warn","ts":"2024-09-20T18:18:23.879560Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-20T18:18:23.423214Z","time spent":"456.279086ms","remote":"127.0.0.1:60234","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":730,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/events/default/busybox.17f706a89f305acc\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/busybox.17f706a89f305acc\" value_size:663 lease:3926736525345634230 >> failure:<>"}
	{"level":"warn","ts":"2024-09-20T18:18:23.879867Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"454.917385ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/default-k8s-diff-port-553719\" ","response":"range_response_count:1 size:5537"}
	{"level":"info","ts":"2024-09-20T18:18:23.879920Z","caller":"traceutil/trace.go:171","msg":"trace[1341085918] range","detail":"{range_begin:/registry/minions/default-k8s-diff-port-553719; range_end:; response_count:1; response_revision:602; }","duration":"454.977076ms","start":"2024-09-20T18:18:23.424932Z","end":"2024-09-20T18:18:23.879909Z","steps":["trace[1341085918] 'agreement among raft nodes before linearized reading'  (duration: 454.820534ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T18:18:23.879952Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-20T18:18:23.424898Z","time spent":"455.045256ms","remote":"127.0.0.1:60338","response type":"/etcdserverpb.KV/Range","request count":0,"request size":48,"response count":1,"response size":5559,"request content":"key:\"/registry/minions/default-k8s-diff-port-553719\" "}
	{"level":"warn","ts":"2024-09-20T18:18:23.880389Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"444.892091ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T18:18:23.880441Z","caller":"traceutil/trace.go:171","msg":"trace[1375434689] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:602; }","duration":"444.949024ms","start":"2024-09-20T18:18:23.435483Z","end":"2024-09-20T18:18:23.880432Z","steps":["trace[1375434689] 'agreement among raft nodes before linearized reading'  (duration: 444.867598ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T18:18:23.880466Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-20T18:18:23.435438Z","time spent":"445.02219ms","remote":"127.0.0.1:60132","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-09-20T18:18:38.303569Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"125.842317ms","expected-duration":"100ms","prefix":"","request":"header:<ID:3926736525345634635 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/storage-provisioner.17f706a4db914433\" mod_revision:561 > success:<request_put:<key:\"/registry/events/kube-system/storage-provisioner.17f706a4db914433\" value_size:688 lease:3926736525345634230 >> failure:<request_range:<key:\"/registry/events/kube-system/storage-provisioner.17f706a4db914433\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-09-20T18:18:38.303661Z","caller":"traceutil/trace.go:171","msg":"trace[782945959] linearizableReadLoop","detail":"{readStateIndex:680; appliedIndex:679; }","duration":"202.950028ms","start":"2024-09-20T18:18:38.100701Z","end":"2024-09-20T18:18:38.303651Z","steps":["trace[782945959] 'read index received'  (duration: 76.968908ms)","trace[782945959] 'applied index is now lower than readState.Index'  (duration: 125.97844ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-20T18:18:38.303749Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"203.040852ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1128"}
	{"level":"info","ts":"2024-09-20T18:18:38.303773Z","caller":"traceutil/trace.go:171","msg":"trace[695849660] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:632; }","duration":"203.067353ms","start":"2024-09-20T18:18:38.100697Z","end":"2024-09-20T18:18:38.303764Z","steps":["trace[695849660] 'agreement among raft nodes before linearized reading'  (duration: 202.982673ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T18:18:38.303933Z","caller":"traceutil/trace.go:171","msg":"trace[1594204947] transaction","detail":"{read_only:false; response_revision:632; number_of_response:1; }","duration":"206.991814ms","start":"2024-09-20T18:18:38.096926Z","end":"2024-09-20T18:18:38.303918Z","steps":["trace[1594204947] 'process raft request'  (duration: 80.704901ms)","trace[1594204947] 'compare'  (duration: 125.66549ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-20T18:18:44.135475Z","caller":"traceutil/trace.go:171","msg":"trace[319560462] linearizableReadLoop","detail":"{readStateIndex:684; appliedIndex:683; }","duration":"352.987998ms","start":"2024-09-20T18:18:43.782469Z","end":"2024-09-20T18:18:44.135457Z","steps":["trace[319560462] 'read index received'  (duration: 10.508544ms)","trace[319560462] 'applied index is now lower than readState.Index'  (duration: 342.478594ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-20T18:18:44.135695Z","caller":"traceutil/trace.go:171","msg":"trace[1768896368] transaction","detail":"{read_only:false; response_revision:635; number_of_response:1; }","duration":"366.400897ms","start":"2024-09-20T18:18:43.769283Z","end":"2024-09-20T18:18:44.135684Z","steps":["trace[1768896368] 'process raft request'  (duration: 364.968687ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T18:18:44.135803Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-20T18:18:43.769267Z","time spent":"366.46938ms","remote":"127.0.0.1:60346","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4381,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/metrics-server-6867b74b74-vtl79\" mod_revision:622 > success:<request_put:<key:\"/registry/pods/kube-system/metrics-server-6867b74b74-vtl79\" value_size:4315 >> failure:<request_range:<key:\"/registry/pods/kube-system/metrics-server-6867b74b74-vtl79\" > >"}
	{"level":"warn","ts":"2024-09-20T18:18:44.136003Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"353.506914ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T18:18:44.136118Z","caller":"traceutil/trace.go:171","msg":"trace[693549530] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:635; }","duration":"353.578002ms","start":"2024-09-20T18:18:43.782463Z","end":"2024-09-20T18:18:44.136041Z","steps":["trace[693549530] 'agreement among raft nodes before linearized reading'  (duration: 353.461415ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T18:18:44.136507Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"340.211474ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/metrics-server-6867b74b74-vtl79.17f706a6dd0688ed\" ","response":"range_response_count:1 size:804"}
	{"level":"info","ts":"2024-09-20T18:18:44.136599Z","caller":"traceutil/trace.go:171","msg":"trace[1776183942] range","detail":"{range_begin:/registry/events/kube-system/metrics-server-6867b74b74-vtl79.17f706a6dd0688ed; range_end:; response_count:1; response_revision:635; }","duration":"340.28596ms","start":"2024-09-20T18:18:43.796281Z","end":"2024-09-20T18:18:44.136567Z","steps":["trace[1776183942] 'agreement among raft nodes before linearized reading'  (duration: 340.143831ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T18:18:44.137422Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-20T18:18:43.796245Z","time spent":"341.16179ms","remote":"127.0.0.1:60234","response type":"/etcdserverpb.KV/Range","request count":0,"request size":79,"response count":1,"response size":826,"request content":"key:\"/registry/events/kube-system/metrics-server-6867b74b74-vtl79.17f706a6dd0688ed\" "}
	{"level":"warn","ts":"2024-09-20T18:18:44.136714Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"138.861532ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-6867b74b74-vtl79\" ","response":"range_response_count:1 size:4396"}
	{"level":"info","ts":"2024-09-20T18:18:44.138517Z","caller":"traceutil/trace.go:171","msg":"trace[1346530390] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-6867b74b74-vtl79; range_end:; response_count:1; response_revision:635; }","duration":"140.662896ms","start":"2024-09-20T18:18:43.997841Z","end":"2024-09-20T18:18:44.138503Z","steps":["trace[1346530390] 'agreement among raft nodes before linearized reading'  (duration: 138.836442ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T18:28:04.997193Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":859}
	{"level":"info","ts":"2024-09-20T18:28:05.008941Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":859,"took":"10.986725ms","hash":59549771,"current-db-size-bytes":2748416,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":2748416,"current-db-size-in-use":"2.7 MB"}
	{"level":"info","ts":"2024-09-20T18:28:05.009102Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":59549771,"revision":859,"compact-revision":-1}
	
	
	==> kernel <==
	 18:31:40 up 14 min,  0 users,  load average: 0.09, 0.12, 0.09
	Linux default-k8s-diff-port-553719 5.10.207 #1 SMP Fri Sep 20 03:13:51 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [0ea0cfbd9902ae8e3797c77592bfe05a5f8162f75a391f5c2dc992eb5154c4ad] <==
	W0920 18:28:07.422967       1 handler_proxy.go:99] no RequestInfo found in the context
	E0920 18:28:07.423130       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0920 18:28:07.424295       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0920 18:28:07.424368       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0920 18:29:07.424769       1 handler_proxy.go:99] no RequestInfo found in the context
	W0920 18:29:07.424809       1 handler_proxy.go:99] no RequestInfo found in the context
	E0920 18:29:07.425111       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0920 18:29:07.425240       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0920 18:29:07.426428       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0920 18:29:07.426510       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0920 18:31:07.427045       1 handler_proxy.go:99] no RequestInfo found in the context
	E0920 18:31:07.427271       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0920 18:31:07.427429       1 handler_proxy.go:99] no RequestInfo found in the context
	E0920 18:31:07.427450       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0920 18:31:07.430644       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0920 18:31:07.431090       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [4ca303b795795bd920f261c88630f30fd51a91b9694a9a6e67af8164bab8242b] <==
	I0920 18:26:10.521118       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 18:26:40.057796       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 18:26:40.528588       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 18:27:10.063959       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 18:27:10.537234       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 18:27:40.070715       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 18:27:40.544850       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 18:28:10.077520       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 18:28:10.558016       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 18:28:40.084783       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 18:28:40.566269       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0920 18:28:50.480469       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-553719"
	E0920 18:29:10.091200       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 18:29:10.574524       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0920 18:29:28.777302       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="382.645µs"
	E0920 18:29:40.097768       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 18:29:40.582572       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0920 18:29:40.775406       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="83.278µs"
	E0920 18:30:10.105218       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 18:30:10.590875       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 18:30:40.112140       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 18:30:40.599400       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 18:31:10.119326       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 18:31:10.611669       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 18:31:40.125277       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	
	
	==> kube-proxy [702f7f440eb6016c2c758a6be4e1072274ab6dcd169d7a081e5464869a7c299c] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0920 18:18:07.395130       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0920 18:18:07.406475       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.72.190"]
	E0920 18:18:07.406555       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 18:18:07.456794       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0920 18:18:07.456844       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0920 18:18:07.456867       1 server_linux.go:169] "Using iptables Proxier"
	I0920 18:18:07.463431       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 18:18:07.463681       1 server.go:483] "Version info" version="v1.31.1"
	I0920 18:18:07.463709       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 18:18:07.466301       1 config.go:199] "Starting service config controller"
	I0920 18:18:07.466352       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 18:18:07.466381       1 config.go:105] "Starting endpoint slice config controller"
	I0920 18:18:07.466385       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 18:18:07.466878       1 config.go:328] "Starting node config controller"
	I0920 18:18:07.466908       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 18:18:07.567231       1 shared_informer.go:320] Caches are synced for node config
	I0920 18:18:07.567324       1 shared_informer.go:320] Caches are synced for service config
	I0920 18:18:07.567339       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [6ba313deffc6185e59dac50051e933f6b2ea70d630cf36b2b8c2c07042f793ba] <==
	I0920 18:18:04.966228       1 serving.go:386] Generated self-signed cert in-memory
	W0920 18:18:06.341785       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0920 18:18:06.341824       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0920 18:18:06.341859       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0920 18:18:06.341865       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0920 18:18:06.412319       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0920 18:18:06.412365       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 18:18:06.419170       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0920 18:18:06.419211       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0920 18:18:06.419606       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0920 18:18:06.419719       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0920 18:18:06.519864       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 20 18:30:30 default-k8s-diff-port-553719 kubelet[918]: E0920 18:30:30.760735     918 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-vtl79" podUID="29e0b6eb-22a9-4e37-97f9-83b48cc38193"
	Sep 20 18:30:32 default-k8s-diff-port-553719 kubelet[918]: E0920 18:30:32.980119     918 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857032979642384,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:30:32 default-k8s-diff-port-553719 kubelet[918]: E0920 18:30:32.980592     918 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857032979642384,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:30:42 default-k8s-diff-port-553719 kubelet[918]: E0920 18:30:42.982732     918 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857042982366634,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:30:42 default-k8s-diff-port-553719 kubelet[918]: E0920 18:30:42.983321     918 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857042982366634,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:30:45 default-k8s-diff-port-553719 kubelet[918]: E0920 18:30:45.761213     918 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-vtl79" podUID="29e0b6eb-22a9-4e37-97f9-83b48cc38193"
	Sep 20 18:30:52 default-k8s-diff-port-553719 kubelet[918]: E0920 18:30:52.986227     918 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857052985692366,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:30:52 default-k8s-diff-port-553719 kubelet[918]: E0920 18:30:52.986550     918 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857052985692366,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:30:58 default-k8s-diff-port-553719 kubelet[918]: E0920 18:30:58.766208     918 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-vtl79" podUID="29e0b6eb-22a9-4e37-97f9-83b48cc38193"
	Sep 20 18:31:02 default-k8s-diff-port-553719 kubelet[918]: E0920 18:31:02.783414     918 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 20 18:31:02 default-k8s-diff-port-553719 kubelet[918]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 20 18:31:02 default-k8s-diff-port-553719 kubelet[918]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 20 18:31:02 default-k8s-diff-port-553719 kubelet[918]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 20 18:31:02 default-k8s-diff-port-553719 kubelet[918]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 20 18:31:02 default-k8s-diff-port-553719 kubelet[918]: E0920 18:31:02.988351     918 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857062987907782,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:31:02 default-k8s-diff-port-553719 kubelet[918]: E0920 18:31:02.988386     918 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857062987907782,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:31:12 default-k8s-diff-port-553719 kubelet[918]: E0920 18:31:12.990444     918 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857072989608397,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:31:12 default-k8s-diff-port-553719 kubelet[918]: E0920 18:31:12.990806     918 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857072989608397,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:31:13 default-k8s-diff-port-553719 kubelet[918]: E0920 18:31:13.759961     918 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-vtl79" podUID="29e0b6eb-22a9-4e37-97f9-83b48cc38193"
	Sep 20 18:31:22 default-k8s-diff-port-553719 kubelet[918]: E0920 18:31:22.993556     918 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857082992781713,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:31:22 default-k8s-diff-port-553719 kubelet[918]: E0920 18:31:22.993978     918 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857082992781713,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:31:28 default-k8s-diff-port-553719 kubelet[918]: E0920 18:31:28.761299     918 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-vtl79" podUID="29e0b6eb-22a9-4e37-97f9-83b48cc38193"
	Sep 20 18:31:32 default-k8s-diff-port-553719 kubelet[918]: E0920 18:31:32.996421     918 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857092995869340,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:31:32 default-k8s-diff-port-553719 kubelet[918]: E0920 18:31:32.996462     918 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857092995869340,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:31:39 default-k8s-diff-port-553719 kubelet[918]: E0920 18:31:39.761009     918 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-vtl79" podUID="29e0b6eb-22a9-4e37-97f9-83b48cc38193"
	
	
	==> storage-provisioner [001bdc98537f0917f021c14a5d0733bd31cf19be1b4862502458a7a90e4300da] <==
	I0920 18:18:38.086146       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0920 18:18:38.099020       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0920 18:18:38.099239       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0920 18:18:55.707350       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0920 18:18:55.707572       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-553719_e6ae17bf-1a4f-4f05-8f9f-0545ec95b792!
	I0920 18:18:55.708512       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"112e5b22-6eac-4a3d-bf05-c16d06da4538", APIVersion:"v1", ResourceVersion:"640", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-553719_e6ae17bf-1a4f-4f05-8f9f-0545ec95b792 became leader
	I0920 18:18:55.809359       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-553719_e6ae17bf-1a4f-4f05-8f9f-0545ec95b792!
	
	
	==> storage-provisioner [c42201b6e3d55b6fd8beaf04d416658b228bf627e519ddb022dad6d17219cc1c] <==
	I0920 18:18:07.246782       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0920 18:18:37.250632       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-553719 -n default-k8s-diff-port-553719
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-553719 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-vtl79
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-553719 describe pod metrics-server-6867b74b74-vtl79
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-553719 describe pod metrics-server-6867b74b74-vtl79: exit status 1 (67.013904ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-vtl79" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-553719 describe pod metrics-server-6867b74b74-vtl79: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.43s)
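
Editor's note (not part of the captured test output): the kubelet log above shows metrics-server stuck in ImagePullBackOff because the test deliberately points it at the unreachable registry fake.domain (see the "addons enable metrics-server ... --registries=MetricsServer=fake.domain" entries in the Audit table below); the actual failure is that the pod the test waits for after the stop/start never appears within the 9m0s budget. As a rough illustration only, the following hypothetical Go sketch shows one way to perform a similar check by hand with client-go, assuming the dashboard label and namespace used by the embed-certs variant of this test ("k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard") and the jenkins kubeconfig path seen later in this report; it is not the test suite's own code.

	// list_dashboard_pods.go - hypothetical sketch, not part of minikube's tests.
	package main

	import (
		"context"
		"fmt"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumption: kubeconfig written by the minikube profile under test.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19672-8777/kubeconfig")
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		// Same selector the test report shows the wait using for the dashboard addon.
		pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("found %d dashboard pods\n", len(pods.Items))
	}

A zero count from a sketch like this would match what the test observed: no kubernetes-dashboard pod ever started after the cluster was stopped and restarted.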

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.58s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0920 18:23:04.124384   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/enable-default-cni-833505/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-768431 -n embed-certs-768431
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-09-20 18:31:49.879679134 +0000 UTC m=+6491.523760680
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-768431 -n embed-certs-768431
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-768431 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-768431 logs -n 25: (2.340216076s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p flannel-833505 sudo cat                             | flannel-833505               | jenkins | v1.34.0 | 20 Sep 24 18:09 UTC | 20 Sep 24 18:09 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p flannel-833505 sudo                                 | flannel-833505               | jenkins | v1.34.0 | 20 Sep 24 18:09 UTC | 20 Sep 24 18:09 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p flannel-833505 sudo                                 | flannel-833505               | jenkins | v1.34.0 | 20 Sep 24 18:09 UTC | 20 Sep 24 18:09 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p flannel-833505 sudo                                 | flannel-833505               | jenkins | v1.34.0 | 20 Sep 24 18:09 UTC | 20 Sep 24 18:09 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p flannel-833505 sudo find                            | flannel-833505               | jenkins | v1.34.0 | 20 Sep 24 18:09 UTC | 20 Sep 24 18:09 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p flannel-833505 sudo crio                            | flannel-833505               | jenkins | v1.34.0 | 20 Sep 24 18:09 UTC | 20 Sep 24 18:09 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p flannel-833505                                      | flannel-833505               | jenkins | v1.34.0 | 20 Sep 24 18:09 UTC | 20 Sep 24 18:09 UTC |
	| delete  | -p                                                     | disable-driver-mounts-739804 | jenkins | v1.34.0 | 20 Sep 24 18:09 UTC | 20 Sep 24 18:09 UTC |
	|         | disable-driver-mounts-739804                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-553719 | jenkins | v1.34.0 | 20 Sep 24 18:09 UTC | 20 Sep 24 18:10 UTC |
	|         | default-k8s-diff-port-553719                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-956403             | no-preload-956403            | jenkins | v1.34.0 | 20 Sep 24 18:10 UTC | 20 Sep 24 18:10 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-956403                                   | no-preload-956403            | jenkins | v1.34.0 | 20 Sep 24 18:10 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-768431            | embed-certs-768431           | jenkins | v1.34.0 | 20 Sep 24 18:10 UTC | 20 Sep 24 18:10 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-768431                                  | embed-certs-768431           | jenkins | v1.34.0 | 20 Sep 24 18:10 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-553719  | default-k8s-diff-port-553719 | jenkins | v1.34.0 | 20 Sep 24 18:11 UTC | 20 Sep 24 18:11 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-553719 | jenkins | v1.34.0 | 20 Sep 24 18:11 UTC |                     |
	|         | default-k8s-diff-port-553719                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-956403                  | no-preload-956403            | jenkins | v1.34.0 | 20 Sep 24 18:12 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-744025        | old-k8s-version-744025       | jenkins | v1.34.0 | 20 Sep 24 18:12 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p no-preload-956403                                   | no-preload-956403            | jenkins | v1.34.0 | 20 Sep 24 18:12 UTC | 20 Sep 24 18:23 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-768431                 | embed-certs-768431           | jenkins | v1.34.0 | 20 Sep 24 18:13 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-768431                                  | embed-certs-768431           | jenkins | v1.34.0 | 20 Sep 24 18:13 UTC | 20 Sep 24 18:22 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-553719       | default-k8s-diff-port-553719 | jenkins | v1.34.0 | 20 Sep 24 18:13 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-553719 | jenkins | v1.34.0 | 20 Sep 24 18:13 UTC | 20 Sep 24 18:22 UTC |
	|         | default-k8s-diff-port-553719                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-744025                              | old-k8s-version-744025       | jenkins | v1.34.0 | 20 Sep 24 18:14 UTC | 20 Sep 24 18:14 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-744025             | old-k8s-version-744025       | jenkins | v1.34.0 | 20 Sep 24 18:14 UTC | 20 Sep 24 18:14 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-744025                              | old-k8s-version-744025       | jenkins | v1.34.0 | 20 Sep 24 18:14 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 18:14:12
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 18:14:12.735020   75577 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:14:12.735385   75577 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:14:12.735400   75577 out.go:358] Setting ErrFile to fd 2...
	I0920 18:14:12.735407   75577 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:14:12.735610   75577 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-8777/.minikube/bin
	I0920 18:14:12.736161   75577 out.go:352] Setting JSON to false
	I0920 18:14:12.737033   75577 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6996,"bootTime":1726849057,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 18:14:12.737131   75577 start.go:139] virtualization: kvm guest
	I0920 18:14:12.739127   75577 out.go:177] * [old-k8s-version-744025] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 18:14:12.740288   75577 notify.go:220] Checking for updates...
	I0920 18:14:12.740310   75577 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 18:14:12.741542   75577 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 18:14:12.742727   75577 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19672-8777/kubeconfig
	I0920 18:14:12.743806   75577 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-8777/.minikube
	I0920 18:14:12.744863   75577 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 18:14:12.745877   75577 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 18:14:12.747201   75577 config.go:182] Loaded profile config "old-k8s-version-744025": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0920 18:14:12.747636   75577 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:14:12.747678   75577 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:14:12.762352   75577 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42889
	I0920 18:14:12.762734   75577 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:14:12.763321   75577 main.go:141] libmachine: Using API Version  1
	I0920 18:14:12.763348   75577 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:14:12.763742   75577 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:14:12.763968   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .DriverName
	I0920 18:14:12.765767   75577 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0920 18:14:12.767036   75577 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 18:14:12.767377   75577 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:14:12.767449   75577 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:14:12.782473   75577 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40777
	I0920 18:14:12.782971   75577 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:14:12.783455   75577 main.go:141] libmachine: Using API Version  1
	I0920 18:14:12.783477   75577 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:14:12.783807   75577 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:14:12.784035   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .DriverName
	I0920 18:14:12.820265   75577 out.go:177] * Using the kvm2 driver based on existing profile
	I0920 18:14:12.821397   75577 start.go:297] selected driver: kvm2
	I0920 18:14:12.821409   75577 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-744025 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-744025 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.207 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:14:12.821519   75577 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 18:14:12.822217   75577 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 18:14:12.822309   75577 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19672-8777/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 18:14:12.837641   75577 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 18:14:12.838107   75577 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 18:14:12.838143   75577 cni.go:84] Creating CNI manager for ""
	I0920 18:14:12.838193   75577 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 18:14:12.838231   75577 start.go:340] cluster config:
	{Name:old-k8s-version-744025 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-744025 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.207 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:14:12.838329   75577 iso.go:125] acquiring lock: {Name:mkba95ef0488e46f622333e9f317f43def93040b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 18:14:12.840059   75577 out.go:177] * Starting "old-k8s-version-744025" primary control-plane node in "old-k8s-version-744025" cluster
	I0920 18:14:15.230099   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:14:12.841339   75577 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0920 18:14:12.841384   75577 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0920 18:14:12.841394   75577 cache.go:56] Caching tarball of preloaded images
	I0920 18:14:12.841473   75577 preload.go:172] Found /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 18:14:12.841482   75577 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0920 18:14:12.841594   75577 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/old-k8s-version-744025/config.json ...
	I0920 18:14:12.841781   75577 start.go:360] acquireMachinesLock for old-k8s-version-744025: {Name:mkfeedb385cf08b5d2aa00913e85815d02a180c2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 18:14:18.302232   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:14:24.382087   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:14:27.454110   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:14:33.534076   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:14:36.606161   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:14:42.686057   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:14:45.758152   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:14:51.838087   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:14:54.910159   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:15:00.990183   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:15:04.062105   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:15:10.142153   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:15:13.218090   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:15:19.294132   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:15:22.366139   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:15:28.446103   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:15:31.518062   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:15:37.598126   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:15:40.670145   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:15:46.750116   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:15:49.822142   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:15:55.902120   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:15:58.974169   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:16:05.054139   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:16:08.126122   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:16:14.206089   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:16:17.278137   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:16:23.358156   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:16:26.430180   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:16:32.510115   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:16:35.582114   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:16:41.662126   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:16:44.734140   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:16:50.814123   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:16:53.886188   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:16:59.966139   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:17:03.038067   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:17:09.118169   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:17:12.190091   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:17:15.194165   75086 start.go:364] duration metric: took 4m1.091611985s to acquireMachinesLock for "embed-certs-768431"
	I0920 18:17:15.194223   75086 start.go:96] Skipping create...Using existing machine configuration
	I0920 18:17:15.194231   75086 fix.go:54] fixHost starting: 
	I0920 18:17:15.194737   75086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:17:15.194794   75086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:17:15.210558   75086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43191
	I0920 18:17:15.211084   75086 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:17:15.211527   75086 main.go:141] libmachine: Using API Version  1
	I0920 18:17:15.211550   75086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:17:15.211882   75086 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:17:15.212099   75086 main.go:141] libmachine: (embed-certs-768431) Calling .DriverName
	I0920 18:17:15.212217   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetState
	I0920 18:17:15.213955   75086 fix.go:112] recreateIfNeeded on embed-certs-768431: state=Stopped err=<nil>
	I0920 18:17:15.213995   75086 main.go:141] libmachine: (embed-certs-768431) Calling .DriverName
	W0920 18:17:15.214129   75086 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 18:17:15.215650   75086 out.go:177] * Restarting existing kvm2 VM for "embed-certs-768431" ...
	I0920 18:17:15.191760   74753 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 18:17:15.191794   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetMachineName
	I0920 18:17:15.192105   74753 buildroot.go:166] provisioning hostname "no-preload-956403"
	I0920 18:17:15.192133   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetMachineName
	I0920 18:17:15.192325   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHHostname
	I0920 18:17:15.194039   74753 machine.go:96] duration metric: took 4m37.421116123s to provisionDockerMachine
	I0920 18:17:15.194077   74753 fix.go:56] duration metric: took 4m37.444193238s for fixHost
	I0920 18:17:15.194087   74753 start.go:83] releasing machines lock for "no-preload-956403", held for 4m37.444234787s
	W0920 18:17:15.194109   74753 start.go:714] error starting host: provision: host is not running
	W0920 18:17:15.194214   74753 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0920 18:17:15.194222   74753 start.go:729] Will try again in 5 seconds ...
	I0920 18:17:15.216590   75086 main.go:141] libmachine: (embed-certs-768431) Calling .Start
	I0920 18:17:15.216773   75086 main.go:141] libmachine: (embed-certs-768431) Ensuring networks are active...
	I0920 18:17:15.217439   75086 main.go:141] libmachine: (embed-certs-768431) Ensuring network default is active
	I0920 18:17:15.217769   75086 main.go:141] libmachine: (embed-certs-768431) Ensuring network mk-embed-certs-768431 is active
	I0920 18:17:15.218058   75086 main.go:141] libmachine: (embed-certs-768431) Getting domain xml...
	I0920 18:17:15.218674   75086 main.go:141] libmachine: (embed-certs-768431) Creating domain...
	I0920 18:17:16.454922   75086 main.go:141] libmachine: (embed-certs-768431) Waiting to get IP...
	I0920 18:17:16.456011   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:16.456437   75086 main.go:141] libmachine: (embed-certs-768431) DBG | unable to find current IP address of domain embed-certs-768431 in network mk-embed-certs-768431
	I0920 18:17:16.456518   75086 main.go:141] libmachine: (embed-certs-768431) DBG | I0920 18:17:16.456416   76214 retry.go:31] will retry after 282.081004ms: waiting for machine to come up
	I0920 18:17:16.740062   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:16.740455   75086 main.go:141] libmachine: (embed-certs-768431) DBG | unable to find current IP address of domain embed-certs-768431 in network mk-embed-certs-768431
	I0920 18:17:16.740505   75086 main.go:141] libmachine: (embed-certs-768431) DBG | I0920 18:17:16.740420   76214 retry.go:31] will retry after 335.306847ms: waiting for machine to come up
	I0920 18:17:17.077169   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:17.077564   75086 main.go:141] libmachine: (embed-certs-768431) DBG | unable to find current IP address of domain embed-certs-768431 in network mk-embed-certs-768431
	I0920 18:17:17.077594   75086 main.go:141] libmachine: (embed-certs-768431) DBG | I0920 18:17:17.077513   76214 retry.go:31] will retry after 455.255315ms: waiting for machine to come up
	I0920 18:17:17.534166   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:17.534545   75086 main.go:141] libmachine: (embed-certs-768431) DBG | unable to find current IP address of domain embed-certs-768431 in network mk-embed-certs-768431
	I0920 18:17:17.534570   75086 main.go:141] libmachine: (embed-certs-768431) DBG | I0920 18:17:17.534511   76214 retry.go:31] will retry after 476.184378ms: waiting for machine to come up
	I0920 18:17:18.011940   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:18.012405   75086 main.go:141] libmachine: (embed-certs-768431) DBG | unable to find current IP address of domain embed-certs-768431 in network mk-embed-certs-768431
	I0920 18:17:18.012436   75086 main.go:141] libmachine: (embed-certs-768431) DBG | I0920 18:17:18.012359   76214 retry.go:31] will retry after 597.678215ms: waiting for machine to come up
	I0920 18:17:18.611179   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:18.611560   75086 main.go:141] libmachine: (embed-certs-768431) DBG | unable to find current IP address of domain embed-certs-768431 in network mk-embed-certs-768431
	I0920 18:17:18.611588   75086 main.go:141] libmachine: (embed-certs-768431) DBG | I0920 18:17:18.611521   76214 retry.go:31] will retry after 696.074491ms: waiting for machine to come up
	I0920 18:17:20.194460   74753 start.go:360] acquireMachinesLock for no-preload-956403: {Name:mkfeedb385cf08b5d2aa00913e85815d02a180c2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 18:17:19.309493   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:19.309940   75086 main.go:141] libmachine: (embed-certs-768431) DBG | unable to find current IP address of domain embed-certs-768431 in network mk-embed-certs-768431
	I0920 18:17:19.309958   75086 main.go:141] libmachine: (embed-certs-768431) DBG | I0920 18:17:19.309909   76214 retry.go:31] will retry after 825.638908ms: waiting for machine to come up
	I0920 18:17:20.137018   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:20.137380   75086 main.go:141] libmachine: (embed-certs-768431) DBG | unable to find current IP address of domain embed-certs-768431 in network mk-embed-certs-768431
	I0920 18:17:20.137407   75086 main.go:141] libmachine: (embed-certs-768431) DBG | I0920 18:17:20.137338   76214 retry.go:31] will retry after 997.909719ms: waiting for machine to come up
	I0920 18:17:21.137154   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:21.137608   75086 main.go:141] libmachine: (embed-certs-768431) DBG | unable to find current IP address of domain embed-certs-768431 in network mk-embed-certs-768431
	I0920 18:17:21.137630   75086 main.go:141] libmachine: (embed-certs-768431) DBG | I0920 18:17:21.137552   76214 retry.go:31] will retry after 1.368594293s: waiting for machine to come up
	I0920 18:17:22.507834   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:22.508202   75086 main.go:141] libmachine: (embed-certs-768431) DBG | unable to find current IP address of domain embed-certs-768431 in network mk-embed-certs-768431
	I0920 18:17:22.508228   75086 main.go:141] libmachine: (embed-certs-768431) DBG | I0920 18:17:22.508152   76214 retry.go:31] will retry after 1.922265011s: waiting for machine to come up
	I0920 18:17:24.431977   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:24.432422   75086 main.go:141] libmachine: (embed-certs-768431) DBG | unable to find current IP address of domain embed-certs-768431 in network mk-embed-certs-768431
	I0920 18:17:24.432452   75086 main.go:141] libmachine: (embed-certs-768431) DBG | I0920 18:17:24.432370   76214 retry.go:31] will retry after 2.875158038s: waiting for machine to come up
	I0920 18:17:27.309993   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:27.310512   75086 main.go:141] libmachine: (embed-certs-768431) DBG | unable to find current IP address of domain embed-certs-768431 in network mk-embed-certs-768431
	I0920 18:17:27.310539   75086 main.go:141] libmachine: (embed-certs-768431) DBG | I0920 18:17:27.310459   76214 retry.go:31] will retry after 3.089759463s: waiting for machine to come up
	I0920 18:17:30.402254   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:30.402718   75086 main.go:141] libmachine: (embed-certs-768431) DBG | unable to find current IP address of domain embed-certs-768431 in network mk-embed-certs-768431
	I0920 18:17:30.402757   75086 main.go:141] libmachine: (embed-certs-768431) DBG | I0920 18:17:30.402671   76214 retry.go:31] will retry after 3.42897838s: waiting for machine to come up
	I0920 18:17:33.835196   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:33.835576   75086 main.go:141] libmachine: (embed-certs-768431) Found IP for machine: 192.168.61.202
	I0920 18:17:33.835606   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has current primary IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:33.835615   75086 main.go:141] libmachine: (embed-certs-768431) Reserving static IP address...
	I0920 18:17:33.835974   75086 main.go:141] libmachine: (embed-certs-768431) Reserved static IP address: 192.168.61.202
	I0920 18:17:33.836010   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "embed-certs-768431", mac: "52:54:00:d2:f2:e2", ip: "192.168.61.202"} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:17:33.836024   75086 main.go:141] libmachine: (embed-certs-768431) Waiting for SSH to be available...
	I0920 18:17:33.836053   75086 main.go:141] libmachine: (embed-certs-768431) DBG | skip adding static IP to network mk-embed-certs-768431 - found existing host DHCP lease matching {name: "embed-certs-768431", mac: "52:54:00:d2:f2:e2", ip: "192.168.61.202"}
	I0920 18:17:33.836065   75086 main.go:141] libmachine: (embed-certs-768431) DBG | Getting to WaitForSSH function...
	I0920 18:17:33.838215   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:33.838561   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:17:33.838593   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:33.838735   75086 main.go:141] libmachine: (embed-certs-768431) DBG | Using SSH client type: external
	I0920 18:17:33.838768   75086 main.go:141] libmachine: (embed-certs-768431) DBG | Using SSH private key: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/embed-certs-768431/id_rsa (-rw-------)
	I0920 18:17:33.838809   75086 main.go:141] libmachine: (embed-certs-768431) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.202 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19672-8777/.minikube/machines/embed-certs-768431/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 18:17:33.838826   75086 main.go:141] libmachine: (embed-certs-768431) DBG | About to run SSH command:
	I0920 18:17:33.838838   75086 main.go:141] libmachine: (embed-certs-768431) DBG | exit 0
	I0920 18:17:33.962092   75086 main.go:141] libmachine: (embed-certs-768431) DBG | SSH cmd err, output: <nil>: 
	I0920 18:17:33.962513   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetConfigRaw
	I0920 18:17:33.963115   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetIP
	I0920 18:17:33.965714   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:33.966036   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:17:33.966056   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:33.966391   75086 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/embed-certs-768431/config.json ...
	I0920 18:17:33.966621   75086 machine.go:93] provisionDockerMachine start ...
	I0920 18:17:33.966641   75086 main.go:141] libmachine: (embed-certs-768431) Calling .DriverName
	I0920 18:17:33.966848   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHHostname
	I0920 18:17:33.968954   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:33.969235   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:17:33.969266   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:33.969358   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHPort
	I0920 18:17:33.969542   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHKeyPath
	I0920 18:17:33.969694   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHKeyPath
	I0920 18:17:33.969854   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHUsername
	I0920 18:17:33.970001   75086 main.go:141] libmachine: Using SSH client type: native
	I0920 18:17:33.970194   75086 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.202 22 <nil> <nil>}
	I0920 18:17:33.970204   75086 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 18:17:35.178887   75264 start.go:364] duration metric: took 3m58.041934266s to acquireMachinesLock for "default-k8s-diff-port-553719"
	I0920 18:17:35.178955   75264 start.go:96] Skipping create...Using existing machine configuration
	I0920 18:17:35.178968   75264 fix.go:54] fixHost starting: 
	I0920 18:17:35.179541   75264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:17:35.179604   75264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:17:35.199776   75264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39643
	I0920 18:17:35.200255   75264 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:17:35.200832   75264 main.go:141] libmachine: Using API Version  1
	I0920 18:17:35.200858   75264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:17:35.201213   75264 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:17:35.201427   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .DriverName
	I0920 18:17:35.201575   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetState
	I0920 18:17:35.203239   75264 fix.go:112] recreateIfNeeded on default-k8s-diff-port-553719: state=Stopped err=<nil>
	I0920 18:17:35.203279   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .DriverName
	W0920 18:17:35.203415   75264 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 18:17:35.205529   75264 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-553719" ...
	I0920 18:17:34.078412   75086 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0920 18:17:34.078447   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetMachineName
	I0920 18:17:34.078648   75086 buildroot.go:166] provisioning hostname "embed-certs-768431"
	I0920 18:17:34.078676   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetMachineName
	I0920 18:17:34.078854   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHHostname
	I0920 18:17:34.081703   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:34.082042   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:17:34.082079   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:34.082175   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHPort
	I0920 18:17:34.082371   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHKeyPath
	I0920 18:17:34.082525   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHKeyPath
	I0920 18:17:34.082656   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHUsername
	I0920 18:17:34.082778   75086 main.go:141] libmachine: Using SSH client type: native
	I0920 18:17:34.082937   75086 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.202 22 <nil> <nil>}
	I0920 18:17:34.082949   75086 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-768431 && echo "embed-certs-768431" | sudo tee /etc/hostname
	I0920 18:17:34.199661   75086 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-768431
	
	I0920 18:17:34.199726   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHHostname
	I0920 18:17:34.202468   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:34.202875   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:17:34.202903   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:34.203084   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHPort
	I0920 18:17:34.203328   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHKeyPath
	I0920 18:17:34.203477   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHKeyPath
	I0920 18:17:34.203630   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHUsername
	I0920 18:17:34.203769   75086 main.go:141] libmachine: Using SSH client type: native
	I0920 18:17:34.203936   75086 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.202 22 <nil> <nil>}
	I0920 18:17:34.203952   75086 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-768431' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-768431/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-768431' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 18:17:34.314604   75086 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 18:17:34.314632   75086 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19672-8777/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-8777/.minikube}
	I0920 18:17:34.314656   75086 buildroot.go:174] setting up certificates
	I0920 18:17:34.314666   75086 provision.go:84] configureAuth start
	I0920 18:17:34.314674   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetMachineName
	I0920 18:17:34.314940   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetIP
	I0920 18:17:34.317999   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:34.318306   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:17:34.318326   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:34.318538   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHHostname
	I0920 18:17:34.320715   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:34.321046   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:17:34.321073   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:34.321187   75086 provision.go:143] copyHostCerts
	I0920 18:17:34.321256   75086 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem, removing ...
	I0920 18:17:34.321276   75086 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem
	I0920 18:17:34.321359   75086 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem (1082 bytes)
	I0920 18:17:34.321452   75086 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem, removing ...
	I0920 18:17:34.321459   75086 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem
	I0920 18:17:34.321493   75086 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem (1123 bytes)
	I0920 18:17:34.321549   75086 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem, removing ...
	I0920 18:17:34.321558   75086 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem
	I0920 18:17:34.321580   75086 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem (1675 bytes)
	I0920 18:17:34.321692   75086 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem org=jenkins.embed-certs-768431 san=[127.0.0.1 192.168.61.202 embed-certs-768431 localhost minikube]
	I0920 18:17:34.547266   75086 provision.go:177] copyRemoteCerts
	I0920 18:17:34.547343   75086 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 18:17:34.547373   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHHostname
	I0920 18:17:34.550265   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:34.550648   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:17:34.550680   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:34.550895   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHPort
	I0920 18:17:34.551084   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHKeyPath
	I0920 18:17:34.551227   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHUsername
	I0920 18:17:34.551359   75086 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/embed-certs-768431/id_rsa Username:docker}
	I0920 18:17:34.632404   75086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0920 18:17:34.656468   75086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 18:17:34.681704   75086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0920 18:17:34.706486   75086 provision.go:87] duration metric: took 391.807931ms to configureAuth
	I0920 18:17:34.706514   75086 buildroot.go:189] setting minikube options for container-runtime
	I0920 18:17:34.706681   75086 config.go:182] Loaded profile config "embed-certs-768431": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:17:34.706750   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHHostname
	I0920 18:17:34.709500   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:34.709851   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:17:34.709915   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:34.709972   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHPort
	I0920 18:17:34.710199   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHKeyPath
	I0920 18:17:34.710337   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHKeyPath
	I0920 18:17:34.710511   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHUsername
	I0920 18:17:34.710669   75086 main.go:141] libmachine: Using SSH client type: native
	I0920 18:17:34.710854   75086 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.202 22 <nil> <nil>}
	I0920 18:17:34.710875   75086 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 18:17:34.941246   75086 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 18:17:34.941277   75086 machine.go:96] duration metric: took 974.641764ms to provisionDockerMachine
	I0920 18:17:34.941293   75086 start.go:293] postStartSetup for "embed-certs-768431" (driver="kvm2")
	I0920 18:17:34.941341   75086 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 18:17:34.941370   75086 main.go:141] libmachine: (embed-certs-768431) Calling .DriverName
	I0920 18:17:34.941757   75086 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 18:17:34.941792   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHHostname
	I0920 18:17:34.944366   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:34.944712   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:17:34.944749   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:34.944861   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHPort
	I0920 18:17:34.945051   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHKeyPath
	I0920 18:17:34.945198   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHUsername
	I0920 18:17:34.945320   75086 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/embed-certs-768431/id_rsa Username:docker}
	I0920 18:17:35.028570   75086 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 18:17:35.032711   75086 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 18:17:35.032751   75086 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8777/.minikube/addons for local assets ...
	I0920 18:17:35.032859   75086 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8777/.minikube/files for local assets ...
	I0920 18:17:35.032973   75086 filesync.go:149] local asset: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem -> 159732.pem in /etc/ssl/certs
	I0920 18:17:35.033127   75086 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 18:17:35.042212   75086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem --> /etc/ssl/certs/159732.pem (1708 bytes)
	I0920 18:17:35.066379   75086 start.go:296] duration metric: took 125.0653ms for postStartSetup
	I0920 18:17:35.066429   75086 fix.go:56] duration metric: took 19.872197784s for fixHost
	I0920 18:17:35.066456   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHHostname
	I0920 18:17:35.069888   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:35.070261   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:17:35.070286   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:35.070472   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHPort
	I0920 18:17:35.070693   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHKeyPath
	I0920 18:17:35.070836   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHKeyPath
	I0920 18:17:35.070989   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHUsername
	I0920 18:17:35.071204   75086 main.go:141] libmachine: Using SSH client type: native
	I0920 18:17:35.071396   75086 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.202 22 <nil> <nil>}
	I0920 18:17:35.071407   75086 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 18:17:35.178674   75086 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726856255.154624802
	
	I0920 18:17:35.178704   75086 fix.go:216] guest clock: 1726856255.154624802
	I0920 18:17:35.178712   75086 fix.go:229] Guest: 2024-09-20 18:17:35.154624802 +0000 UTC Remote: 2024-09-20 18:17:35.066435161 +0000 UTC m=+261.108982198 (delta=88.189641ms)
	I0920 18:17:35.178734   75086 fix.go:200] guest clock delta is within tolerance: 88.189641ms
	I0920 18:17:35.178760   75086 start.go:83] releasing machines lock for "embed-certs-768431", held for 19.984535459s
	I0920 18:17:35.178801   75086 main.go:141] libmachine: (embed-certs-768431) Calling .DriverName
	I0920 18:17:35.179101   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetIP
	I0920 18:17:35.181807   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:35.182179   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:17:35.182208   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:35.182353   75086 main.go:141] libmachine: (embed-certs-768431) Calling .DriverName
	I0920 18:17:35.182850   75086 main.go:141] libmachine: (embed-certs-768431) Calling .DriverName
	I0920 18:17:35.182993   75086 main.go:141] libmachine: (embed-certs-768431) Calling .DriverName
	I0920 18:17:35.183097   75086 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 18:17:35.183171   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHHostname
	I0920 18:17:35.183171   75086 ssh_runner.go:195] Run: cat /version.json
	I0920 18:17:35.183230   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHHostname
	I0920 18:17:35.185905   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:35.185942   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:35.186249   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:17:35.186279   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:35.186317   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:17:35.186331   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:35.186476   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHPort
	I0920 18:17:35.186597   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHPort
	I0920 18:17:35.186687   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHKeyPath
	I0920 18:17:35.186791   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHKeyPath
	I0920 18:17:35.186816   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHUsername
	I0920 18:17:35.186974   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHUsername
	I0920 18:17:35.186992   75086 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/embed-certs-768431/id_rsa Username:docker}
	I0920 18:17:35.187127   75086 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/embed-certs-768431/id_rsa Username:docker}
	I0920 18:17:35.311021   75086 ssh_runner.go:195] Run: systemctl --version
	I0920 18:17:35.317587   75086 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 18:17:35.462634   75086 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 18:17:35.469331   75086 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 18:17:35.469444   75086 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 18:17:35.485752   75086 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 18:17:35.485780   75086 start.go:495] detecting cgroup driver to use...
	I0920 18:17:35.485903   75086 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 18:17:35.507004   75086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 18:17:35.525868   75086 docker.go:217] disabling cri-docker service (if available) ...
	I0920 18:17:35.525934   75086 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 18:17:35.542533   75086 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 18:17:35.557622   75086 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 18:17:35.669845   75086 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 18:17:35.827784   75086 docker.go:233] disabling docker service ...
	I0920 18:17:35.827855   75086 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 18:17:35.842877   75086 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 18:17:35.859468   75086 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 18:17:35.994094   75086 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 18:17:36.123353   75086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 18:17:36.137941   75086 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 18:17:36.157781   75086 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 18:17:36.157871   75086 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:17:36.168857   75086 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 18:17:36.168929   75086 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:17:36.179807   75086 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:17:36.192260   75086 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:17:36.203220   75086 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 18:17:36.214388   75086 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:17:36.224686   75086 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:17:36.242143   75086 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:17:36.257047   75086 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 18:17:36.272014   75086 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 18:17:36.272130   75086 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 18:17:36.286083   75086 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 18:17:36.296624   75086 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:17:36.414196   75086 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 18:17:36.511654   75086 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 18:17:36.511720   75086 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 18:17:36.516593   75086 start.go:563] Will wait 60s for crictl version
	I0920 18:17:36.516640   75086 ssh_runner.go:195] Run: which crictl
	I0920 18:17:36.520360   75086 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 18:17:36.556846   75086 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 18:17:36.556949   75086 ssh_runner.go:195] Run: crio --version
	I0920 18:17:36.584699   75086 ssh_runner.go:195] Run: crio --version
	I0920 18:17:36.615179   75086 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 18:17:35.206839   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .Start
	I0920 18:17:35.207055   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Ensuring networks are active...
	I0920 18:17:35.207928   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Ensuring network default is active
	I0920 18:17:35.208260   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Ensuring network mk-default-k8s-diff-port-553719 is active
	I0920 18:17:35.208679   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Getting domain xml...
	I0920 18:17:35.209488   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Creating domain...
	I0920 18:17:36.500751   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting to get IP...
	I0920 18:17:36.501630   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:36.502033   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | unable to find current IP address of domain default-k8s-diff-port-553719 in network mk-default-k8s-diff-port-553719
	I0920 18:17:36.502137   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | I0920 18:17:36.502044   76346 retry.go:31] will retry after 216.050114ms: waiting for machine to come up
	I0920 18:17:36.719559   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:36.720002   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | unable to find current IP address of domain default-k8s-diff-port-553719 in network mk-default-k8s-diff-port-553719
	I0920 18:17:36.720026   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | I0920 18:17:36.719977   76346 retry.go:31] will retry after 303.832728ms: waiting for machine to come up
	I0920 18:17:37.025606   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:37.026096   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | unable to find current IP address of domain default-k8s-diff-port-553719 in network mk-default-k8s-diff-port-553719
	I0920 18:17:37.026137   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | I0920 18:17:37.026050   76346 retry.go:31] will retry after 316.827461ms: waiting for machine to come up
	I0920 18:17:36.616640   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetIP
	I0920 18:17:36.619927   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:36.620377   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:17:36.620407   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:36.620642   75086 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0920 18:17:36.627189   75086 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 18:17:36.639842   75086 kubeadm.go:883] updating cluster {Name:embed-certs-768431 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-768431 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.202 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 18:17:36.639953   75086 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 18:17:36.640019   75086 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:17:36.676519   75086 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0920 18:17:36.676599   75086 ssh_runner.go:195] Run: which lz4
	I0920 18:17:36.680472   75086 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 18:17:36.684650   75086 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 18:17:36.684683   75086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0920 18:17:38.090996   75086 crio.go:462] duration metric: took 1.410558742s to copy over tarball
	I0920 18:17:38.091068   75086 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0920 18:17:37.344880   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:37.345393   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | unable to find current IP address of domain default-k8s-diff-port-553719 in network mk-default-k8s-diff-port-553719
	I0920 18:17:37.345419   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | I0920 18:17:37.345323   76346 retry.go:31] will retry after 456.966571ms: waiting for machine to come up
	I0920 18:17:37.804436   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:37.804919   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | unable to find current IP address of domain default-k8s-diff-port-553719 in network mk-default-k8s-diff-port-553719
	I0920 18:17:37.804954   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | I0920 18:17:37.804891   76346 retry.go:31] will retry after 582.103589ms: waiting for machine to come up
	I0920 18:17:38.388738   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:38.389266   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | unable to find current IP address of domain default-k8s-diff-port-553719 in network mk-default-k8s-diff-port-553719
	I0920 18:17:38.389297   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | I0920 18:17:38.389217   76346 retry.go:31] will retry after 884.882678ms: waiting for machine to come up
	I0920 18:17:39.276048   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:39.276478   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | unable to find current IP address of domain default-k8s-diff-port-553719 in network mk-default-k8s-diff-port-553719
	I0920 18:17:39.276504   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | I0920 18:17:39.276453   76346 retry.go:31] will retry after 807.001285ms: waiting for machine to come up
	I0920 18:17:40.085749   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:40.086228   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | unable to find current IP address of domain default-k8s-diff-port-553719 in network mk-default-k8s-diff-port-553719
	I0920 18:17:40.086256   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | I0920 18:17:40.086190   76346 retry.go:31] will retry after 1.283354255s: waiting for machine to come up
	I0920 18:17:41.370861   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:41.371314   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | unable to find current IP address of domain default-k8s-diff-port-553719 in network mk-default-k8s-diff-port-553719
	I0920 18:17:41.371343   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | I0920 18:17:41.371265   76346 retry.go:31] will retry after 1.756084886s: waiting for machine to come up
	I0920 18:17:40.301535   75086 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.210437306s)
	I0920 18:17:40.301567   75086 crio.go:469] duration metric: took 2.210539553s to extract the tarball
	I0920 18:17:40.301578   75086 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0920 18:17:40.338638   75086 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:17:40.381753   75086 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 18:17:40.381780   75086 cache_images.go:84] Images are preloaded, skipping loading
	I0920 18:17:40.381788   75086 kubeadm.go:934] updating node { 192.168.61.202 8443 v1.31.1 crio true true} ...
	I0920 18:17:40.381914   75086 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-768431 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.202
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-768431 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 18:17:40.381998   75086 ssh_runner.go:195] Run: crio config
	I0920 18:17:40.428457   75086 cni.go:84] Creating CNI manager for ""
	I0920 18:17:40.428482   75086 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 18:17:40.428491   75086 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 18:17:40.428512   75086 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.202 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-768431 NodeName:embed-certs-768431 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.202"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.202 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 18:17:40.428681   75086 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.202
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-768431"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.202
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.202"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 18:17:40.428800   75086 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 18:17:40.439049   75086 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 18:17:40.439123   75086 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 18:17:40.449220   75086 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0920 18:17:40.466012   75086 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 18:17:40.482158   75086 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0920 18:17:40.499225   75086 ssh_runner.go:195] Run: grep 192.168.61.202	control-plane.minikube.internal$ /etc/hosts
	I0920 18:17:40.502817   75086 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.202	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 18:17:40.515241   75086 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:17:40.638615   75086 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:17:40.655790   75086 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/embed-certs-768431 for IP: 192.168.61.202
	I0920 18:17:40.655814   75086 certs.go:194] generating shared ca certs ...
	I0920 18:17:40.655834   75086 certs.go:226] acquiring lock for ca certs: {Name:mkc7ef6c737c6bdc3fdd9dcff8f57029c020d8f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:17:40.656006   75086 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key
	I0920 18:17:40.656070   75086 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key
	I0920 18:17:40.656084   75086 certs.go:256] generating profile certs ...
	I0920 18:17:40.656193   75086 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/embed-certs-768431/client.key
	I0920 18:17:40.656281   75086 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/embed-certs-768431/apiserver.key.a3dbf377
	I0920 18:17:40.656354   75086 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/embed-certs-768431/proxy-client.key
	I0920 18:17:40.656508   75086 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973.pem (1338 bytes)
	W0920 18:17:40.656560   75086 certs.go:480] ignoring /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973_empty.pem, impossibly tiny 0 bytes
	I0920 18:17:40.656573   75086 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 18:17:40.656620   75086 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem (1082 bytes)
	I0920 18:17:40.656654   75086 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem (1123 bytes)
	I0920 18:17:40.656679   75086 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem (1675 bytes)
	I0920 18:17:40.656733   75086 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem (1708 bytes)
	I0920 18:17:40.657567   75086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 18:17:40.693111   75086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 18:17:40.733664   75086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 18:17:40.774367   75086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0920 18:17:40.800930   75086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/embed-certs-768431/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0920 18:17:40.827793   75086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/embed-certs-768431/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 18:17:40.851189   75086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/embed-certs-768431/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 18:17:40.875092   75086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/embed-certs-768431/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0920 18:17:40.898639   75086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973.pem --> /usr/share/ca-certificates/15973.pem (1338 bytes)
	I0920 18:17:40.921569   75086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem --> /usr/share/ca-certificates/159732.pem (1708 bytes)
	I0920 18:17:40.945739   75086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 18:17:40.969942   75086 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 18:17:40.987563   75086 ssh_runner.go:195] Run: openssl version
	I0920 18:17:40.993363   75086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15973.pem && ln -fs /usr/share/ca-certificates/15973.pem /etc/ssl/certs/15973.pem"
	I0920 18:17:41.004673   75086 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15973.pem
	I0920 18:17:41.008985   75086 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 17:04 /usr/share/ca-certificates/15973.pem
	I0920 18:17:41.009032   75086 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15973.pem
	I0920 18:17:41.014950   75086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15973.pem /etc/ssl/certs/51391683.0"
	I0920 18:17:41.027944   75086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/159732.pem && ln -fs /usr/share/ca-certificates/159732.pem /etc/ssl/certs/159732.pem"
	I0920 18:17:41.040711   75086 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/159732.pem
	I0920 18:17:41.045304   75086 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 17:04 /usr/share/ca-certificates/159732.pem
	I0920 18:17:41.045361   75086 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/159732.pem
	I0920 18:17:41.050891   75086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/159732.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 18:17:41.061542   75086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 18:17:41.072622   75086 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:17:41.076916   75086 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:17:41.076965   75086 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:17:41.082644   75086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 18:17:41.093136   75086 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 18:17:41.097496   75086 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 18:17:41.103393   75086 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 18:17:41.109444   75086 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 18:17:41.115844   75086 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 18:17:41.121578   75086 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 18:17:41.127292   75086 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0920 18:17:41.133310   75086 kubeadm.go:392] StartCluster: {Name:embed-certs-768431 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-768431 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.202 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:17:41.133393   75086 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 18:17:41.133439   75086 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 18:17:41.177284   75086 cri.go:89] found id: ""
	I0920 18:17:41.177380   75086 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 18:17:41.187289   75086 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0920 18:17:41.187311   75086 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0920 18:17:41.187362   75086 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0920 18:17:41.196445   75086 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0920 18:17:41.197939   75086 kubeconfig.go:125] found "embed-certs-768431" server: "https://192.168.61.202:8443"
	I0920 18:17:41.201030   75086 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0920 18:17:41.210356   75086 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.202
	I0920 18:17:41.210394   75086 kubeadm.go:1160] stopping kube-system containers ...
	I0920 18:17:41.210422   75086 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0920 18:17:41.210499   75086 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 18:17:41.244659   75086 cri.go:89] found id: ""
	I0920 18:17:41.244721   75086 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0920 18:17:41.264341   75086 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 18:17:41.275229   75086 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 18:17:41.275252   75086 kubeadm.go:157] found existing configuration files:
	
	I0920 18:17:41.275314   75086 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 18:17:41.285420   75086 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 18:17:41.285502   75086 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 18:17:41.295902   75086 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 18:17:41.305951   75086 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 18:17:41.306015   75086 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 18:17:41.316567   75086 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 18:17:41.325623   75086 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 18:17:41.325691   75086 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 18:17:41.336405   75086 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 18:17:41.346501   75086 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 18:17:41.346574   75086 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 18:17:41.357340   75086 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 18:17:41.367956   75086 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:17:41.479022   75086 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:17:42.796122   75086 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.317061067s)
	I0920 18:17:42.796155   75086 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:17:43.053135   75086 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:17:43.139770   75086 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:17:43.244066   75086 api_server.go:52] waiting for apiserver process to appear ...
	I0920 18:17:43.244166   75086 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:17:43.744622   75086 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:17:43.129383   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:43.129827   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | unable to find current IP address of domain default-k8s-diff-port-553719 in network mk-default-k8s-diff-port-553719
	I0920 18:17:43.129864   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | I0920 18:17:43.129798   76346 retry.go:31] will retry after 2.259775905s: waiting for machine to come up
	I0920 18:17:45.391508   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:45.391915   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | unable to find current IP address of domain default-k8s-diff-port-553719 in network mk-default-k8s-diff-port-553719
	I0920 18:17:45.391941   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | I0920 18:17:45.391885   76346 retry.go:31] will retry after 1.767770692s: waiting for machine to come up
	I0920 18:17:44.244723   75086 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:17:44.265463   75086 api_server.go:72] duration metric: took 1.02139562s to wait for apiserver process to appear ...
	I0920 18:17:44.265500   75086 api_server.go:88] waiting for apiserver healthz status ...
	I0920 18:17:44.265528   75086 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0920 18:17:46.935510   75086 api_server.go:279] https://192.168.61.202:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 18:17:46.935571   75086 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 18:17:46.935589   75086 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0920 18:17:46.947553   75086 api_server.go:279] https://192.168.61.202:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 18:17:46.947586   75086 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 18:17:47.265919   75086 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0920 18:17:47.272986   75086 api_server.go:279] https://192.168.61.202:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 18:17:47.273020   75086 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 18:17:47.766662   75086 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0920 18:17:47.783700   75086 api_server.go:279] https://192.168.61.202:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 18:17:47.783749   75086 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 18:17:48.266012   75086 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0920 18:17:48.270573   75086 api_server.go:279] https://192.168.61.202:8443/healthz returned 200:
	ok
	I0920 18:17:48.277290   75086 api_server.go:141] control plane version: v1.31.1
	I0920 18:17:48.277331   75086 api_server.go:131] duration metric: took 4.011813655s to wait for apiserver health ...
	I0920 18:17:48.277342   75086 cni.go:84] Creating CNI manager for ""
	I0920 18:17:48.277351   75086 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 18:17:48.278863   75086 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 18:17:48.279934   75086 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 18:17:48.304037   75086 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0920 18:17:48.337082   75086 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 18:17:48.349485   75086 system_pods.go:59] 8 kube-system pods found
	I0920 18:17:48.349541   75086 system_pods.go:61] "coredns-7c65d6cfc9-cskt4" [2ce6fa43-bdf9-4625-a198-8f59ee9fcf0b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0920 18:17:48.349554   75086 system_pods.go:61] "etcd-embed-certs-768431" [656af9fd-b380-4934-a0b5-5d39a755de44] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0920 18:17:48.349565   75086 system_pods.go:61] "kube-apiserver-embed-certs-768431" [e128f4ff-3432-4d27-83de-ed76b686403d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0920 18:17:48.349573   75086 system_pods.go:61] "kube-controller-manager-embed-certs-768431" [a2c7e6d7-e351-4e6a-ab95-638aecdaab28] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0920 18:17:48.349579   75086 system_pods.go:61] "kube-proxy-cshjm" [86a82f1b-d8d5-4ab7-89fb-84b836dc0470] Running
	I0920 18:17:48.349586   75086 system_pods.go:61] "kube-scheduler-embed-certs-768431" [64bc50a6-2e99-4992-b249-9ffdf463539a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0920 18:17:48.349601   75086 system_pods.go:61] "metrics-server-6867b74b74-dwnt6" [bb0a120f-ca9e-4b54-826b-55aa20b2f6a4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 18:17:48.349606   75086 system_pods.go:61] "storage-provisioner" [083c6964-a774-4b85-8e6d-acd04ededb3c] Running
	I0920 18:17:48.349617   75086 system_pods.go:74] duration metric: took 12.507925ms to wait for pod list to return data ...
	I0920 18:17:48.349630   75086 node_conditions.go:102] verifying NodePressure condition ...
	I0920 18:17:48.353604   75086 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 18:17:48.353635   75086 node_conditions.go:123] node cpu capacity is 2
	I0920 18:17:48.353648   75086 node_conditions.go:105] duration metric: took 4.012436ms to run NodePressure ...
	I0920 18:17:48.353668   75086 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:17:48.623666   75086 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0920 18:17:48.634113   75086 kubeadm.go:739] kubelet initialised
	I0920 18:17:48.634184   75086 kubeadm.go:740] duration metric: took 10.458173ms waiting for restarted kubelet to initialise ...
	I0920 18:17:48.634205   75086 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:17:48.642334   75086 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-cskt4" in "kube-system" namespace to be "Ready" ...
	I0920 18:17:47.161670   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:47.162064   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | unable to find current IP address of domain default-k8s-diff-port-553719 in network mk-default-k8s-diff-port-553719
	I0920 18:17:47.162094   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | I0920 18:17:47.162020   76346 retry.go:31] will retry after 2.299554771s: waiting for machine to come up
	I0920 18:17:49.463022   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:49.463483   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | unable to find current IP address of domain default-k8s-diff-port-553719 in network mk-default-k8s-diff-port-553719
	I0920 18:17:49.463515   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | I0920 18:17:49.463442   76346 retry.go:31] will retry after 4.372569793s: waiting for machine to come up
	I0920 18:17:50.650781   75086 pod_ready.go:103] pod "coredns-7c65d6cfc9-cskt4" in "kube-system" namespace has status "Ready":"False"
	I0920 18:17:53.149156   75086 pod_ready.go:103] pod "coredns-7c65d6cfc9-cskt4" in "kube-system" namespace has status "Ready":"False"
	I0920 18:17:55.087078   75577 start.go:364] duration metric: took 3m42.245259516s to acquireMachinesLock for "old-k8s-version-744025"
	I0920 18:17:55.087155   75577 start.go:96] Skipping create...Using existing machine configuration
	I0920 18:17:55.087166   75577 fix.go:54] fixHost starting: 
	I0920 18:17:55.087618   75577 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:17:55.087671   75577 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:17:55.107336   75577 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34989
	I0920 18:17:55.107839   75577 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:17:55.108462   75577 main.go:141] libmachine: Using API Version  1
	I0920 18:17:55.108496   75577 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:17:55.108855   75577 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:17:55.109072   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .DriverName
	I0920 18:17:55.109222   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetState
	I0920 18:17:55.110831   75577 fix.go:112] recreateIfNeeded on old-k8s-version-744025: state=Stopped err=<nil>
	I0920 18:17:55.110857   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .DriverName
	W0920 18:17:55.110973   75577 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 18:17:55.113408   75577 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-744025" ...
	I0920 18:17:53.840215   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:53.840817   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has current primary IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:53.840844   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Found IP for machine: 192.168.72.190
	I0920 18:17:53.840859   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Reserving static IP address...
	I0920 18:17:53.841438   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Reserved static IP address: 192.168.72.190
	I0920 18:17:53.841460   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for SSH to be available...
	I0920 18:17:53.841475   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-553719", mac: "52:54:00:dd:93:60", ip: "192.168.72.190"} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:17:53.841503   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | skip adding static IP to network mk-default-k8s-diff-port-553719 - found existing host DHCP lease matching {name: "default-k8s-diff-port-553719", mac: "52:54:00:dd:93:60", ip: "192.168.72.190"}
	I0920 18:17:53.841583   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | Getting to WaitForSSH function...
	I0920 18:17:53.843772   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:53.844082   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:17:53.844115   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:53.844201   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | Using SSH client type: external
	I0920 18:17:53.844238   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | Using SSH private key: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/default-k8s-diff-port-553719/id_rsa (-rw-------)
	I0920 18:17:53.844282   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.190 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19672-8777/.minikube/machines/default-k8s-diff-port-553719/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 18:17:53.844296   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | About to run SSH command:
	I0920 18:17:53.844309   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | exit 0
	I0920 18:17:53.965938   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | SSH cmd err, output: <nil>: 
	I0920 18:17:53.966354   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetConfigRaw
	I0920 18:17:53.967105   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetIP
	I0920 18:17:53.969715   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:53.970112   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:17:53.970140   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:53.970429   75264 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/default-k8s-diff-port-553719/config.json ...
	I0920 18:17:53.970686   75264 machine.go:93] provisionDockerMachine start ...
	I0920 18:17:53.970707   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .DriverName
	I0920 18:17:53.970924   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHHostname
	I0920 18:17:53.973207   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:53.973541   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:17:53.973571   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:53.973716   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHPort
	I0920 18:17:53.973901   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHKeyPath
	I0920 18:17:53.974041   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHKeyPath
	I0920 18:17:53.974155   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHUsername
	I0920 18:17:53.974298   75264 main.go:141] libmachine: Using SSH client type: native
	I0920 18:17:53.974518   75264 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.190 22 <nil> <nil>}
	I0920 18:17:53.974533   75264 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 18:17:54.078095   75264 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0920 18:17:54.078121   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetMachineName
	I0920 18:17:54.078377   75264 buildroot.go:166] provisioning hostname "default-k8s-diff-port-553719"
	I0920 18:17:54.078397   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetMachineName
	I0920 18:17:54.078540   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHHostname
	I0920 18:17:54.081589   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:54.081998   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:17:54.082032   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:54.082179   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHPort
	I0920 18:17:54.082375   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHKeyPath
	I0920 18:17:54.082539   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHKeyPath
	I0920 18:17:54.082743   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHUsername
	I0920 18:17:54.082949   75264 main.go:141] libmachine: Using SSH client type: native
	I0920 18:17:54.083153   75264 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.190 22 <nil> <nil>}
	I0920 18:17:54.083173   75264 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-553719 && echo "default-k8s-diff-port-553719" | sudo tee /etc/hostname
	I0920 18:17:54.199155   75264 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-553719
	
	I0920 18:17:54.199192   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHHostname
	I0920 18:17:54.202231   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:54.202650   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:17:54.202685   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:54.202835   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHPort
	I0920 18:17:54.203112   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHKeyPath
	I0920 18:17:54.203289   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHKeyPath
	I0920 18:17:54.203458   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHUsername
	I0920 18:17:54.203696   75264 main.go:141] libmachine: Using SSH client type: native
	I0920 18:17:54.203944   75264 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.190 22 <nil> <nil>}
	I0920 18:17:54.203969   75264 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-553719' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-553719/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-553719' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 18:17:54.312356   75264 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 18:17:54.312386   75264 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19672-8777/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-8777/.minikube}
	I0920 18:17:54.312432   75264 buildroot.go:174] setting up certificates
	I0920 18:17:54.312450   75264 provision.go:84] configureAuth start
	I0920 18:17:54.312464   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetMachineName
	I0920 18:17:54.312758   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetIP
	I0920 18:17:54.315327   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:54.315697   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:17:54.315725   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:54.315870   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHHostname
	I0920 18:17:54.318203   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:54.318653   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:17:54.318679   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:54.318824   75264 provision.go:143] copyHostCerts
	I0920 18:17:54.318885   75264 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem, removing ...
	I0920 18:17:54.318906   75264 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem
	I0920 18:17:54.318978   75264 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem (1675 bytes)
	I0920 18:17:54.319096   75264 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem, removing ...
	I0920 18:17:54.319107   75264 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem
	I0920 18:17:54.319141   75264 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem (1082 bytes)
	I0920 18:17:54.319384   75264 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem, removing ...
	I0920 18:17:54.319401   75264 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem
	I0920 18:17:54.319465   75264 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem (1123 bytes)
	I0920 18:17:54.319551   75264 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-553719 san=[127.0.0.1 192.168.72.190 default-k8s-diff-port-553719 localhost minikube]
	I0920 18:17:54.453062   75264 provision.go:177] copyRemoteCerts
	I0920 18:17:54.453131   75264 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 18:17:54.453160   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHHostname
	I0920 18:17:54.456047   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:54.456402   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:17:54.456431   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:54.456598   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHPort
	I0920 18:17:54.456796   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHKeyPath
	I0920 18:17:54.456970   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHUsername
	I0920 18:17:54.457092   75264 sshutil.go:53] new ssh client: &{IP:192.168.72.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/default-k8s-diff-port-553719/id_rsa Username:docker}
	I0920 18:17:54.544567   75264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0920 18:17:54.570009   75264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0920 18:17:54.594249   75264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0920 18:17:54.617652   75264 provision.go:87] duration metric: took 305.186554ms to configureAuth
	I0920 18:17:54.617686   75264 buildroot.go:189] setting minikube options for container-runtime
	I0920 18:17:54.617971   75264 config.go:182] Loaded profile config "default-k8s-diff-port-553719": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:17:54.618064   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHHostname
	I0920 18:17:54.621408   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:54.621882   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:17:54.621915   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:54.622140   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHPort
	I0920 18:17:54.622368   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHKeyPath
	I0920 18:17:54.622592   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHKeyPath
	I0920 18:17:54.622833   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHUsername
	I0920 18:17:54.623047   75264 main.go:141] libmachine: Using SSH client type: native
	I0920 18:17:54.623287   75264 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.190 22 <nil> <nil>}
	I0920 18:17:54.623305   75264 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 18:17:54.859786   75264 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 18:17:54.859810   75264 machine.go:96] duration metric: took 889.110491ms to provisionDockerMachine
	I0920 18:17:54.859820   75264 start.go:293] postStartSetup for "default-k8s-diff-port-553719" (driver="kvm2")
	I0920 18:17:54.859831   75264 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 18:17:54.859850   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .DriverName
	I0920 18:17:54.860209   75264 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 18:17:54.860258   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHHostname
	I0920 18:17:54.863576   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:54.863933   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:17:54.863966   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:54.864168   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHPort
	I0920 18:17:54.864345   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHKeyPath
	I0920 18:17:54.864494   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHUsername
	I0920 18:17:54.864640   75264 sshutil.go:53] new ssh client: &{IP:192.168.72.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/default-k8s-diff-port-553719/id_rsa Username:docker}
	I0920 18:17:54.944717   75264 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 18:17:54.948836   75264 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 18:17:54.948870   75264 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8777/.minikube/addons for local assets ...
	I0920 18:17:54.948937   75264 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8777/.minikube/files for local assets ...
	I0920 18:17:54.949014   75264 filesync.go:149] local asset: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem -> 159732.pem in /etc/ssl/certs
	I0920 18:17:54.949116   75264 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 18:17:54.958726   75264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem --> /etc/ssl/certs/159732.pem (1708 bytes)
	I0920 18:17:54.982792   75264 start.go:296] duration metric: took 122.958621ms for postStartSetup
	I0920 18:17:54.982831   75264 fix.go:56] duration metric: took 19.803863913s for fixHost
	I0920 18:17:54.982856   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHHostname
	I0920 18:17:54.985588   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:54.985924   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:17:54.985966   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:54.986145   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHPort
	I0920 18:17:54.986407   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHKeyPath
	I0920 18:17:54.986586   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHKeyPath
	I0920 18:17:54.986784   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHUsername
	I0920 18:17:54.986982   75264 main.go:141] libmachine: Using SSH client type: native
	I0920 18:17:54.987233   75264 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.190 22 <nil> <nil>}
	I0920 18:17:54.987248   75264 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 18:17:55.086859   75264 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726856275.038902431
	
	I0920 18:17:55.086888   75264 fix.go:216] guest clock: 1726856275.038902431
	I0920 18:17:55.086900   75264 fix.go:229] Guest: 2024-09-20 18:17:55.038902431 +0000 UTC Remote: 2024-09-20 18:17:54.98283641 +0000 UTC m=+257.985357778 (delta=56.066021ms)
	I0920 18:17:55.086959   75264 fix.go:200] guest clock delta is within tolerance: 56.066021ms
	I0920 18:17:55.086968   75264 start.go:83] releasing machines lock for "default-k8s-diff-port-553719", held for 19.908037967s
	I0920 18:17:55.087009   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .DriverName
	I0920 18:17:55.087303   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetIP
	I0920 18:17:55.090396   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:55.090777   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:17:55.090805   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:55.090973   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .DriverName
	I0920 18:17:55.091481   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .DriverName
	I0920 18:17:55.091684   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .DriverName
	I0920 18:17:55.091772   75264 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 18:17:55.091827   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHHostname
	I0920 18:17:55.091951   75264 ssh_runner.go:195] Run: cat /version.json
	I0920 18:17:55.091976   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHHostname
	I0920 18:17:55.094742   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:55.094878   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:55.095102   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:17:55.095133   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:55.095265   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHPort
	I0920 18:17:55.095362   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:17:55.095400   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:55.095443   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHKeyPath
	I0920 18:17:55.095550   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHPort
	I0920 18:17:55.095619   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHUsername
	I0920 18:17:55.095689   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHKeyPath
	I0920 18:17:55.095763   75264 sshutil.go:53] new ssh client: &{IP:192.168.72.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/default-k8s-diff-port-553719/id_rsa Username:docker}
	I0920 18:17:55.095795   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHUsername
	I0920 18:17:55.095950   75264 sshutil.go:53] new ssh client: &{IP:192.168.72.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/default-k8s-diff-port-553719/id_rsa Username:docker}
	I0920 18:17:55.213223   75264 ssh_runner.go:195] Run: systemctl --version
	I0920 18:17:55.220952   75264 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 18:17:55.370747   75264 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 18:17:55.377509   75264 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 18:17:55.377595   75264 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 18:17:55.395830   75264 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 18:17:55.395854   75264 start.go:495] detecting cgroup driver to use...
	I0920 18:17:55.395920   75264 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 18:17:55.412885   75264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 18:17:55.428380   75264 docker.go:217] disabling cri-docker service (if available) ...
	I0920 18:17:55.428433   75264 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 18:17:55.444371   75264 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 18:17:55.459485   75264 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 18:17:55.583649   75264 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 18:17:55.768003   75264 docker.go:233] disabling docker service ...
	I0920 18:17:55.768065   75264 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 18:17:55.787062   75264 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 18:17:55.802662   75264 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 18:17:55.967892   75264 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 18:17:56.105744   75264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 18:17:56.120499   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 18:17:56.140527   75264 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 18:17:56.140613   75264 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:17:56.151282   75264 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 18:17:56.151355   75264 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:17:56.164680   75264 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:17:56.176142   75264 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:17:56.187120   75264 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 18:17:56.198384   75264 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:17:56.209298   75264 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:17:56.226714   75264 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:17:56.237886   75264 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 18:17:56.253664   75264 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 18:17:56.253778   75264 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 18:17:56.267429   75264 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
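Note: the sysctl probe above fails because br_netfilter is not loaded yet, so minikube falls back to modprobe and then enables IPv4 forwarding. A hypothetical Go helper for the same existence check (an assumption for illustration, not minikube source) might look like:

package main

import "os"

// bridgeNetfilterAvailable reports whether the bridge netfilter sysctl file
// exists, i.e. whether the br_netfilter kernel module is loaded. When this
// returns false, the log above shows the fallback: `sudo modprobe br_netfilter`.
func bridgeNetfilterAvailable() bool {
	_, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables")
	return err == nil
}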
	I0920 18:17:56.279118   75264 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:17:56.412692   75264 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 18:17:56.513349   75264 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 18:17:56.513438   75264 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 18:17:56.517974   75264 start.go:563] Will wait 60s for crictl version
	I0920 18:17:56.518042   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:17:56.521966   75264 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 18:17:56.561446   75264 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 18:17:56.561520   75264 ssh_runner.go:195] Run: crio --version
	I0920 18:17:56.592009   75264 ssh_runner.go:195] Run: crio --version
	I0920 18:17:56.626555   75264 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 18:17:56.627668   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetIP
	I0920 18:17:56.630995   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:56.631450   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:17:56.631480   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:56.631751   75264 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0920 18:17:56.636473   75264 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 18:17:56.648824   75264 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-553719 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-553719 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.190 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 18:17:56.648937   75264 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 18:17:56.648980   75264 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:17:56.691029   75264 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0920 18:17:56.691104   75264 ssh_runner.go:195] Run: which lz4
	I0920 18:17:56.695538   75264 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 18:17:56.699703   75264 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 18:17:56.699735   75264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0920 18:17:55.114625   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .Start
	I0920 18:17:55.114814   75577 main.go:141] libmachine: (old-k8s-version-744025) Ensuring networks are active...
	I0920 18:17:55.115808   75577 main.go:141] libmachine: (old-k8s-version-744025) Ensuring network default is active
	I0920 18:17:55.116209   75577 main.go:141] libmachine: (old-k8s-version-744025) Ensuring network mk-old-k8s-version-744025 is active
	I0920 18:17:55.116615   75577 main.go:141] libmachine: (old-k8s-version-744025) Getting domain xml...
	I0920 18:17:55.117403   75577 main.go:141] libmachine: (old-k8s-version-744025) Creating domain...
	I0920 18:17:56.411359   75577 main.go:141] libmachine: (old-k8s-version-744025) Waiting to get IP...
	I0920 18:17:56.412507   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:17:56.413010   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:17:56.413094   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:17:56.412996   76990 retry.go:31] will retry after 300.110477ms: waiting for machine to come up
	I0920 18:17:56.714648   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:17:56.715163   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:17:56.715183   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:17:56.715114   76990 retry.go:31] will retry after 352.948637ms: waiting for machine to come up
	I0920 18:17:57.069760   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:17:57.070449   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:17:57.070474   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:17:57.070404   76990 retry.go:31] will retry after 335.58281ms: waiting for machine to come up
	I0920 18:17:57.408023   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:17:57.408521   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:17:57.408546   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:17:57.408469   76990 retry.go:31] will retry after 498.040148ms: waiting for machine to come up
	I0920 18:17:54.653346   75086 pod_ready.go:93] pod "coredns-7c65d6cfc9-cskt4" in "kube-system" namespace has status "Ready":"True"
	I0920 18:17:54.653373   75086 pod_ready.go:82] duration metric: took 6.011015799s for pod "coredns-7c65d6cfc9-cskt4" in "kube-system" namespace to be "Ready" ...
	I0920 18:17:54.653383   75086 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-768431" in "kube-system" namespace to be "Ready" ...
	I0920 18:17:56.660606   75086 pod_ready.go:103] pod "etcd-embed-certs-768431" in "kube-system" namespace has status "Ready":"False"
	I0920 18:17:58.162253   75086 pod_ready.go:93] pod "etcd-embed-certs-768431" in "kube-system" namespace has status "Ready":"True"
	I0920 18:17:58.162282   75086 pod_ready.go:82] duration metric: took 3.508893207s for pod "etcd-embed-certs-768431" in "kube-system" namespace to be "Ready" ...
	I0920 18:17:58.162292   75086 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-768431" in "kube-system" namespace to be "Ready" ...
	I0920 18:17:58.109419   75264 crio.go:462] duration metric: took 1.413913626s to copy over tarball
	I0920 18:17:58.109504   75264 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0920 18:18:00.360623   75264 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.25108327s)
	I0920 18:18:00.360655   75264 crio.go:469] duration metric: took 2.251201466s to extract the tarball
	I0920 18:18:00.360703   75264 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0920 18:18:00.400822   75264 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:18:00.451366   75264 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 18:18:00.451402   75264 cache_images.go:84] Images are preloaded, skipping loading
	I0920 18:18:00.451413   75264 kubeadm.go:934] updating node { 192.168.72.190 8444 v1.31.1 crio true true} ...
	I0920 18:18:00.451590   75264 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-553719 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.190
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-553719 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 18:18:00.451688   75264 ssh_runner.go:195] Run: crio config
	I0920 18:18:00.502703   75264 cni.go:84] Creating CNI manager for ""
	I0920 18:18:00.502729   75264 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 18:18:00.502740   75264 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 18:18:00.502778   75264 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.190 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-553719 NodeName:default-k8s-diff-port-553719 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.190"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.190 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 18:18:00.502927   75264 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.190
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-553719"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.190
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.190"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 18:18:00.502996   75264 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 18:18:00.513768   75264 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 18:18:00.513870   75264 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 18:18:00.523631   75264 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0920 18:18:00.540428   75264 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 18:18:00.556858   75264 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0920 18:18:00.574954   75264 ssh_runner.go:195] Run: grep 192.168.72.190	control-plane.minikube.internal$ /etc/hosts
	I0920 18:18:00.578951   75264 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.190	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 18:18:00.592592   75264 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:18:00.712035   75264 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:18:00.728657   75264 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/default-k8s-diff-port-553719 for IP: 192.168.72.190
	I0920 18:18:00.728684   75264 certs.go:194] generating shared ca certs ...
	I0920 18:18:00.728706   75264 certs.go:226] acquiring lock for ca certs: {Name:mkc7ef6c737c6bdc3fdd9dcff8f57029c020d8f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:18:00.728877   75264 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key
	I0920 18:18:00.728937   75264 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key
	I0920 18:18:00.728948   75264 certs.go:256] generating profile certs ...
	I0920 18:18:00.729055   75264 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/default-k8s-diff-port-553719/client.key
	I0920 18:18:00.729151   75264 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/default-k8s-diff-port-553719/apiserver.key.cc1f66e7
	I0920 18:18:00.729205   75264 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/default-k8s-diff-port-553719/proxy-client.key
	I0920 18:18:00.729368   75264 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973.pem (1338 bytes)
	W0920 18:18:00.729415   75264 certs.go:480] ignoring /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973_empty.pem, impossibly tiny 0 bytes
	I0920 18:18:00.729425   75264 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 18:18:00.729453   75264 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem (1082 bytes)
	I0920 18:18:00.729474   75264 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem (1123 bytes)
	I0920 18:18:00.729501   75264 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem (1675 bytes)
	I0920 18:18:00.729538   75264 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem (1708 bytes)
	I0920 18:18:00.730192   75264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 18:18:00.775634   75264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 18:18:00.812006   75264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 18:18:00.846904   75264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0920 18:18:00.877908   75264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/default-k8s-diff-port-553719/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0920 18:18:00.910057   75264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/default-k8s-diff-port-553719/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0920 18:18:00.935377   75264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/default-k8s-diff-port-553719/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 18:18:00.960143   75264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/default-k8s-diff-port-553719/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0920 18:18:00.983906   75264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 18:18:01.007105   75264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973.pem --> /usr/share/ca-certificates/15973.pem (1338 bytes)
	I0920 18:18:01.030331   75264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem --> /usr/share/ca-certificates/159732.pem (1708 bytes)
	I0920 18:18:01.055976   75264 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 18:18:01.073892   75264 ssh_runner.go:195] Run: openssl version
	I0920 18:18:01.081246   75264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 18:18:01.092174   75264 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:18:01.096541   75264 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:18:01.096595   75264 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:18:01.103564   75264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 18:18:01.117697   75264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15973.pem && ln -fs /usr/share/ca-certificates/15973.pem /etc/ssl/certs/15973.pem"
	I0920 18:18:01.130349   75264 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15973.pem
	I0920 18:18:01.135397   75264 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 17:04 /usr/share/ca-certificates/15973.pem
	I0920 18:18:01.135471   75264 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15973.pem
	I0920 18:18:01.141399   75264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15973.pem /etc/ssl/certs/51391683.0"
	I0920 18:18:01.153123   75264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/159732.pem && ln -fs /usr/share/ca-certificates/159732.pem /etc/ssl/certs/159732.pem"
	I0920 18:18:01.165157   75264 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/159732.pem
	I0920 18:18:01.170596   75264 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 17:04 /usr/share/ca-certificates/159732.pem
	I0920 18:18:01.170678   75264 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/159732.pem
	I0920 18:18:01.176534   75264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/159732.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 18:18:01.188576   75264 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 18:18:01.193401   75264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 18:18:01.199557   75264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 18:18:01.205410   75264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 18:18:01.211296   75264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 18:18:01.216953   75264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 18:18:01.223417   75264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
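Note: the series of `openssl x509 -noout -checkend 86400` runs above asks whether each control-plane certificate expires within the next 24 hours. The equivalent check in Go, sketched here as a hypothetical helper rather than minikube's implementation, is:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"time"
)

// certExpiresWithin answers the same question as `openssl x509 -checkend`:
// does the PEM-encoded certificate expire within d from now?
func certExpiresWithin(pemBytes []byte, d time.Duration) (bool, error) {
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		return false, errors.New("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}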
	I0920 18:18:01.229172   75264 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-553719 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-553719 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.190 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:18:01.229428   75264 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 18:18:01.229488   75264 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 18:18:01.270537   75264 cri.go:89] found id: ""
	I0920 18:18:01.270610   75264 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 18:18:01.284566   75264 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0920 18:18:01.284588   75264 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0920 18:18:01.284638   75264 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0920 18:18:01.297884   75264 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0920 18:18:01.298879   75264 kubeconfig.go:125] found "default-k8s-diff-port-553719" server: "https://192.168.72.190:8444"
	I0920 18:18:01.300996   75264 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0920 18:18:01.312183   75264 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.190
	I0920 18:18:01.312251   75264 kubeadm.go:1160] stopping kube-system containers ...
	I0920 18:18:01.312266   75264 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0920 18:18:01.312324   75264 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 18:18:01.358873   75264 cri.go:89] found id: ""
	I0920 18:18:01.358959   75264 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0920 18:18:01.377027   75264 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 18:18:01.386872   75264 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 18:18:01.386890   75264 kubeadm.go:157] found existing configuration files:
	
	I0920 18:18:01.386931   75264 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0920 18:18:01.396044   75264 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 18:18:01.396118   75264 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 18:18:01.405783   75264 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0920 18:18:01.415296   75264 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 18:18:01.415367   75264 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 18:18:01.427782   75264 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0920 18:18:01.438838   75264 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 18:18:01.438897   75264 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 18:18:01.450255   75264 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0920 18:18:01.461237   75264 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 18:18:01.461313   75264 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
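Note: the loop above greps each kubeconfig for the expected https://control-plane.minikube.internal:8444 endpoint and removes the file when the endpoint is absent (here the files do not exist at all, so the greps fail and the rm calls are no-ops). A hypothetical Go sketch of that check-and-remove pattern, not minikube's actual kubeadm.go code:

package main

import (
	"os"
	"strings"
)

// removeIfStale deletes a kubeconfig-style file when it does not mention the
// expected control-plane endpoint, mirroring the grep-then-rm pattern above.
func removeIfStale(path, endpoint string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		if os.IsNotExist(err) {
			return nil // nothing to clean up, as in the "No such file" cases above
		}
		return err
	}
	if strings.Contains(string(data), endpoint) {
		return nil // already points at the expected endpoint, keep it
	}
	return os.Remove(path)
}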
	I0920 18:18:01.472543   75264 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 18:18:01.483456   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:18:01.607155   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:17:57.908137   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:17:57.908561   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:17:57.908589   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:17:57.908522   76990 retry.go:31] will retry after 749.044696ms: waiting for machine to come up
	I0920 18:17:58.658869   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:17:58.659393   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:17:58.659424   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:17:58.659342   76990 retry.go:31] will retry after 949.936088ms: waiting for machine to come up
	I0920 18:17:59.610936   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:17:59.611467   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:17:59.611513   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:17:59.611430   76990 retry.go:31] will retry after 762.437104ms: waiting for machine to come up
	I0920 18:18:00.375768   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:00.376207   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:18:00.376235   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:18:00.376152   76990 retry.go:31] will retry after 1.228102027s: waiting for machine to come up
	I0920 18:18:01.606490   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:01.606958   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:18:01.606982   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:18:01.606907   76990 retry.go:31] will retry after 1.383186524s: waiting for machine to come up
	I0920 18:18:00.169862   75086 pod_ready.go:103] pod "kube-apiserver-embed-certs-768431" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:01.669351   75086 pod_ready.go:93] pod "kube-apiserver-embed-certs-768431" in "kube-system" namespace has status "Ready":"True"
	I0920 18:18:01.669378   75086 pod_ready.go:82] duration metric: took 3.507078668s for pod "kube-apiserver-embed-certs-768431" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:01.669392   75086 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-768431" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:01.674981   75086 pod_ready.go:93] pod "kube-controller-manager-embed-certs-768431" in "kube-system" namespace has status "Ready":"True"
	I0920 18:18:01.675006   75086 pod_ready.go:82] duration metric: took 5.604682ms for pod "kube-controller-manager-embed-certs-768431" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:01.675018   75086 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-cshjm" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:01.680522   75086 pod_ready.go:93] pod "kube-proxy-cshjm" in "kube-system" namespace has status "Ready":"True"
	I0920 18:18:01.680547   75086 pod_ready.go:82] duration metric: took 5.521295ms for pod "kube-proxy-cshjm" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:01.680559   75086 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-768431" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:01.685097   75086 pod_ready.go:93] pod "kube-scheduler-embed-certs-768431" in "kube-system" namespace has status "Ready":"True"
	I0920 18:18:01.685120   75086 pod_ready.go:82] duration metric: took 4.551724ms for pod "kube-scheduler-embed-certs-768431" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:01.685131   75086 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:03.693234   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:02.268120   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:18:02.471363   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:18:02.530979   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:18:02.580339   75264 api_server.go:52] waiting for apiserver process to appear ...
	I0920 18:18:02.580455   75264 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:03.081156   75264 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:03.580564   75264 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:04.080991   75264 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:04.095108   75264 api_server.go:72] duration metric: took 1.514770034s to wait for apiserver process to appear ...
	I0920 18:18:04.095139   75264 api_server.go:88] waiting for apiserver healthz status ...
	I0920 18:18:04.095164   75264 api_server.go:253] Checking apiserver healthz at https://192.168.72.190:8444/healthz ...
	I0920 18:18:04.095704   75264 api_server.go:269] stopped: https://192.168.72.190:8444/healthz: Get "https://192.168.72.190:8444/healthz": dial tcp 192.168.72.190:8444: connect: connection refused
	I0920 18:18:04.595270   75264 api_server.go:253] Checking apiserver healthz at https://192.168.72.190:8444/healthz ...
	I0920 18:18:06.378496   75264 api_server.go:279] https://192.168.72.190:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 18:18:06.378532   75264 api_server.go:103] status: https://192.168.72.190:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 18:18:06.378550   75264 api_server.go:253] Checking apiserver healthz at https://192.168.72.190:8444/healthz ...
	I0920 18:18:06.425353   75264 api_server.go:279] https://192.168.72.190:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 18:18:06.425386   75264 api_server.go:103] status: https://192.168.72.190:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 18:18:06.595786   75264 api_server.go:253] Checking apiserver healthz at https://192.168.72.190:8444/healthz ...
	I0920 18:18:06.602114   75264 api_server.go:279] https://192.168.72.190:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 18:18:06.602148   75264 api_server.go:103] status: https://192.168.72.190:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 18:18:07.096095   75264 api_server.go:253] Checking apiserver healthz at https://192.168.72.190:8444/healthz ...
	I0920 18:18:07.102320   75264 api_server.go:279] https://192.168.72.190:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 18:18:07.102348   75264 api_server.go:103] status: https://192.168.72.190:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 18:18:07.595804   75264 api_server.go:253] Checking apiserver healthz at https://192.168.72.190:8444/healthz ...
	I0920 18:18:07.600025   75264 api_server.go:279] https://192.168.72.190:8444/healthz returned 200:
	ok
	I0920 18:18:07.606598   75264 api_server.go:141] control plane version: v1.31.1
	I0920 18:18:07.606626   75264 api_server.go:131] duration metric: took 3.511479158s to wait for apiserver health ...
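Note: the healthz probes above progress from connection refused, to 403 for the anonymous user, to 500 while post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) finish, and finally to 200. A self-contained Go sketch of polling such an endpoint (hypothetical helper; minikube's own retry logic lives in api_server.go) could be:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls an apiserver /healthz URL until it returns 200 OK or the
// timeout elapses, tolerating connection errors and non-200 responses while the
// control plane bootstraps.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The apiserver serves a self-signed certificate here; a real client
			// would pin the cluster CA instead of skipping verification.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}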
	I0920 18:18:07.606637   75264 cni.go:84] Creating CNI manager for ""
	I0920 18:18:07.606645   75264 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 18:18:07.608423   75264 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 18:18:02.992412   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:02.992887   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:18:02.992919   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:18:02.992822   76990 retry.go:31] will retry after 2.100326569s: waiting for machine to come up
	I0920 18:18:05.095088   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:05.095546   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:18:05.095569   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:18:05.095507   76990 retry.go:31] will retry after 1.758181729s: waiting for machine to come up
	I0920 18:18:06.855172   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:06.855654   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:18:06.855678   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:18:06.855606   76990 retry.go:31] will retry after 2.478116743s: waiting for machine to come up
	I0920 18:18:05.694350   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:08.191644   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
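The repeated "Ready":"False" polls above are the kind of wait loop behind the metrics-server related timeouts in this run. A quick inspection sketch for why such a pod never becomes Ready (assuming the addon's usual k8s-app=metrics-server label in kube-system; point kubectl at whichever profile is being debugged):

	# events usually name the blocker (image pull, scheduling, readiness probe)
	kubectl -n kube-system describe pod -l k8s-app=metrics-server
	# container logs, if the pod got far enough to start
	kubectl -n kube-system logs -l k8s-app=metrics-server --tail=50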
	I0920 18:18:07.609444   75264 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 18:18:07.620420   75264 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0920 18:18:07.637796   75264 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 18:18:07.652676   75264 system_pods.go:59] 8 kube-system pods found
	I0920 18:18:07.652710   75264 system_pods.go:61] "coredns-7c65d6cfc9-dmdfb" [8e4ffdb3-37e0-4498-bdf5-d6e7dbcad020] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0920 18:18:07.652721   75264 system_pods.go:61] "etcd-default-k8s-diff-port-553719" [e8de0e96-5f3a-4f3f-ae69-c5da7d3c2eb7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0920 18:18:07.652727   75264 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-553719" [5164e26c-7fcd-41dc-865f-429046a9ad61] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0920 18:18:07.652735   75264 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-553719" [b2c00a76-9645-4c63-9812-43e6ed9f1d4f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0920 18:18:07.652741   75264 system_pods.go:61] "kube-proxy-p9crq" [83e0f53d-6960-42c4-904d-ea85ba9160f4] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0920 18:18:07.652746   75264 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-553719" [42155f7b-95bb-467b-acf5-89eb4f80cf76] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0920 18:18:07.652751   75264 system_pods.go:61] "metrics-server-6867b74b74-vtl79" [29e0b6eb-22a9-4e37-97f9-83b48cc38193] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 18:18:07.652760   75264 system_pods.go:61] "storage-provisioner" [6fad2d07-f99e-45ac-9657-bce6d73d7fce] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0920 18:18:07.652769   75264 system_pods.go:74] duration metric: took 14.950839ms to wait for pod list to return data ...
	I0920 18:18:07.652780   75264 node_conditions.go:102] verifying NodePressure condition ...
	I0920 18:18:07.658890   75264 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 18:18:07.658927   75264 node_conditions.go:123] node cpu capacity is 2
	I0920 18:18:07.658941   75264 node_conditions.go:105] duration metric: took 6.15496ms to run NodePressure ...
	I0920 18:18:07.658964   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:18:07.932231   75264 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0920 18:18:07.936474   75264 kubeadm.go:739] kubelet initialised
	I0920 18:18:07.936500   75264 kubeadm.go:740] duration metric: took 4.236071ms waiting for restarted kubelet to initialise ...
	I0920 18:18:07.936507   75264 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:18:07.941918   75264 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-dmdfb" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:07.947609   75264 pod_ready.go:98] node "default-k8s-diff-port-553719" hosting pod "coredns-7c65d6cfc9-dmdfb" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-553719" has status "Ready":"False"
	I0920 18:18:07.947642   75264 pod_ready.go:82] duration metric: took 5.693968ms for pod "coredns-7c65d6cfc9-dmdfb" in "kube-system" namespace to be "Ready" ...
	E0920 18:18:07.947653   75264 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-553719" hosting pod "coredns-7c65d6cfc9-dmdfb" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-553719" has status "Ready":"False"
	I0920 18:18:07.947660   75264 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-553719" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:07.954317   75264 pod_ready.go:98] node "default-k8s-diff-port-553719" hosting pod "etcd-default-k8s-diff-port-553719" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-553719" has status "Ready":"False"
	I0920 18:18:07.954358   75264 pod_ready.go:82] duration metric: took 6.689603ms for pod "etcd-default-k8s-diff-port-553719" in "kube-system" namespace to be "Ready" ...
	E0920 18:18:07.954373   75264 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-553719" hosting pod "etcd-default-k8s-diff-port-553719" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-553719" has status "Ready":"False"
	I0920 18:18:07.954382   75264 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-553719" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:07.966099   75264 pod_ready.go:98] node "default-k8s-diff-port-553719" hosting pod "kube-apiserver-default-k8s-diff-port-553719" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-553719" has status "Ready":"False"
	I0920 18:18:07.966126   75264 pod_ready.go:82] duration metric: took 11.735727ms for pod "kube-apiserver-default-k8s-diff-port-553719" in "kube-system" namespace to be "Ready" ...
	E0920 18:18:07.966141   75264 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-553719" hosting pod "kube-apiserver-default-k8s-diff-port-553719" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-553719" has status "Ready":"False"
	I0920 18:18:07.966154   75264 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-553719" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:08.041768   75264 pod_ready.go:98] node "default-k8s-diff-port-553719" hosting pod "kube-controller-manager-default-k8s-diff-port-553719" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-553719" has status "Ready":"False"
	I0920 18:18:08.041794   75264 pod_ready.go:82] duration metric: took 75.630916ms for pod "kube-controller-manager-default-k8s-diff-port-553719" in "kube-system" namespace to be "Ready" ...
	E0920 18:18:08.041805   75264 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-553719" hosting pod "kube-controller-manager-default-k8s-diff-port-553719" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-553719" has status "Ready":"False"
	I0920 18:18:08.041816   75264 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-p9crq" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:08.440423   75264 pod_ready.go:98] node "default-k8s-diff-port-553719" hosting pod "kube-proxy-p9crq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-553719" has status "Ready":"False"
	I0920 18:18:08.440459   75264 pod_ready.go:82] duration metric: took 398.635469ms for pod "kube-proxy-p9crq" in "kube-system" namespace to be "Ready" ...
	E0920 18:18:08.440471   75264 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-553719" hosting pod "kube-proxy-p9crq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-553719" has status "Ready":"False"
	I0920 18:18:08.440480   75264 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-553719" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:08.841917   75264 pod_ready.go:98] node "default-k8s-diff-port-553719" hosting pod "kube-scheduler-default-k8s-diff-port-553719" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-553719" has status "Ready":"False"
	I0920 18:18:08.841953   75264 pod_ready.go:82] duration metric: took 401.459059ms for pod "kube-scheduler-default-k8s-diff-port-553719" in "kube-system" namespace to be "Ready" ...
	E0920 18:18:08.841968   75264 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-553719" hosting pod "kube-scheduler-default-k8s-diff-port-553719" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-553719" has status "Ready":"False"
	I0920 18:18:08.841977   75264 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:09.241198   75264 pod_ready.go:98] node "default-k8s-diff-port-553719" hosting pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-553719" has status "Ready":"False"
	I0920 18:18:09.241225   75264 pod_ready.go:82] duration metric: took 399.238784ms for pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace to be "Ready" ...
	E0920 18:18:09.241237   75264 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-553719" hosting pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-553719" has status "Ready":"False"
	I0920 18:18:09.241245   75264 pod_ready.go:39] duration metric: took 1.304729447s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:18:09.241259   75264 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 18:18:09.253519   75264 ops.go:34] apiserver oom_adj: -16
	I0920 18:18:09.253546   75264 kubeadm.go:597] duration metric: took 7.96895197s to restartPrimaryControlPlane
	I0920 18:18:09.253558   75264 kubeadm.go:394] duration metric: took 8.024395552s to StartCluster
	I0920 18:18:09.253586   75264 settings.go:142] acquiring lock: {Name:mk2a13c58cdc0faf8cddca5d6716175d45db9bfd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:18:09.253682   75264 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19672-8777/kubeconfig
	I0920 18:18:09.255907   75264 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/kubeconfig: {Name:mkf32a4c736808e023459b2f0e40188618a38db1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:18:09.256208   75264 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.190 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 18:18:09.256322   75264 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0920 18:18:09.256394   75264 config.go:182] Loaded profile config "default-k8s-diff-port-553719": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:18:09.256420   75264 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-553719"
	I0920 18:18:09.256428   75264 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-553719"
	I0920 18:18:09.256440   75264 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-553719"
	I0920 18:18:09.256448   75264 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-553719"
	I0920 18:18:09.256457   75264 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-553719"
	W0920 18:18:09.256468   75264 addons.go:243] addon metrics-server should already be in state true
	I0920 18:18:09.256496   75264 host.go:66] Checking if "default-k8s-diff-port-553719" exists ...
	I0920 18:18:09.256441   75264 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-553719"
	W0920 18:18:09.256569   75264 addons.go:243] addon storage-provisioner should already be in state true
	I0920 18:18:09.256602   75264 host.go:66] Checking if "default-k8s-diff-port-553719" exists ...
	I0920 18:18:09.256814   75264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:18:09.256844   75264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:18:09.256882   75264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:18:09.256926   75264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:18:09.256930   75264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:18:09.256957   75264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:18:09.258738   75264 out.go:177] * Verifying Kubernetes components...
	I0920 18:18:09.259893   75264 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:18:09.272294   75264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45635
	I0920 18:18:09.272357   75264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39993
	I0920 18:18:09.272304   75264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32923
	I0920 18:18:09.272766   75264 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:18:09.272889   75264 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:18:09.272918   75264 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:18:09.273279   75264 main.go:141] libmachine: Using API Version  1
	I0920 18:18:09.273294   75264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:18:09.273416   75264 main.go:141] libmachine: Using API Version  1
	I0920 18:18:09.273438   75264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:18:09.273457   75264 main.go:141] libmachine: Using API Version  1
	I0920 18:18:09.273478   75264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:18:09.273657   75264 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:18:09.273850   75264 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:18:09.273922   75264 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:18:09.274017   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetState
	I0920 18:18:09.274244   75264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:18:09.274287   75264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:18:09.274399   75264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:18:09.274430   75264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:18:09.277535   75264 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-553719"
	W0920 18:18:09.277556   75264 addons.go:243] addon default-storageclass should already be in state true
	I0920 18:18:09.277586   75264 host.go:66] Checking if "default-k8s-diff-port-553719" exists ...
	I0920 18:18:09.277990   75264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:18:09.278041   75264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:18:09.290955   75264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39845
	I0920 18:18:09.291487   75264 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:18:09.292058   75264 main.go:141] libmachine: Using API Version  1
	I0920 18:18:09.292087   75264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:18:09.292409   75264 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:18:09.292607   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetState
	I0920 18:18:09.293351   75264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44575
	I0920 18:18:09.293790   75264 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:18:09.294056   75264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44517
	I0920 18:18:09.294412   75264 main.go:141] libmachine: Using API Version  1
	I0920 18:18:09.294438   75264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:18:09.294540   75264 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:18:09.294854   75264 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:18:09.294902   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .DriverName
	I0920 18:18:09.295362   75264 main.go:141] libmachine: Using API Version  1
	I0920 18:18:09.295387   75264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:18:09.295521   75264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:18:09.295569   75264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:18:09.295783   75264 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:18:09.295993   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetState
	I0920 18:18:09.297231   75264 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0920 18:18:09.298214   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .DriverName
	I0920 18:18:09.298825   75264 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0920 18:18:09.298849   75264 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0920 18:18:09.298870   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHHostname
	I0920 18:18:09.299849   75264 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:18:09.301018   75264 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 18:18:09.301034   75264 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 18:18:09.301048   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHHostname
	I0920 18:18:09.302841   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:18:09.303141   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:18:09.303165   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:18:09.303335   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHPort
	I0920 18:18:09.303491   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHKeyPath
	I0920 18:18:09.303627   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHUsername
	I0920 18:18:09.303772   75264 sshutil.go:53] new ssh client: &{IP:192.168.72.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/default-k8s-diff-port-553719/id_rsa Username:docker}
	I0920 18:18:09.304104   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:18:09.304507   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:18:09.304552   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:18:09.304623   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHPort
	I0920 18:18:09.304786   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHKeyPath
	I0920 18:18:09.304946   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHUsername
	I0920 18:18:09.305085   75264 sshutil.go:53] new ssh client: &{IP:192.168.72.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/default-k8s-diff-port-553719/id_rsa Username:docker}
	I0920 18:18:09.312912   75264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35085
	I0920 18:18:09.313277   75264 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:18:09.313780   75264 main.go:141] libmachine: Using API Version  1
	I0920 18:18:09.313801   75264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:18:09.314355   75264 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:18:09.314525   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetState
	I0920 18:18:09.316088   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .DriverName
	I0920 18:18:09.316415   75264 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 18:18:09.316429   75264 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 18:18:09.316448   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHHostname
	I0920 18:18:09.319116   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:18:09.319484   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:18:09.319512   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:18:09.319664   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHPort
	I0920 18:18:09.319832   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHKeyPath
	I0920 18:18:09.319984   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHUsername
	I0920 18:18:09.320098   75264 sshutil.go:53] new ssh client: &{IP:192.168.72.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/default-k8s-diff-port-553719/id_rsa Username:docker}
	I0920 18:18:09.457570   75264 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:18:09.476315   75264 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-553719" to be "Ready" ...
	I0920 18:18:09.599420   75264 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0920 18:18:09.599442   75264 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0920 18:18:09.600777   75264 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 18:18:09.621891   75264 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 18:18:09.626217   75264 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0920 18:18:09.626251   75264 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0920 18:18:09.675537   75264 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 18:18:09.675569   75264 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0920 18:18:09.723167   75264 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 18:18:10.813006   75264 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.191083617s)
	I0920 18:18:10.813080   75264 main.go:141] libmachine: Making call to close driver server
	I0920 18:18:10.813095   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .Close
	I0920 18:18:10.813403   75264 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:18:10.813452   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | Closing plugin on server side
	I0920 18:18:10.813455   75264 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:18:10.813497   75264 main.go:141] libmachine: Making call to close driver server
	I0920 18:18:10.813518   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .Close
	I0920 18:18:10.813748   75264 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:18:10.813764   75264 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:18:10.813783   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | Closing plugin on server side
	I0920 18:18:10.813975   75264 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.213167075s)
	I0920 18:18:10.814018   75264 main.go:141] libmachine: Making call to close driver server
	I0920 18:18:10.814034   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .Close
	I0920 18:18:10.814323   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | Closing plugin on server side
	I0920 18:18:10.814325   75264 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:18:10.814345   75264 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:18:10.814358   75264 main.go:141] libmachine: Making call to close driver server
	I0920 18:18:10.814368   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .Close
	I0920 18:18:10.814653   75264 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:18:10.814673   75264 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:18:10.821768   75264 main.go:141] libmachine: Making call to close driver server
	I0920 18:18:10.821785   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .Close
	I0920 18:18:10.822016   75264 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:18:10.822034   75264 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:18:10.846062   75264 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.122849108s)
	I0920 18:18:10.846122   75264 main.go:141] libmachine: Making call to close driver server
	I0920 18:18:10.846139   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .Close
	I0920 18:18:10.846441   75264 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:18:10.846469   75264 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:18:10.846469   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | Closing plugin on server side
	I0920 18:18:10.846479   75264 main.go:141] libmachine: Making call to close driver server
	I0920 18:18:10.846488   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .Close
	I0920 18:18:10.846715   75264 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:18:10.846737   75264 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:18:10.846748   75264 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-553719"
	I0920 18:18:10.846749   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | Closing plugin on server side
	I0920 18:18:10.848702   75264 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0920 18:18:10.849990   75264 addons.go:510] duration metric: took 1.593667614s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0920 18:18:11.480490   75264 node_ready.go:53] node "default-k8s-diff-port-553719" has status "Ready":"False"
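At this point storage-provisioner, default-storageclass and metrics-server are enabled, but the node still reports "Ready":"False", which is why every per-pod wait above was skipped. The same readiness gates can be followed by hand (a sketch, assuming minikube's default of naming the kubeconfig context after the profile and the addon keeping its default metrics-server deployment name):

	# wait for the node, then for the metrics-server rollout
	kubectl --context default-k8s-diff-port-553719 wait node/default-k8s-diff-port-553719 --for=condition=Ready --timeout=6m
	kubectl --context default-k8s-diff-port-553719 -n kube-system rollout status deploy/metrics-server --timeout=4m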
	I0920 18:18:09.334928   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:09.335367   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:18:09.335402   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:18:09.335314   76990 retry.go:31] will retry after 4.194120768s: waiting for machine to come up
	I0920 18:18:10.192078   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:12.192186   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:14.778926   74753 start.go:364] duration metric: took 54.584395971s to acquireMachinesLock for "no-preload-956403"
	I0920 18:18:14.778979   74753 start.go:96] Skipping create...Using existing machine configuration
	I0920 18:18:14.778987   74753 fix.go:54] fixHost starting: 
	I0920 18:18:14.779392   74753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:18:14.779430   74753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:18:14.796004   74753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45761
	I0920 18:18:14.796509   74753 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:18:14.797006   74753 main.go:141] libmachine: Using API Version  1
	I0920 18:18:14.797028   74753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:18:14.797330   74753 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:18:14.797499   74753 main.go:141] libmachine: (no-preload-956403) Calling .DriverName
	I0920 18:18:14.797650   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetState
	I0920 18:18:14.799376   74753 fix.go:112] recreateIfNeeded on no-preload-956403: state=Stopped err=<nil>
	I0920 18:18:14.799400   74753 main.go:141] libmachine: (no-preload-956403) Calling .DriverName
	W0920 18:18:14.799564   74753 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 18:18:14.802451   74753 out.go:177] * Restarting existing kvm2 VM for "no-preload-956403" ...
	I0920 18:18:13.533702   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:13.534281   75577 main.go:141] libmachine: (old-k8s-version-744025) Found IP for machine: 192.168.39.207
	I0920 18:18:13.534309   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has current primary IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:13.534314   75577 main.go:141] libmachine: (old-k8s-version-744025) Reserving static IP address...
	I0920 18:18:13.534725   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "old-k8s-version-744025", mac: "52:54:00:e5:57:41", ip: "192.168.39.207"} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:13.534761   75577 main.go:141] libmachine: (old-k8s-version-744025) Reserved static IP address: 192.168.39.207
	I0920 18:18:13.534783   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | skip adding static IP to network mk-old-k8s-version-744025 - found existing host DHCP lease matching {name: "old-k8s-version-744025", mac: "52:54:00:e5:57:41", ip: "192.168.39.207"}
	I0920 18:18:13.534800   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | Getting to WaitForSSH function...
	I0920 18:18:13.534816   75577 main.go:141] libmachine: (old-k8s-version-744025) Waiting for SSH to be available...
	I0920 18:18:13.536879   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:13.537271   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:13.537301   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:13.537391   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | Using SSH client type: external
	I0920 18:18:13.537439   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | Using SSH private key: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/old-k8s-version-744025/id_rsa (-rw-------)
	I0920 18:18:13.537476   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.207 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19672-8777/.minikube/machines/old-k8s-version-744025/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 18:18:13.537486   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | About to run SSH command:
	I0920 18:18:13.537498   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | exit 0
	I0920 18:18:13.662214   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | SSH cmd err, output: <nil>: 
	I0920 18:18:13.662565   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetConfigRaw
	I0920 18:18:13.663269   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetIP
	I0920 18:18:13.666068   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:13.666530   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:13.666561   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:13.666899   75577 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/old-k8s-version-744025/config.json ...
	I0920 18:18:13.667111   75577 machine.go:93] provisionDockerMachine start ...
	I0920 18:18:13.667129   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .DriverName
	I0920 18:18:13.667331   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHHostname
	I0920 18:18:13.669347   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:13.669717   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:13.669743   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:13.669908   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHPort
	I0920 18:18:13.670167   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:18:13.670334   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:18:13.670453   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHUsername
	I0920 18:18:13.670583   75577 main.go:141] libmachine: Using SSH client type: native
	I0920 18:18:13.670759   75577 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.207 22 <nil> <nil>}
	I0920 18:18:13.670770   75577 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 18:18:13.774059   75577 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0920 18:18:13.774093   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetMachineName
	I0920 18:18:13.774406   75577 buildroot.go:166] provisioning hostname "old-k8s-version-744025"
	I0920 18:18:13.774434   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetMachineName
	I0920 18:18:13.774618   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHHostname
	I0920 18:18:13.777175   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:13.777482   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:13.777513   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:13.777633   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHPort
	I0920 18:18:13.777803   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:18:13.777966   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:18:13.778082   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHUsername
	I0920 18:18:13.778235   75577 main.go:141] libmachine: Using SSH client type: native
	I0920 18:18:13.778404   75577 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.207 22 <nil> <nil>}
	I0920 18:18:13.778417   75577 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-744025 && echo "old-k8s-version-744025" | sudo tee /etc/hostname
	I0920 18:18:13.900180   75577 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-744025
	
	I0920 18:18:13.900224   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHHostname
	I0920 18:18:13.902958   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:13.903309   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:13.903340   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:13.903543   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHPort
	I0920 18:18:13.903762   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:18:13.903931   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:18:13.904051   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHUsername
	I0920 18:18:13.904214   75577 main.go:141] libmachine: Using SSH client type: native
	I0920 18:18:13.904393   75577 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.207 22 <nil> <nil>}
	I0920 18:18:13.904409   75577 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-744025' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-744025/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-744025' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 18:18:14.023748   75577 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 18:18:14.023781   75577 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19672-8777/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-8777/.minikube}
	I0920 18:18:14.023834   75577 buildroot.go:174] setting up certificates
	I0920 18:18:14.023851   75577 provision.go:84] configureAuth start
	I0920 18:18:14.023866   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetMachineName
	I0920 18:18:14.024154   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetIP
	I0920 18:18:14.027240   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.027640   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:14.027778   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.027867   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHHostname
	I0920 18:18:14.030383   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.030741   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:14.030765   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.030915   75577 provision.go:143] copyHostCerts
	I0920 18:18:14.030979   75577 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem, removing ...
	I0920 18:18:14.030999   75577 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem
	I0920 18:18:14.031072   75577 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem (1082 bytes)
	I0920 18:18:14.031188   75577 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem, removing ...
	I0920 18:18:14.031205   75577 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem
	I0920 18:18:14.031240   75577 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem (1123 bytes)
	I0920 18:18:14.031340   75577 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem, removing ...
	I0920 18:18:14.031351   75577 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem
	I0920 18:18:14.031378   75577 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem (1675 bytes)
	I0920 18:18:14.031455   75577 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-744025 san=[127.0.0.1 192.168.39.207 localhost minikube old-k8s-version-744025]
	I0920 18:18:14.140775   75577 provision.go:177] copyRemoteCerts
	I0920 18:18:14.140847   75577 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 18:18:14.140883   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHHostname
	I0920 18:18:14.143599   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.144016   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:14.144062   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.144199   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHPort
	I0920 18:18:14.144377   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:18:14.144526   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHUsername
	I0920 18:18:14.144656   75577 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/old-k8s-version-744025/id_rsa Username:docker}
	I0920 18:18:14.228286   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 18:18:14.257293   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0920 18:18:14.281335   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0920 18:18:14.305736   75577 provision.go:87] duration metric: took 281.853458ms to configureAuth
	I0920 18:18:14.305762   75577 buildroot.go:189] setting minikube options for container-runtime
	I0920 18:18:14.305983   75577 config.go:182] Loaded profile config "old-k8s-version-744025": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0920 18:18:14.306076   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHHostname
	I0920 18:18:14.308833   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.309220   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:14.309244   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.309535   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHPort
	I0920 18:18:14.309779   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:18:14.309974   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:18:14.310124   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHUsername
	I0920 18:18:14.310316   75577 main.go:141] libmachine: Using SSH client type: native
	I0920 18:18:14.310535   75577 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.207 22 <nil> <nil>}
	I0920 18:18:14.310551   75577 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 18:18:14.536416   75577 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 18:18:14.536442   75577 machine.go:96] duration metric: took 869.318772ms to provisionDockerMachine
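provisionDockerMachine has just written CRIO_MINIKUBE_OPTIONS (the --insecure-registry 10.96.0.0/12 entry above) and restarted crio. A one-line sanity check over SSH (a sketch using minikube's own ssh helper and this run's profile name):

	# confirm the option landed and crio came back up
	minikube -p old-k8s-version-744025 ssh -- 'cat /etc/sysconfig/crio.minikube && sudo systemctl is-active crio'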
	I0920 18:18:14.536456   75577 start.go:293] postStartSetup for "old-k8s-version-744025" (driver="kvm2")
	I0920 18:18:14.536468   75577 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 18:18:14.536510   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .DriverName
	I0920 18:18:14.536803   75577 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 18:18:14.536831   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHHostname
	I0920 18:18:14.539503   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.539923   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:14.539951   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.540126   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHPort
	I0920 18:18:14.540303   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:18:14.540463   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHUsername
	I0920 18:18:14.540649   75577 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/old-k8s-version-744025/id_rsa Username:docker}
	I0920 18:18:14.625866   75577 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 18:18:14.630527   75577 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 18:18:14.630551   75577 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8777/.minikube/addons for local assets ...
	I0920 18:18:14.630637   75577 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8777/.minikube/files for local assets ...
	I0920 18:18:14.630727   75577 filesync.go:149] local asset: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem -> 159732.pem in /etc/ssl/certs
	I0920 18:18:14.630840   75577 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 18:18:14.640965   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem --> /etc/ssl/certs/159732.pem (1708 bytes)
	I0920 18:18:14.668227   75577 start.go:296] duration metric: took 131.756559ms for postStartSetup
	I0920 18:18:14.668268   75577 fix.go:56] duration metric: took 19.581104117s for fixHost
	I0920 18:18:14.668295   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHHostname
	I0920 18:18:14.671138   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.671520   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:14.671549   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.671777   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHPort
	I0920 18:18:14.671981   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:18:14.672141   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:18:14.672280   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHUsername
	I0920 18:18:14.672436   75577 main.go:141] libmachine: Using SSH client type: native
	I0920 18:18:14.672606   75577 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.207 22 <nil> <nil>}
	I0920 18:18:14.672616   75577 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 18:18:14.778752   75577 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726856294.753230182
	
	I0920 18:18:14.778785   75577 fix.go:216] guest clock: 1726856294.753230182
	I0920 18:18:14.778796   75577 fix.go:229] Guest: 2024-09-20 18:18:14.753230182 +0000 UTC Remote: 2024-09-20 18:18:14.668273351 +0000 UTC m=+241.967312055 (delta=84.956831ms)
	I0920 18:18:14.778827   75577 fix.go:200] guest clock delta is within tolerance: 84.956831ms
	I0920 18:18:14.778836   75577 start.go:83] releasing machines lock for "old-k8s-version-744025", held for 19.691716285s
	I0920 18:18:14.778874   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .DriverName
	I0920 18:18:14.779135   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetIP
	I0920 18:18:14.781932   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.782357   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:14.782386   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.782572   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .DriverName
	I0920 18:18:14.783124   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .DriverName
	I0920 18:18:14.783328   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .DriverName
	I0920 18:18:14.783401   75577 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 18:18:14.783465   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHHostname
	I0920 18:18:14.783519   75577 ssh_runner.go:195] Run: cat /version.json
	I0920 18:18:14.783552   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHHostname
	I0920 18:18:14.786645   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.786817   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.786994   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:14.787023   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.787207   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:14.787259   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.787264   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHPort
	I0920 18:18:14.787404   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHPort
	I0920 18:18:14.787469   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:18:14.787561   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:18:14.787612   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHUsername
	I0920 18:18:14.787711   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHUsername
	I0920 18:18:14.787845   75577 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/old-k8s-version-744025/id_rsa Username:docker}
	I0920 18:18:14.787845   75577 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/old-k8s-version-744025/id_rsa Username:docker}
	I0920 18:18:14.872183   75577 ssh_runner.go:195] Run: systemctl --version
	I0920 18:18:14.913275   75577 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 18:18:15.071299   75577 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 18:18:15.078725   75577 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 18:18:15.078806   75577 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 18:18:15.101497   75577 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 18:18:15.101522   75577 start.go:495] detecting cgroup driver to use...
	I0920 18:18:15.101579   75577 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 18:18:15.120626   75577 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 18:18:15.137297   75577 docker.go:217] disabling cri-docker service (if available) ...
	I0920 18:18:15.137401   75577 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 18:18:15.152152   75577 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 18:18:15.167359   75577 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 18:18:15.288763   75577 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 18:18:15.475429   75577 docker.go:233] disabling docker service ...
	I0920 18:18:15.475512   75577 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 18:18:15.493331   75577 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 18:18:15.510326   75577 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 18:18:15.629119   75577 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 18:18:15.758923   75577 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 18:18:15.776778   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 18:18:15.798980   75577 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0920 18:18:15.799042   75577 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:18:15.810264   75577 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 18:18:15.810322   75577 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:18:15.821060   75577 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:18:15.832685   75577 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:18:15.843026   75577 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 18:18:15.853996   75577 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 18:18:15.864126   75577 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 18:18:15.864183   75577 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 18:18:15.878216   75577 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 18:18:15.889820   75577 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:18:16.015555   75577 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 18:18:16.118797   75577 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 18:18:16.118880   75577 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 18:18:16.124430   75577 start.go:563] Will wait 60s for crictl version
	I0920 18:18:16.124485   75577 ssh_runner.go:195] Run: which crictl
	I0920 18:18:16.128632   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 18:18:16.165596   75577 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 18:18:16.165707   75577 ssh_runner.go:195] Run: crio --version
	I0920 18:18:16.198772   75577 ssh_runner.go:195] Run: crio --version
	I0920 18:18:16.238511   75577 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0920 18:18:13.980331   75264 node_ready.go:53] node "default-k8s-diff-port-553719" has status "Ready":"False"
	I0920 18:18:15.981435   75264 node_ready.go:53] node "default-k8s-diff-port-553719" has status "Ready":"False"
	I0920 18:18:16.982412   75264 node_ready.go:49] node "default-k8s-diff-port-553719" has status "Ready":"True"
	I0920 18:18:16.982441   75264 node_ready.go:38] duration metric: took 7.506086526s for node "default-k8s-diff-port-553719" to be "Ready" ...
	I0920 18:18:16.982457   75264 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:18:16.989870   75264 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-dmdfb" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:14.803646   74753 main.go:141] libmachine: (no-preload-956403) Calling .Start
	I0920 18:18:14.803856   74753 main.go:141] libmachine: (no-preload-956403) Ensuring networks are active...
	I0920 18:18:14.804594   74753 main.go:141] libmachine: (no-preload-956403) Ensuring network default is active
	I0920 18:18:14.804920   74753 main.go:141] libmachine: (no-preload-956403) Ensuring network mk-no-preload-956403 is active
	I0920 18:18:14.805341   74753 main.go:141] libmachine: (no-preload-956403) Getting domain xml...
	I0920 18:18:14.806068   74753 main.go:141] libmachine: (no-preload-956403) Creating domain...
	I0920 18:18:16.196663   74753 main.go:141] libmachine: (no-preload-956403) Waiting to get IP...
	I0920 18:18:16.197381   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:16.197762   74753 main.go:141] libmachine: (no-preload-956403) DBG | unable to find current IP address of domain no-preload-956403 in network mk-no-preload-956403
	I0920 18:18:16.197885   74753 main.go:141] libmachine: (no-preload-956403) DBG | I0920 18:18:16.197764   77223 retry.go:31] will retry after 214.087819ms: waiting for machine to come up
	I0920 18:18:16.413365   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:16.413951   74753 main.go:141] libmachine: (no-preload-956403) DBG | unable to find current IP address of domain no-preload-956403 in network mk-no-preload-956403
	I0920 18:18:16.413977   74753 main.go:141] libmachine: (no-preload-956403) DBG | I0920 18:18:16.413916   77223 retry.go:31] will retry after 249.35647ms: waiting for machine to come up
	I0920 18:18:16.665587   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:16.666168   74753 main.go:141] libmachine: (no-preload-956403) DBG | unable to find current IP address of domain no-preload-956403 in network mk-no-preload-956403
	I0920 18:18:16.666203   74753 main.go:141] libmachine: (no-preload-956403) DBG | I0920 18:18:16.666132   77223 retry.go:31] will retry after 374.598012ms: waiting for machine to come up
	I0920 18:18:17.042911   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:17.043594   74753 main.go:141] libmachine: (no-preload-956403) DBG | unable to find current IP address of domain no-preload-956403 in network mk-no-preload-956403
	I0920 18:18:17.043618   74753 main.go:141] libmachine: (no-preload-956403) DBG | I0920 18:18:17.043550   77223 retry.go:31] will retry after 536.252353ms: waiting for machine to come up
	I0920 18:18:17.581141   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:17.581582   74753 main.go:141] libmachine: (no-preload-956403) DBG | unable to find current IP address of domain no-preload-956403 in network mk-no-preload-956403
	I0920 18:18:17.581616   74753 main.go:141] libmachine: (no-preload-956403) DBG | I0920 18:18:17.581528   77223 retry.go:31] will retry after 459.241867ms: waiting for machine to come up
	I0920 18:18:16.239946   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetIP
	I0920 18:18:16.242727   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:16.243147   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:16.243184   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:16.243561   75577 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0920 18:18:16.247928   75577 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 18:18:16.262165   75577 kubeadm.go:883] updating cluster {Name:old-k8s-version-744025 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-744025 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.207 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 18:18:16.262310   75577 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0920 18:18:16.262358   75577 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:18:16.313771   75577 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0920 18:18:16.313854   75577 ssh_runner.go:195] Run: which lz4
	I0920 18:18:16.318361   75577 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 18:18:16.322529   75577 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 18:18:16.322570   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0920 18:18:14.192319   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:16.194161   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:18.699498   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:18.999985   75264 pod_ready.go:103] pod "coredns-7c65d6cfc9-dmdfb" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:21.497825   75264 pod_ready.go:103] pod "coredns-7c65d6cfc9-dmdfb" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:18.042075   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:18.042566   74753 main.go:141] libmachine: (no-preload-956403) DBG | unable to find current IP address of domain no-preload-956403 in network mk-no-preload-956403
	I0920 18:18:18.042603   74753 main.go:141] libmachine: (no-preload-956403) DBG | I0920 18:18:18.042533   77223 retry.go:31] will retry after 833.585895ms: waiting for machine to come up
	I0920 18:18:18.877534   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:18.878023   74753 main.go:141] libmachine: (no-preload-956403) DBG | unable to find current IP address of domain no-preload-956403 in network mk-no-preload-956403
	I0920 18:18:18.878047   74753 main.go:141] libmachine: (no-preload-956403) DBG | I0920 18:18:18.877988   77223 retry.go:31] will retry after 1.035805905s: waiting for machine to come up
	I0920 18:18:19.915316   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:19.915735   74753 main.go:141] libmachine: (no-preload-956403) DBG | unable to find current IP address of domain no-preload-956403 in network mk-no-preload-956403
	I0920 18:18:19.915859   74753 main.go:141] libmachine: (no-preload-956403) DBG | I0920 18:18:19.915787   77223 retry.go:31] will retry after 978.827371ms: waiting for machine to come up
	I0920 18:18:20.896532   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:20.897185   74753 main.go:141] libmachine: (no-preload-956403) DBG | unable to find current IP address of domain no-preload-956403 in network mk-no-preload-956403
	I0920 18:18:20.897216   74753 main.go:141] libmachine: (no-preload-956403) DBG | I0920 18:18:20.897122   77223 retry.go:31] will retry after 1.808853939s: waiting for machine to come up
	I0920 18:18:17.953626   75577 crio.go:462] duration metric: took 1.635321078s to copy over tarball
	I0920 18:18:17.953717   75577 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0920 18:18:21.049561   75577 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.095809102s)
	I0920 18:18:21.049588   75577 crio.go:469] duration metric: took 3.095926273s to extract the tarball
	I0920 18:18:21.049596   75577 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0920 18:18:21.093521   75577 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:18:21.132318   75577 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0920 18:18:21.132361   75577 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0920 18:18:21.132426   75577 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:18:21.132449   75577 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0920 18:18:21.132432   75577 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0920 18:18:21.132551   75577 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 18:18:21.132587   75577 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0920 18:18:21.132708   75577 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0920 18:18:21.132819   75577 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0920 18:18:21.132557   75577 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0920 18:18:21.134217   75577 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0920 18:18:21.134256   75577 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0920 18:18:21.134277   75577 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0920 18:18:21.134313   75577 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0920 18:18:21.134327   75577 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 18:18:21.134341   75577 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0920 18:18:21.134266   75577 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0920 18:18:21.134356   75577 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:18:21.421546   75577 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0920 18:18:21.467311   75577 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0920 18:18:21.467349   75577 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0920 18:18:21.467400   75577 ssh_runner.go:195] Run: which crictl
	I0920 18:18:21.471158   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0920 18:18:21.474543   75577 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0920 18:18:21.480501   75577 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 18:18:21.499826   75577 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0920 18:18:21.499835   75577 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0920 18:18:21.505712   75577 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0920 18:18:21.514377   75577 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0920 18:18:21.514897   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0920 18:18:21.601531   75577 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0920 18:18:21.601582   75577 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0920 18:18:21.601635   75577 ssh_runner.go:195] Run: which crictl
	I0920 18:18:21.660417   75577 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0920 18:18:21.660457   75577 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 18:18:21.660504   75577 ssh_runner.go:195] Run: which crictl
	I0920 18:18:21.660528   75577 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0920 18:18:21.660571   75577 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0920 18:18:21.660625   75577 ssh_runner.go:195] Run: which crictl
	I0920 18:18:21.667538   75577 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0920 18:18:21.667558   75577 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0920 18:18:21.667583   75577 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0920 18:18:21.667595   75577 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0920 18:18:21.667633   75577 ssh_runner.go:195] Run: which crictl
	I0920 18:18:21.667652   75577 ssh_runner.go:195] Run: which crictl
	I0920 18:18:21.676529   75577 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0920 18:18:21.676580   75577 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0920 18:18:21.676602   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0920 18:18:21.676633   75577 ssh_runner.go:195] Run: which crictl
	I0920 18:18:21.676643   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0920 18:18:21.680041   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 18:18:21.681078   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0920 18:18:21.681180   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0920 18:18:21.683409   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0920 18:18:21.804439   75577 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0920 18:18:21.804492   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0920 18:18:21.804530   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 18:18:21.804585   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0920 18:18:21.823434   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0920 18:18:21.823497   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0920 18:18:21.823539   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0920 18:18:21.932568   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 18:18:21.932619   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0920 18:18:21.932639   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0920 18:18:21.958001   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0920 18:18:21.971089   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0920 18:18:21.975643   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0920 18:18:22.100118   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0920 18:18:22.100173   75577 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0920 18:18:22.100197   75577 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0920 18:18:22.100276   75577 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0920 18:18:22.100333   75577 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0920 18:18:22.109895   75577 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0920 18:18:22.137798   75577 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0920 18:18:22.331884   75577 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:18:22.476470   75577 cache_images.go:92] duration metric: took 1.344086626s to LoadCachedImages
	W0920 18:18:22.476575   75577 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0920 18:18:22.476595   75577 kubeadm.go:934] updating node { 192.168.39.207 8443 v1.20.0 crio true true} ...
	I0920 18:18:22.476742   75577 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-744025 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.207
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-744025 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 18:18:22.476833   75577 ssh_runner.go:195] Run: crio config
	I0920 18:18:22.528964   75577 cni.go:84] Creating CNI manager for ""
	I0920 18:18:22.528990   75577 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 18:18:22.528999   75577 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 18:18:22.529016   75577 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.207 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-744025 NodeName:old-k8s-version-744025 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.207"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.207 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0920 18:18:22.529173   75577 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.207
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-744025"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.207
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.207"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 18:18:22.529233   75577 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0920 18:18:22.540849   75577 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 18:18:22.540935   75577 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 18:18:22.551148   75577 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0920 18:18:22.569199   75577 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 18:18:22.585909   75577 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0920 18:18:22.604987   75577 ssh_runner.go:195] Run: grep 192.168.39.207	control-plane.minikube.internal$ /etc/hosts
	I0920 18:18:22.609152   75577 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.207	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 18:18:22.628042   75577 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:18:21.191366   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:23.193152   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:23.914443   75264 pod_ready.go:103] pod "coredns-7c65d6cfc9-dmdfb" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:25.002152   75264 pod_ready.go:93] pod "coredns-7c65d6cfc9-dmdfb" in "kube-system" namespace has status "Ready":"True"
	I0920 18:18:25.002185   75264 pod_ready.go:82] duration metric: took 8.012280973s for pod "coredns-7c65d6cfc9-dmdfb" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:25.002198   75264 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-553719" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:25.010541   75264 pod_ready.go:93] pod "etcd-default-k8s-diff-port-553719" in "kube-system" namespace has status "Ready":"True"
	I0920 18:18:25.010570   75264 pod_ready.go:82] duration metric: took 8.362504ms for pod "etcd-default-k8s-diff-port-553719" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:25.010591   75264 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-553719" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:25.023701   75264 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-553719" in "kube-system" namespace has status "Ready":"True"
	I0920 18:18:25.023741   75264 pod_ready.go:82] duration metric: took 13.139423ms for pod "kube-apiserver-default-k8s-diff-port-553719" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:25.023758   75264 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-553719" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:25.031247   75264 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-553719" in "kube-system" namespace has status "Ready":"True"
	I0920 18:18:25.031282   75264 pod_ready.go:82] duration metric: took 7.515474ms for pod "kube-controller-manager-default-k8s-diff-port-553719" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:25.031307   75264 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-p9crq" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:25.119242   75264 pod_ready.go:93] pod "kube-proxy-p9crq" in "kube-system" namespace has status "Ready":"True"
	I0920 18:18:25.119267   75264 pod_ready.go:82] duration metric: took 87.951791ms for pod "kube-proxy-p9crq" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:25.119280   75264 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-553719" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:25.518243   75264 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-553719" in "kube-system" namespace has status "Ready":"True"
	I0920 18:18:25.518267   75264 pod_ready.go:82] duration metric: took 398.979314ms for pod "kube-scheduler-default-k8s-diff-port-553719" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:25.518281   75264 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:22.707778   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:22.708271   74753 main.go:141] libmachine: (no-preload-956403) DBG | unable to find current IP address of domain no-preload-956403 in network mk-no-preload-956403
	I0920 18:18:22.708333   74753 main.go:141] libmachine: (no-preload-956403) DBG | I0920 18:18:22.708161   77223 retry.go:31] will retry after 1.516042611s: waiting for machine to come up
	I0920 18:18:24.225560   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:24.225987   74753 main.go:141] libmachine: (no-preload-956403) DBG | unable to find current IP address of domain no-preload-956403 in network mk-no-preload-956403
	I0920 18:18:24.226041   74753 main.go:141] libmachine: (no-preload-956403) DBG | I0920 18:18:24.225968   77223 retry.go:31] will retry after 2.135874415s: waiting for machine to come up
	I0920 18:18:26.363371   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:26.363861   74753 main.go:141] libmachine: (no-preload-956403) DBG | unable to find current IP address of domain no-preload-956403 in network mk-no-preload-956403
	I0920 18:18:26.363895   74753 main.go:141] libmachine: (no-preload-956403) DBG | I0920 18:18:26.363777   77223 retry.go:31] will retry after 3.29383193s: waiting for machine to come up
	I0920 18:18:22.796651   75577 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:18:22.813789   75577 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/old-k8s-version-744025 for IP: 192.168.39.207
	I0920 18:18:22.813817   75577 certs.go:194] generating shared ca certs ...
	I0920 18:18:22.813848   75577 certs.go:226] acquiring lock for ca certs: {Name:mkc7ef6c737c6bdc3fdd9dcff8f57029c020d8f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:18:22.814045   75577 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key
	I0920 18:18:22.814101   75577 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key
	I0920 18:18:22.814118   75577 certs.go:256] generating profile certs ...
	I0920 18:18:22.814290   75577 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/old-k8s-version-744025/client.key
	I0920 18:18:22.814383   75577 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/old-k8s-version-744025/apiserver.key.3105b99d
	I0920 18:18:22.814445   75577 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/old-k8s-version-744025/proxy-client.key
	I0920 18:18:22.814626   75577 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973.pem (1338 bytes)
	W0920 18:18:22.814660   75577 certs.go:480] ignoring /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973_empty.pem, impossibly tiny 0 bytes
	I0920 18:18:22.814666   75577 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 18:18:22.814691   75577 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem (1082 bytes)
	I0920 18:18:22.814717   75577 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem (1123 bytes)
	I0920 18:18:22.814749   75577 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem (1675 bytes)
	I0920 18:18:22.814813   75577 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem (1708 bytes)
	I0920 18:18:22.815736   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 18:18:22.863832   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 18:18:22.921747   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 18:18:22.959556   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0920 18:18:22.992097   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/old-k8s-version-744025/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0920 18:18:23.027565   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/old-k8s-version-744025/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0920 18:18:23.057374   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/old-k8s-version-744025/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 18:18:23.094290   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/old-k8s-version-744025/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 18:18:23.120095   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973.pem --> /usr/share/ca-certificates/15973.pem (1338 bytes)
	I0920 18:18:23.144374   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem --> /usr/share/ca-certificates/159732.pem (1708 bytes)
	I0920 18:18:23.169431   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 18:18:23.195779   75577 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 18:18:23.212820   75577 ssh_runner.go:195] Run: openssl version
	I0920 18:18:23.218876   75577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 18:18:23.229684   75577 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:18:23.234533   75577 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:18:23.234603   75577 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:18:23.240460   75577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 18:18:23.251940   75577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15973.pem && ln -fs /usr/share/ca-certificates/15973.pem /etc/ssl/certs/15973.pem"
	I0920 18:18:23.263308   75577 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15973.pem
	I0920 18:18:23.268059   75577 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 17:04 /usr/share/ca-certificates/15973.pem
	I0920 18:18:23.268128   75577 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15973.pem
	I0920 18:18:23.274199   75577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15973.pem /etc/ssl/certs/51391683.0"
	I0920 18:18:23.286362   75577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/159732.pem && ln -fs /usr/share/ca-certificates/159732.pem /etc/ssl/certs/159732.pem"
	I0920 18:18:23.303962   75577 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/159732.pem
	I0920 18:18:23.310907   75577 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 17:04 /usr/share/ca-certificates/159732.pem
	I0920 18:18:23.310981   75577 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/159732.pem
	I0920 18:18:23.317881   75577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/159732.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 18:18:23.329247   75577 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 18:18:23.334223   75577 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 18:18:23.340565   75577 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 18:18:23.346929   75577 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 18:18:23.353681   75577 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 18:18:23.359699   75577 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 18:18:23.365749   75577 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0920 18:18:23.371899   75577 kubeadm.go:392] StartCluster: {Name:old-k8s-version-744025 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-744025 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.207 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:18:23.371981   75577 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 18:18:23.372027   75577 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 18:18:23.419904   75577 cri.go:89] found id: ""
	I0920 18:18:23.419982   75577 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 18:18:23.431761   75577 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0920 18:18:23.431782   75577 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0920 18:18:23.431833   75577 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0920 18:18:23.444358   75577 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0920 18:18:23.445545   75577 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-744025" does not appear in /home/jenkins/minikube-integration/19672-8777/kubeconfig
	I0920 18:18:23.446531   75577 kubeconfig.go:62] /home/jenkins/minikube-integration/19672-8777/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-744025" cluster setting kubeconfig missing "old-k8s-version-744025" context setting]
	I0920 18:18:23.447808   75577 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/kubeconfig: {Name:mkf32a4c736808e023459b2f0e40188618a38db1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:18:23.568927   75577 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0920 18:18:23.579991   75577 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.207
	I0920 18:18:23.580025   75577 kubeadm.go:1160] stopping kube-system containers ...
	I0920 18:18:23.580038   75577 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0920 18:18:23.580097   75577 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 18:18:23.625568   75577 cri.go:89] found id: ""
	I0920 18:18:23.625648   75577 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0920 18:18:23.643938   75577 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 18:18:23.654375   75577 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 18:18:23.654398   75577 kubeadm.go:157] found existing configuration files:
	
	I0920 18:18:23.654453   75577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 18:18:23.664335   75577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 18:18:23.664409   75577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 18:18:23.674996   75577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 18:18:23.685310   75577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 18:18:23.685401   75577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 18:18:23.696241   75577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 18:18:23.706386   75577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 18:18:23.706465   75577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 18:18:23.716491   75577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 18:18:23.726566   75577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 18:18:23.726626   75577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 18:18:23.738576   75577 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 18:18:23.749510   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:18:23.877503   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:18:24.789322   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:18:25.054969   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:18:25.152117   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:18:25.249140   75577 api_server.go:52] waiting for apiserver process to appear ...
	I0920 18:18:25.249245   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:25.749427   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:26.249360   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:26.749895   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:27.249636   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:25.194678   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:27.693680   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:27.524378   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:29.524998   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:32.025310   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:29.661278   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:29.661743   74753 main.go:141] libmachine: (no-preload-956403) DBG | unable to find current IP address of domain no-preload-956403 in network mk-no-preload-956403
	I0920 18:18:29.661762   74753 main.go:141] libmachine: (no-preload-956403) DBG | I0920 18:18:29.661714   77223 retry.go:31] will retry after 3.154777794s: waiting for machine to come up
	I0920 18:18:27.749629   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:28.250139   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:28.749691   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:29.249709   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:29.749550   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:30.250186   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:30.749562   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:31.249622   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:31.749925   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:32.250324   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:30.191419   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:32.191873   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:32.820331   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:32.820870   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has current primary IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:32.820895   74753 main.go:141] libmachine: (no-preload-956403) Found IP for machine: 192.168.50.47
	I0920 18:18:32.820908   74753 main.go:141] libmachine: (no-preload-956403) Reserving static IP address...
	I0920 18:18:32.821313   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "no-preload-956403", mac: "52:54:00:b6:13:30", ip: "192.168.50.47"} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:32.821339   74753 main.go:141] libmachine: (no-preload-956403) Reserved static IP address: 192.168.50.47
	I0920 18:18:32.821355   74753 main.go:141] libmachine: (no-preload-956403) DBG | skip adding static IP to network mk-no-preload-956403 - found existing host DHCP lease matching {name: "no-preload-956403", mac: "52:54:00:b6:13:30", ip: "192.168.50.47"}
	I0920 18:18:32.821372   74753 main.go:141] libmachine: (no-preload-956403) DBG | Getting to WaitForSSH function...
	I0920 18:18:32.821389   74753 main.go:141] libmachine: (no-preload-956403) Waiting for SSH to be available...
	I0920 18:18:32.823590   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:32.823894   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:32.823928   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:32.824019   74753 main.go:141] libmachine: (no-preload-956403) DBG | Using SSH client type: external
	I0920 18:18:32.824046   74753 main.go:141] libmachine: (no-preload-956403) DBG | Using SSH private key: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/no-preload-956403/id_rsa (-rw-------)
	I0920 18:18:32.824077   74753 main.go:141] libmachine: (no-preload-956403) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.47 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19672-8777/.minikube/machines/no-preload-956403/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 18:18:32.824090   74753 main.go:141] libmachine: (no-preload-956403) DBG | About to run SSH command:
	I0920 18:18:32.824102   74753 main.go:141] libmachine: (no-preload-956403) DBG | exit 0
	I0920 18:18:32.950122   74753 main.go:141] libmachine: (no-preload-956403) DBG | SSH cmd err, output: <nil>: 
	I0920 18:18:32.950537   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetConfigRaw
	I0920 18:18:32.951187   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetIP
	I0920 18:18:32.953731   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:32.954073   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:32.954102   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:32.954365   74753 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/no-preload-956403/config.json ...
	I0920 18:18:32.954603   74753 machine.go:93] provisionDockerMachine start ...
	I0920 18:18:32.954621   74753 main.go:141] libmachine: (no-preload-956403) Calling .DriverName
	I0920 18:18:32.954814   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHHostname
	I0920 18:18:32.957049   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:32.957379   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:32.957425   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:32.957538   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHPort
	I0920 18:18:32.957717   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHKeyPath
	I0920 18:18:32.957920   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHKeyPath
	I0920 18:18:32.958094   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHUsername
	I0920 18:18:32.958282   74753 main.go:141] libmachine: Using SSH client type: native
	I0920 18:18:32.958494   74753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.47 22 <nil> <nil>}
	I0920 18:18:32.958506   74753 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 18:18:33.058187   74753 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0920 18:18:33.058222   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetMachineName
	I0920 18:18:33.058477   74753 buildroot.go:166] provisioning hostname "no-preload-956403"
	I0920 18:18:33.058508   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetMachineName
	I0920 18:18:33.058668   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHHostname
	I0920 18:18:33.061310   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.061616   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:33.061649   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.061783   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHPort
	I0920 18:18:33.061961   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHKeyPath
	I0920 18:18:33.062089   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHKeyPath
	I0920 18:18:33.062197   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHUsername
	I0920 18:18:33.062340   74753 main.go:141] libmachine: Using SSH client type: native
	I0920 18:18:33.062536   74753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.47 22 <nil> <nil>}
	I0920 18:18:33.062553   74753 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-956403 && echo "no-preload-956403" | sudo tee /etc/hostname
	I0920 18:18:33.175825   74753 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-956403
	
	I0920 18:18:33.175860   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHHostname
	I0920 18:18:33.178483   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.178749   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:33.178769   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.178904   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHPort
	I0920 18:18:33.179077   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHKeyPath
	I0920 18:18:33.179226   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHKeyPath
	I0920 18:18:33.179366   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHUsername
	I0920 18:18:33.179545   74753 main.go:141] libmachine: Using SSH client type: native
	I0920 18:18:33.179710   74753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.47 22 <nil> <nil>}
	I0920 18:18:33.179726   74753 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-956403' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-956403/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-956403' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 18:18:33.290675   74753 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 18:18:33.290701   74753 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19672-8777/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-8777/.minikube}
	I0920 18:18:33.290722   74753 buildroot.go:174] setting up certificates
	I0920 18:18:33.290735   74753 provision.go:84] configureAuth start
	I0920 18:18:33.290747   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetMachineName
	I0920 18:18:33.291015   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetIP
	I0920 18:18:33.293810   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.294244   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:33.294282   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.294337   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHHostname
	I0920 18:18:33.296376   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.296749   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:33.296776   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.296930   74753 provision.go:143] copyHostCerts
	I0920 18:18:33.296985   74753 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem, removing ...
	I0920 18:18:33.296995   74753 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem
	I0920 18:18:33.297048   74753 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem (1082 bytes)
	I0920 18:18:33.297166   74753 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem, removing ...
	I0920 18:18:33.297177   74753 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem
	I0920 18:18:33.297211   74753 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem (1123 bytes)
	I0920 18:18:33.297287   74753 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem, removing ...
	I0920 18:18:33.297296   74753 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem
	I0920 18:18:33.297320   74753 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem (1675 bytes)
	I0920 18:18:33.297387   74753 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem org=jenkins.no-preload-956403 san=[127.0.0.1 192.168.50.47 localhost minikube no-preload-956403]
	I0920 18:18:33.366768   74753 provision.go:177] copyRemoteCerts
	I0920 18:18:33.366830   74753 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 18:18:33.366852   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHHostname
	I0920 18:18:33.369441   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.369755   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:33.369787   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.369958   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHPort
	I0920 18:18:33.370127   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHKeyPath
	I0920 18:18:33.370293   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHUsername
	I0920 18:18:33.370490   74753 sshutil.go:53] new ssh client: &{IP:192.168.50.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/no-preload-956403/id_rsa Username:docker}
	I0920 18:18:33.452070   74753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0920 18:18:33.477564   74753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0920 18:18:33.501002   74753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 18:18:33.526725   74753 provision.go:87] duration metric: took 235.975552ms to configureAuth
	I0920 18:18:33.526755   74753 buildroot.go:189] setting minikube options for container-runtime
	I0920 18:18:33.526943   74753 config.go:182] Loaded profile config "no-preload-956403": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:18:33.527011   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHHostname
	I0920 18:18:33.529870   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.530338   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:33.530371   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.530485   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHPort
	I0920 18:18:33.530708   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHKeyPath
	I0920 18:18:33.530897   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHKeyPath
	I0920 18:18:33.531057   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHUsername
	I0920 18:18:33.531276   74753 main.go:141] libmachine: Using SSH client type: native
	I0920 18:18:33.531497   74753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.47 22 <nil> <nil>}
	I0920 18:18:33.531519   74753 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 18:18:33.758711   74753 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 18:18:33.758739   74753 machine.go:96] duration metric: took 804.122904ms to provisionDockerMachine
	I0920 18:18:33.758753   74753 start.go:293] postStartSetup for "no-preload-956403" (driver="kvm2")
	I0920 18:18:33.758771   74753 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 18:18:33.758796   74753 main.go:141] libmachine: (no-preload-956403) Calling .DriverName
	I0920 18:18:33.759207   74753 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 18:18:33.759262   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHHostname
	I0920 18:18:33.762524   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.762991   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:33.763027   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.763207   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHPort
	I0920 18:18:33.763471   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHKeyPath
	I0920 18:18:33.763643   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHUsername
	I0920 18:18:33.763861   74753 sshutil.go:53] new ssh client: &{IP:192.168.50.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/no-preload-956403/id_rsa Username:docker}
	I0920 18:18:33.844910   74753 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 18:18:33.849111   74753 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 18:18:33.849137   74753 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8777/.minikube/addons for local assets ...
	I0920 18:18:33.849205   74753 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8777/.minikube/files for local assets ...
	I0920 18:18:33.849280   74753 filesync.go:149] local asset: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem -> 159732.pem in /etc/ssl/certs
	I0920 18:18:33.849367   74753 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 18:18:33.858757   74753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem --> /etc/ssl/certs/159732.pem (1708 bytes)
	I0920 18:18:33.883430   74753 start.go:296] duration metric: took 124.663757ms for postStartSetup
	I0920 18:18:33.883468   74753 fix.go:56] duration metric: took 19.104481875s for fixHost
	I0920 18:18:33.883488   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHHostname
	I0920 18:18:33.886703   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.887047   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:33.887076   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.887276   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHPort
	I0920 18:18:33.887502   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHKeyPath
	I0920 18:18:33.887685   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHKeyPath
	I0920 18:18:33.887816   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHUsername
	I0920 18:18:33.887962   74753 main.go:141] libmachine: Using SSH client type: native
	I0920 18:18:33.888137   74753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.47 22 <nil> <nil>}
	I0920 18:18:33.888148   74753 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 18:18:33.994635   74753 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726856313.964367484
	
	I0920 18:18:33.994660   74753 fix.go:216] guest clock: 1726856313.964367484
	I0920 18:18:33.994670   74753 fix.go:229] Guest: 2024-09-20 18:18:33.964367484 +0000 UTC Remote: 2024-09-20 18:18:33.883472234 +0000 UTC m=+356.278282007 (delta=80.89525ms)
	I0920 18:18:33.994695   74753 fix.go:200] guest clock delta is within tolerance: 80.89525ms
	I0920 18:18:33.994701   74753 start.go:83] releasing machines lock for "no-preload-956403", held for 19.215743841s
	I0920 18:18:33.994726   74753 main.go:141] libmachine: (no-preload-956403) Calling .DriverName
	I0920 18:18:33.994976   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetIP
	I0920 18:18:33.997685   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.998089   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:33.998114   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.998291   74753 main.go:141] libmachine: (no-preload-956403) Calling .DriverName
	I0920 18:18:33.998765   74753 main.go:141] libmachine: (no-preload-956403) Calling .DriverName
	I0920 18:18:33.998923   74753 main.go:141] libmachine: (no-preload-956403) Calling .DriverName
	I0920 18:18:33.999007   74753 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 18:18:33.999054   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHHostname
	I0920 18:18:33.999142   74753 ssh_runner.go:195] Run: cat /version.json
	I0920 18:18:33.999168   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHHostname
	I0920 18:18:34.001725   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:34.001891   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:34.002197   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:34.002225   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:34.002369   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHPort
	I0920 18:18:34.002424   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:34.002452   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:34.002533   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHKeyPath
	I0920 18:18:34.002625   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHPort
	I0920 18:18:34.002695   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHUsername
	I0920 18:18:34.002744   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHKeyPath
	I0920 18:18:34.002813   74753 sshutil.go:53] new ssh client: &{IP:192.168.50.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/no-preload-956403/id_rsa Username:docker}
	I0920 18:18:34.002888   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHUsername
	I0920 18:18:34.003032   74753 sshutil.go:53] new ssh client: &{IP:192.168.50.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/no-preload-956403/id_rsa Username:docker}
	I0920 18:18:34.079092   74753 ssh_runner.go:195] Run: systemctl --version
	I0920 18:18:34.112066   74753 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 18:18:34.257205   74753 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 18:18:34.265774   74753 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 18:18:34.265871   74753 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 18:18:34.285222   74753 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 18:18:34.285247   74753 start.go:495] detecting cgroup driver to use...
	I0920 18:18:34.285320   74753 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 18:18:34.302192   74753 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 18:18:34.316624   74753 docker.go:217] disabling cri-docker service (if available) ...
	I0920 18:18:34.316695   74753 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 18:18:34.331098   74753 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 18:18:34.345433   74753 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 18:18:34.481886   74753 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 18:18:34.644442   74753 docker.go:233] disabling docker service ...
	I0920 18:18:34.644530   74753 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 18:18:34.658714   74753 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 18:18:34.671506   74753 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 18:18:34.809548   74753 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 18:18:34.957438   74753 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 18:18:34.972102   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 18:18:34.993129   74753 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 18:18:34.993199   74753 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:18:35.004196   74753 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 18:18:35.004273   74753 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:18:35.015258   74753 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:18:35.026240   74753 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:18:35.037658   74753 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 18:18:35.048637   74753 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:18:35.059494   74753 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:18:35.077264   74753 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:18:35.087777   74753 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 18:18:35.097812   74753 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 18:18:35.097947   74753 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 18:18:35.112200   74753 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 18:18:35.123381   74753 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:18:35.262386   74753 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 18:18:35.361152   74753 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 18:18:35.361257   74753 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 18:18:35.366321   74753 start.go:563] Will wait 60s for crictl version
	I0920 18:18:35.366390   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:18:35.370379   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 18:18:35.415080   74753 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 18:18:35.415182   74753 ssh_runner.go:195] Run: crio --version
	I0920 18:18:35.446934   74753 ssh_runner.go:195] Run: crio --version
	I0920 18:18:35.478823   74753 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 18:18:34.026138   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:36.525133   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:35.479956   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetIP
	I0920 18:18:35.483428   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:35.483820   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:35.483848   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:35.484079   74753 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0920 18:18:35.488686   74753 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 18:18:35.501184   74753 kubeadm.go:883] updating cluster {Name:no-preload-956403 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-956403 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.47 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 18:18:35.501348   74753 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 18:18:35.501391   74753 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:18:35.535377   74753 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0920 18:18:35.535400   74753 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0920 18:18:35.535466   74753 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 18:18:35.535499   74753 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0920 18:18:35.535510   74753 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0920 18:18:35.535534   74753 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0920 18:18:35.535538   74753 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0920 18:18:35.535513   74753 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0920 18:18:35.535625   74753 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0920 18:18:35.535483   74753 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:18:35.537100   74753 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 18:18:35.537104   74753 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0920 18:18:35.537126   74753 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0920 18:18:35.537091   74753 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0920 18:18:35.537164   74753 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0920 18:18:35.537180   74753 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0920 18:18:35.537191   74753 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0920 18:18:35.537105   74753 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:18:35.770046   74753 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0920 18:18:35.796364   74753 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I0920 18:18:35.800776   74753 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I0920 18:18:35.802291   74753 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I0920 18:18:35.845969   74753 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0920 18:18:35.860263   74753 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I0920 18:18:35.892349   74753 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 18:18:35.919269   74753 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I0920 18:18:35.919323   74753 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I0920 18:18:35.919375   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:18:35.929045   74753 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I0920 18:18:35.929096   74753 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I0920 18:18:35.929143   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:18:35.929181   74753 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I0920 18:18:35.929235   74753 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I0920 18:18:35.929299   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:18:35.968513   74753 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0920 18:18:35.968557   74753 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0920 18:18:35.968553   74753 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I0920 18:18:35.968589   74753 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I0920 18:18:35.968612   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:18:35.968636   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:18:35.984335   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0920 18:18:35.984366   74753 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I0920 18:18:35.984379   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0920 18:18:35.984403   74753 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 18:18:35.984433   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0920 18:18:35.984450   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:18:35.984474   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0920 18:18:35.984505   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0920 18:18:36.106947   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0920 18:18:36.106964   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 18:18:36.106954   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0920 18:18:36.107025   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0920 18:18:36.107112   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0920 18:18:36.107142   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0920 18:18:36.231892   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0920 18:18:36.231989   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0920 18:18:36.232026   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0920 18:18:36.232143   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0920 18:18:36.232193   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0920 18:18:36.232281   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 18:18:36.355534   74753 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I0920 18:18:36.355632   74753 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0920 18:18:36.355650   74753 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I0920 18:18:36.355730   74753 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I0920 18:18:36.358584   74753 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I0920 18:18:36.358653   74753 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I0920 18:18:36.358678   74753 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0920 18:18:36.358677   74753 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0920 18:18:36.358723   74753 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0920 18:18:36.358751   74753 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0920 18:18:36.367463   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 18:18:36.372389   74753 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I0920 18:18:36.372412   74753 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I0920 18:18:36.372457   74753 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I0920 18:18:36.372580   74753 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I0920 18:18:36.374496   74753 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0920 18:18:36.374538   74753 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I0920 18:18:36.374638   74753 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I0920 18:18:36.410860   74753 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0920 18:18:36.410961   74753 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0920 18:18:36.738003   74753 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:18:32.749877   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:33.249391   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:33.749510   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:34.250003   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:34.749967   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:35.249953   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:35.749992   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:36.250161   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:36.750339   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:37.249600   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:34.192782   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:36.692485   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:38.692626   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:39.024337   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:41.025364   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:38.480788   74753 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.108302901s)
	I0920 18:18:38.480819   74753 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I0920 18:18:38.480839   74753 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I0920 18:18:38.480869   74753 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1: (2.069867717s)
	I0920 18:18:38.480893   74753 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I0920 18:18:38.480896   74753 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I0920 18:18:38.480922   74753 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.742864605s)
	I0920 18:18:38.480970   74753 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0920 18:18:38.480993   74753 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:18:38.481031   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:18:40.340748   74753 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (1.85982838s)
	I0920 18:18:40.340782   74753 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I0920 18:18:40.340790   74753 ssh_runner.go:235] Completed: which crictl: (1.859743083s)
	I0920 18:18:40.340812   74753 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0920 18:18:40.340850   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:18:40.340869   74753 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0920 18:18:37.750269   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:38.249855   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:38.749845   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:39.249342   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:39.750306   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:40.250103   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:40.750346   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:41.250134   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:41.750136   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:42.249945   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:41.191300   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:43.193536   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:43.025860   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:45.527119   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:44.427309   74753 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (4.086436233s)
	I0920 18:18:44.427365   74753 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (4.086473186s)
	I0920 18:18:44.427395   74753 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0920 18:18:44.427400   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:18:44.427430   74753 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0920 18:18:44.427510   74753 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0920 18:18:46.393094   74753 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (1.965557326s)
	I0920 18:18:46.393133   74753 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I0920 18:18:46.393163   74753 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0920 18:18:46.393170   74753 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.965750119s)
	I0920 18:18:46.393216   74753 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0920 18:18:46.393241   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:18:42.749980   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:43.249416   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:43.750087   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:44.249972   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:44.749649   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:45.250128   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:45.750346   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:46.249611   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:46.749566   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:47.249814   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:45.193713   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:47.691882   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:48.027047   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:50.527076   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:47.771559   74753 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (1.37829872s)
	I0920 18:18:47.771595   74753 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I0920 18:18:47.771607   74753 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.378351302s)
	I0920 18:18:47.771618   74753 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0920 18:18:47.771645   74753 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0920 18:18:47.771668   74753 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0920 18:18:47.771726   74753 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0920 18:18:49.544117   74753 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.772423068s)
	I0920 18:18:49.544147   74753 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I0920 18:18:49.544159   74753 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.772401848s)
	I0920 18:18:49.544187   74753 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0920 18:18:49.544199   74753 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0920 18:18:49.544275   74753 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0920 18:18:50.198691   74753 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0920 18:18:50.198744   74753 cache_images.go:123] Successfully loaded all cached images
	I0920 18:18:50.198752   74753 cache_images.go:92] duration metric: took 14.66333409s to LoadCachedImages
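The run above is minikube's no-preload image path for the crio runtime: for each required image it inspects the store with `sudo podman image inspect --format {{.Id}}`, removes a stale tag with `crictl rmi` when the ID does not match, skips the scp when the cached tarball already exists under /var/lib/minikube/images, and finally runs `sudo podman load -i` on the tarball. Below is a minimal local sketch of that check-then-load step; the helper name, the placeholder image ID, and the direct exec calls are illustrative assumptions, not minikube's actual ssh_runner-based implementation.

// A minimal sketch, assuming illustrative names (ensureImage, wantID) and local exec
// rather than minikube's ssh_runner: inspect the image ID in the podman store, remove
// a stale tag via crictl, then load the cached tarball.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// ensureImage loads cachedTar into the runtime unless `image` is already present
// with the expected ID.
func ensureImage(image, wantID, cachedTar string) error {
	out, _ := exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", image).Output()
	if strings.TrimSpace(string(out)) == wantID {
		return nil // cache hit: nothing to transfer or load
	}
	// The tag exists but points at the wrong (or no) image: remove it first.
	_ = exec.Command("sudo", "crictl", "rmi", image).Run()
	// Load the cached tarball, e.g. /var/lib/minikube/images/kube-proxy_v1.31.1.
	if err := exec.Command("sudo", "podman", "load", "-i", cachedTar).Run(); err != nil {
		return fmt.Errorf("podman load %s: %w", cachedTar, err)
	}
	return nil
}

func main() {
	// "expected-image-id" is a placeholder; the real IDs are the sha256 hashes
	// quoted in the "needs transfer" lines above.
	err := ensureImage("registry.k8s.io/kube-proxy:v1.31.1",
		"expected-image-id", "/var/lib/minikube/images/kube-proxy_v1.31.1")
	fmt.Println(err)
}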
	I0920 18:18:50.198766   74753 kubeadm.go:934] updating node { 192.168.50.47 8443 v1.31.1 crio true true} ...
	I0920 18:18:50.198900   74753 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-956403 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.47
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-956403 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 18:18:50.198988   74753 ssh_runner.go:195] Run: crio config
	I0920 18:18:50.249876   74753 cni.go:84] Creating CNI manager for ""
	I0920 18:18:50.249901   74753 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 18:18:50.249915   74753 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 18:18:50.249942   74753 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.47 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-956403 NodeName:no-preload-956403 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.47"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.47 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 18:18:50.250150   74753 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.47
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-956403"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.47
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.47"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 18:18:50.250264   74753 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 18:18:50.262805   74753 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 18:18:50.262886   74753 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 18:18:50.272958   74753 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0920 18:18:50.290269   74753 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 18:18:50.306981   74753 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0920 18:18:50.324360   74753 ssh_runner.go:195] Run: grep 192.168.50.47	control-plane.minikube.internal$ /etc/hosts
	I0920 18:18:50.328382   74753 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.47	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 18:18:50.341021   74753 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:18:50.462105   74753 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:18:50.478780   74753 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/no-preload-956403 for IP: 192.168.50.47
	I0920 18:18:50.478804   74753 certs.go:194] generating shared ca certs ...
	I0920 18:18:50.478824   74753 certs.go:226] acquiring lock for ca certs: {Name:mkc7ef6c737c6bdc3fdd9dcff8f57029c020d8f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:18:50.479010   74753 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key
	I0920 18:18:50.479069   74753 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key
	I0920 18:18:50.479082   74753 certs.go:256] generating profile certs ...
	I0920 18:18:50.479188   74753 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/no-preload-956403/client.key
	I0920 18:18:50.479270   74753 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/no-preload-956403/apiserver.key.859cf278
	I0920 18:18:50.479335   74753 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/no-preload-956403/proxy-client.key
	I0920 18:18:50.479491   74753 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973.pem (1338 bytes)
	W0920 18:18:50.479534   74753 certs.go:480] ignoring /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973_empty.pem, impossibly tiny 0 bytes
	I0920 18:18:50.479549   74753 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 18:18:50.479596   74753 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem (1082 bytes)
	I0920 18:18:50.479633   74753 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem (1123 bytes)
	I0920 18:18:50.479668   74753 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem (1675 bytes)
	I0920 18:18:50.479771   74753 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem (1708 bytes)
	I0920 18:18:50.480696   74753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 18:18:50.514559   74753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 18:18:50.548790   74753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 18:18:50.581384   74753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0920 18:18:50.609138   74753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/no-preload-956403/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0920 18:18:50.641098   74753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/no-preload-956403/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 18:18:50.680479   74753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/no-preload-956403/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 18:18:50.705168   74753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/no-preload-956403/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0920 18:18:50.727603   74753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem --> /usr/share/ca-certificates/159732.pem (1708 bytes)
	I0920 18:18:50.750272   74753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 18:18:50.776117   74753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973.pem --> /usr/share/ca-certificates/15973.pem (1338 bytes)
	I0920 18:18:50.799799   74753 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 18:18:50.816250   74753 ssh_runner.go:195] Run: openssl version
	I0920 18:18:50.821680   74753 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/159732.pem && ln -fs /usr/share/ca-certificates/159732.pem /etc/ssl/certs/159732.pem"
	I0920 18:18:50.832295   74753 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/159732.pem
	I0920 18:18:50.836833   74753 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 17:04 /usr/share/ca-certificates/159732.pem
	I0920 18:18:50.836896   74753 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/159732.pem
	I0920 18:18:50.842626   74753 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/159732.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 18:18:50.853297   74753 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 18:18:50.864400   74753 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:18:50.868951   74753 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:18:50.869011   74753 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:18:50.874547   74753 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 18:18:50.885615   74753 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15973.pem && ln -fs /usr/share/ca-certificates/15973.pem /etc/ssl/certs/15973.pem"
	I0920 18:18:50.896960   74753 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15973.pem
	I0920 18:18:50.901798   74753 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 17:04 /usr/share/ca-certificates/15973.pem
	I0920 18:18:50.901879   74753 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15973.pem
	I0920 18:18:50.907920   74753 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15973.pem /etc/ssl/certs/51391683.0"
	I0920 18:18:50.919345   74753 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 18:18:50.923671   74753 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 18:18:50.929701   74753 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 18:18:50.935649   74753 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 18:18:50.942162   74753 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 18:18:50.948012   74753 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 18:18:50.954097   74753 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
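The block of `openssl x509 -noout -in <cert> -checkend 86400` runs above confirms that each existing control-plane certificate is still valid 24 hours from now before it is reused. A minimal Go sketch of the same check follows; the function name and the local file read are assumptions for illustration, not minikube's certs.go code.

// A minimal sketch (not minikube's certs.go) of the -checkend 86400 test:
// parse the PEM certificate and confirm it is still valid 24 hours from now.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func validFor24h(path string) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// Equivalent to `openssl x509 -checkend 86400`: NotAfter must lie beyond now+24h.
	return time.Now().Add(24 * time.Hour).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor24h("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	fmt.Println(ok, err)
}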
	I0920 18:18:50.960442   74753 kubeadm.go:392] StartCluster: {Name:no-preload-956403 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-956403 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.47 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:18:50.960535   74753 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 18:18:50.960600   74753 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 18:18:50.998528   74753 cri.go:89] found id: ""
	I0920 18:18:50.998608   74753 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 18:18:51.008964   74753 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0920 18:18:51.008985   74753 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0920 18:18:51.009043   74753 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0920 18:18:51.018457   74753 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0920 18:18:51.019553   74753 kubeconfig.go:125] found "no-preload-956403" server: "https://192.168.50.47:8443"
	I0920 18:18:51.021712   74753 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0920 18:18:51.033439   74753 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.47
	I0920 18:18:51.033470   74753 kubeadm.go:1160] stopping kube-system containers ...
	I0920 18:18:51.033481   74753 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0920 18:18:51.033538   74753 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 18:18:51.070049   74753 cri.go:89] found id: ""
	I0920 18:18:51.070137   74753 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0920 18:18:51.087472   74753 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 18:18:51.098582   74753 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 18:18:51.098606   74753 kubeadm.go:157] found existing configuration files:
	
	I0920 18:18:51.098654   74753 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 18:18:51.107201   74753 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 18:18:51.107276   74753 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 18:18:51.116058   74753 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 18:18:51.124563   74753 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 18:18:51.124630   74753 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 18:18:51.134174   74753 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 18:18:51.142880   74753 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 18:18:51.142944   74753 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 18:18:51.152181   74753 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 18:18:51.161942   74753 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 18:18:51.162012   74753 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 18:18:51.171615   74753 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 18:18:51.180728   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:18:51.292140   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:18:52.019018   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:18:52.239327   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:18:52.317900   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:18:52.433910   74753 api_server.go:52] waiting for apiserver process to appear ...
	I0920 18:18:52.433991   74753 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:47.749487   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:48.249612   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:48.750324   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:49.250006   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:49.749667   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:50.249802   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:50.749597   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:51.250236   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:51.750203   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:52.250132   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:49.693075   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:52.192547   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:53.024979   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:55.025225   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:52.934956   74753 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:53.434681   74753 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:53.451797   74753 api_server.go:72] duration metric: took 1.017867426s to wait for apiserver process to appear ...
	I0920 18:18:53.451828   74753 api_server.go:88] waiting for apiserver healthz status ...
	I0920 18:18:53.451851   74753 api_server.go:253] Checking apiserver healthz at https://192.168.50.47:8443/healthz ...
	I0920 18:18:53.452366   74753 api_server.go:269] stopped: https://192.168.50.47:8443/healthz: Get "https://192.168.50.47:8443/healthz": dial tcp 192.168.50.47:8443: connect: connection refused
	I0920 18:18:53.952175   74753 api_server.go:253] Checking apiserver healthz at https://192.168.50.47:8443/healthz ...
	I0920 18:18:55.972801   74753 api_server.go:279] https://192.168.50.47:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 18:18:55.972835   74753 api_server.go:103] status: https://192.168.50.47:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 18:18:55.972854   74753 api_server.go:253] Checking apiserver healthz at https://192.168.50.47:8443/healthz ...
	I0920 18:18:56.018532   74753 api_server.go:279] https://192.168.50.47:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 18:18:56.018563   74753 api_server.go:103] status: https://192.168.50.47:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 18:18:56.452127   74753 api_server.go:253] Checking apiserver healthz at https://192.168.50.47:8443/healthz ...
	I0920 18:18:56.459752   74753 api_server.go:279] https://192.168.50.47:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 18:18:56.459796   74753 api_server.go:103] status: https://192.168.50.47:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 18:18:56.952284   74753 api_server.go:253] Checking apiserver healthz at https://192.168.50.47:8443/healthz ...
	I0920 18:18:56.959496   74753 api_server.go:279] https://192.168.50.47:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 18:18:56.959553   74753 api_server.go:103] status: https://192.168.50.47:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 18:18:57.452049   74753 api_server.go:253] Checking apiserver healthz at https://192.168.50.47:8443/healthz ...
	I0920 18:18:57.456586   74753 api_server.go:279] https://192.168.50.47:8443/healthz returned 200:
	ok
	I0920 18:18:57.463754   74753 api_server.go:141] control plane version: v1.31.1
	I0920 18:18:57.463785   74753 api_server.go:131] duration metric: took 4.011948835s to wait for apiserver health ...
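The healthz wait above polls https://192.168.50.47:8443/healthz roughly every half second, treating the 403 (anonymous user) and 500 (post-start hooks still pending) responses as "not ready yet" and stopping on the first 200 "ok". The loop below is a minimal sketch of that pattern; the function name and the skipped TLS verification are assumptions made to keep it short, and a real check should verify the apiserver certificate instead.

// A minimal sketch of the healthz wait loop seen above, with assumed names and
// InsecureSkipVerify used only to keep the example self-contained.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			code := resp.StatusCode
			resp.Body.Close()
			if code == http.StatusOK {
				return nil // body is "ok", as in the 200 response above
			}
			// 403/500 mean the apiserver is up but not ready yet; keep polling.
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver %s not healthy within %s", url, timeout)
}

func main() {
	fmt.Println(waitForHealthz("https://192.168.50.47:8443/healthz", 4*time.Minute))
}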
	I0920 18:18:57.463795   74753 cni.go:84] Creating CNI manager for ""
	I0920 18:18:57.463804   74753 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 18:18:57.465916   74753 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 18:18:57.467416   74753 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 18:18:57.485487   74753 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0920 18:18:57.529426   74753 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 18:18:57.540046   74753 system_pods.go:59] 8 kube-system pods found
	I0920 18:18:57.540100   74753 system_pods.go:61] "coredns-7c65d6cfc9-j2t5h" [dd2636f1-3200-4f22-957c-046277c9be8c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0920 18:18:57.540113   74753 system_pods.go:61] "etcd-no-preload-956403" [daf84be0-e673-4685-a998-47ec64664484] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0920 18:18:57.540127   74753 system_pods.go:61] "kube-apiserver-no-preload-956403" [416217e6-3c91-49ec-bd69-812a49b4afd9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0920 18:18:57.540139   74753 system_pods.go:61] "kube-controller-manager-no-preload-956403" [30fae740-b073-4619-bca5-010ca90b2667] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0920 18:18:57.540148   74753 system_pods.go:61] "kube-proxy-sz4bm" [269600fb-ef65-4b17-8c07-76c79e35f5a8] Running
	I0920 18:18:57.540160   74753 system_pods.go:61] "kube-scheduler-no-preload-956403" [46432927-e506-4665-8825-c5f6a2ed0458] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0920 18:18:57.540174   74753 system_pods.go:61] "metrics-server-6867b74b74-tfsff" [599ba06a-6d4d-483b-b390-a3595a814757] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 18:18:57.540184   74753 system_pods.go:61] "storage-provisioner" [df661627-7d32-4467-9805-1ae65d4fa35c] Running
	I0920 18:18:57.540196   74753 system_pods.go:74] duration metric: took 10.749595ms to wait for pod list to return data ...
	I0920 18:18:57.540208   74753 node_conditions.go:102] verifying NodePressure condition ...
	I0920 18:18:57.544401   74753 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 18:18:57.544438   74753 node_conditions.go:123] node cpu capacity is 2
	I0920 18:18:57.544451   74753 node_conditions.go:105] duration metric: took 4.237618ms to run NodePressure ...
	I0920 18:18:57.544471   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:18:52.749468   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:53.249519   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:53.750056   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:54.250286   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:54.750240   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:55.249980   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:55.750192   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:56.249940   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:56.749289   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:57.249615   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:54.193519   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:56.693152   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:57.818736   74753 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0920 18:18:57.823058   74753 kubeadm.go:739] kubelet initialised
	I0920 18:18:57.823082   74753 kubeadm.go:740] duration metric: took 4.316691ms waiting for restarted kubelet to initialise ...
	I0920 18:18:57.823091   74753 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:18:57.827594   74753 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-j2t5h" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:57.833207   74753 pod_ready.go:98] node "no-preload-956403" hosting pod "coredns-7c65d6cfc9-j2t5h" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956403" has status "Ready":"False"
	I0920 18:18:57.833240   74753 pod_ready.go:82] duration metric: took 5.620335ms for pod "coredns-7c65d6cfc9-j2t5h" in "kube-system" namespace to be "Ready" ...
	E0920 18:18:57.833252   74753 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-956403" hosting pod "coredns-7c65d6cfc9-j2t5h" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956403" has status "Ready":"False"
	I0920 18:18:57.833269   74753 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-956403" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:57.838687   74753 pod_ready.go:98] node "no-preload-956403" hosting pod "etcd-no-preload-956403" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956403" has status "Ready":"False"
	I0920 18:18:57.838723   74753 pod_ready.go:82] duration metric: took 5.441795ms for pod "etcd-no-preload-956403" in "kube-system" namespace to be "Ready" ...
	E0920 18:18:57.838737   74753 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-956403" hosting pod "etcd-no-preload-956403" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956403" has status "Ready":"False"
	I0920 18:18:57.838747   74753 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-956403" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:57.848481   74753 pod_ready.go:98] node "no-preload-956403" hosting pod "kube-apiserver-no-preload-956403" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956403" has status "Ready":"False"
	I0920 18:18:57.848510   74753 pod_ready.go:82] duration metric: took 9.754053ms for pod "kube-apiserver-no-preload-956403" in "kube-system" namespace to be "Ready" ...
	E0920 18:18:57.848520   74753 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-956403" hosting pod "kube-apiserver-no-preload-956403" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956403" has status "Ready":"False"
	I0920 18:18:57.848530   74753 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-956403" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:57.934356   74753 pod_ready.go:98] node "no-preload-956403" hosting pod "kube-controller-manager-no-preload-956403" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956403" has status "Ready":"False"
	I0920 18:18:57.934383   74753 pod_ready.go:82] duration metric: took 85.842265ms for pod "kube-controller-manager-no-preload-956403" in "kube-system" namespace to be "Ready" ...
	E0920 18:18:57.934392   74753 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-956403" hosting pod "kube-controller-manager-no-preload-956403" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956403" has status "Ready":"False"
	I0920 18:18:57.934397   74753 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-sz4bm" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:58.332467   74753 pod_ready.go:98] node "no-preload-956403" hosting pod "kube-proxy-sz4bm" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956403" has status "Ready":"False"
	I0920 18:18:58.332495   74753 pod_ready.go:82] duration metric: took 398.088821ms for pod "kube-proxy-sz4bm" in "kube-system" namespace to be "Ready" ...
	E0920 18:18:58.332504   74753 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-956403" hosting pod "kube-proxy-sz4bm" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956403" has status "Ready":"False"
	I0920 18:18:58.332510   74753 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-956403" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:58.732836   74753 pod_ready.go:98] node "no-preload-956403" hosting pod "kube-scheduler-no-preload-956403" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956403" has status "Ready":"False"
	I0920 18:18:58.732859   74753 pod_ready.go:82] duration metric: took 400.343274ms for pod "kube-scheduler-no-preload-956403" in "kube-system" namespace to be "Ready" ...
	E0920 18:18:58.732868   74753 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-956403" hosting pod "kube-scheduler-no-preload-956403" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956403" has status "Ready":"False"
	I0920 18:18:58.732874   74753 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:59.132465   74753 pod_ready.go:98] node "no-preload-956403" hosting pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956403" has status "Ready":"False"
	I0920 18:18:59.132489   74753 pod_ready.go:82] duration metric: took 399.606625ms for pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace to be "Ready" ...
	E0920 18:18:59.132503   74753 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-956403" hosting pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956403" has status "Ready":"False"
	I0920 18:18:59.132510   74753 pod_ready.go:39] duration metric: took 1.309409155s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:18:59.132526   74753 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 18:18:59.147163   74753 ops.go:34] apiserver oom_adj: -16
	I0920 18:18:59.147190   74753 kubeadm.go:597] duration metric: took 8.138198351s to restartPrimaryControlPlane
	I0920 18:18:59.147200   74753 kubeadm.go:394] duration metric: took 8.186768244s to StartCluster
	I0920 18:18:59.147214   74753 settings.go:142] acquiring lock: {Name:mk2a13c58cdc0faf8cddca5d6716175d45db9bfd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:18:59.147287   74753 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19672-8777/kubeconfig
	I0920 18:18:59.149036   74753 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/kubeconfig: {Name:mkf32a4c736808e023459b2f0e40188618a38db1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:18:59.149303   74753 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.47 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 18:18:59.149381   74753 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0920 18:18:59.149479   74753 addons.go:69] Setting storage-provisioner=true in profile "no-preload-956403"
	I0920 18:18:59.149503   74753 addons.go:234] Setting addon storage-provisioner=true in "no-preload-956403"
	I0920 18:18:59.149501   74753 addons.go:69] Setting default-storageclass=true in profile "no-preload-956403"
	W0920 18:18:59.149515   74753 addons.go:243] addon storage-provisioner should already be in state true
	I0920 18:18:59.149526   74753 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-956403"
	I0920 18:18:59.149520   74753 addons.go:69] Setting metrics-server=true in profile "no-preload-956403"
	I0920 18:18:59.149546   74753 host.go:66] Checking if "no-preload-956403" exists ...
	I0920 18:18:59.149558   74753 addons.go:234] Setting addon metrics-server=true in "no-preload-956403"
	W0920 18:18:59.149575   74753 addons.go:243] addon metrics-server should already be in state true
	I0920 18:18:59.149588   74753 config.go:182] Loaded profile config "no-preload-956403": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:18:59.149610   74753 host.go:66] Checking if "no-preload-956403" exists ...
	I0920 18:18:59.149847   74753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:18:59.149889   74753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:18:59.149944   74753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:18:59.149954   74753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:18:59.150003   74753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:18:59.150075   74753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:18:59.151276   74753 out.go:177] * Verifying Kubernetes components...
	I0920 18:18:59.152577   74753 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:18:59.178294   74753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42565
	I0920 18:18:59.178917   74753 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:18:59.179414   74753 main.go:141] libmachine: Using API Version  1
	I0920 18:18:59.179435   74753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:18:59.179869   74753 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:18:59.180059   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetState
	I0920 18:18:59.183556   74753 addons.go:234] Setting addon default-storageclass=true in "no-preload-956403"
	W0920 18:18:59.183575   74753 addons.go:243] addon default-storageclass should already be in state true
	I0920 18:18:59.183606   74753 host.go:66] Checking if "no-preload-956403" exists ...
	I0920 18:18:59.183970   74753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:18:59.184011   74753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:18:59.195338   74753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43759
	I0920 18:18:59.195739   74753 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:18:59.196290   74753 main.go:141] libmachine: Using API Version  1
	I0920 18:18:59.196316   74753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:18:59.196742   74753 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:18:59.197211   74753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33531
	I0920 18:18:59.197327   74753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:18:59.197375   74753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:18:59.197552   74753 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:18:59.198055   74753 main.go:141] libmachine: Using API Version  1
	I0920 18:18:59.198080   74753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:18:59.198420   74753 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:18:59.198997   74753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:18:59.199037   74753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:18:59.203164   74753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40055
	I0920 18:18:59.203700   74753 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:18:59.204320   74753 main.go:141] libmachine: Using API Version  1
	I0920 18:18:59.204341   74753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:18:59.204745   74753 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:18:59.205399   74753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:18:59.205440   74753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:18:59.215457   74753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37991
	I0920 18:18:59.215953   74753 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:18:59.216312   74753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35461
	I0920 18:18:59.216521   74753 main.go:141] libmachine: Using API Version  1
	I0920 18:18:59.216723   74753 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:18:59.216826   74753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:18:59.217221   74753 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:18:59.217403   74753 main.go:141] libmachine: Using API Version  1
	I0920 18:18:59.217414   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetState
	I0920 18:18:59.217452   74753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:18:59.217942   74753 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:18:59.218403   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetState
	I0920 18:18:59.219340   74753 main.go:141] libmachine: (no-preload-956403) Calling .DriverName
	I0920 18:18:59.220536   74753 main.go:141] libmachine: (no-preload-956403) Calling .DriverName
	I0920 18:18:59.221934   74753 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0920 18:18:59.222788   74753 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:18:59.223744   74753 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0920 18:18:59.223766   74753 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0920 18:18:59.223788   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHHostname
	I0920 18:18:59.224567   74753 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 18:18:59.224588   74753 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 18:18:59.224607   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHHostname
	I0920 18:18:59.225333   74753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45207
	I0920 18:18:59.225992   74753 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:18:59.226975   74753 main.go:141] libmachine: Using API Version  1
	I0920 18:18:59.226991   74753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:18:59.227472   74753 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:18:59.227788   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetState
	I0920 18:18:59.227818   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:59.228234   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:59.228282   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:59.228441   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHPort
	I0920 18:18:59.228636   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHKeyPath
	I0920 18:18:59.228671   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:59.228821   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHUsername
	I0920 18:18:59.228960   74753 sshutil.go:53] new ssh client: &{IP:192.168.50.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/no-preload-956403/id_rsa Username:docker}
	I0920 18:18:59.229280   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:59.229297   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:59.229478   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHPort
	I0920 18:18:59.229651   74753 main.go:141] libmachine: (no-preload-956403) Calling .DriverName
	I0920 18:18:59.229807   74753 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 18:18:59.229817   74753 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 18:18:59.229843   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHHostname
	I0920 18:18:59.230195   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHKeyPath
	I0920 18:18:59.230729   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHUsername
	I0920 18:18:59.230899   74753 sshutil.go:53] new ssh client: &{IP:192.168.50.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/no-preload-956403/id_rsa Username:docker}
	I0920 18:18:59.232419   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:59.232844   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:59.232877   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:59.233020   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHPort
	I0920 18:18:59.233201   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHKeyPath
	I0920 18:18:59.233311   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHUsername
	I0920 18:18:59.233424   74753 sshutil.go:53] new ssh client: &{IP:192.168.50.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/no-preload-956403/id_rsa Username:docker}
	I0920 18:18:59.373284   74753 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:18:59.391114   74753 node_ready.go:35] waiting up to 6m0s for node "no-preload-956403" to be "Ready" ...
	I0920 18:18:59.475607   74753 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 18:18:59.530301   74753 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0920 18:18:59.530320   74753 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0920 18:18:59.530804   74753 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 18:18:59.607246   74753 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0920 18:18:59.607272   74753 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0920 18:18:59.685234   74753 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 18:18:59.685273   74753 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0920 18:18:59.753902   74753 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 18:19:00.029605   74753 main.go:141] libmachine: Making call to close driver server
	I0920 18:19:00.029630   74753 main.go:141] libmachine: (no-preload-956403) Calling .Close
	I0920 18:19:00.029968   74753 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:19:00.030016   74753 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:19:00.030032   74753 main.go:141] libmachine: Making call to close driver server
	I0920 18:19:00.030039   74753 main.go:141] libmachine: (no-preload-956403) Calling .Close
	I0920 18:19:00.030285   74753 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:19:00.030299   74753 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:19:00.049472   74753 main.go:141] libmachine: Making call to close driver server
	I0920 18:19:00.049500   74753 main.go:141] libmachine: (no-preload-956403) Calling .Close
	I0920 18:19:00.049765   74753 main.go:141] libmachine: (no-preload-956403) DBG | Closing plugin on server side
	I0920 18:19:00.049781   74753 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:19:00.049797   74753 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:19:00.790635   74753 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.259791892s)
	I0920 18:19:00.790694   74753 main.go:141] libmachine: Making call to close driver server
	I0920 18:19:00.790703   74753 main.go:141] libmachine: (no-preload-956403) Calling .Close
	I0920 18:19:00.791004   74753 main.go:141] libmachine: (no-preload-956403) DBG | Closing plugin on server side
	I0920 18:19:00.791007   74753 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:19:00.791075   74753 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:19:00.791100   74753 main.go:141] libmachine: Making call to close driver server
	I0920 18:19:00.791110   74753 main.go:141] libmachine: (no-preload-956403) Calling .Close
	I0920 18:19:00.791339   74753 main.go:141] libmachine: (no-preload-956403) DBG | Closing plugin on server side
	I0920 18:19:00.791347   74753 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:19:00.791358   74753 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:19:00.829250   74753 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.075301587s)
	I0920 18:19:00.829297   74753 main.go:141] libmachine: Making call to close driver server
	I0920 18:19:00.829311   74753 main.go:141] libmachine: (no-preload-956403) Calling .Close
	I0920 18:19:00.829607   74753 main.go:141] libmachine: (no-preload-956403) DBG | Closing plugin on server side
	I0920 18:19:00.829612   74753 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:19:00.829632   74753 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:19:00.829641   74753 main.go:141] libmachine: Making call to close driver server
	I0920 18:19:00.829647   74753 main.go:141] libmachine: (no-preload-956403) Calling .Close
	I0920 18:19:00.829905   74753 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:19:00.829927   74753 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:19:00.829926   74753 main.go:141] libmachine: (no-preload-956403) DBG | Closing plugin on server side
	I0920 18:19:00.829938   74753 addons.go:475] Verifying addon metrics-server=true in "no-preload-956403"
	I0920 18:19:00.831897   74753 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0920 18:18:57.524979   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:59.526500   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:02.024579   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:00.833093   74753 addons.go:510] duration metric: took 1.683722999s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0920 18:19:01.394507   74753 node_ready.go:53] node "no-preload-956403" has status "Ready":"False"
	I0920 18:18:57.750113   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:58.250100   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:58.750133   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:59.249427   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:59.749350   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:00.250267   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:00.749723   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:01.249549   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:01.749698   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:02.250043   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:59.195097   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:01.692391   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:03.692510   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:04.024713   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:06.525591   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:03.395035   74753 node_ready.go:53] node "no-preload-956403" has status "Ready":"False"
	I0920 18:19:05.895477   74753 node_ready.go:53] node "no-preload-956403" has status "Ready":"False"
	I0920 18:19:06.395177   74753 node_ready.go:49] node "no-preload-956403" has status "Ready":"True"
	I0920 18:19:06.395211   74753 node_ready.go:38] duration metric: took 7.0040677s for node "no-preload-956403" to be "Ready" ...
	I0920 18:19:06.395224   74753 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:19:06.400929   74753 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-j2t5h" in "kube-system" namespace to be "Ready" ...
	I0920 18:19:06.406052   74753 pod_ready.go:93] pod "coredns-7c65d6cfc9-j2t5h" in "kube-system" namespace has status "Ready":"True"
	I0920 18:19:06.406076   74753 pod_ready.go:82] duration metric: took 5.118178ms for pod "coredns-7c65d6cfc9-j2t5h" in "kube-system" namespace to be "Ready" ...
	I0920 18:19:06.406088   74753 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-956403" in "kube-system" namespace to be "Ready" ...
	I0920 18:19:07.412930   74753 pod_ready.go:93] pod "etcd-no-preload-956403" in "kube-system" namespace has status "Ready":"True"
	I0920 18:19:07.412946   74753 pod_ready.go:82] duration metric: took 1.006851075s for pod "etcd-no-preload-956403" in "kube-system" namespace to be "Ready" ...
	I0920 18:19:07.412955   74753 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-956403" in "kube-system" namespace to be "Ready" ...
	I0920 18:19:07.420312   74753 pod_ready.go:93] pod "kube-apiserver-no-preload-956403" in "kube-system" namespace has status "Ready":"True"
	I0920 18:19:07.420342   74753 pod_ready.go:82] duration metric: took 7.380815ms for pod "kube-apiserver-no-preload-956403" in "kube-system" namespace to be "Ready" ...
	I0920 18:19:07.420354   74753 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-956403" in "kube-system" namespace to be "Ready" ...
	I0920 18:19:02.750082   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:03.249861   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:03.749400   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:04.250213   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:04.749806   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:05.249569   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:05.750115   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:06.250182   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:06.750188   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:07.250041   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:06.191328   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:08.192222   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:09.024510   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:11.025023   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:08.426149   74753 pod_ready.go:93] pod "kube-controller-manager-no-preload-956403" in "kube-system" namespace has status "Ready":"True"
	I0920 18:19:08.426173   74753 pod_ready.go:82] duration metric: took 1.005811855s for pod "kube-controller-manager-no-preload-956403" in "kube-system" namespace to be "Ready" ...
	I0920 18:19:08.426182   74753 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-sz4bm" in "kube-system" namespace to be "Ready" ...
	I0920 18:19:08.431446   74753 pod_ready.go:93] pod "kube-proxy-sz4bm" in "kube-system" namespace has status "Ready":"True"
	I0920 18:19:08.431474   74753 pod_ready.go:82] duration metric: took 5.284488ms for pod "kube-proxy-sz4bm" in "kube-system" namespace to be "Ready" ...
	I0920 18:19:08.431486   74753 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-956403" in "kube-system" namespace to be "Ready" ...
	I0920 18:19:08.794973   74753 pod_ready.go:93] pod "kube-scheduler-no-preload-956403" in "kube-system" namespace has status "Ready":"True"
	I0920 18:19:08.794997   74753 pod_ready.go:82] duration metric: took 363.504269ms for pod "kube-scheduler-no-preload-956403" in "kube-system" namespace to be "Ready" ...
	I0920 18:19:08.795005   74753 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace to be "Ready" ...
	I0920 18:19:10.802181   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:07.749479   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:08.249641   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:08.749637   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:09.249803   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:09.749534   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:10.249365   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:10.749366   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:11.250045   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:11.750298   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:12.249766   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:10.192631   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:12.692470   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:13.525217   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:16.025516   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:13.300755   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:15.301345   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:12.749447   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:13.249356   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:13.749514   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:14.249942   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:14.749988   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:15.249405   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:15.749620   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:16.249962   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:16.750325   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:17.250198   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:15.191379   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:17.692235   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:18.525176   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:21.023910   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:17.801315   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:20.302369   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:22.302578   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:17.749509   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:18.250076   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:18.749518   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:19.249440   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:19.750156   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:20.249686   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:20.750315   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:21.249755   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:21.749370   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:22.249767   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:20.192027   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:22.192925   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:23.024677   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:25.025216   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:24.801558   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:26.802796   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:22.749289   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:23.249386   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:23.749870   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:24.249424   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:24.750349   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:25.249582   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:19:25.249657   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:19:25.287604   75577 cri.go:89] found id: ""
	I0920 18:19:25.287633   75577 logs.go:276] 0 containers: []
	W0920 18:19:25.287641   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:19:25.287647   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:19:25.287692   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:19:25.322093   75577 cri.go:89] found id: ""
	I0920 18:19:25.322127   75577 logs.go:276] 0 containers: []
	W0920 18:19:25.322137   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:19:25.322144   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:19:25.322194   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:19:25.354812   75577 cri.go:89] found id: ""
	I0920 18:19:25.354840   75577 logs.go:276] 0 containers: []
	W0920 18:19:25.354851   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:19:25.354863   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:19:25.354928   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:19:25.387777   75577 cri.go:89] found id: ""
	I0920 18:19:25.387804   75577 logs.go:276] 0 containers: []
	W0920 18:19:25.387813   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:19:25.387819   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:19:25.387867   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:19:25.420325   75577 cri.go:89] found id: ""
	I0920 18:19:25.420354   75577 logs.go:276] 0 containers: []
	W0920 18:19:25.420364   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:19:25.420372   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:19:25.420437   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:19:25.454824   75577 cri.go:89] found id: ""
	I0920 18:19:25.454853   75577 logs.go:276] 0 containers: []
	W0920 18:19:25.454864   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:19:25.454871   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:19:25.454933   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:19:25.493520   75577 cri.go:89] found id: ""
	I0920 18:19:25.493548   75577 logs.go:276] 0 containers: []
	W0920 18:19:25.493559   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:19:25.493566   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:19:25.493639   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:19:25.528795   75577 cri.go:89] found id: ""
	I0920 18:19:25.528820   75577 logs.go:276] 0 containers: []
	W0920 18:19:25.528828   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:19:25.528836   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:19:25.528847   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:19:25.571411   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:19:25.571442   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:19:25.625648   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:19:25.625693   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:19:25.639891   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:19:25.639920   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:19:25.769263   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:19:25.769289   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:19:25.769303   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:19:24.691757   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:26.695197   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:27.524327   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:29.525192   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:32.024450   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:29.301400   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:31.301988   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:28.346894   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:28.359891   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:19:28.359967   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:19:28.396117   75577 cri.go:89] found id: ""
	I0920 18:19:28.396144   75577 logs.go:276] 0 containers: []
	W0920 18:19:28.396152   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:19:28.396158   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:19:28.396206   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:19:28.447888   75577 cri.go:89] found id: ""
	I0920 18:19:28.447923   75577 logs.go:276] 0 containers: []
	W0920 18:19:28.447933   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:19:28.447941   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:19:28.447998   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:19:28.485018   75577 cri.go:89] found id: ""
	I0920 18:19:28.485046   75577 logs.go:276] 0 containers: []
	W0920 18:19:28.485054   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:19:28.485060   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:19:28.485108   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:19:28.520904   75577 cri.go:89] found id: ""
	I0920 18:19:28.520934   75577 logs.go:276] 0 containers: []
	W0920 18:19:28.520942   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:19:28.520948   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:19:28.520996   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:19:28.561375   75577 cri.go:89] found id: ""
	I0920 18:19:28.561407   75577 logs.go:276] 0 containers: []
	W0920 18:19:28.561415   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:19:28.561421   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:19:28.561467   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:19:28.597643   75577 cri.go:89] found id: ""
	I0920 18:19:28.597671   75577 logs.go:276] 0 containers: []
	W0920 18:19:28.597679   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:19:28.597685   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:19:28.597747   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:19:28.632885   75577 cri.go:89] found id: ""
	I0920 18:19:28.632912   75577 logs.go:276] 0 containers: []
	W0920 18:19:28.632921   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:19:28.632926   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:19:28.632973   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:19:28.670639   75577 cri.go:89] found id: ""
	I0920 18:19:28.670665   75577 logs.go:276] 0 containers: []
	W0920 18:19:28.670675   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:19:28.670699   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:19:28.670714   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:19:28.749623   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:19:28.749659   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:19:28.790808   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:19:28.790831   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:19:28.842027   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:19:28.842070   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:19:28.855927   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:19:28.855954   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:19:28.935807   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:19:31.436321   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:31.450159   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:19:31.450221   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:19:31.485444   75577 cri.go:89] found id: ""
	I0920 18:19:31.485483   75577 logs.go:276] 0 containers: []
	W0920 18:19:31.485494   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:19:31.485502   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:19:31.485562   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:19:31.520888   75577 cri.go:89] found id: ""
	I0920 18:19:31.520918   75577 logs.go:276] 0 containers: []
	W0920 18:19:31.520927   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:19:31.520941   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:19:31.521007   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:19:31.555001   75577 cri.go:89] found id: ""
	I0920 18:19:31.555029   75577 logs.go:276] 0 containers: []
	W0920 18:19:31.555040   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:19:31.555047   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:19:31.555111   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:19:31.588766   75577 cri.go:89] found id: ""
	I0920 18:19:31.588794   75577 logs.go:276] 0 containers: []
	W0920 18:19:31.588802   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:19:31.588808   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:19:31.588872   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:19:31.622006   75577 cri.go:89] found id: ""
	I0920 18:19:31.622037   75577 logs.go:276] 0 containers: []
	W0920 18:19:31.622048   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:19:31.622056   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:19:31.622110   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:19:31.659475   75577 cri.go:89] found id: ""
	I0920 18:19:31.659508   75577 logs.go:276] 0 containers: []
	W0920 18:19:31.659519   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:19:31.659527   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:19:31.659589   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:19:31.695380   75577 cri.go:89] found id: ""
	I0920 18:19:31.695415   75577 logs.go:276] 0 containers: []
	W0920 18:19:31.695426   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:19:31.695436   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:19:31.695521   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:19:31.729507   75577 cri.go:89] found id: ""
	I0920 18:19:31.729540   75577 logs.go:276] 0 containers: []
	W0920 18:19:31.729550   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:19:31.729561   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:19:31.729574   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:19:31.781857   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:19:31.781908   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:19:31.795325   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:19:31.795356   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:19:31.868684   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:19:31.868708   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:19:31.868722   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:19:31.945334   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:19:31.945371   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:19:29.191780   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:31.192712   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:33.693996   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:34.024591   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:36.024999   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:33.801443   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:36.300715   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:34.485723   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:34.499039   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:19:34.499106   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:19:34.534664   75577 cri.go:89] found id: ""
	I0920 18:19:34.534697   75577 logs.go:276] 0 containers: []
	W0920 18:19:34.534709   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:19:34.534717   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:19:34.534777   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:19:34.567515   75577 cri.go:89] found id: ""
	I0920 18:19:34.567545   75577 logs.go:276] 0 containers: []
	W0920 18:19:34.567556   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:19:34.567564   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:19:34.567624   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:19:34.601653   75577 cri.go:89] found id: ""
	I0920 18:19:34.601693   75577 logs.go:276] 0 containers: []
	W0920 18:19:34.601704   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:19:34.601712   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:19:34.601775   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:19:34.635238   75577 cri.go:89] found id: ""
	I0920 18:19:34.635271   75577 logs.go:276] 0 containers: []
	W0920 18:19:34.635282   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:19:34.635291   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:19:34.635361   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:19:34.669665   75577 cri.go:89] found id: ""
	I0920 18:19:34.669689   75577 logs.go:276] 0 containers: []
	W0920 18:19:34.669697   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:19:34.669703   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:19:34.669751   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:19:34.705984   75577 cri.go:89] found id: ""
	I0920 18:19:34.706012   75577 logs.go:276] 0 containers: []
	W0920 18:19:34.706022   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:19:34.706032   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:19:34.706110   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:19:34.740052   75577 cri.go:89] found id: ""
	I0920 18:19:34.740079   75577 logs.go:276] 0 containers: []
	W0920 18:19:34.740087   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:19:34.740092   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:19:34.740139   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:19:34.774930   75577 cri.go:89] found id: ""
	I0920 18:19:34.774962   75577 logs.go:276] 0 containers: []
	W0920 18:19:34.774973   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:19:34.774984   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:19:34.775081   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:19:34.791674   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:19:34.791714   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:19:34.894988   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:19:34.895019   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:19:34.895035   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:19:34.973785   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:19:34.973823   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:19:35.010928   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:19:35.010964   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:19:37.561543   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:37.574865   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:19:37.574925   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:19:37.609145   75577 cri.go:89] found id: ""
	I0920 18:19:37.609170   75577 logs.go:276] 0 containers: []
	W0920 18:19:37.609178   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:19:37.609183   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:19:37.609247   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:19:37.645080   75577 cri.go:89] found id: ""
	I0920 18:19:37.645107   75577 logs.go:276] 0 containers: []
	W0920 18:19:37.645115   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:19:37.645121   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:19:37.645168   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:19:37.680060   75577 cri.go:89] found id: ""
	I0920 18:19:37.680094   75577 logs.go:276] 0 containers: []
	W0920 18:19:37.680105   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:19:37.680113   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:19:37.680173   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:19:37.713586   75577 cri.go:89] found id: ""
	I0920 18:19:37.713618   75577 logs.go:276] 0 containers: []
	W0920 18:19:37.713629   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:19:37.713636   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:19:37.713694   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:19:36.192483   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:38.692017   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:38.025974   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:40.524671   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:38.302403   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:40.802748   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:37.750249   75577 cri.go:89] found id: ""
	I0920 18:19:37.750274   75577 logs.go:276] 0 containers: []
	W0920 18:19:37.750282   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:19:37.750289   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:19:37.750351   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:19:37.785613   75577 cri.go:89] found id: ""
	I0920 18:19:37.785642   75577 logs.go:276] 0 containers: []
	W0920 18:19:37.785650   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:19:37.785656   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:19:37.785705   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:19:37.824240   75577 cri.go:89] found id: ""
	I0920 18:19:37.824267   75577 logs.go:276] 0 containers: []
	W0920 18:19:37.824278   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:19:37.824286   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:19:37.824348   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:19:37.858522   75577 cri.go:89] found id: ""
	I0920 18:19:37.858547   75577 logs.go:276] 0 containers: []
	W0920 18:19:37.858556   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:19:37.858564   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:19:37.858575   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:19:37.939852   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:19:37.939891   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:19:37.981029   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:19:37.981062   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:19:38.030606   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:19:38.030633   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:19:38.043914   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:19:38.043953   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:19:38.124846   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:19:40.625196   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:40.640638   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:19:40.640708   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:19:40.687107   75577 cri.go:89] found id: ""
	I0920 18:19:40.687131   75577 logs.go:276] 0 containers: []
	W0920 18:19:40.687140   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:19:40.687148   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:19:40.687206   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:19:40.725727   75577 cri.go:89] found id: ""
	I0920 18:19:40.725858   75577 logs.go:276] 0 containers: []
	W0920 18:19:40.725875   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:19:40.725889   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:19:40.725967   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:19:40.762965   75577 cri.go:89] found id: ""
	I0920 18:19:40.762988   75577 logs.go:276] 0 containers: []
	W0920 18:19:40.762996   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:19:40.763002   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:19:40.763049   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:19:40.797387   75577 cri.go:89] found id: ""
	I0920 18:19:40.797415   75577 logs.go:276] 0 containers: []
	W0920 18:19:40.797427   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:19:40.797439   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:19:40.797498   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:19:40.834482   75577 cri.go:89] found id: ""
	I0920 18:19:40.834513   75577 logs.go:276] 0 containers: []
	W0920 18:19:40.834521   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:19:40.834528   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:19:40.834578   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:19:40.870062   75577 cri.go:89] found id: ""
	I0920 18:19:40.870090   75577 logs.go:276] 0 containers: []
	W0920 18:19:40.870099   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:19:40.870106   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:19:40.870164   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:19:40.906613   75577 cri.go:89] found id: ""
	I0920 18:19:40.906642   75577 logs.go:276] 0 containers: []
	W0920 18:19:40.906653   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:19:40.906662   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:19:40.906721   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:19:40.953033   75577 cri.go:89] found id: ""
	I0920 18:19:40.953056   75577 logs.go:276] 0 containers: []
	W0920 18:19:40.953065   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:19:40.953073   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:19:40.953083   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:19:40.998490   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:19:40.998523   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:19:41.051488   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:19:41.051525   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:19:41.067908   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:19:41.067937   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:19:41.144854   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:19:41.144878   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:19:41.144890   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:19:40.692456   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:43.192532   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:43.024238   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:45.024998   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:47.025427   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:43.302334   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:45.801415   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:43.723613   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:43.736857   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:19:43.736924   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:19:43.772588   75577 cri.go:89] found id: ""
	I0920 18:19:43.772624   75577 logs.go:276] 0 containers: []
	W0920 18:19:43.772635   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:19:43.772643   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:19:43.772712   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:19:43.808580   75577 cri.go:89] found id: ""
	I0920 18:19:43.808611   75577 logs.go:276] 0 containers: []
	W0920 18:19:43.808622   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:19:43.808629   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:19:43.808695   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:19:43.847167   75577 cri.go:89] found id: ""
	I0920 18:19:43.847207   75577 logs.go:276] 0 containers: []
	W0920 18:19:43.847218   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:19:43.847243   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:19:43.847302   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:19:43.883614   75577 cri.go:89] found id: ""
	I0920 18:19:43.883646   75577 logs.go:276] 0 containers: []
	W0920 18:19:43.883658   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:19:43.883667   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:19:43.883738   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:19:43.917602   75577 cri.go:89] found id: ""
	I0920 18:19:43.917627   75577 logs.go:276] 0 containers: []
	W0920 18:19:43.917635   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:19:43.917641   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:19:43.917694   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:19:43.953232   75577 cri.go:89] found id: ""
	I0920 18:19:43.953259   75577 logs.go:276] 0 containers: []
	W0920 18:19:43.953268   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:19:43.953273   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:19:43.953325   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:19:43.991204   75577 cri.go:89] found id: ""
	I0920 18:19:43.991234   75577 logs.go:276] 0 containers: []
	W0920 18:19:43.991246   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:19:43.991253   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:19:43.991334   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:19:44.028117   75577 cri.go:89] found id: ""
	I0920 18:19:44.028140   75577 logs.go:276] 0 containers: []
	W0920 18:19:44.028147   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:19:44.028154   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:19:44.028164   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:19:44.068175   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:19:44.068203   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:19:44.119953   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:19:44.119993   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:19:44.134127   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:19:44.134154   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:19:44.211563   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:19:44.211592   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:19:44.211604   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:19:46.787328   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:46.803429   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:19:46.803516   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:19:46.844813   75577 cri.go:89] found id: ""
	I0920 18:19:46.844839   75577 logs.go:276] 0 containers: []
	W0920 18:19:46.844850   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:19:46.844856   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:19:46.844914   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:19:46.878441   75577 cri.go:89] found id: ""
	I0920 18:19:46.878483   75577 logs.go:276] 0 containers: []
	W0920 18:19:46.878497   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:19:46.878506   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:19:46.878580   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:19:46.913933   75577 cri.go:89] found id: ""
	I0920 18:19:46.913976   75577 logs.go:276] 0 containers: []
	W0920 18:19:46.913986   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:19:46.913993   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:19:46.914066   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:19:46.948577   75577 cri.go:89] found id: ""
	I0920 18:19:46.948609   75577 logs.go:276] 0 containers: []
	W0920 18:19:46.948618   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:19:46.948625   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:19:46.948689   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:19:46.982742   75577 cri.go:89] found id: ""
	I0920 18:19:46.982770   75577 logs.go:276] 0 containers: []
	W0920 18:19:46.982778   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:19:46.982785   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:19:46.982836   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:19:47.017075   75577 cri.go:89] found id: ""
	I0920 18:19:47.017107   75577 logs.go:276] 0 containers: []
	W0920 18:19:47.017120   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:19:47.017128   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:19:47.017190   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:19:47.055494   75577 cri.go:89] found id: ""
	I0920 18:19:47.055520   75577 logs.go:276] 0 containers: []
	W0920 18:19:47.055528   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:19:47.055534   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:19:47.055586   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:19:47.091966   75577 cri.go:89] found id: ""
	I0920 18:19:47.091998   75577 logs.go:276] 0 containers: []
	W0920 18:19:47.092006   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:19:47.092019   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:19:47.092035   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:19:47.142916   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:19:47.142955   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:19:47.158471   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:19:47.158502   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:19:47.241530   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:19:47.241553   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:19:47.241567   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:19:47.316958   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:19:47.316992   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:19:45.192724   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:47.692046   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:49.524915   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:52.026403   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:48.302289   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:50.303276   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:49.854403   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:49.867317   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:19:49.867431   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:19:49.903627   75577 cri.go:89] found id: ""
	I0920 18:19:49.903661   75577 logs.go:276] 0 containers: []
	W0920 18:19:49.903674   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:19:49.903682   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:19:49.903745   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:19:49.936857   75577 cri.go:89] found id: ""
	I0920 18:19:49.936892   75577 logs.go:276] 0 containers: []
	W0920 18:19:49.936902   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:19:49.936909   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:19:49.936975   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:19:49.974403   75577 cri.go:89] found id: ""
	I0920 18:19:49.974433   75577 logs.go:276] 0 containers: []
	W0920 18:19:49.974441   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:19:49.974447   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:19:49.974505   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:19:50.011810   75577 cri.go:89] found id: ""
	I0920 18:19:50.011839   75577 logs.go:276] 0 containers: []
	W0920 18:19:50.011850   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:19:50.011857   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:19:50.011921   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:19:50.050491   75577 cri.go:89] found id: ""
	I0920 18:19:50.050527   75577 logs.go:276] 0 containers: []
	W0920 18:19:50.050538   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:19:50.050546   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:19:50.050610   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:19:50.088270   75577 cri.go:89] found id: ""
	I0920 18:19:50.088299   75577 logs.go:276] 0 containers: []
	W0920 18:19:50.088308   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:19:50.088314   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:19:50.088375   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:19:50.126355   75577 cri.go:89] found id: ""
	I0920 18:19:50.126382   75577 logs.go:276] 0 containers: []
	W0920 18:19:50.126392   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:19:50.126399   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:19:50.126460   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:19:50.160770   75577 cri.go:89] found id: ""
	I0920 18:19:50.160798   75577 logs.go:276] 0 containers: []
	W0920 18:19:50.160808   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:19:50.160819   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:19:50.160834   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:19:50.212866   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:19:50.212914   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:19:50.227269   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:19:50.227295   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:19:50.301363   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:19:50.301392   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:19:50.301406   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:19:50.376293   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:19:50.376330   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:19:50.192002   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:52.192488   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:54.524436   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:56.526032   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:52.801396   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:54.802393   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:57.301066   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:52.916567   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:52.930445   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:19:52.930525   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:19:52.965360   75577 cri.go:89] found id: ""
	I0920 18:19:52.965392   75577 logs.go:276] 0 containers: []
	W0920 18:19:52.965403   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:19:52.965411   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:19:52.965468   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:19:53.000439   75577 cri.go:89] found id: ""
	I0920 18:19:53.000480   75577 logs.go:276] 0 containers: []
	W0920 18:19:53.000495   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:19:53.000503   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:19:53.000577   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:19:53.034598   75577 cri.go:89] found id: ""
	I0920 18:19:53.034630   75577 logs.go:276] 0 containers: []
	W0920 18:19:53.034640   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:19:53.034647   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:19:53.034744   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:19:53.068636   75577 cri.go:89] found id: ""
	I0920 18:19:53.068664   75577 logs.go:276] 0 containers: []
	W0920 18:19:53.068673   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:19:53.068678   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:19:53.068750   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:19:53.104381   75577 cri.go:89] found id: ""
	I0920 18:19:53.104408   75577 logs.go:276] 0 containers: []
	W0920 18:19:53.104416   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:19:53.104421   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:19:53.104474   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:19:53.137865   75577 cri.go:89] found id: ""
	I0920 18:19:53.137897   75577 logs.go:276] 0 containers: []
	W0920 18:19:53.137909   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:19:53.137922   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:19:53.137983   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:19:53.171825   75577 cri.go:89] found id: ""
	I0920 18:19:53.171861   75577 logs.go:276] 0 containers: []
	W0920 18:19:53.171874   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:19:53.171883   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:19:53.171952   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:19:53.210742   75577 cri.go:89] found id: ""
	I0920 18:19:53.210774   75577 logs.go:276] 0 containers: []
	W0920 18:19:53.210784   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:19:53.210796   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:19:53.210811   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:19:53.285597   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:19:53.285637   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:19:53.329821   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:19:53.329871   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:19:53.381362   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:19:53.381415   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:19:53.396044   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:19:53.396074   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:19:53.471582   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:19:55.972369   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:55.987205   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:19:55.987281   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:19:56.031322   75577 cri.go:89] found id: ""
	I0920 18:19:56.031355   75577 logs.go:276] 0 containers: []
	W0920 18:19:56.031363   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:19:56.031368   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:19:56.031420   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:19:56.066442   75577 cri.go:89] found id: ""
	I0920 18:19:56.066487   75577 logs.go:276] 0 containers: []
	W0920 18:19:56.066576   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:19:56.066617   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:19:56.066695   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:19:56.101818   75577 cri.go:89] found id: ""
	I0920 18:19:56.101864   75577 logs.go:276] 0 containers: []
	W0920 18:19:56.101876   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:19:56.101883   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:19:56.101947   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:19:56.139513   75577 cri.go:89] found id: ""
	I0920 18:19:56.139545   75577 logs.go:276] 0 containers: []
	W0920 18:19:56.139557   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:19:56.139565   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:19:56.139618   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:19:56.173638   75577 cri.go:89] found id: ""
	I0920 18:19:56.173669   75577 logs.go:276] 0 containers: []
	W0920 18:19:56.173680   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:19:56.173688   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:19:56.173752   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:19:56.210657   75577 cri.go:89] found id: ""
	I0920 18:19:56.210689   75577 logs.go:276] 0 containers: []
	W0920 18:19:56.210700   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:19:56.210709   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:19:56.210768   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:19:56.246035   75577 cri.go:89] found id: ""
	I0920 18:19:56.246063   75577 logs.go:276] 0 containers: []
	W0920 18:19:56.246071   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:19:56.246077   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:19:56.246123   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:19:56.280766   75577 cri.go:89] found id: ""
	I0920 18:19:56.280796   75577 logs.go:276] 0 containers: []
	W0920 18:19:56.280807   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:19:56.280818   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:19:56.280834   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:19:56.320511   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:19:56.320540   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:19:56.373746   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:19:56.373785   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:19:56.389294   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:19:56.389322   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:19:56.460079   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:19:56.460100   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:19:56.460112   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:19:54.692781   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:57.192706   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:59.025015   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:01.025196   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:59.302190   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:01.801923   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:59.044541   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:59.058395   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:19:59.058464   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:19:59.097442   75577 cri.go:89] found id: ""
	I0920 18:19:59.097482   75577 logs.go:276] 0 containers: []
	W0920 18:19:59.097495   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:19:59.097512   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:19:59.097593   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:19:59.133091   75577 cri.go:89] found id: ""
	I0920 18:19:59.133116   75577 logs.go:276] 0 containers: []
	W0920 18:19:59.133128   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:19:59.133135   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:19:59.133264   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:19:59.166902   75577 cri.go:89] found id: ""
	I0920 18:19:59.166927   75577 logs.go:276] 0 containers: []
	W0920 18:19:59.166938   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:19:59.166945   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:19:59.167008   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:19:59.202551   75577 cri.go:89] found id: ""
	I0920 18:19:59.202573   75577 logs.go:276] 0 containers: []
	W0920 18:19:59.202581   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:19:59.202586   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:19:59.202633   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:19:59.235197   75577 cri.go:89] found id: ""
	I0920 18:19:59.235222   75577 logs.go:276] 0 containers: []
	W0920 18:19:59.235230   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:19:59.235236   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:19:59.235286   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:19:59.269148   75577 cri.go:89] found id: ""
	I0920 18:19:59.269176   75577 logs.go:276] 0 containers: []
	W0920 18:19:59.269187   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:19:59.269196   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:19:59.269262   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:19:59.307670   75577 cri.go:89] found id: ""
	I0920 18:19:59.307692   75577 logs.go:276] 0 containers: []
	W0920 18:19:59.307699   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:19:59.307705   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:19:59.307753   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:19:59.340554   75577 cri.go:89] found id: ""
	I0920 18:19:59.340589   75577 logs.go:276] 0 containers: []
	W0920 18:19:59.340599   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:19:59.340610   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:19:59.340623   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:19:59.382339   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:19:59.382470   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:19:59.432176   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:19:59.432211   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:19:59.445889   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:19:59.445922   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:19:59.516564   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:19:59.516590   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:19:59.516609   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:02.101918   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:02.115064   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:02.115128   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:02.148718   75577 cri.go:89] found id: ""
	I0920 18:20:02.148749   75577 logs.go:276] 0 containers: []
	W0920 18:20:02.148757   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:02.148764   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:02.148822   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:02.182673   75577 cri.go:89] found id: ""
	I0920 18:20:02.182711   75577 logs.go:276] 0 containers: []
	W0920 18:20:02.182722   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:02.182729   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:02.182797   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:02.218734   75577 cri.go:89] found id: ""
	I0920 18:20:02.218771   75577 logs.go:276] 0 containers: []
	W0920 18:20:02.218782   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:02.218791   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:02.218856   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:02.258956   75577 cri.go:89] found id: ""
	I0920 18:20:02.258981   75577 logs.go:276] 0 containers: []
	W0920 18:20:02.258989   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:02.258995   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:02.259066   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:02.294879   75577 cri.go:89] found id: ""
	I0920 18:20:02.294910   75577 logs.go:276] 0 containers: []
	W0920 18:20:02.294919   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:02.294925   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:02.294971   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:02.331010   75577 cri.go:89] found id: ""
	I0920 18:20:02.331039   75577 logs.go:276] 0 containers: []
	W0920 18:20:02.331049   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:02.331057   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:02.331122   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:02.365023   75577 cri.go:89] found id: ""
	I0920 18:20:02.365056   75577 logs.go:276] 0 containers: []
	W0920 18:20:02.365066   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:02.365072   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:02.365122   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:02.400501   75577 cri.go:89] found id: ""
	I0920 18:20:02.400528   75577 logs.go:276] 0 containers: []
	W0920 18:20:02.400537   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:02.400545   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:02.400556   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:02.450597   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:02.450636   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:02.467637   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:02.467671   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:02.540668   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:02.540690   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:02.540706   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:02.629706   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:02.629752   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:19:59.192799   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:01.691537   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:03.692613   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:03.524517   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:05.525418   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:04.300845   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:06.301250   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:05.168119   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:05.182954   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:05.183043   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:05.221938   75577 cri.go:89] found id: ""
	I0920 18:20:05.221971   75577 logs.go:276] 0 containers: []
	W0920 18:20:05.221980   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:05.221990   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:05.222051   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:05.258454   75577 cri.go:89] found id: ""
	I0920 18:20:05.258479   75577 logs.go:276] 0 containers: []
	W0920 18:20:05.258487   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:05.258492   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:05.258540   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:05.293091   75577 cri.go:89] found id: ""
	I0920 18:20:05.293125   75577 logs.go:276] 0 containers: []
	W0920 18:20:05.293138   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:05.293146   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:05.293196   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:05.332992   75577 cri.go:89] found id: ""
	I0920 18:20:05.333025   75577 logs.go:276] 0 containers: []
	W0920 18:20:05.333034   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:05.333040   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:05.333088   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:05.368743   75577 cri.go:89] found id: ""
	I0920 18:20:05.368778   75577 logs.go:276] 0 containers: []
	W0920 18:20:05.368790   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:05.368798   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:05.368859   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:05.404913   75577 cri.go:89] found id: ""
	I0920 18:20:05.404941   75577 logs.go:276] 0 containers: []
	W0920 18:20:05.404948   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:05.404954   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:05.405003   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:05.442111   75577 cri.go:89] found id: ""
	I0920 18:20:05.442143   75577 logs.go:276] 0 containers: []
	W0920 18:20:05.442154   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:05.442163   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:05.442228   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:05.478808   75577 cri.go:89] found id: ""
	I0920 18:20:05.478842   75577 logs.go:276] 0 containers: []
	W0920 18:20:05.478853   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:05.478865   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:05.478879   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:05.531653   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:05.531691   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:05.545181   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:05.545210   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:05.615009   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:05.615041   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:05.615059   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:05.690842   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:05.690871   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:06.193177   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:08.691596   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:08.034009   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:10.524419   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:08.301457   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:10.801788   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:08.230851   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:08.244539   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:08.244609   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:08.281123   75577 cri.go:89] found id: ""
	I0920 18:20:08.281155   75577 logs.go:276] 0 containers: []
	W0920 18:20:08.281167   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:08.281174   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:08.281226   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:08.319704   75577 cri.go:89] found id: ""
	I0920 18:20:08.319740   75577 logs.go:276] 0 containers: []
	W0920 18:20:08.319754   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:08.319763   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:08.319828   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:08.354589   75577 cri.go:89] found id: ""
	I0920 18:20:08.354619   75577 logs.go:276] 0 containers: []
	W0920 18:20:08.354631   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:08.354638   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:08.354703   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:08.391580   75577 cri.go:89] found id: ""
	I0920 18:20:08.391603   75577 logs.go:276] 0 containers: []
	W0920 18:20:08.391612   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:08.391617   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:08.391666   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:08.425596   75577 cri.go:89] found id: ""
	I0920 18:20:08.425622   75577 logs.go:276] 0 containers: []
	W0920 18:20:08.425630   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:08.425638   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:08.425704   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:08.458720   75577 cri.go:89] found id: ""
	I0920 18:20:08.458747   75577 logs.go:276] 0 containers: []
	W0920 18:20:08.458758   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:08.458764   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:08.458812   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:08.496104   75577 cri.go:89] found id: ""
	I0920 18:20:08.496137   75577 logs.go:276] 0 containers: []
	W0920 18:20:08.496148   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:08.496155   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:08.496210   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:08.530961   75577 cri.go:89] found id: ""
	I0920 18:20:08.530989   75577 logs.go:276] 0 containers: []
	W0920 18:20:08.531000   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:08.531010   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:08.531023   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:08.568512   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:08.568541   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:08.619716   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:08.619754   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:08.634358   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:08.634390   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:08.721465   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:08.721488   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:08.721501   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:11.303942   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:11.316686   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:11.316759   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:11.353137   75577 cri.go:89] found id: ""
	I0920 18:20:11.353161   75577 logs.go:276] 0 containers: []
	W0920 18:20:11.353169   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:11.353176   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:11.353229   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:11.388271   75577 cri.go:89] found id: ""
	I0920 18:20:11.388298   75577 logs.go:276] 0 containers: []
	W0920 18:20:11.388315   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:11.388322   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:11.388388   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:11.422673   75577 cri.go:89] found id: ""
	I0920 18:20:11.422700   75577 logs.go:276] 0 containers: []
	W0920 18:20:11.422708   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:11.422714   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:11.422768   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:11.458869   75577 cri.go:89] found id: ""
	I0920 18:20:11.458900   75577 logs.go:276] 0 containers: []
	W0920 18:20:11.458910   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:11.458917   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:11.459068   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:11.494128   75577 cri.go:89] found id: ""
	I0920 18:20:11.494161   75577 logs.go:276] 0 containers: []
	W0920 18:20:11.494172   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:11.494180   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:11.494246   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:11.529111   75577 cri.go:89] found id: ""
	I0920 18:20:11.529135   75577 logs.go:276] 0 containers: []
	W0920 18:20:11.529150   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:11.529157   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:11.529223   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:11.562287   75577 cri.go:89] found id: ""
	I0920 18:20:11.562314   75577 logs.go:276] 0 containers: []
	W0920 18:20:11.562323   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:11.562329   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:11.562381   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:11.600066   75577 cri.go:89] found id: ""
	I0920 18:20:11.600106   75577 logs.go:276] 0 containers: []
	W0920 18:20:11.600117   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:11.600128   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:11.600143   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:11.681628   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:11.681665   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:11.722173   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:11.722205   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:11.773132   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:11.773171   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:11.787183   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:11.787215   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:11.856304   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:10.695174   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:13.191743   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:13.025069   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:15.525020   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:12.803677   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:15.301431   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:14.356881   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:14.371658   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:14.371729   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:14.406983   75577 cri.go:89] found id: ""
	I0920 18:20:14.407009   75577 logs.go:276] 0 containers: []
	W0920 18:20:14.407017   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:14.407022   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:14.407075   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:14.481048   75577 cri.go:89] found id: ""
	I0920 18:20:14.481075   75577 logs.go:276] 0 containers: []
	W0920 18:20:14.481086   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:14.481094   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:14.481156   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:14.513683   75577 cri.go:89] found id: ""
	I0920 18:20:14.513711   75577 logs.go:276] 0 containers: []
	W0920 18:20:14.513719   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:14.513725   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:14.513797   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:14.553330   75577 cri.go:89] found id: ""
	I0920 18:20:14.553363   75577 logs.go:276] 0 containers: []
	W0920 18:20:14.553375   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:14.553381   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:14.553446   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:14.590803   75577 cri.go:89] found id: ""
	I0920 18:20:14.590837   75577 logs.go:276] 0 containers: []
	W0920 18:20:14.590848   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:14.590856   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:14.590927   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:14.625100   75577 cri.go:89] found id: ""
	I0920 18:20:14.625130   75577 logs.go:276] 0 containers: []
	W0920 18:20:14.625141   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:14.625151   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:14.625219   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:14.659313   75577 cri.go:89] found id: ""
	I0920 18:20:14.659342   75577 logs.go:276] 0 containers: []
	W0920 18:20:14.659351   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:14.659357   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:14.659418   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:14.694901   75577 cri.go:89] found id: ""
	I0920 18:20:14.694931   75577 logs.go:276] 0 containers: []
	W0920 18:20:14.694939   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:14.694951   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:14.694966   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:14.708406   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:14.708437   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:14.785174   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:14.785200   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:14.785214   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:14.873622   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:14.873666   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:14.919130   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:14.919166   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:17.468595   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:17.483397   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:17.483473   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:17.522415   75577 cri.go:89] found id: ""
	I0920 18:20:17.522445   75577 logs.go:276] 0 containers: []
	W0920 18:20:17.522455   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:17.522463   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:17.522523   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:17.558957   75577 cri.go:89] found id: ""
	I0920 18:20:17.558991   75577 logs.go:276] 0 containers: []
	W0920 18:20:17.559002   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:17.559010   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:17.559174   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:17.595183   75577 cri.go:89] found id: ""
	I0920 18:20:17.595217   75577 logs.go:276] 0 containers: []
	W0920 18:20:17.595229   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:17.595237   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:17.595302   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:17.631727   75577 cri.go:89] found id: ""
	I0920 18:20:17.631757   75577 logs.go:276] 0 containers: []
	W0920 18:20:17.631768   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:17.631775   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:17.631894   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:17.666374   75577 cri.go:89] found id: ""
	I0920 18:20:17.666409   75577 logs.go:276] 0 containers: []
	W0920 18:20:17.666420   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:17.666427   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:17.666488   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:17.702256   75577 cri.go:89] found id: ""
	I0920 18:20:17.702281   75577 logs.go:276] 0 containers: []
	W0920 18:20:17.702291   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:17.702299   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:17.702370   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:15.191971   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:17.192990   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:17.525552   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:20.025229   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:17.802045   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:20.302538   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:17.742133   75577 cri.go:89] found id: ""
	I0920 18:20:17.742161   75577 logs.go:276] 0 containers: []
	W0920 18:20:17.742172   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:17.742179   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:17.742249   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:17.781269   75577 cri.go:89] found id: ""
	I0920 18:20:17.781300   75577 logs.go:276] 0 containers: []
	W0920 18:20:17.781311   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:17.781321   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:17.781336   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:17.835886   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:17.835923   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:17.851365   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:17.851396   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:17.932682   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:17.932711   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:17.932726   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:18.013680   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:18.013731   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:20.555074   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:20.568007   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:20.568071   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:20.602250   75577 cri.go:89] found id: ""
	I0920 18:20:20.602278   75577 logs.go:276] 0 containers: []
	W0920 18:20:20.602287   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:20.602293   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:20.602353   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:20.636711   75577 cri.go:89] found id: ""
	I0920 18:20:20.636739   75577 logs.go:276] 0 containers: []
	W0920 18:20:20.636748   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:20.636753   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:20.636807   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:20.669529   75577 cri.go:89] found id: ""
	I0920 18:20:20.669570   75577 logs.go:276] 0 containers: []
	W0920 18:20:20.669583   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:20.669593   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:20.669651   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:20.710330   75577 cri.go:89] found id: ""
	I0920 18:20:20.710364   75577 logs.go:276] 0 containers: []
	W0920 18:20:20.710372   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:20.710378   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:20.710427   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:20.747664   75577 cri.go:89] found id: ""
	I0920 18:20:20.747690   75577 logs.go:276] 0 containers: []
	W0920 18:20:20.747699   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:20.747705   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:20.747760   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:20.785680   75577 cri.go:89] found id: ""
	I0920 18:20:20.785717   75577 logs.go:276] 0 containers: []
	W0920 18:20:20.785726   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:20.785732   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:20.785794   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:20.822570   75577 cri.go:89] found id: ""
	I0920 18:20:20.822599   75577 logs.go:276] 0 containers: []
	W0920 18:20:20.822608   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:20.822613   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:20.822659   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:20.856739   75577 cri.go:89] found id: ""
	I0920 18:20:20.856772   75577 logs.go:276] 0 containers: []
	W0920 18:20:20.856786   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:20.856806   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:20.856822   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:20.896606   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:20.896643   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:20.948275   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:20.948313   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:20.962576   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:20.962606   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:21.033511   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:21.033534   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:21.033547   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:19.691723   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:21.694099   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:22.524982   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:24.527065   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:27.026374   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:22.803300   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:25.302416   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:23.615731   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:23.628619   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:23.628698   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:23.664344   75577 cri.go:89] found id: ""
	I0920 18:20:23.664367   75577 logs.go:276] 0 containers: []
	W0920 18:20:23.664375   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:23.664381   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:23.664431   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:23.698791   75577 cri.go:89] found id: ""
	I0920 18:20:23.698823   75577 logs.go:276] 0 containers: []
	W0920 18:20:23.698832   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:23.698839   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:23.698889   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:23.732552   75577 cri.go:89] found id: ""
	I0920 18:20:23.732590   75577 logs.go:276] 0 containers: []
	W0920 18:20:23.732602   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:23.732610   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:23.732676   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:23.769463   75577 cri.go:89] found id: ""
	I0920 18:20:23.769490   75577 logs.go:276] 0 containers: []
	W0920 18:20:23.769501   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:23.769508   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:23.769567   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:23.807329   75577 cri.go:89] found id: ""
	I0920 18:20:23.807361   75577 logs.go:276] 0 containers: []
	W0920 18:20:23.807374   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:23.807382   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:23.807442   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:23.847889   75577 cri.go:89] found id: ""
	I0920 18:20:23.847913   75577 logs.go:276] 0 containers: []
	W0920 18:20:23.847920   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:23.847927   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:23.847985   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:23.883287   75577 cri.go:89] found id: ""
	I0920 18:20:23.883314   75577 logs.go:276] 0 containers: []
	W0920 18:20:23.883322   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:23.883335   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:23.883389   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:23.920006   75577 cri.go:89] found id: ""
	I0920 18:20:23.920034   75577 logs.go:276] 0 containers: []
	W0920 18:20:23.920045   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:23.920057   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:23.920070   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:23.995572   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:23.995608   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:24.035953   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:24.035983   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:24.085803   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:24.085860   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:24.100226   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:24.100255   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:24.173555   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:26.673726   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:26.687372   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:26.687449   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:26.726544   75577 cri.go:89] found id: ""
	I0920 18:20:26.726574   75577 logs.go:276] 0 containers: []
	W0920 18:20:26.726583   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:26.726590   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:26.726651   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:26.761542   75577 cri.go:89] found id: ""
	I0920 18:20:26.761571   75577 logs.go:276] 0 containers: []
	W0920 18:20:26.761580   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:26.761587   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:26.761639   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:26.803869   75577 cri.go:89] found id: ""
	I0920 18:20:26.803896   75577 logs.go:276] 0 containers: []
	W0920 18:20:26.803904   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:26.803910   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:26.803970   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:26.854835   75577 cri.go:89] found id: ""
	I0920 18:20:26.854876   75577 logs.go:276] 0 containers: []
	W0920 18:20:26.854888   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:26.854896   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:26.854958   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:26.896113   75577 cri.go:89] found id: ""
	I0920 18:20:26.896151   75577 logs.go:276] 0 containers: []
	W0920 18:20:26.896162   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:26.896170   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:26.896255   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:26.942828   75577 cri.go:89] found id: ""
	I0920 18:20:26.942853   75577 logs.go:276] 0 containers: []
	W0920 18:20:26.942863   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:26.942930   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:26.943007   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:26.977246   75577 cri.go:89] found id: ""
	I0920 18:20:26.977278   75577 logs.go:276] 0 containers: []
	W0920 18:20:26.977289   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:26.977296   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:26.977367   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:27.012408   75577 cri.go:89] found id: ""
	I0920 18:20:27.012440   75577 logs.go:276] 0 containers: []
	W0920 18:20:27.012451   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:27.012462   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:27.012477   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:27.063970   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:27.064017   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:27.078082   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:27.078119   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:27.148050   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:27.148079   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:27.148094   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:27.230836   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:27.230880   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:24.192842   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:26.196350   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:28.692682   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:29.525508   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:32.025275   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:27.802268   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:30.301519   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:29.771845   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:29.785479   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:29.785553   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:29.823101   75577 cri.go:89] found id: ""
	I0920 18:20:29.823132   75577 logs.go:276] 0 containers: []
	W0920 18:20:29.823143   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:29.823150   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:29.823228   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:29.858679   75577 cri.go:89] found id: ""
	I0920 18:20:29.858713   75577 logs.go:276] 0 containers: []
	W0920 18:20:29.858724   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:29.858732   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:29.858796   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:29.901044   75577 cri.go:89] found id: ""
	I0920 18:20:29.901073   75577 logs.go:276] 0 containers: []
	W0920 18:20:29.901083   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:29.901091   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:29.901160   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:29.937743   75577 cri.go:89] found id: ""
	I0920 18:20:29.937775   75577 logs.go:276] 0 containers: []
	W0920 18:20:29.937792   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:29.937800   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:29.937884   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:29.972813   75577 cri.go:89] found id: ""
	I0920 18:20:29.972838   75577 logs.go:276] 0 containers: []
	W0920 18:20:29.972846   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:29.972852   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:29.972901   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:30.008024   75577 cri.go:89] found id: ""
	I0920 18:20:30.008053   75577 logs.go:276] 0 containers: []
	W0920 18:20:30.008062   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:30.008068   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:30.008117   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:30.048544   75577 cri.go:89] found id: ""
	I0920 18:20:30.048577   75577 logs.go:276] 0 containers: []
	W0920 18:20:30.048585   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:30.048591   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:30.048643   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:30.084501   75577 cri.go:89] found id: ""
	I0920 18:20:30.084525   75577 logs.go:276] 0 containers: []
	W0920 18:20:30.084534   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:30.084545   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:30.084559   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:30.136234   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:30.136279   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:30.149557   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:30.149591   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:30.229765   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:30.229792   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:30.229806   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:30.307786   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:30.307825   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:31.191475   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:33.192864   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:34.025515   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:36.027276   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:32.801952   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:34.802033   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:37.301059   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:32.845220   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:32.859734   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:32.859810   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:32.896390   75577 cri.go:89] found id: ""
	I0920 18:20:32.896434   75577 logs.go:276] 0 containers: []
	W0920 18:20:32.896446   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:32.896464   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:32.896538   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:32.934548   75577 cri.go:89] found id: ""
	I0920 18:20:32.934572   75577 logs.go:276] 0 containers: []
	W0920 18:20:32.934580   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:32.934587   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:32.934640   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:32.982987   75577 cri.go:89] found id: ""
	I0920 18:20:32.983013   75577 logs.go:276] 0 containers: []
	W0920 18:20:32.983020   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:32.983026   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:32.983079   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:33.019059   75577 cri.go:89] found id: ""
	I0920 18:20:33.019085   75577 logs.go:276] 0 containers: []
	W0920 18:20:33.019093   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:33.019099   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:33.019148   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:33.057704   75577 cri.go:89] found id: ""
	I0920 18:20:33.057738   75577 logs.go:276] 0 containers: []
	W0920 18:20:33.057750   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:33.057759   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:33.057821   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:33.093702   75577 cri.go:89] found id: ""
	I0920 18:20:33.093732   75577 logs.go:276] 0 containers: []
	W0920 18:20:33.093743   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:33.093751   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:33.093809   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:33.128476   75577 cri.go:89] found id: ""
	I0920 18:20:33.128504   75577 logs.go:276] 0 containers: []
	W0920 18:20:33.128516   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:33.128523   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:33.128591   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:33.165513   75577 cri.go:89] found id: ""
	I0920 18:20:33.165542   75577 logs.go:276] 0 containers: []
	W0920 18:20:33.165550   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:33.165559   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:33.165569   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:33.219613   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:33.219650   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:33.232291   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:33.232317   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:33.304172   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:33.304197   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:33.304212   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:33.380057   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:33.380094   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:35.920842   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:35.935500   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:35.935593   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:35.974445   75577 cri.go:89] found id: ""
	I0920 18:20:35.974471   75577 logs.go:276] 0 containers: []
	W0920 18:20:35.974479   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:35.974485   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:35.974548   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:36.009487   75577 cri.go:89] found id: ""
	I0920 18:20:36.009518   75577 logs.go:276] 0 containers: []
	W0920 18:20:36.009530   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:36.009538   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:36.009706   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:36.049428   75577 cri.go:89] found id: ""
	I0920 18:20:36.049451   75577 logs.go:276] 0 containers: []
	W0920 18:20:36.049463   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:36.049469   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:36.049518   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:36.083973   75577 cri.go:89] found id: ""
	I0920 18:20:36.084011   75577 logs.go:276] 0 containers: []
	W0920 18:20:36.084022   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:36.084030   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:36.084119   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:36.117952   75577 cri.go:89] found id: ""
	I0920 18:20:36.117985   75577 logs.go:276] 0 containers: []
	W0920 18:20:36.117993   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:36.117998   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:36.118052   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:36.151222   75577 cri.go:89] found id: ""
	I0920 18:20:36.151256   75577 logs.go:276] 0 containers: []
	W0920 18:20:36.151265   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:36.151271   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:36.151319   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:36.184665   75577 cri.go:89] found id: ""
	I0920 18:20:36.184697   75577 logs.go:276] 0 containers: []
	W0920 18:20:36.184708   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:36.184716   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:36.184771   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:36.222974   75577 cri.go:89] found id: ""
	I0920 18:20:36.223003   75577 logs.go:276] 0 containers: []
	W0920 18:20:36.223012   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:36.223021   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:36.223033   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:36.300403   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:36.300424   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:36.300437   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:36.381220   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:36.381260   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:36.419010   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:36.419042   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:36.471758   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:36.471799   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:35.192949   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:37.693384   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:38.526219   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:41.024309   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:39.301511   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:41.801558   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:38.985359   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:38.998817   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:38.998898   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:39.036705   75577 cri.go:89] found id: ""
	I0920 18:20:39.036737   75577 logs.go:276] 0 containers: []
	W0920 18:20:39.036747   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:39.036755   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:39.036815   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:39.074254   75577 cri.go:89] found id: ""
	I0920 18:20:39.074285   75577 logs.go:276] 0 containers: []
	W0920 18:20:39.074294   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:39.074300   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:39.074367   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:39.108385   75577 cri.go:89] found id: ""
	I0920 18:20:39.108420   75577 logs.go:276] 0 containers: []
	W0920 18:20:39.108493   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:39.108506   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:39.108561   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:39.142283   75577 cri.go:89] found id: ""
	I0920 18:20:39.142313   75577 logs.go:276] 0 containers: []
	W0920 18:20:39.142325   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:39.142332   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:39.142396   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:39.176881   75577 cri.go:89] found id: ""
	I0920 18:20:39.176918   75577 logs.go:276] 0 containers: []
	W0920 18:20:39.176933   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:39.176941   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:39.177002   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:39.217734   75577 cri.go:89] found id: ""
	I0920 18:20:39.217759   75577 logs.go:276] 0 containers: []
	W0920 18:20:39.217767   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:39.217773   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:39.217852   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:39.250823   75577 cri.go:89] found id: ""
	I0920 18:20:39.250852   75577 logs.go:276] 0 containers: []
	W0920 18:20:39.250860   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:39.250868   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:39.250936   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:39.287494   75577 cri.go:89] found id: ""
	I0920 18:20:39.287519   75577 logs.go:276] 0 containers: []
	W0920 18:20:39.287528   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:39.287540   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:39.287552   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:39.343091   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:39.343126   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:39.359028   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:39.359063   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:39.425293   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:39.425321   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:39.425336   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:39.503303   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:39.503350   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:42.044784   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:42.057764   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:42.057876   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:42.091798   75577 cri.go:89] found id: ""
	I0920 18:20:42.091825   75577 logs.go:276] 0 containers: []
	W0920 18:20:42.091833   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:42.091839   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:42.091888   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:42.125325   75577 cri.go:89] found id: ""
	I0920 18:20:42.125352   75577 logs.go:276] 0 containers: []
	W0920 18:20:42.125362   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:42.125370   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:42.125423   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:42.158641   75577 cri.go:89] found id: ""
	I0920 18:20:42.158680   75577 logs.go:276] 0 containers: []
	W0920 18:20:42.158692   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:42.158701   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:42.158771   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:42.194065   75577 cri.go:89] found id: ""
	I0920 18:20:42.194090   75577 logs.go:276] 0 containers: []
	W0920 18:20:42.194101   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:42.194109   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:42.194174   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:42.227237   75577 cri.go:89] found id: ""
	I0920 18:20:42.227262   75577 logs.go:276] 0 containers: []
	W0920 18:20:42.227270   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:42.227275   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:42.227324   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:42.260205   75577 cri.go:89] found id: ""
	I0920 18:20:42.260240   75577 logs.go:276] 0 containers: []
	W0920 18:20:42.260251   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:42.260258   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:42.260325   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:42.300124   75577 cri.go:89] found id: ""
	I0920 18:20:42.300159   75577 logs.go:276] 0 containers: []
	W0920 18:20:42.300169   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:42.300175   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:42.300273   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:42.335638   75577 cri.go:89] found id: ""
	I0920 18:20:42.335674   75577 logs.go:276] 0 containers: []
	W0920 18:20:42.335685   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:42.335695   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:42.335710   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:42.386176   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:42.386214   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:42.400113   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:42.400147   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:42.479877   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:42.479900   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:42.479912   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:42.554654   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:42.554701   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:40.191251   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:42.192910   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:43.026185   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:45.525362   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:43.801927   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:45.802309   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:45.093962   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:45.107747   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:45.107811   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:45.147091   75577 cri.go:89] found id: ""
	I0920 18:20:45.147120   75577 logs.go:276] 0 containers: []
	W0920 18:20:45.147132   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:45.147140   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:45.147205   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:45.185332   75577 cri.go:89] found id: ""
	I0920 18:20:45.185396   75577 logs.go:276] 0 containers: []
	W0920 18:20:45.185431   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:45.185446   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:45.185523   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:45.224496   75577 cri.go:89] found id: ""
	I0920 18:20:45.224525   75577 logs.go:276] 0 containers: []
	W0920 18:20:45.224535   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:45.224542   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:45.224612   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:45.260686   75577 cri.go:89] found id: ""
	I0920 18:20:45.260718   75577 logs.go:276] 0 containers: []
	W0920 18:20:45.260729   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:45.260737   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:45.260801   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:45.301231   75577 cri.go:89] found id: ""
	I0920 18:20:45.301259   75577 logs.go:276] 0 containers: []
	W0920 18:20:45.301269   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:45.301277   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:45.301343   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:45.346441   75577 cri.go:89] found id: ""
	I0920 18:20:45.346468   75577 logs.go:276] 0 containers: []
	W0920 18:20:45.346476   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:45.346482   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:45.346537   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:45.385042   75577 cri.go:89] found id: ""
	I0920 18:20:45.385071   75577 logs.go:276] 0 containers: []
	W0920 18:20:45.385082   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:45.385090   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:45.385149   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:45.422208   75577 cri.go:89] found id: ""
	I0920 18:20:45.422238   75577 logs.go:276] 0 containers: []
	W0920 18:20:45.422249   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:45.422260   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:45.422275   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:45.472692   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:45.472742   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:45.487981   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:45.488011   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:45.563012   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:45.563035   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:45.563051   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:45.640750   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:45.640786   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:44.691791   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:46.692359   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:48.693165   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:48.024959   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:50.025944   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:48.302699   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:50.801740   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:48.181093   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:48.194433   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:48.194516   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:48.234349   75577 cri.go:89] found id: ""
	I0920 18:20:48.234383   75577 logs.go:276] 0 containers: []
	W0920 18:20:48.234394   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:48.234403   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:48.234467   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:48.269424   75577 cri.go:89] found id: ""
	I0920 18:20:48.269450   75577 logs.go:276] 0 containers: []
	W0920 18:20:48.269457   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:48.269462   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:48.269514   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:48.303968   75577 cri.go:89] found id: ""
	I0920 18:20:48.303990   75577 logs.go:276] 0 containers: []
	W0920 18:20:48.303997   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:48.304002   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:48.304061   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:48.339149   75577 cri.go:89] found id: ""
	I0920 18:20:48.339180   75577 logs.go:276] 0 containers: []
	W0920 18:20:48.339191   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:48.339198   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:48.339259   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:48.374529   75577 cri.go:89] found id: ""
	I0920 18:20:48.374559   75577 logs.go:276] 0 containers: []
	W0920 18:20:48.374571   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:48.374578   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:48.374644   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:48.409170   75577 cri.go:89] found id: ""
	I0920 18:20:48.409203   75577 logs.go:276] 0 containers: []
	W0920 18:20:48.409211   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:48.409217   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:48.409292   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:48.443960   75577 cri.go:89] found id: ""
	I0920 18:20:48.443990   75577 logs.go:276] 0 containers: []
	W0920 18:20:48.444009   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:48.444017   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:48.444074   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:48.480934   75577 cri.go:89] found id: ""
	I0920 18:20:48.480965   75577 logs.go:276] 0 containers: []
	W0920 18:20:48.480978   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:48.480990   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:48.481006   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:48.533261   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:48.533295   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:48.546460   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:48.546488   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:48.615183   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:48.615206   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:48.615219   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:48.695299   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:48.695337   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:51.240343   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:51.255327   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:51.255411   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:51.294611   75577 cri.go:89] found id: ""
	I0920 18:20:51.294642   75577 logs.go:276] 0 containers: []
	W0920 18:20:51.294650   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:51.294656   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:51.294710   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:51.328569   75577 cri.go:89] found id: ""
	I0920 18:20:51.328598   75577 logs.go:276] 0 containers: []
	W0920 18:20:51.328613   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:51.328621   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:51.328675   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:51.364255   75577 cri.go:89] found id: ""
	I0920 18:20:51.364283   75577 logs.go:276] 0 containers: []
	W0920 18:20:51.364291   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:51.364297   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:51.364347   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:51.406178   75577 cri.go:89] found id: ""
	I0920 18:20:51.406204   75577 logs.go:276] 0 containers: []
	W0920 18:20:51.406215   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:51.406223   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:51.406284   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:51.449491   75577 cri.go:89] found id: ""
	I0920 18:20:51.449519   75577 logs.go:276] 0 containers: []
	W0920 18:20:51.449529   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:51.449536   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:51.449600   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:51.483243   75577 cri.go:89] found id: ""
	I0920 18:20:51.483269   75577 logs.go:276] 0 containers: []
	W0920 18:20:51.483278   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:51.483284   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:51.483334   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:51.517280   75577 cri.go:89] found id: ""
	I0920 18:20:51.517304   75577 logs.go:276] 0 containers: []
	W0920 18:20:51.517311   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:51.517316   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:51.517378   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:51.553517   75577 cri.go:89] found id: ""
	I0920 18:20:51.553545   75577 logs.go:276] 0 containers: []
	W0920 18:20:51.553556   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:51.553565   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:51.553576   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:51.607330   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:51.607369   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:51.620628   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:51.620662   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:51.687586   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:51.687618   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:51.687638   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:51.764882   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:51.764924   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:51.191732   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:53.193078   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:52.524529   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:54.526138   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:57.025365   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:52.802328   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:54.804955   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:57.301987   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:54.307276   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:54.321777   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:54.321860   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:54.355565   75577 cri.go:89] found id: ""
	I0920 18:20:54.355598   75577 logs.go:276] 0 containers: []
	W0920 18:20:54.355609   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:54.355618   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:54.355680   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:54.391184   75577 cri.go:89] found id: ""
	I0920 18:20:54.391216   75577 logs.go:276] 0 containers: []
	W0920 18:20:54.391227   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:54.391236   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:54.391301   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:54.425680   75577 cri.go:89] found id: ""
	I0920 18:20:54.425709   75577 logs.go:276] 0 containers: []
	W0920 18:20:54.425718   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:54.425725   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:54.425775   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:54.461514   75577 cri.go:89] found id: ""
	I0920 18:20:54.461541   75577 logs.go:276] 0 containers: []
	W0920 18:20:54.461552   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:54.461559   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:54.461625   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:54.495290   75577 cri.go:89] found id: ""
	I0920 18:20:54.495319   75577 logs.go:276] 0 containers: []
	W0920 18:20:54.495327   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:54.495333   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:54.495384   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:54.530014   75577 cri.go:89] found id: ""
	I0920 18:20:54.530038   75577 logs.go:276] 0 containers: []
	W0920 18:20:54.530046   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:54.530052   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:54.530103   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:54.562559   75577 cri.go:89] found id: ""
	I0920 18:20:54.562597   75577 logs.go:276] 0 containers: []
	W0920 18:20:54.562611   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:54.562621   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:54.562694   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:54.599894   75577 cri.go:89] found id: ""
	I0920 18:20:54.599920   75577 logs.go:276] 0 containers: []
	W0920 18:20:54.599928   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:54.599946   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:54.599982   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:54.636853   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:54.636880   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:54.687932   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:54.687978   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:54.701642   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:54.701673   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:54.769649   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:54.769678   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:54.769695   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:57.356015   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:57.368860   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:57.368923   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:57.409339   75577 cri.go:89] found id: ""
	I0920 18:20:57.409365   75577 logs.go:276] 0 containers: []
	W0920 18:20:57.409375   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:57.409382   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:57.409444   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:57.443055   75577 cri.go:89] found id: ""
	I0920 18:20:57.443085   75577 logs.go:276] 0 containers: []
	W0920 18:20:57.443095   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:57.443102   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:57.443158   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:57.481816   75577 cri.go:89] found id: ""
	I0920 18:20:57.481859   75577 logs.go:276] 0 containers: []
	W0920 18:20:57.481871   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:57.481879   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:57.481942   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:57.517327   75577 cri.go:89] found id: ""
	I0920 18:20:57.517361   75577 logs.go:276] 0 containers: []
	W0920 18:20:57.517372   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:57.517379   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:57.517442   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:57.555121   75577 cri.go:89] found id: ""
	I0920 18:20:57.555151   75577 logs.go:276] 0 containers: []
	W0920 18:20:57.555159   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:57.555164   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:57.555222   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:57.592632   75577 cri.go:89] found id: ""
	I0920 18:20:57.592666   75577 logs.go:276] 0 containers: []
	W0920 18:20:57.592679   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:57.592685   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:57.592734   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:57.627519   75577 cri.go:89] found id: ""
	I0920 18:20:57.627556   75577 logs.go:276] 0 containers: []
	W0920 18:20:57.627567   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:57.627574   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:57.627636   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:57.663817   75577 cri.go:89] found id: ""
	I0920 18:20:57.663844   75577 logs.go:276] 0 containers: []
	W0920 18:20:57.663853   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:57.663862   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:57.663877   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:57.704896   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:57.704924   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:55.692746   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:58.191973   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:59.524732   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:01.525346   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:59.801537   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:01.802088   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:57.757933   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:57.757972   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:57.772646   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:57.772673   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:57.850634   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:57.850665   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:57.850681   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:00.431504   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:00.445175   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:00.445241   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:00.482050   75577 cri.go:89] found id: ""
	I0920 18:21:00.482076   75577 logs.go:276] 0 containers: []
	W0920 18:21:00.482088   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:00.482095   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:00.482150   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:00.515793   75577 cri.go:89] found id: ""
	I0920 18:21:00.515822   75577 logs.go:276] 0 containers: []
	W0920 18:21:00.515833   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:00.515841   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:00.515903   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:00.550250   75577 cri.go:89] found id: ""
	I0920 18:21:00.550278   75577 logs.go:276] 0 containers: []
	W0920 18:21:00.550288   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:00.550296   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:00.550374   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:00.585977   75577 cri.go:89] found id: ""
	I0920 18:21:00.586011   75577 logs.go:276] 0 containers: []
	W0920 18:21:00.586034   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:00.586051   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:00.586118   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:00.621828   75577 cri.go:89] found id: ""
	I0920 18:21:00.621869   75577 logs.go:276] 0 containers: []
	W0920 18:21:00.621879   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:00.621886   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:00.621937   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:00.655492   75577 cri.go:89] found id: ""
	I0920 18:21:00.655528   75577 logs.go:276] 0 containers: []
	W0920 18:21:00.655540   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:00.655600   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:00.655673   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:00.709445   75577 cri.go:89] found id: ""
	I0920 18:21:00.709476   75577 logs.go:276] 0 containers: []
	W0920 18:21:00.709488   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:00.709496   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:00.709566   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:00.744098   75577 cri.go:89] found id: ""
	I0920 18:21:00.744123   75577 logs.go:276] 0 containers: []
	W0920 18:21:00.744134   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:00.744144   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:00.744164   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:00.792576   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:00.792612   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:00.807766   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:00.807794   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:00.883626   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:00.883649   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:00.883663   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:00.966233   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:00.966269   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:00.692181   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:03.192227   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:04.024582   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:06.029376   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:04.301078   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:06.301727   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:03.505502   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:03.520407   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:03.520534   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:03.557891   75577 cri.go:89] found id: ""
	I0920 18:21:03.557926   75577 logs.go:276] 0 containers: []
	W0920 18:21:03.557936   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:03.557944   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:03.558006   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:03.593869   75577 cri.go:89] found id: ""
	I0920 18:21:03.593898   75577 logs.go:276] 0 containers: []
	W0920 18:21:03.593908   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:03.593916   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:03.593982   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:03.629270   75577 cri.go:89] found id: ""
	I0920 18:21:03.629302   75577 logs.go:276] 0 containers: []
	W0920 18:21:03.629311   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:03.629317   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:03.629366   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:03.664658   75577 cri.go:89] found id: ""
	I0920 18:21:03.664688   75577 logs.go:276] 0 containers: []
	W0920 18:21:03.664699   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:03.664706   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:03.664769   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:03.700846   75577 cri.go:89] found id: ""
	I0920 18:21:03.700868   75577 logs.go:276] 0 containers: []
	W0920 18:21:03.700875   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:03.700882   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:03.700941   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:03.736311   75577 cri.go:89] found id: ""
	I0920 18:21:03.736345   75577 logs.go:276] 0 containers: []
	W0920 18:21:03.736355   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:03.736363   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:03.736421   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:03.770760   75577 cri.go:89] found id: ""
	I0920 18:21:03.770788   75577 logs.go:276] 0 containers: []
	W0920 18:21:03.770800   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:03.770808   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:03.770868   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:03.808724   75577 cri.go:89] found id: ""
	I0920 18:21:03.808749   75577 logs.go:276] 0 containers: []
	W0920 18:21:03.808756   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:03.808764   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:03.808775   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:03.851231   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:03.851265   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:03.899607   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:03.899641   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:03.915051   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:03.915079   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:03.984016   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:03.984038   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:03.984053   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:06.564776   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:06.578524   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:06.578604   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:06.614305   75577 cri.go:89] found id: ""
	I0920 18:21:06.614340   75577 logs.go:276] 0 containers: []
	W0920 18:21:06.614351   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:06.614365   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:06.614427   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:06.648929   75577 cri.go:89] found id: ""
	I0920 18:21:06.648958   75577 logs.go:276] 0 containers: []
	W0920 18:21:06.648968   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:06.648976   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:06.649036   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:06.689975   75577 cri.go:89] found id: ""
	I0920 18:21:06.690016   75577 logs.go:276] 0 containers: []
	W0920 18:21:06.690027   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:06.690034   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:06.690092   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:06.729721   75577 cri.go:89] found id: ""
	I0920 18:21:06.729747   75577 logs.go:276] 0 containers: []
	W0920 18:21:06.729755   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:06.729762   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:06.729808   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:06.766310   75577 cri.go:89] found id: ""
	I0920 18:21:06.766343   75577 logs.go:276] 0 containers: []
	W0920 18:21:06.766354   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:06.766361   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:06.766437   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:06.800801   75577 cri.go:89] found id: ""
	I0920 18:21:06.800829   75577 logs.go:276] 0 containers: []
	W0920 18:21:06.800839   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:06.800847   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:06.800909   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:06.836391   75577 cri.go:89] found id: ""
	I0920 18:21:06.836429   75577 logs.go:276] 0 containers: []
	W0920 18:21:06.836447   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:06.836455   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:06.836521   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:06.873014   75577 cri.go:89] found id: ""
	I0920 18:21:06.873041   75577 logs.go:276] 0 containers: []
	W0920 18:21:06.873049   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:06.873057   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:06.873070   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:06.953084   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:06.953110   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:06.953122   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:07.032398   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:07.032434   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:07.082196   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:07.082224   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:07.159604   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:07.159640   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:05.691350   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:07.692127   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:08.526757   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:10.526990   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:08.801736   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:11.301001   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:09.675022   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:09.687924   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:09.687999   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:09.722963   75577 cri.go:89] found id: ""
	I0920 18:21:09.722994   75577 logs.go:276] 0 containers: []
	W0920 18:21:09.723005   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:09.723017   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:09.723084   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:09.756433   75577 cri.go:89] found id: ""
	I0920 18:21:09.756458   75577 logs.go:276] 0 containers: []
	W0920 18:21:09.756472   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:09.756486   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:09.756549   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:09.792200   75577 cri.go:89] found id: ""
	I0920 18:21:09.792232   75577 logs.go:276] 0 containers: []
	W0920 18:21:09.792248   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:09.792256   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:09.792323   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:09.829058   75577 cri.go:89] found id: ""
	I0920 18:21:09.829081   75577 logs.go:276] 0 containers: []
	W0920 18:21:09.829098   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:09.829104   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:09.829150   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:09.863225   75577 cri.go:89] found id: ""
	I0920 18:21:09.863252   75577 logs.go:276] 0 containers: []
	W0920 18:21:09.863259   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:09.863265   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:09.863312   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:09.897682   75577 cri.go:89] found id: ""
	I0920 18:21:09.897708   75577 logs.go:276] 0 containers: []
	W0920 18:21:09.897725   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:09.897731   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:09.897789   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:09.936799   75577 cri.go:89] found id: ""
	I0920 18:21:09.936826   75577 logs.go:276] 0 containers: []
	W0920 18:21:09.936836   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:09.936843   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:09.936904   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:09.970502   75577 cri.go:89] found id: ""
	I0920 18:21:09.970539   75577 logs.go:276] 0 containers: []
	W0920 18:21:09.970550   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:09.970560   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:09.970574   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:09.983676   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:09.983703   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:10.058844   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:10.058865   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:10.058874   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:10.136998   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:10.137040   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:10.178902   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:10.178933   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:12.729619   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:09.692361   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:12.192257   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:13.025403   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:15.025667   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:13.302087   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:15.802019   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:12.743816   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:12.743876   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:12.779407   75577 cri.go:89] found id: ""
	I0920 18:21:12.779431   75577 logs.go:276] 0 containers: []
	W0920 18:21:12.779439   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:12.779446   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:12.779503   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:12.820700   75577 cri.go:89] found id: ""
	I0920 18:21:12.820730   75577 logs.go:276] 0 containers: []
	W0920 18:21:12.820740   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:12.820748   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:12.820812   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:12.862179   75577 cri.go:89] found id: ""
	I0920 18:21:12.862210   75577 logs.go:276] 0 containers: []
	W0920 18:21:12.862221   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:12.862228   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:12.862284   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:12.908976   75577 cri.go:89] found id: ""
	I0920 18:21:12.908999   75577 logs.go:276] 0 containers: []
	W0920 18:21:12.909007   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:12.909013   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:12.909072   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:12.953653   75577 cri.go:89] found id: ""
	I0920 18:21:12.953688   75577 logs.go:276] 0 containers: []
	W0920 18:21:12.953696   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:12.953702   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:12.953757   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:12.998519   75577 cri.go:89] found id: ""
	I0920 18:21:12.998546   75577 logs.go:276] 0 containers: []
	W0920 18:21:12.998557   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:12.998564   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:12.998640   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:13.035518   75577 cri.go:89] found id: ""
	I0920 18:21:13.035541   75577 logs.go:276] 0 containers: []
	W0920 18:21:13.035549   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:13.035555   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:13.035609   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:13.073551   75577 cri.go:89] found id: ""
	I0920 18:21:13.073591   75577 logs.go:276] 0 containers: []
	W0920 18:21:13.073605   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:13.073617   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:13.073638   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:13.125118   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:13.125155   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:13.139384   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:13.139415   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:13.204980   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:13.205006   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:13.205021   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:13.289400   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:13.289446   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:15.829405   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:15.842291   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:15.842354   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:15.875886   75577 cri.go:89] found id: ""
	I0920 18:21:15.875921   75577 logs.go:276] 0 containers: []
	W0920 18:21:15.875930   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:15.875937   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:15.875990   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:15.911706   75577 cri.go:89] found id: ""
	I0920 18:21:15.911746   75577 logs.go:276] 0 containers: []
	W0920 18:21:15.911760   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:15.911768   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:15.911831   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:15.950117   75577 cri.go:89] found id: ""
	I0920 18:21:15.950150   75577 logs.go:276] 0 containers: []
	W0920 18:21:15.950161   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:15.950170   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:15.950243   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:15.986029   75577 cri.go:89] found id: ""
	I0920 18:21:15.986066   75577 logs.go:276] 0 containers: []
	W0920 18:21:15.986083   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:15.986091   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:15.986159   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:16.020307   75577 cri.go:89] found id: ""
	I0920 18:21:16.020335   75577 logs.go:276] 0 containers: []
	W0920 18:21:16.020346   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:16.020354   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:16.020412   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:16.054744   75577 cri.go:89] found id: ""
	I0920 18:21:16.054769   75577 logs.go:276] 0 containers: []
	W0920 18:21:16.054777   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:16.054782   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:16.054831   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:16.091756   75577 cri.go:89] found id: ""
	I0920 18:21:16.091789   75577 logs.go:276] 0 containers: []
	W0920 18:21:16.091800   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:16.091807   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:16.091868   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:16.127818   75577 cri.go:89] found id: ""
	I0920 18:21:16.127843   75577 logs.go:276] 0 containers: []
	W0920 18:21:16.127851   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:16.127861   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:16.127877   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:16.200114   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:16.200138   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:16.200149   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:16.279473   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:16.279508   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:16.319139   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:16.319171   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:16.370721   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:16.370769   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:14.192782   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:16.192866   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:18.691599   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:17.524026   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:19.524385   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:21.525244   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:18.301696   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:20.301993   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:18.884966   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:18.900270   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:18.900355   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:18.938541   75577 cri.go:89] found id: ""
	I0920 18:21:18.938582   75577 logs.go:276] 0 containers: []
	W0920 18:21:18.938594   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:18.938602   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:18.938673   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:18.972343   75577 cri.go:89] found id: ""
	I0920 18:21:18.972380   75577 logs.go:276] 0 containers: []
	W0920 18:21:18.972391   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:18.972400   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:18.972458   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:19.011996   75577 cri.go:89] found id: ""
	I0920 18:21:19.012037   75577 logs.go:276] 0 containers: []
	W0920 18:21:19.012048   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:19.012054   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:19.012105   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:19.053789   75577 cri.go:89] found id: ""
	I0920 18:21:19.053818   75577 logs.go:276] 0 containers: []
	W0920 18:21:19.053839   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:19.053849   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:19.053907   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:19.094830   75577 cri.go:89] found id: ""
	I0920 18:21:19.094862   75577 logs.go:276] 0 containers: []
	W0920 18:21:19.094872   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:19.094881   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:19.094941   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:19.133894   75577 cri.go:89] found id: ""
	I0920 18:21:19.133923   75577 logs.go:276] 0 containers: []
	W0920 18:21:19.133934   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:19.133943   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:19.134001   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:19.171640   75577 cri.go:89] found id: ""
	I0920 18:21:19.171662   75577 logs.go:276] 0 containers: []
	W0920 18:21:19.171670   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:19.171676   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:19.171730   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:19.207821   75577 cri.go:89] found id: ""
	I0920 18:21:19.207852   75577 logs.go:276] 0 containers: []
	W0920 18:21:19.207861   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:19.207869   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:19.207880   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:19.292486   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:19.292530   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:19.332664   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:19.332695   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:19.383793   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:19.383828   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:19.399234   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:19.399269   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:19.470636   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:21.971772   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:21.985558   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:21.985630   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:22.018745   75577 cri.go:89] found id: ""
	I0920 18:21:22.018775   75577 logs.go:276] 0 containers: []
	W0920 18:21:22.018785   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:22.018795   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:22.018854   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:22.053586   75577 cri.go:89] found id: ""
	I0920 18:21:22.053617   75577 logs.go:276] 0 containers: []
	W0920 18:21:22.053627   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:22.053635   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:22.053697   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:22.088290   75577 cri.go:89] found id: ""
	I0920 18:21:22.088320   75577 logs.go:276] 0 containers: []
	W0920 18:21:22.088337   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:22.088344   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:22.088394   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:22.121026   75577 cri.go:89] found id: ""
	I0920 18:21:22.121050   75577 logs.go:276] 0 containers: []
	W0920 18:21:22.121057   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:22.121063   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:22.121117   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:22.153669   75577 cri.go:89] found id: ""
	I0920 18:21:22.153702   75577 logs.go:276] 0 containers: []
	W0920 18:21:22.153715   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:22.153725   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:22.153793   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:22.192179   75577 cri.go:89] found id: ""
	I0920 18:21:22.192208   75577 logs.go:276] 0 containers: []
	W0920 18:21:22.192218   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:22.192226   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:22.192294   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:22.226115   75577 cri.go:89] found id: ""
	I0920 18:21:22.226142   75577 logs.go:276] 0 containers: []
	W0920 18:21:22.226153   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:22.226161   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:22.226231   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:22.259032   75577 cri.go:89] found id: ""
	I0920 18:21:22.259059   75577 logs.go:276] 0 containers: []
	W0920 18:21:22.259070   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:22.259080   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:22.259094   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:22.308989   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:22.309020   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:22.322084   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:22.322113   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:22.397567   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:22.397587   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:22.397598   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:22.476551   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:22.476596   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:20.691837   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:23.191939   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:24.024785   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:26.024824   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:22.801102   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:24.801697   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:26.801789   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:25.017659   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:25.032637   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:25.032698   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:25.067671   75577 cri.go:89] found id: ""
	I0920 18:21:25.067701   75577 logs.go:276] 0 containers: []
	W0920 18:21:25.067711   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:25.067718   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:25.067774   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:25.109354   75577 cri.go:89] found id: ""
	I0920 18:21:25.109385   75577 logs.go:276] 0 containers: []
	W0920 18:21:25.109396   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:25.109403   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:25.109463   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:25.142888   75577 cri.go:89] found id: ""
	I0920 18:21:25.142924   75577 logs.go:276] 0 containers: []
	W0920 18:21:25.142935   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:25.142942   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:25.143004   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:25.179005   75577 cri.go:89] found id: ""
	I0920 18:21:25.179032   75577 logs.go:276] 0 containers: []
	W0920 18:21:25.179043   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:25.179050   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:25.179114   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:25.211639   75577 cri.go:89] found id: ""
	I0920 18:21:25.211662   75577 logs.go:276] 0 containers: []
	W0920 18:21:25.211669   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:25.211676   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:25.211729   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:25.246678   75577 cri.go:89] found id: ""
	I0920 18:21:25.246709   75577 logs.go:276] 0 containers: []
	W0920 18:21:25.246718   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:25.246724   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:25.246780   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:25.284187   75577 cri.go:89] found id: ""
	I0920 18:21:25.284216   75577 logs.go:276] 0 containers: []
	W0920 18:21:25.284240   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:25.284247   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:25.284309   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:25.318882   75577 cri.go:89] found id: ""
	I0920 18:21:25.318917   75577 logs.go:276] 0 containers: []
	W0920 18:21:25.318929   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:25.318940   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:25.318954   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:25.357017   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:25.357046   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:25.407584   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:25.407627   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:25.421405   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:25.421437   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:25.492849   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:25.492870   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:25.492881   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:25.192551   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:27.691730   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:28.025042   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:30.025806   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:29.301637   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:31.302080   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:28.076101   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:28.088741   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:28.088805   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:28.124889   75577 cri.go:89] found id: ""
	I0920 18:21:28.124925   75577 logs.go:276] 0 containers: []
	W0920 18:21:28.124944   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:28.124952   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:28.125013   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:28.158590   75577 cri.go:89] found id: ""
	I0920 18:21:28.158615   75577 logs.go:276] 0 containers: []
	W0920 18:21:28.158623   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:28.158629   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:28.158677   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:28.195636   75577 cri.go:89] found id: ""
	I0920 18:21:28.195670   75577 logs.go:276] 0 containers: []
	W0920 18:21:28.195683   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:28.195692   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:28.195762   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:28.228714   75577 cri.go:89] found id: ""
	I0920 18:21:28.228759   75577 logs.go:276] 0 containers: []
	W0920 18:21:28.228771   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:28.228780   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:28.228840   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:28.261488   75577 cri.go:89] found id: ""
	I0920 18:21:28.261519   75577 logs.go:276] 0 containers: []
	W0920 18:21:28.261531   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:28.261538   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:28.261606   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:28.293534   75577 cri.go:89] found id: ""
	I0920 18:21:28.293560   75577 logs.go:276] 0 containers: []
	W0920 18:21:28.293568   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:28.293573   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:28.293629   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:28.330095   75577 cri.go:89] found id: ""
	I0920 18:21:28.330120   75577 logs.go:276] 0 containers: []
	W0920 18:21:28.330128   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:28.330134   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:28.330191   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:28.365643   75577 cri.go:89] found id: ""
	I0920 18:21:28.365675   75577 logs.go:276] 0 containers: []
	W0920 18:21:28.365684   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:28.365693   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:28.365712   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:28.379982   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:28.380007   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:28.456355   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:28.456386   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:28.456402   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:28.538725   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:28.538759   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:28.576067   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:28.576104   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:31.129705   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:31.145814   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:31.145919   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:31.181469   75577 cri.go:89] found id: ""
	I0920 18:21:31.181497   75577 logs.go:276] 0 containers: []
	W0920 18:21:31.181507   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:31.181514   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:31.181562   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:31.216391   75577 cri.go:89] found id: ""
	I0920 18:21:31.216416   75577 logs.go:276] 0 containers: []
	W0920 18:21:31.216426   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:31.216433   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:31.216492   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:31.250236   75577 cri.go:89] found id: ""
	I0920 18:21:31.250266   75577 logs.go:276] 0 containers: []
	W0920 18:21:31.250277   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:31.250285   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:31.250357   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:31.283406   75577 cri.go:89] found id: ""
	I0920 18:21:31.283430   75577 logs.go:276] 0 containers: []
	W0920 18:21:31.283439   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:31.283446   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:31.283519   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:31.317276   75577 cri.go:89] found id: ""
	I0920 18:21:31.317308   75577 logs.go:276] 0 containers: []
	W0920 18:21:31.317327   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:31.317335   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:31.317400   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:31.351563   75577 cri.go:89] found id: ""
	I0920 18:21:31.351588   75577 logs.go:276] 0 containers: []
	W0920 18:21:31.351595   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:31.351600   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:31.351652   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:31.385932   75577 cri.go:89] found id: ""
	I0920 18:21:31.385972   75577 logs.go:276] 0 containers: []
	W0920 18:21:31.385984   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:31.385992   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:31.386056   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:31.422679   75577 cri.go:89] found id: ""
	I0920 18:21:31.422718   75577 logs.go:276] 0 containers: []
	W0920 18:21:31.422729   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:31.422740   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:31.422755   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:31.475827   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:31.475867   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:31.489324   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:31.489354   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:31.560357   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:31.560377   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:31.560390   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:31.642137   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:31.642171   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:30.191820   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:32.692178   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:32.524953   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:35.025736   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:33.801586   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:36.301145   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:34.179063   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:34.193077   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:34.193157   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:34.227451   75577 cri.go:89] found id: ""
	I0920 18:21:34.227484   75577 logs.go:276] 0 containers: []
	W0920 18:21:34.227495   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:34.227503   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:34.227571   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:34.263740   75577 cri.go:89] found id: ""
	I0920 18:21:34.263766   75577 logs.go:276] 0 containers: []
	W0920 18:21:34.263774   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:34.263779   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:34.263838   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:34.298094   75577 cri.go:89] found id: ""
	I0920 18:21:34.298121   75577 logs.go:276] 0 containers: []
	W0920 18:21:34.298132   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:34.298139   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:34.298202   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:34.334253   75577 cri.go:89] found id: ""
	I0920 18:21:34.334281   75577 logs.go:276] 0 containers: []
	W0920 18:21:34.334290   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:34.334297   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:34.334361   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:34.370656   75577 cri.go:89] found id: ""
	I0920 18:21:34.370688   75577 logs.go:276] 0 containers: []
	W0920 18:21:34.370699   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:34.370706   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:34.370759   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:34.405658   75577 cri.go:89] found id: ""
	I0920 18:21:34.405681   75577 logs.go:276] 0 containers: []
	W0920 18:21:34.405689   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:34.405695   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:34.405748   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:34.442284   75577 cri.go:89] found id: ""
	I0920 18:21:34.442311   75577 logs.go:276] 0 containers: []
	W0920 18:21:34.442319   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:34.442325   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:34.442394   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:34.477895   75577 cri.go:89] found id: ""
	I0920 18:21:34.477927   75577 logs.go:276] 0 containers: []
	W0920 18:21:34.477939   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:34.477950   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:34.477964   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:34.531225   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:34.531256   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:34.544325   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:34.544359   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:34.615914   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:34.615933   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:34.615948   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:34.692119   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:34.692160   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:37.228930   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:37.242070   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:37.242151   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:37.276489   75577 cri.go:89] found id: ""
	I0920 18:21:37.276531   75577 logs.go:276] 0 containers: []
	W0920 18:21:37.276543   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:37.276551   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:37.276617   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:37.311106   75577 cri.go:89] found id: ""
	I0920 18:21:37.311140   75577 logs.go:276] 0 containers: []
	W0920 18:21:37.311148   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:37.311156   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:37.311217   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:37.343147   75577 cri.go:89] found id: ""
	I0920 18:21:37.343173   75577 logs.go:276] 0 containers: []
	W0920 18:21:37.343181   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:37.343188   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:37.343237   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:37.378858   75577 cri.go:89] found id: ""
	I0920 18:21:37.378934   75577 logs.go:276] 0 containers: []
	W0920 18:21:37.378947   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:37.378955   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:37.379005   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:37.412321   75577 cri.go:89] found id: ""
	I0920 18:21:37.412355   75577 logs.go:276] 0 containers: []
	W0920 18:21:37.412366   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:37.412374   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:37.412443   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:37.447478   75577 cri.go:89] found id: ""
	I0920 18:21:37.447510   75577 logs.go:276] 0 containers: []
	W0920 18:21:37.447520   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:37.447526   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:37.447580   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:37.481172   75577 cri.go:89] found id: ""
	I0920 18:21:37.481201   75577 logs.go:276] 0 containers: []
	W0920 18:21:37.481209   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:37.481216   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:37.481269   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:37.518001   75577 cri.go:89] found id: ""
	I0920 18:21:37.518032   75577 logs.go:276] 0 containers: []
	W0920 18:21:37.518041   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:37.518050   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:37.518062   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:37.567675   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:37.567707   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:37.582279   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:37.582308   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:37.654514   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:37.654546   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:37.654563   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:37.735895   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:37.735929   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:35.192406   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:37.692460   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:37.524744   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:39.526068   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:42.024627   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:38.301432   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:40.302489   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:40.276416   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:40.291651   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:40.291713   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:40.328309   75577 cri.go:89] found id: ""
	I0920 18:21:40.328346   75577 logs.go:276] 0 containers: []
	W0920 18:21:40.328359   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:40.328378   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:40.328441   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:40.368126   75577 cri.go:89] found id: ""
	I0920 18:21:40.368154   75577 logs.go:276] 0 containers: []
	W0920 18:21:40.368162   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:40.368167   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:40.368217   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:40.408324   75577 cri.go:89] found id: ""
	I0920 18:21:40.408359   75577 logs.go:276] 0 containers: []
	W0920 18:21:40.408371   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:40.408380   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:40.408448   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:40.453867   75577 cri.go:89] found id: ""
	I0920 18:21:40.453892   75577 logs.go:276] 0 containers: []
	W0920 18:21:40.453900   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:40.453906   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:40.453969   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:40.500625   75577 cri.go:89] found id: ""
	I0920 18:21:40.500660   75577 logs.go:276] 0 containers: []
	W0920 18:21:40.500670   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:40.500678   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:40.500750   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:40.533998   75577 cri.go:89] found id: ""
	I0920 18:21:40.534028   75577 logs.go:276] 0 containers: []
	W0920 18:21:40.534039   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:40.534048   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:40.534111   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:40.566279   75577 cri.go:89] found id: ""
	I0920 18:21:40.566308   75577 logs.go:276] 0 containers: []
	W0920 18:21:40.566319   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:40.566326   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:40.566392   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:40.599154   75577 cri.go:89] found id: ""
	I0920 18:21:40.599179   75577 logs.go:276] 0 containers: []
	W0920 18:21:40.599186   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:40.599194   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:40.599210   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:40.668568   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:40.668596   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:40.668608   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:40.747895   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:40.747969   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:40.789568   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:40.789604   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:40.839816   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:40.839852   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:39.693537   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:42.192617   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:44.025059   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:46.524700   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:42.801135   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:44.802083   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:46.802537   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:43.354847   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:43.369077   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:43.369167   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:43.404667   75577 cri.go:89] found id: ""
	I0920 18:21:43.404698   75577 logs.go:276] 0 containers: []
	W0920 18:21:43.404710   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:43.404718   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:43.404778   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:43.440968   75577 cri.go:89] found id: ""
	I0920 18:21:43.441001   75577 logs.go:276] 0 containers: []
	W0920 18:21:43.441012   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:43.441021   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:43.441088   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:43.474481   75577 cri.go:89] found id: ""
	I0920 18:21:43.474511   75577 logs.go:276] 0 containers: []
	W0920 18:21:43.474520   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:43.474529   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:43.474592   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:43.508196   75577 cri.go:89] found id: ""
	I0920 18:21:43.508228   75577 logs.go:276] 0 containers: []
	W0920 18:21:43.508241   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:43.508248   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:43.508307   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:43.543679   75577 cri.go:89] found id: ""
	I0920 18:21:43.543710   75577 logs.go:276] 0 containers: []
	W0920 18:21:43.543721   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:43.543728   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:43.543788   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:43.577115   75577 cri.go:89] found id: ""
	I0920 18:21:43.577138   75577 logs.go:276] 0 containers: []
	W0920 18:21:43.577145   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:43.577152   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:43.577198   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:43.611552   75577 cri.go:89] found id: ""
	I0920 18:21:43.611588   75577 logs.go:276] 0 containers: []
	W0920 18:21:43.611602   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:43.611616   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:43.611685   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:43.647913   75577 cri.go:89] found id: ""
	I0920 18:21:43.647946   75577 logs.go:276] 0 containers: []
	W0920 18:21:43.647957   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:43.647969   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:43.647983   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:43.661014   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:43.661043   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:43.736584   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:43.736607   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:43.736620   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:43.814340   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:43.814380   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:43.855968   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:43.855996   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:46.410577   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:46.423733   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:46.423800   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:46.456819   75577 cri.go:89] found id: ""
	I0920 18:21:46.456847   75577 logs.go:276] 0 containers: []
	W0920 18:21:46.456855   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:46.456861   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:46.456927   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:46.493259   75577 cri.go:89] found id: ""
	I0920 18:21:46.493291   75577 logs.go:276] 0 containers: []
	W0920 18:21:46.493301   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:46.493307   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:46.493372   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:46.531200   75577 cri.go:89] found id: ""
	I0920 18:21:46.531233   75577 logs.go:276] 0 containers: []
	W0920 18:21:46.531241   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:46.531247   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:46.531294   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:46.567508   75577 cri.go:89] found id: ""
	I0920 18:21:46.567530   75577 logs.go:276] 0 containers: []
	W0920 18:21:46.567538   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:46.567544   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:46.567601   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:46.600257   75577 cri.go:89] found id: ""
	I0920 18:21:46.600290   75577 logs.go:276] 0 containers: []
	W0920 18:21:46.600303   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:46.600311   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:46.600375   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:46.635568   75577 cri.go:89] found id: ""
	I0920 18:21:46.635598   75577 logs.go:276] 0 containers: []
	W0920 18:21:46.635606   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:46.635612   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:46.635668   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:46.668257   75577 cri.go:89] found id: ""
	I0920 18:21:46.668304   75577 logs.go:276] 0 containers: []
	W0920 18:21:46.668316   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:46.668324   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:46.668392   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:46.703631   75577 cri.go:89] found id: ""
	I0920 18:21:46.703654   75577 logs.go:276] 0 containers: []
	W0920 18:21:46.703662   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:46.703671   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:46.703680   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:46.780232   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:46.780295   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:46.816623   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:46.816647   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:46.868911   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:46.868954   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:46.882716   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:46.882743   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:46.965166   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:44.692655   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:47.192969   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:48.524828   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:50.525045   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:49.301263   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:51.802343   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:49.466238   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:49.480167   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:49.480228   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:49.513507   75577 cri.go:89] found id: ""
	I0920 18:21:49.513537   75577 logs.go:276] 0 containers: []
	W0920 18:21:49.513549   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:49.513558   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:49.513620   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:49.548708   75577 cri.go:89] found id: ""
	I0920 18:21:49.548739   75577 logs.go:276] 0 containers: []
	W0920 18:21:49.548750   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:49.548758   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:49.548810   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:49.583586   75577 cri.go:89] found id: ""
	I0920 18:21:49.583614   75577 logs.go:276] 0 containers: []
	W0920 18:21:49.583625   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:49.583633   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:49.583695   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:49.617574   75577 cri.go:89] found id: ""
	I0920 18:21:49.617605   75577 logs.go:276] 0 containers: []
	W0920 18:21:49.617616   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:49.617623   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:49.617684   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:49.656517   75577 cri.go:89] found id: ""
	I0920 18:21:49.656552   75577 logs.go:276] 0 containers: []
	W0920 18:21:49.656563   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:49.656571   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:49.656634   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:49.692848   75577 cri.go:89] found id: ""
	I0920 18:21:49.692870   75577 logs.go:276] 0 containers: []
	W0920 18:21:49.692877   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:49.692883   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:49.692950   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:49.728593   75577 cri.go:89] found id: ""
	I0920 18:21:49.728620   75577 logs.go:276] 0 containers: []
	W0920 18:21:49.728630   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:49.728643   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:49.728705   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:49.764187   75577 cri.go:89] found id: ""
	I0920 18:21:49.764218   75577 logs.go:276] 0 containers: []
	W0920 18:21:49.764229   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:49.764242   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:49.764260   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:49.837741   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:49.837764   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:49.837777   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:49.914941   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:49.914978   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:49.957609   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:49.957639   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:50.012075   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:50.012115   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:52.527722   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:52.542125   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:52.542206   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:52.579884   75577 cri.go:89] found id: ""
	I0920 18:21:52.579910   75577 logs.go:276] 0 containers: []
	W0920 18:21:52.579919   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:52.579924   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:52.579982   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:52.619138   75577 cri.go:89] found id: ""
	I0920 18:21:52.619167   75577 logs.go:276] 0 containers: []
	W0920 18:21:52.619180   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:52.619188   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:52.619246   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:52.654460   75577 cri.go:89] found id: ""
	I0920 18:21:52.654498   75577 logs.go:276] 0 containers: []
	W0920 18:21:52.654508   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:52.654515   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:52.654578   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:52.708015   75577 cri.go:89] found id: ""
	I0920 18:21:52.708042   75577 logs.go:276] 0 containers: []
	W0920 18:21:52.708051   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:52.708057   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:52.708127   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:49.692309   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:52.192466   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:53.025700   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:55.026088   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:53.802620   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:55.802856   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:52.743917   75577 cri.go:89] found id: ""
	I0920 18:21:52.743952   75577 logs.go:276] 0 containers: []
	W0920 18:21:52.743964   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:52.743972   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:52.744025   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:52.783458   75577 cri.go:89] found id: ""
	I0920 18:21:52.783481   75577 logs.go:276] 0 containers: []
	W0920 18:21:52.783488   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:52.783495   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:52.783552   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:52.817716   75577 cri.go:89] found id: ""
	I0920 18:21:52.817749   75577 logs.go:276] 0 containers: []
	W0920 18:21:52.817762   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:52.817771   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:52.817882   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:52.857141   75577 cri.go:89] found id: ""
	I0920 18:21:52.857169   75577 logs.go:276] 0 containers: []
	W0920 18:21:52.857180   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:52.857190   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:52.857204   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:52.910555   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:52.910597   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:52.923843   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:52.923873   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:52.994263   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:52.994296   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:52.994313   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:53.079782   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:53.079829   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:55.619418   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:55.633854   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:55.633922   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:55.669778   75577 cri.go:89] found id: ""
	I0920 18:21:55.669813   75577 logs.go:276] 0 containers: []
	W0920 18:21:55.669824   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:55.669853   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:55.669920   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:55.705687   75577 cri.go:89] found id: ""
	I0920 18:21:55.705724   75577 logs.go:276] 0 containers: []
	W0920 18:21:55.705736   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:55.705746   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:55.705811   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:55.739249   75577 cri.go:89] found id: ""
	I0920 18:21:55.739288   75577 logs.go:276] 0 containers: []
	W0920 18:21:55.739296   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:55.739302   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:55.739438   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:55.775077   75577 cri.go:89] found id: ""
	I0920 18:21:55.775109   75577 logs.go:276] 0 containers: []
	W0920 18:21:55.775121   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:55.775130   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:55.775181   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:55.810291   75577 cri.go:89] found id: ""
	I0920 18:21:55.810329   75577 logs.go:276] 0 containers: []
	W0920 18:21:55.810340   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:55.810349   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:55.810415   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:55.844440   75577 cri.go:89] found id: ""
	I0920 18:21:55.844468   75577 logs.go:276] 0 containers: []
	W0920 18:21:55.844476   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:55.844482   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:55.844551   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:55.878611   75577 cri.go:89] found id: ""
	I0920 18:21:55.878647   75577 logs.go:276] 0 containers: []
	W0920 18:21:55.878659   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:55.878667   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:55.878733   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:55.922064   75577 cri.go:89] found id: ""
	I0920 18:21:55.922101   75577 logs.go:276] 0 containers: []
	W0920 18:21:55.922114   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:55.922127   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:55.922147   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:55.979406   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:55.979445   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:55.993082   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:55.993111   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:56.065650   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:56.065701   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:56.065717   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:56.142640   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:56.142684   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:54.691482   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:56.691912   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:57.525280   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:00.025224   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:02.025722   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:58.300359   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:00.301252   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:02.302209   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:58.683524   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:58.698775   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:58.698833   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:58.737369   75577 cri.go:89] found id: ""
	I0920 18:21:58.737398   75577 logs.go:276] 0 containers: []
	W0920 18:21:58.737409   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:58.737417   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:58.737476   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:58.779504   75577 cri.go:89] found id: ""
	I0920 18:21:58.779533   75577 logs.go:276] 0 containers: []
	W0920 18:21:58.779544   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:58.779552   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:58.779610   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:58.814363   75577 cri.go:89] found id: ""
	I0920 18:21:58.814393   75577 logs.go:276] 0 containers: []
	W0920 18:21:58.814401   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:58.814407   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:58.814454   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:58.846219   75577 cri.go:89] found id: ""
	I0920 18:21:58.846242   75577 logs.go:276] 0 containers: []
	W0920 18:21:58.846251   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:58.846256   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:58.846307   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:58.882387   75577 cri.go:89] found id: ""
	I0920 18:21:58.882423   75577 logs.go:276] 0 containers: []
	W0920 18:21:58.882431   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:58.882437   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:58.882497   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:58.918632   75577 cri.go:89] found id: ""
	I0920 18:21:58.918668   75577 logs.go:276] 0 containers: []
	W0920 18:21:58.918679   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:58.918686   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:58.918751   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:58.953521   75577 cri.go:89] found id: ""
	I0920 18:21:58.953547   75577 logs.go:276] 0 containers: []
	W0920 18:21:58.953557   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:58.953564   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:58.953624   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:58.987412   75577 cri.go:89] found id: ""
	I0920 18:21:58.987439   75577 logs.go:276] 0 containers: []
	W0920 18:21:58.987447   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:58.987457   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:58.987471   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:59.077108   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:59.077169   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:59.141723   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:59.141758   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:59.208035   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:59.208081   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:59.221380   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:59.221404   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:59.297151   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:22:01.797684   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:22:01.812275   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:22:01.812338   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:22:01.846431   75577 cri.go:89] found id: ""
	I0920 18:22:01.846459   75577 logs.go:276] 0 containers: []
	W0920 18:22:01.846477   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:22:01.846483   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:22:01.846535   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:22:01.889016   75577 cri.go:89] found id: ""
	I0920 18:22:01.889049   75577 logs.go:276] 0 containers: []
	W0920 18:22:01.889061   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:22:01.889069   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:22:01.889144   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:22:01.926048   75577 cri.go:89] found id: ""
	I0920 18:22:01.926078   75577 logs.go:276] 0 containers: []
	W0920 18:22:01.926090   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:22:01.926108   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:22:01.926185   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:22:01.964510   75577 cri.go:89] found id: ""
	I0920 18:22:01.964536   75577 logs.go:276] 0 containers: []
	W0920 18:22:01.964547   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:22:01.964555   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:22:01.964624   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:22:02.005549   75577 cri.go:89] found id: ""
	I0920 18:22:02.005575   75577 logs.go:276] 0 containers: []
	W0920 18:22:02.005583   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:22:02.005588   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:22:02.005642   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:22:02.047359   75577 cri.go:89] found id: ""
	I0920 18:22:02.047385   75577 logs.go:276] 0 containers: []
	W0920 18:22:02.047393   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:22:02.047399   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:22:02.047455   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:22:02.082961   75577 cri.go:89] found id: ""
	I0920 18:22:02.082999   75577 logs.go:276] 0 containers: []
	W0920 18:22:02.083008   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:22:02.083015   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:22:02.083062   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:22:02.117696   75577 cri.go:89] found id: ""
	I0920 18:22:02.117732   75577 logs.go:276] 0 containers: []
	W0920 18:22:02.117741   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:22:02.117753   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:22:02.117769   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:22:02.131282   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:22:02.131311   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:22:02.200738   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:22:02.200760   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:22:02.200776   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:22:02.281162   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:22:02.281200   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:22:02.321854   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:22:02.321888   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:59.191434   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:01.192475   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:01.685900   75086 pod_ready.go:82] duration metric: took 4m0.000752648s for pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace to be "Ready" ...
	E0920 18:22:01.685945   75086 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace to be "Ready" (will not retry!)
	I0920 18:22:01.685972   75086 pod_ready.go:39] duration metric: took 4m13.051752581s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:22:01.686033   75086 kubeadm.go:597] duration metric: took 4m20.498687791s to restartPrimaryControlPlane
	W0920 18:22:01.686114   75086 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0920 18:22:01.686150   75086 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0920 18:22:04.026729   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:06.524404   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:04.803249   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:07.301696   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:04.874353   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:22:04.889710   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:22:04.889773   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:22:04.945719   75577 cri.go:89] found id: ""
	I0920 18:22:04.945745   75577 logs.go:276] 0 containers: []
	W0920 18:22:04.945753   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:22:04.945759   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:22:04.945802   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:22:04.995492   75577 cri.go:89] found id: ""
	I0920 18:22:04.995535   75577 logs.go:276] 0 containers: []
	W0920 18:22:04.995547   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:22:04.995555   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:22:04.995615   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:22:05.037827   75577 cri.go:89] found id: ""
	I0920 18:22:05.037871   75577 logs.go:276] 0 containers: []
	W0920 18:22:05.037882   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:22:05.037888   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:22:05.037935   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:22:05.077652   75577 cri.go:89] found id: ""
	I0920 18:22:05.077691   75577 logs.go:276] 0 containers: []
	W0920 18:22:05.077704   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:22:05.077712   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:22:05.077772   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:22:05.114472   75577 cri.go:89] found id: ""
	I0920 18:22:05.114509   75577 logs.go:276] 0 containers: []
	W0920 18:22:05.114520   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:22:05.114527   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:22:05.114590   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:22:05.150809   75577 cri.go:89] found id: ""
	I0920 18:22:05.150841   75577 logs.go:276] 0 containers: []
	W0920 18:22:05.150853   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:22:05.150860   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:22:05.150908   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:22:05.188880   75577 cri.go:89] found id: ""
	I0920 18:22:05.188910   75577 logs.go:276] 0 containers: []
	W0920 18:22:05.188921   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:22:05.188929   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:22:05.188990   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:22:05.224990   75577 cri.go:89] found id: ""
	I0920 18:22:05.225015   75577 logs.go:276] 0 containers: []
	W0920 18:22:05.225023   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:22:05.225032   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:22:05.225043   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:22:05.298000   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:22:05.298023   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:22:05.298037   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:22:05.382969   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:22:05.383008   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:22:05.424911   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:22:05.424940   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:22:05.475988   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:22:05.476024   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:22:08.525098   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:10.525321   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:09.801936   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:12.301073   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:07.990660   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:22:08.005572   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:22:08.005637   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:22:08.044354   75577 cri.go:89] found id: ""
	I0920 18:22:08.044381   75577 logs.go:276] 0 containers: []
	W0920 18:22:08.044390   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:22:08.044401   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:22:08.044449   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:22:08.082899   75577 cri.go:89] found id: ""
	I0920 18:22:08.082928   75577 logs.go:276] 0 containers: []
	W0920 18:22:08.082939   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:22:08.082948   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:22:08.083009   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:22:08.129297   75577 cri.go:89] found id: ""
	I0920 18:22:08.129325   75577 logs.go:276] 0 containers: []
	W0920 18:22:08.129335   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:22:08.129343   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:22:08.129404   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:22:08.168744   75577 cri.go:89] found id: ""
	I0920 18:22:08.168774   75577 logs.go:276] 0 containers: []
	W0920 18:22:08.168787   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:22:08.168792   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:22:08.168849   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:22:08.203830   75577 cri.go:89] found id: ""
	I0920 18:22:08.203860   75577 logs.go:276] 0 containers: []
	W0920 18:22:08.203871   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:22:08.203879   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:22:08.203940   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:22:08.238226   75577 cri.go:89] found id: ""
	I0920 18:22:08.238250   75577 logs.go:276] 0 containers: []
	W0920 18:22:08.238258   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:22:08.238263   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:22:08.238331   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:22:08.271457   75577 cri.go:89] found id: ""
	I0920 18:22:08.271485   75577 logs.go:276] 0 containers: []
	W0920 18:22:08.271495   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:22:08.271502   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:22:08.271563   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:22:08.306661   75577 cri.go:89] found id: ""
	I0920 18:22:08.306692   75577 logs.go:276] 0 containers: []
	W0920 18:22:08.306703   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:22:08.306712   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:22:08.306730   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:22:08.357760   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:22:08.357793   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:22:08.372625   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:22:08.372660   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:22:08.442477   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:22:08.442501   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:22:08.442517   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:22:08.528381   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:22:08.528412   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:22:11.064842   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:22:11.078086   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:22:11.078158   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:22:11.113454   75577 cri.go:89] found id: ""
	I0920 18:22:11.113485   75577 logs.go:276] 0 containers: []
	W0920 18:22:11.113497   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:22:11.113511   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:22:11.113575   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:22:11.148039   75577 cri.go:89] found id: ""
	I0920 18:22:11.148064   75577 logs.go:276] 0 containers: []
	W0920 18:22:11.148072   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:22:11.148078   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:22:11.148127   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:22:11.182667   75577 cri.go:89] found id: ""
	I0920 18:22:11.182697   75577 logs.go:276] 0 containers: []
	W0920 18:22:11.182708   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:22:11.182715   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:22:11.182776   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:22:11.217022   75577 cri.go:89] found id: ""
	I0920 18:22:11.217055   75577 logs.go:276] 0 containers: []
	W0920 18:22:11.217067   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:22:11.217075   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:22:11.217141   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:22:11.253970   75577 cri.go:89] found id: ""
	I0920 18:22:11.254001   75577 logs.go:276] 0 containers: []
	W0920 18:22:11.254012   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:22:11.254019   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:22:11.254085   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:22:11.288070   75577 cri.go:89] found id: ""
	I0920 18:22:11.288103   75577 logs.go:276] 0 containers: []
	W0920 18:22:11.288114   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:22:11.288121   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:22:11.288189   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:22:11.324215   75577 cri.go:89] found id: ""
	I0920 18:22:11.324240   75577 logs.go:276] 0 containers: []
	W0920 18:22:11.324254   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:22:11.324261   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:22:11.324319   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:22:11.359377   75577 cri.go:89] found id: ""
	I0920 18:22:11.359406   75577 logs.go:276] 0 containers: []
	W0920 18:22:11.359414   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:22:11.359423   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:22:11.359433   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:22:11.416479   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:22:11.416520   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:22:11.430240   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:22:11.430269   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:22:11.501531   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:22:11.501553   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:22:11.501565   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:22:11.580748   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:22:11.580787   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:22:13.024330   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:15.025065   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:17.026642   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:14.301652   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:16.802017   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:14.119084   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:22:14.132428   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:22:14.132505   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:22:14.167674   75577 cri.go:89] found id: ""
	I0920 18:22:14.167699   75577 logs.go:276] 0 containers: []
	W0920 18:22:14.167710   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:22:14.167717   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:22:14.167772   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:22:14.204570   75577 cri.go:89] found id: ""
	I0920 18:22:14.204595   75577 logs.go:276] 0 containers: []
	W0920 18:22:14.204603   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:22:14.204608   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:22:14.204655   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:22:14.241687   75577 cri.go:89] found id: ""
	I0920 18:22:14.241716   75577 logs.go:276] 0 containers: []
	W0920 18:22:14.241724   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:22:14.241731   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:22:14.241789   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:22:14.275776   75577 cri.go:89] found id: ""
	I0920 18:22:14.275802   75577 logs.go:276] 0 containers: []
	W0920 18:22:14.275810   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:22:14.275816   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:22:14.275872   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:22:14.309565   75577 cri.go:89] found id: ""
	I0920 18:22:14.309589   75577 logs.go:276] 0 containers: []
	W0920 18:22:14.309596   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:22:14.309602   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:22:14.309662   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:22:14.341858   75577 cri.go:89] found id: ""
	I0920 18:22:14.341884   75577 logs.go:276] 0 containers: []
	W0920 18:22:14.341892   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:22:14.341898   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:22:14.341963   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:22:14.377863   75577 cri.go:89] found id: ""
	I0920 18:22:14.377896   75577 logs.go:276] 0 containers: []
	W0920 18:22:14.377906   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:22:14.377912   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:22:14.377988   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:22:14.413182   75577 cri.go:89] found id: ""
	I0920 18:22:14.413207   75577 logs.go:276] 0 containers: []
	W0920 18:22:14.413214   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:22:14.413236   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:22:14.413253   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:22:14.466782   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:22:14.466820   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:22:14.480627   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:22:14.480668   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:22:14.552071   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:22:14.552104   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:22:14.552121   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:22:14.626481   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:22:14.626518   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:22:17.163264   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:22:17.181609   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:22:17.181684   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:22:17.219871   75577 cri.go:89] found id: ""
	I0920 18:22:17.219899   75577 logs.go:276] 0 containers: []
	W0920 18:22:17.219923   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:22:17.219932   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:22:17.220013   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:22:17.256315   75577 cri.go:89] found id: ""
	I0920 18:22:17.256346   75577 logs.go:276] 0 containers: []
	W0920 18:22:17.256356   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:22:17.256364   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:22:17.256434   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:22:17.294326   75577 cri.go:89] found id: ""
	I0920 18:22:17.294352   75577 logs.go:276] 0 containers: []
	W0920 18:22:17.294360   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:22:17.294366   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:22:17.294421   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:22:17.336461   75577 cri.go:89] found id: ""
	I0920 18:22:17.336491   75577 logs.go:276] 0 containers: []
	W0920 18:22:17.336500   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:22:17.336506   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:22:17.336568   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:22:17.376242   75577 cri.go:89] found id: ""
	I0920 18:22:17.376283   75577 logs.go:276] 0 containers: []
	W0920 18:22:17.376295   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:22:17.376302   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:22:17.376363   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:22:17.409147   75577 cri.go:89] found id: ""
	I0920 18:22:17.409180   75577 logs.go:276] 0 containers: []
	W0920 18:22:17.409190   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:22:17.409198   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:22:17.409269   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:22:17.444698   75577 cri.go:89] found id: ""
	I0920 18:22:17.444725   75577 logs.go:276] 0 containers: []
	W0920 18:22:17.444734   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:22:17.444738   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:22:17.444791   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:22:17.485914   75577 cri.go:89] found id: ""
	I0920 18:22:17.485948   75577 logs.go:276] 0 containers: []
	W0920 18:22:17.485959   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:22:17.485970   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:22:17.485983   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:22:17.523567   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:22:17.523601   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:22:17.588864   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:22:17.588905   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:22:17.605052   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:22:17.605092   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:22:17.677012   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:22:17.677039   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:22:17.677056   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:22:19.525154   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:22.025422   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:18.802052   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:20.804024   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:20.252258   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:22:20.267607   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:22:20.267698   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:22:20.302260   75577 cri.go:89] found id: ""
	I0920 18:22:20.302291   75577 logs.go:276] 0 containers: []
	W0920 18:22:20.302301   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:22:20.302309   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:22:20.302373   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:22:20.335343   75577 cri.go:89] found id: ""
	I0920 18:22:20.335377   75577 logs.go:276] 0 containers: []
	W0920 18:22:20.335389   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:22:20.335397   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:22:20.335460   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:22:20.369612   75577 cri.go:89] found id: ""
	I0920 18:22:20.369641   75577 logs.go:276] 0 containers: []
	W0920 18:22:20.369649   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:22:20.369655   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:22:20.369703   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:22:20.403703   75577 cri.go:89] found id: ""
	I0920 18:22:20.403732   75577 logs.go:276] 0 containers: []
	W0920 18:22:20.403740   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:22:20.403746   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:22:20.403804   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:22:20.436288   75577 cri.go:89] found id: ""
	I0920 18:22:20.436316   75577 logs.go:276] 0 containers: []
	W0920 18:22:20.436328   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:22:20.436336   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:22:20.436399   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:22:20.471536   75577 cri.go:89] found id: ""
	I0920 18:22:20.471572   75577 logs.go:276] 0 containers: []
	W0920 18:22:20.471584   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:22:20.471593   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:22:20.471657   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:22:20.506012   75577 cri.go:89] found id: ""
	I0920 18:22:20.506107   75577 logs.go:276] 0 containers: []
	W0920 18:22:20.506134   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:22:20.506143   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:22:20.506199   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:22:20.542614   75577 cri.go:89] found id: ""
	I0920 18:22:20.542650   75577 logs.go:276] 0 containers: []
	W0920 18:22:20.542660   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:22:20.542671   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:22:20.542687   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:22:20.596316   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:22:20.596357   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:22:20.610438   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:22:20.610465   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:22:20.688061   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:22:20.688079   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:22:20.688091   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:22:20.768249   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:22:20.768296   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:22:24.525259   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:25.524857   75264 pod_ready.go:82] duration metric: took 4m0.006563805s for pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace to be "Ready" ...
	E0920 18:22:25.524883   75264 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0920 18:22:25.524891   75264 pod_ready.go:39] duration metric: took 4m8.542422056s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:22:25.524906   75264 api_server.go:52] waiting for apiserver process to appear ...
	I0920 18:22:25.524977   75264 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:22:25.525029   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:22:25.579894   75264 cri.go:89] found id: "0ea0cfbd9902ae8e3797c77592bfe05a5f8162f75a391f5c2dc992eb5154c4ad"
	I0920 18:22:25.579914   75264 cri.go:89] found id: ""
	I0920 18:22:25.579923   75264 logs.go:276] 1 containers: [0ea0cfbd9902ae8e3797c77592bfe05a5f8162f75a391f5c2dc992eb5154c4ad]
	I0920 18:22:25.579979   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:25.585095   75264 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:22:25.585176   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:22:25.621769   75264 cri.go:89] found id: "65da0bae1c849a395cc241962bb6cbdf9f3edf11cdc95f4495ac2e427f9602d4"
	I0920 18:22:25.621796   75264 cri.go:89] found id: ""
	I0920 18:22:25.621805   75264 logs.go:276] 1 containers: [65da0bae1c849a395cc241962bb6cbdf9f3edf11cdc95f4495ac2e427f9602d4]
	I0920 18:22:25.621881   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:25.626318   75264 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:22:25.626400   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:22:25.662732   75264 cri.go:89] found id: "606f7c8a9a095f499ae422de76f76105607db72e57b9b0a6db3e3c5a81656c5c"
	I0920 18:22:25.662753   75264 cri.go:89] found id: ""
	I0920 18:22:25.662760   75264 logs.go:276] 1 containers: [606f7c8a9a095f499ae422de76f76105607db72e57b9b0a6db3e3c5a81656c5c]
	I0920 18:22:25.662818   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:25.667212   75264 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:22:25.667299   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:22:25.703639   75264 cri.go:89] found id: "6ba313deffc6185e59dac50051e933f6b2ea70d630cf36b2b8c2c07042f793ba"
	I0920 18:22:25.703663   75264 cri.go:89] found id: ""
	I0920 18:22:25.703670   75264 logs.go:276] 1 containers: [6ba313deffc6185e59dac50051e933f6b2ea70d630cf36b2b8c2c07042f793ba]
	I0920 18:22:25.703721   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:25.708034   75264 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:22:25.708115   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:22:25.745358   75264 cri.go:89] found id: "702f7f440eb6016c2c758a6be4e1072274ab6dcd169d7a081e5464869a7c299c"
	I0920 18:22:25.745387   75264 cri.go:89] found id: ""
	I0920 18:22:25.745397   75264 logs.go:276] 1 containers: [702f7f440eb6016c2c758a6be4e1072274ab6dcd169d7a081e5464869a7c299c]
	I0920 18:22:25.745455   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:25.749702   75264 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:22:25.749787   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:22:25.790592   75264 cri.go:89] found id: "4ca303b795795bd920f261c88630f30fd51a91b9694a9a6e67af8164bab8242b"
	I0920 18:22:25.790615   75264 cri.go:89] found id: ""
	I0920 18:22:25.790623   75264 logs.go:276] 1 containers: [4ca303b795795bd920f261c88630f30fd51a91b9694a9a6e67af8164bab8242b]
	I0920 18:22:25.790686   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:25.794993   75264 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:22:25.795062   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:22:25.836621   75264 cri.go:89] found id: ""
	I0920 18:22:25.836646   75264 logs.go:276] 0 containers: []
	W0920 18:22:25.836654   75264 logs.go:278] No container was found matching "kindnet"
	I0920 18:22:25.836661   75264 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0920 18:22:25.836723   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0920 18:22:25.874191   75264 cri.go:89] found id: "001bdc98537f0917f021c14a5d0733bd31cf19be1b4862502458a7a90e4300da"
	I0920 18:22:25.874215   75264 cri.go:89] found id: "c42201b6e3d55b6fd8beaf04d416658b228bf627e519ddb022dad6d17219cc1c"
	I0920 18:22:25.874220   75264 cri.go:89] found id: ""
	I0920 18:22:25.874229   75264 logs.go:276] 2 containers: [001bdc98537f0917f021c14a5d0733bd31cf19be1b4862502458a7a90e4300da c42201b6e3d55b6fd8beaf04d416658b228bf627e519ddb022dad6d17219cc1c]
	I0920 18:22:25.874316   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:25.878788   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:25.882840   75264 logs.go:123] Gathering logs for kube-proxy [702f7f440eb6016c2c758a6be4e1072274ab6dcd169d7a081e5464869a7c299c] ...
	I0920 18:22:25.882869   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 702f7f440eb6016c2c758a6be4e1072274ab6dcd169d7a081e5464869a7c299c"
	I0920 18:22:25.921609   75264 logs.go:123] Gathering logs for kube-controller-manager [4ca303b795795bd920f261c88630f30fd51a91b9694a9a6e67af8164bab8242b] ...
	I0920 18:22:25.921640   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ca303b795795bd920f261c88630f30fd51a91b9694a9a6e67af8164bab8242b"
	I0920 18:22:25.987199   75264 logs.go:123] Gathering logs for kubelet ...
	I0920 18:22:25.987236   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:22:26.061857   75264 logs.go:123] Gathering logs for kube-apiserver [0ea0cfbd9902ae8e3797c77592bfe05a5f8162f75a391f5c2dc992eb5154c4ad] ...
	I0920 18:22:26.061897   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ea0cfbd9902ae8e3797c77592bfe05a5f8162f75a391f5c2dc992eb5154c4ad"
	I0920 18:22:26.104901   75264 logs.go:123] Gathering logs for etcd [65da0bae1c849a395cc241962bb6cbdf9f3edf11cdc95f4495ac2e427f9602d4] ...
	I0920 18:22:26.104935   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 65da0bae1c849a395cc241962bb6cbdf9f3edf11cdc95f4495ac2e427f9602d4"
	I0920 18:22:26.160795   75264 logs.go:123] Gathering logs for kube-scheduler [6ba313deffc6185e59dac50051e933f6b2ea70d630cf36b2b8c2c07042f793ba] ...
	I0920 18:22:26.160833   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6ba313deffc6185e59dac50051e933f6b2ea70d630cf36b2b8c2c07042f793ba"
	I0920 18:22:26.199286   75264 logs.go:123] Gathering logs for storage-provisioner [001bdc98537f0917f021c14a5d0733bd31cf19be1b4862502458a7a90e4300da] ...
	I0920 18:22:26.199316   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 001bdc98537f0917f021c14a5d0733bd31cf19be1b4862502458a7a90e4300da"
	I0920 18:22:26.239560   75264 logs.go:123] Gathering logs for storage-provisioner [c42201b6e3d55b6fd8beaf04d416658b228bf627e519ddb022dad6d17219cc1c] ...
	I0920 18:22:26.239591   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c42201b6e3d55b6fd8beaf04d416658b228bf627e519ddb022dad6d17219cc1c"
	I0920 18:22:26.275424   75264 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:22:26.275450   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:22:26.786849   75264 logs.go:123] Gathering logs for container status ...
	I0920 18:22:26.786895   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:22:26.848751   75264 logs.go:123] Gathering logs for dmesg ...
	I0920 18:22:26.848783   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:22:26.867329   75264 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:22:26.867375   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 18:22:27.017385   75264 logs.go:123] Gathering logs for coredns [606f7c8a9a095f499ae422de76f76105607db72e57b9b0a6db3e3c5a81656c5c] ...
	I0920 18:22:27.017419   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 606f7c8a9a095f499ae422de76f76105607db72e57b9b0a6db3e3c5a81656c5c"
	I0920 18:22:23.300658   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:25.302131   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:27.303929   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:23.307149   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:22:23.319889   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:22:23.319972   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:22:23.356613   75577 cri.go:89] found id: ""
	I0920 18:22:23.356645   75577 logs.go:276] 0 containers: []
	W0920 18:22:23.356656   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:22:23.356663   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:22:23.356728   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:22:23.391632   75577 cri.go:89] found id: ""
	I0920 18:22:23.391663   75577 logs.go:276] 0 containers: []
	W0920 18:22:23.391675   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:22:23.391683   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:22:23.391748   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:22:23.426908   75577 cri.go:89] found id: ""
	I0920 18:22:23.426936   75577 logs.go:276] 0 containers: []
	W0920 18:22:23.426946   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:22:23.426952   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:22:23.427013   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:22:23.461890   75577 cri.go:89] found id: ""
	I0920 18:22:23.461925   75577 logs.go:276] 0 containers: []
	W0920 18:22:23.461938   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:22:23.461947   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:22:23.462014   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:22:23.503517   75577 cri.go:89] found id: ""
	I0920 18:22:23.503549   75577 logs.go:276] 0 containers: []
	W0920 18:22:23.503560   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:22:23.503566   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:22:23.503615   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:22:23.540699   75577 cri.go:89] found id: ""
	I0920 18:22:23.540722   75577 logs.go:276] 0 containers: []
	W0920 18:22:23.540731   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:22:23.540736   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:22:23.540783   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:22:23.575485   75577 cri.go:89] found id: ""
	I0920 18:22:23.575509   75577 logs.go:276] 0 containers: []
	W0920 18:22:23.575517   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:22:23.575523   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:22:23.575576   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:22:23.610998   75577 cri.go:89] found id: ""
	I0920 18:22:23.611028   75577 logs.go:276] 0 containers: []
	W0920 18:22:23.611039   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:22:23.611051   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:22:23.611065   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:22:23.687072   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:22:23.687109   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:22:23.724790   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:22:23.724824   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:22:23.778905   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:22:23.778945   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:22:23.794298   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:22:23.794329   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:22:23.873341   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:22:26.374541   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:22:26.388151   75577 kubeadm.go:597] duration metric: took 4m2.956358154s to restartPrimaryControlPlane
	W0920 18:22:26.388229   75577 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0920 18:22:26.388260   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0920 18:22:27.454521   75577 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.066231187s)
	I0920 18:22:27.454602   75577 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:22:27.469409   75577 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 18:22:27.480314   75577 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 18:22:27.491389   75577 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 18:22:27.491416   75577 kubeadm.go:157] found existing configuration files:
	
	I0920 18:22:27.491468   75577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 18:22:27.501448   75577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 18:22:27.501534   75577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 18:22:27.511370   75577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 18:22:27.521767   75577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 18:22:27.521855   75577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 18:22:27.531717   75577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 18:22:27.540517   75577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 18:22:27.540575   75577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 18:22:27.549644   75577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 18:22:27.558412   75577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 18:22:27.558480   75577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 18:22:27.567702   75577 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 18:22:28.070939   75086 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.384759009s)
	I0920 18:22:28.071025   75086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:22:28.087712   75086 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 18:22:28.097709   75086 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 18:22:28.107217   75086 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 18:22:28.107243   75086 kubeadm.go:157] found existing configuration files:
	
	I0920 18:22:28.107297   75086 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 18:22:28.116947   75086 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 18:22:28.117013   75086 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 18:22:28.126559   75086 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 18:22:28.135261   75086 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 18:22:28.135338   75086 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 18:22:28.144456   75086 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 18:22:28.153412   75086 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 18:22:28.153465   75086 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 18:22:28.165825   75086 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 18:22:28.175003   75086 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 18:22:28.175068   75086 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 18:22:28.184917   75086 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 18:22:28.236825   75086 kubeadm.go:310] W0920 18:22:28.216092    2540 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 18:22:28.237654   75086 kubeadm.go:310] W0920 18:22:28.216986    2540 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 18:22:28.352695   75086 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 18:22:29.557496   75264 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:22:29.578981   75264 api_server.go:72] duration metric: took 4m20.322735753s to wait for apiserver process to appear ...
	I0920 18:22:29.579009   75264 api_server.go:88] waiting for apiserver healthz status ...
	I0920 18:22:29.579044   75264 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:22:29.579093   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:22:29.616252   75264 cri.go:89] found id: "0ea0cfbd9902ae8e3797c77592bfe05a5f8162f75a391f5c2dc992eb5154c4ad"
	I0920 18:22:29.616281   75264 cri.go:89] found id: ""
	I0920 18:22:29.616292   75264 logs.go:276] 1 containers: [0ea0cfbd9902ae8e3797c77592bfe05a5f8162f75a391f5c2dc992eb5154c4ad]
	I0920 18:22:29.616345   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:29.620605   75264 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:22:29.620678   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:22:29.658077   75264 cri.go:89] found id: "65da0bae1c849a395cc241962bb6cbdf9f3edf11cdc95f4495ac2e427f9602d4"
	I0920 18:22:29.658106   75264 cri.go:89] found id: ""
	I0920 18:22:29.658114   75264 logs.go:276] 1 containers: [65da0bae1c849a395cc241962bb6cbdf9f3edf11cdc95f4495ac2e427f9602d4]
	I0920 18:22:29.658170   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:29.662227   75264 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:22:29.662288   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:22:29.695906   75264 cri.go:89] found id: "606f7c8a9a095f499ae422de76f76105607db72e57b9b0a6db3e3c5a81656c5c"
	I0920 18:22:29.695927   75264 cri.go:89] found id: ""
	I0920 18:22:29.695934   75264 logs.go:276] 1 containers: [606f7c8a9a095f499ae422de76f76105607db72e57b9b0a6db3e3c5a81656c5c]
	I0920 18:22:29.695979   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:29.699986   75264 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:22:29.700083   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:22:29.736560   75264 cri.go:89] found id: "6ba313deffc6185e59dac50051e933f6b2ea70d630cf36b2b8c2c07042f793ba"
	I0920 18:22:29.736589   75264 cri.go:89] found id: ""
	I0920 18:22:29.736600   75264 logs.go:276] 1 containers: [6ba313deffc6185e59dac50051e933f6b2ea70d630cf36b2b8c2c07042f793ba]
	I0920 18:22:29.736660   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:29.740751   75264 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:22:29.740803   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:22:29.778930   75264 cri.go:89] found id: "702f7f440eb6016c2c758a6be4e1072274ab6dcd169d7a081e5464869a7c299c"
	I0920 18:22:29.778952   75264 cri.go:89] found id: ""
	I0920 18:22:29.778959   75264 logs.go:276] 1 containers: [702f7f440eb6016c2c758a6be4e1072274ab6dcd169d7a081e5464869a7c299c]
	I0920 18:22:29.779011   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:29.783022   75264 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:22:29.783092   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:22:29.829491   75264 cri.go:89] found id: "4ca303b795795bd920f261c88630f30fd51a91b9694a9a6e67af8164bab8242b"
	I0920 18:22:29.829521   75264 cri.go:89] found id: ""
	I0920 18:22:29.829531   75264 logs.go:276] 1 containers: [4ca303b795795bd920f261c88630f30fd51a91b9694a9a6e67af8164bab8242b]
	I0920 18:22:29.829597   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:29.833853   75264 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:22:29.833924   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:22:29.872376   75264 cri.go:89] found id: ""
	I0920 18:22:29.872404   75264 logs.go:276] 0 containers: []
	W0920 18:22:29.872412   75264 logs.go:278] No container was found matching "kindnet"
	I0920 18:22:29.872419   75264 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0920 18:22:29.872482   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0920 18:22:29.923096   75264 cri.go:89] found id: "001bdc98537f0917f021c14a5d0733bd31cf19be1b4862502458a7a90e4300da"
	I0920 18:22:29.923136   75264 cri.go:89] found id: "c42201b6e3d55b6fd8beaf04d416658b228bf627e519ddb022dad6d17219cc1c"
	I0920 18:22:29.923143   75264 cri.go:89] found id: ""
	I0920 18:22:29.923152   75264 logs.go:276] 2 containers: [001bdc98537f0917f021c14a5d0733bd31cf19be1b4862502458a7a90e4300da c42201b6e3d55b6fd8beaf04d416658b228bf627e519ddb022dad6d17219cc1c]
	I0920 18:22:29.923226   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:29.930023   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:29.935091   75264 logs.go:123] Gathering logs for coredns [606f7c8a9a095f499ae422de76f76105607db72e57b9b0a6db3e3c5a81656c5c] ...
	I0920 18:22:29.935119   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 606f7c8a9a095f499ae422de76f76105607db72e57b9b0a6db3e3c5a81656c5c"
	I0920 18:22:29.974064   75264 logs.go:123] Gathering logs for kube-scheduler [6ba313deffc6185e59dac50051e933f6b2ea70d630cf36b2b8c2c07042f793ba] ...
	I0920 18:22:29.974101   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6ba313deffc6185e59dac50051e933f6b2ea70d630cf36b2b8c2c07042f793ba"
	I0920 18:22:30.018770   75264 logs.go:123] Gathering logs for kube-controller-manager [4ca303b795795bd920f261c88630f30fd51a91b9694a9a6e67af8164bab8242b] ...
	I0920 18:22:30.018805   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ca303b795795bd920f261c88630f30fd51a91b9694a9a6e67af8164bab8242b"
	I0920 18:22:30.077578   75264 logs.go:123] Gathering logs for storage-provisioner [c42201b6e3d55b6fd8beaf04d416658b228bf627e519ddb022dad6d17219cc1c] ...
	I0920 18:22:30.077625   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c42201b6e3d55b6fd8beaf04d416658b228bf627e519ddb022dad6d17219cc1c"
	I0920 18:22:30.123874   75264 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:22:30.123910   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:22:30.553215   75264 logs.go:123] Gathering logs for kube-proxy [702f7f440eb6016c2c758a6be4e1072274ab6dcd169d7a081e5464869a7c299c] ...
	I0920 18:22:30.553261   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 702f7f440eb6016c2c758a6be4e1072274ab6dcd169d7a081e5464869a7c299c"
	I0920 18:22:30.597397   75264 logs.go:123] Gathering logs for storage-provisioner [001bdc98537f0917f021c14a5d0733bd31cf19be1b4862502458a7a90e4300da] ...
	I0920 18:22:30.597428   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 001bdc98537f0917f021c14a5d0733bd31cf19be1b4862502458a7a90e4300da"
	I0920 18:22:30.643777   75264 logs.go:123] Gathering logs for container status ...
	I0920 18:22:30.643807   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:22:30.688174   75264 logs.go:123] Gathering logs for kubelet ...
	I0920 18:22:30.688205   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:22:30.760547   75264 logs.go:123] Gathering logs for dmesg ...
	I0920 18:22:30.760591   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:22:30.776702   75264 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:22:30.776729   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 18:22:30.913200   75264 logs.go:123] Gathering logs for kube-apiserver [0ea0cfbd9902ae8e3797c77592bfe05a5f8162f75a391f5c2dc992eb5154c4ad] ...
	I0920 18:22:30.913240   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ea0cfbd9902ae8e3797c77592bfe05a5f8162f75a391f5c2dc992eb5154c4ad"
	I0920 18:22:30.957548   75264 logs.go:123] Gathering logs for etcd [65da0bae1c849a395cc241962bb6cbdf9f3edf11cdc95f4495ac2e427f9602d4] ...
	I0920 18:22:30.957589   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 65da0bae1c849a395cc241962bb6cbdf9f3edf11cdc95f4495ac2e427f9602d4"
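	[editor's sketch] The log-gathering loop above follows the same two-step crictl pattern for every component: first locate the container ID with "sudo crictl ps -a --quiet --name=<component>", then pull its last 400 lines with "crictl logs --tail 400 <id>". Below is a minimal standalone Go sketch of that pattern, not minikube's own logs.go code; it assumes crictl is on the PATH and that sudo works non-interactively, and the component list is just the set seen in the log.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs returns the IDs of all containers (running or exited) whose
// name matches the filter, mirroring "crictl ps -a --quiet --name=...".
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, fmt.Errorf("crictl ps: %w", err)
	}
	return strings.Fields(string(out)), nil
}

// tailLogs fetches the last n lines of one container's logs,
// mirroring "crictl logs --tail 400 <id>".
func tailLogs(id string, n int) (string, error) {
	out, err := exec.Command("sudo", "crictl", "logs", "--tail", fmt.Sprint(n), id).CombinedOutput()
	return string(out), err
}

func main() {
	for _, component := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy", "kube-controller-manager", "storage-provisioner"} {
		ids, err := containerIDs(component)
		if err != nil {
			fmt.Println("skip", component, err)
			continue
		}
		for _, id := range ids {
			logs, _ := tailLogs(id, 400)
			fmt.Printf("=== %s [%s] ===\n%s\n", component, id, logs)
		}
	}
}

	Run on the node itself (or over SSH, as ssh_runner does in the log), this prints the tail of each matched container's logs in one pass.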
	I0920 18:22:29.803036   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:32.301986   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:27.790386   75577 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 18:22:36.572379   75086 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0920 18:22:36.572452   75086 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 18:22:36.572558   75086 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 18:22:36.572702   75086 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 18:22:36.572826   75086 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0920 18:22:36.572926   75086 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 18:22:36.574903   75086 out.go:235]   - Generating certificates and keys ...
	I0920 18:22:36.575032   75086 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 18:22:36.575117   75086 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 18:22:36.575224   75086 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0920 18:22:36.575342   75086 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0920 18:22:36.575446   75086 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0920 18:22:36.575535   75086 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0920 18:22:36.575606   75086 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0920 18:22:36.575687   75086 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0920 18:22:36.575790   75086 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0920 18:22:36.575867   75086 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0920 18:22:36.575903   75086 kubeadm.go:310] [certs] Using the existing "sa" key
	I0920 18:22:36.575970   75086 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 18:22:36.576054   75086 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 18:22:36.576141   75086 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0920 18:22:36.576223   75086 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 18:22:36.576317   75086 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 18:22:36.576373   75086 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 18:22:36.576442   75086 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 18:22:36.576506   75086 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 18:22:36.577982   75086 out.go:235]   - Booting up control plane ...
	I0920 18:22:36.578071   75086 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 18:22:36.578147   75086 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 18:22:36.578204   75086 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 18:22:36.578312   75086 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 18:22:36.578400   75086 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 18:22:36.578438   75086 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 18:22:36.578568   75086 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0920 18:22:36.578660   75086 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0920 18:22:36.578719   75086 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.174126ms
	I0920 18:22:36.578788   75086 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0920 18:22:36.578839   75086 kubeadm.go:310] [api-check] The API server is healthy after 5.501599925s
	I0920 18:22:36.578933   75086 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0920 18:22:36.579037   75086 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0920 18:22:36.579090   75086 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0920 18:22:36.579268   75086 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-768431 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0920 18:22:36.579335   75086 kubeadm.go:310] [bootstrap-token] Using token: junoe1.zmowu95mn9yb1z1f
	I0920 18:22:36.580810   75086 out.go:235]   - Configuring RBAC rules ...
	I0920 18:22:36.580919   75086 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0920 18:22:36.580989   75086 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0920 18:22:36.581127   75086 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0920 18:22:36.581290   75086 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0920 18:22:36.581456   75086 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0920 18:22:36.581589   75086 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0920 18:22:36.581742   75086 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0920 18:22:36.581827   75086 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0920 18:22:36.581916   75086 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0920 18:22:36.581925   75086 kubeadm.go:310] 
	I0920 18:22:36.581980   75086 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0920 18:22:36.581986   75086 kubeadm.go:310] 
	I0920 18:22:36.582086   75086 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0920 18:22:36.582097   75086 kubeadm.go:310] 
	I0920 18:22:36.582137   75086 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0920 18:22:36.582204   75086 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0920 18:22:36.582287   75086 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0920 18:22:36.582297   75086 kubeadm.go:310] 
	I0920 18:22:36.582389   75086 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0920 18:22:36.582398   75086 kubeadm.go:310] 
	I0920 18:22:36.582479   75086 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0920 18:22:36.582495   75086 kubeadm.go:310] 
	I0920 18:22:36.582540   75086 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0920 18:22:36.582605   75086 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0920 18:22:36.582682   75086 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0920 18:22:36.582697   75086 kubeadm.go:310] 
	I0920 18:22:36.582796   75086 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0920 18:22:36.582892   75086 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0920 18:22:36.582901   75086 kubeadm.go:310] 
	I0920 18:22:36.583026   75086 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token junoe1.zmowu95mn9yb1z1f \
	I0920 18:22:36.583163   75086 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:736f3fb521855813decb4a7214f2b8beff5f81c3971b60decfa20a8807626a2d \
	I0920 18:22:36.583199   75086 kubeadm.go:310] 	--control-plane 
	I0920 18:22:36.583208   75086 kubeadm.go:310] 
	I0920 18:22:36.583316   75086 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0920 18:22:36.583335   75086 kubeadm.go:310] 
	I0920 18:22:36.583489   75086 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token junoe1.zmowu95mn9yb1z1f \
	I0920 18:22:36.583648   75086 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:736f3fb521855813decb4a7214f2b8beff5f81c3971b60decfa20a8807626a2d 
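	[editor's sketch] The join commands printed above pin the cluster CA with --discovery-token-ca-cert-hash sha256:... . That value is the SHA-256 of the DER-encoded public key (SubjectPublicKeyInfo) of the cluster CA certificate, the same value kubeadm's documentation shows how to derive with openssl. The Go sketch below recomputes it from a CA PEM file so it can be cross-checked against the hash in the log; the file path is an assumption for illustration, not taken from this run.

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Path is illustrative; on a kubeadm control plane the CA usually
	// lives at /etc/kubernetes/pki/ca.crt.
	pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// The discovery hash is SHA-256 over the DER-encoded
	// SubjectPublicKeyInfo of the CA's public key.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		panic(err)
	}
	sum := sha256.Sum256(spki)
	fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
}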
	I0920 18:22:36.583664   75086 cni.go:84] Creating CNI manager for ""
	I0920 18:22:36.583673   75086 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 18:22:36.585378   75086 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
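	[editor's sketch] At this point minikube has decided to use the bridge CNI for the kvm2 + crio combination; the actual config lands a little later in this log as a 496-byte scp to /etc/cni/net.d/1-k8s.conflist. As a rough illustration only, not minikube's exact template, a typical bridge-plus-portmap conflist of that shape looks like the string below; the subnet, names, and file mode are placeholders.

package main

import "os"

// A typical CNI conflist for the bridge plugin with host-local IPAM and
// port mapping; the values here are placeholders, not minikube's template.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
`

func main() {
	// Writing requires root; /etc/cni/net.d is where the CRI runtime
	// (CRI-O in this run) picks up network configs.
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		panic(err)
	}
}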
	I0920 18:22:33.523585   75264 api_server.go:253] Checking apiserver healthz at https://192.168.72.190:8444/healthz ...
	I0920 18:22:33.528658   75264 api_server.go:279] https://192.168.72.190:8444/healthz returned 200:
	ok
	I0920 18:22:33.529826   75264 api_server.go:141] control plane version: v1.31.1
	I0920 18:22:33.529861   75264 api_server.go:131] duration metric: took 3.95084435s to wait for apiserver health ...
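	[editor's sketch] The api_server.go lines above probe https://192.168.72.190:8444/healthz and treat an HTTP 200 with body "ok" as a healthy control plane. A minimal Go sketch of that probe follows, with the URL taken from the log; minikube's real client authenticates with the cluster CA and client certificates, whereas this sketch skips TLS verification purely to keep the example short.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Skipping verification only to keep the sketch short; a real
		// health check should trust the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.72.190:8444/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s returned %d: %s\n", resp.Request.URL, resp.StatusCode, body)
}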
	I0920 18:22:33.529870   75264 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 18:22:33.529894   75264 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:22:33.529938   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:22:33.571762   75264 cri.go:89] found id: "0ea0cfbd9902ae8e3797c77592bfe05a5f8162f75a391f5c2dc992eb5154c4ad"
	I0920 18:22:33.571783   75264 cri.go:89] found id: ""
	I0920 18:22:33.571789   75264 logs.go:276] 1 containers: [0ea0cfbd9902ae8e3797c77592bfe05a5f8162f75a391f5c2dc992eb5154c4ad]
	I0920 18:22:33.571840   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:33.576180   75264 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:22:33.576268   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:22:33.621887   75264 cri.go:89] found id: "65da0bae1c849a395cc241962bb6cbdf9f3edf11cdc95f4495ac2e427f9602d4"
	I0920 18:22:33.621915   75264 cri.go:89] found id: ""
	I0920 18:22:33.621926   75264 logs.go:276] 1 containers: [65da0bae1c849a395cc241962bb6cbdf9f3edf11cdc95f4495ac2e427f9602d4]
	I0920 18:22:33.622013   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:33.626552   75264 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:22:33.626609   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:22:33.664879   75264 cri.go:89] found id: "606f7c8a9a095f499ae422de76f76105607db72e57b9b0a6db3e3c5a81656c5c"
	I0920 18:22:33.664902   75264 cri.go:89] found id: ""
	I0920 18:22:33.664912   75264 logs.go:276] 1 containers: [606f7c8a9a095f499ae422de76f76105607db72e57b9b0a6db3e3c5a81656c5c]
	I0920 18:22:33.664980   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:33.669155   75264 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:22:33.669231   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:22:33.710930   75264 cri.go:89] found id: "6ba313deffc6185e59dac50051e933f6b2ea70d630cf36b2b8c2c07042f793ba"
	I0920 18:22:33.710957   75264 cri.go:89] found id: ""
	I0920 18:22:33.710968   75264 logs.go:276] 1 containers: [6ba313deffc6185e59dac50051e933f6b2ea70d630cf36b2b8c2c07042f793ba]
	I0920 18:22:33.711030   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:33.715618   75264 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:22:33.715689   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:22:33.751646   75264 cri.go:89] found id: "702f7f440eb6016c2c758a6be4e1072274ab6dcd169d7a081e5464869a7c299c"
	I0920 18:22:33.751672   75264 cri.go:89] found id: ""
	I0920 18:22:33.751690   75264 logs.go:276] 1 containers: [702f7f440eb6016c2c758a6be4e1072274ab6dcd169d7a081e5464869a7c299c]
	I0920 18:22:33.751758   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:33.756376   75264 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:22:33.756441   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:22:33.803246   75264 cri.go:89] found id: "4ca303b795795bd920f261c88630f30fd51a91b9694a9a6e67af8164bab8242b"
	I0920 18:22:33.803267   75264 cri.go:89] found id: ""
	I0920 18:22:33.803276   75264 logs.go:276] 1 containers: [4ca303b795795bd920f261c88630f30fd51a91b9694a9a6e67af8164bab8242b]
	I0920 18:22:33.803334   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:33.807825   75264 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:22:33.807892   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:22:33.849531   75264 cri.go:89] found id: ""
	I0920 18:22:33.849559   75264 logs.go:276] 0 containers: []
	W0920 18:22:33.849569   75264 logs.go:278] No container was found matching "kindnet"
	I0920 18:22:33.849583   75264 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0920 18:22:33.849645   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0920 18:22:33.891058   75264 cri.go:89] found id: "001bdc98537f0917f021c14a5d0733bd31cf19be1b4862502458a7a90e4300da"
	I0920 18:22:33.891084   75264 cri.go:89] found id: "c42201b6e3d55b6fd8beaf04d416658b228bf627e519ddb022dad6d17219cc1c"
	I0920 18:22:33.891090   75264 cri.go:89] found id: ""
	I0920 18:22:33.891099   75264 logs.go:276] 2 containers: [001bdc98537f0917f021c14a5d0733bd31cf19be1b4862502458a7a90e4300da c42201b6e3d55b6fd8beaf04d416658b228bf627e519ddb022dad6d17219cc1c]
	I0920 18:22:33.891157   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:33.896028   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:33.900174   75264 logs.go:123] Gathering logs for container status ...
	I0920 18:22:33.900209   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:22:33.944777   75264 logs.go:123] Gathering logs for dmesg ...
	I0920 18:22:33.944803   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:22:33.960062   75264 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:22:33.960094   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 18:22:34.087507   75264 logs.go:123] Gathering logs for etcd [65da0bae1c849a395cc241962bb6cbdf9f3edf11cdc95f4495ac2e427f9602d4] ...
	I0920 18:22:34.087556   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 65da0bae1c849a395cc241962bb6cbdf9f3edf11cdc95f4495ac2e427f9602d4"
	I0920 18:22:34.140051   75264 logs.go:123] Gathering logs for kube-proxy [702f7f440eb6016c2c758a6be4e1072274ab6dcd169d7a081e5464869a7c299c] ...
	I0920 18:22:34.140088   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 702f7f440eb6016c2c758a6be4e1072274ab6dcd169d7a081e5464869a7c299c"
	I0920 18:22:34.183540   75264 logs.go:123] Gathering logs for kube-controller-manager [4ca303b795795bd920f261c88630f30fd51a91b9694a9a6e67af8164bab8242b] ...
	I0920 18:22:34.183582   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ca303b795795bd920f261c88630f30fd51a91b9694a9a6e67af8164bab8242b"
	I0920 18:22:34.251978   75264 logs.go:123] Gathering logs for storage-provisioner [001bdc98537f0917f021c14a5d0733bd31cf19be1b4862502458a7a90e4300da] ...
	I0920 18:22:34.252025   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 001bdc98537f0917f021c14a5d0733bd31cf19be1b4862502458a7a90e4300da"
	I0920 18:22:34.296522   75264 logs.go:123] Gathering logs for storage-provisioner [c42201b6e3d55b6fd8beaf04d416658b228bf627e519ddb022dad6d17219cc1c] ...
	I0920 18:22:34.296562   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c42201b6e3d55b6fd8beaf04d416658b228bf627e519ddb022dad6d17219cc1c"
	I0920 18:22:34.338227   75264 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:22:34.338254   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:22:34.703986   75264 logs.go:123] Gathering logs for kubelet ...
	I0920 18:22:34.704037   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:22:34.788091   75264 logs.go:123] Gathering logs for kube-apiserver [0ea0cfbd9902ae8e3797c77592bfe05a5f8162f75a391f5c2dc992eb5154c4ad] ...
	I0920 18:22:34.788139   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ea0cfbd9902ae8e3797c77592bfe05a5f8162f75a391f5c2dc992eb5154c4ad"
	I0920 18:22:34.861403   75264 logs.go:123] Gathering logs for coredns [606f7c8a9a095f499ae422de76f76105607db72e57b9b0a6db3e3c5a81656c5c] ...
	I0920 18:22:34.861435   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 606f7c8a9a095f499ae422de76f76105607db72e57b9b0a6db3e3c5a81656c5c"
	I0920 18:22:34.906740   75264 logs.go:123] Gathering logs for kube-scheduler [6ba313deffc6185e59dac50051e933f6b2ea70d630cf36b2b8c2c07042f793ba] ...
	I0920 18:22:34.906773   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6ba313deffc6185e59dac50051e933f6b2ea70d630cf36b2b8c2c07042f793ba"
	I0920 18:22:37.454205   75264 system_pods.go:59] 8 kube-system pods found
	I0920 18:22:37.454243   75264 system_pods.go:61] "coredns-7c65d6cfc9-dmdfb" [8e4ffdb3-37e0-4498-bdf5-d6e7dbcad020] Running
	I0920 18:22:37.454252   75264 system_pods.go:61] "etcd-default-k8s-diff-port-553719" [e8de0e96-5f3a-4f3f-ae69-c5da7d3c2eb7] Running
	I0920 18:22:37.454257   75264 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-553719" [5164e26c-7fcd-41dc-865f-429046a9ad61] Running
	I0920 18:22:37.454263   75264 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-553719" [b2c00a76-9645-4c63-9812-43e6ed9f1d4f] Running
	I0920 18:22:37.454268   75264 system_pods.go:61] "kube-proxy-p9crq" [83e0f53d-6960-42c4-904d-ea85ba9160f4] Running
	I0920 18:22:37.454273   75264 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-553719" [42155f7b-95bb-467b-acf5-89eb4f80cf76] Running
	I0920 18:22:37.454289   75264 system_pods.go:61] "metrics-server-6867b74b74-vtl79" [29e0b6eb-22a9-4e37-97f9-83b48cc38193] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 18:22:37.454296   75264 system_pods.go:61] "storage-provisioner" [6fad2d07-f99e-45ac-9657-bce6d73d7fce] Running
	I0920 18:22:37.454310   75264 system_pods.go:74] duration metric: took 3.924431145s to wait for pod list to return data ...
	I0920 18:22:37.454324   75264 default_sa.go:34] waiting for default service account to be created ...
	I0920 18:22:37.457599   75264 default_sa.go:45] found service account: "default"
	I0920 18:22:37.457619   75264 default_sa.go:55] duration metric: took 3.289936ms for default service account to be created ...
	I0920 18:22:37.457627   75264 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 18:22:37.461977   75264 system_pods.go:86] 8 kube-system pods found
	I0920 18:22:37.462001   75264 system_pods.go:89] "coredns-7c65d6cfc9-dmdfb" [8e4ffdb3-37e0-4498-bdf5-d6e7dbcad020] Running
	I0920 18:22:37.462006   75264 system_pods.go:89] "etcd-default-k8s-diff-port-553719" [e8de0e96-5f3a-4f3f-ae69-c5da7d3c2eb7] Running
	I0920 18:22:37.462011   75264 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-553719" [5164e26c-7fcd-41dc-865f-429046a9ad61] Running
	I0920 18:22:37.462015   75264 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-553719" [b2c00a76-9645-4c63-9812-43e6ed9f1d4f] Running
	I0920 18:22:37.462019   75264 system_pods.go:89] "kube-proxy-p9crq" [83e0f53d-6960-42c4-904d-ea85ba9160f4] Running
	I0920 18:22:37.462022   75264 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-553719" [42155f7b-95bb-467b-acf5-89eb4f80cf76] Running
	I0920 18:22:37.462028   75264 system_pods.go:89] "metrics-server-6867b74b74-vtl79" [29e0b6eb-22a9-4e37-97f9-83b48cc38193] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 18:22:37.462032   75264 system_pods.go:89] "storage-provisioner" [6fad2d07-f99e-45ac-9657-bce6d73d7fce] Running
	I0920 18:22:37.462040   75264 system_pods.go:126] duration metric: took 4.409042ms to wait for k8s-apps to be running ...
	I0920 18:22:37.462047   75264 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 18:22:37.462087   75264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:22:37.479350   75264 system_svc.go:56] duration metric: took 17.294926ms WaitForService to wait for kubelet
	I0920 18:22:37.479380   75264 kubeadm.go:582] duration metric: took 4m28.223143222s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 18:22:37.479402   75264 node_conditions.go:102] verifying NodePressure condition ...
	I0920 18:22:37.482497   75264 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 18:22:37.482517   75264 node_conditions.go:123] node cpu capacity is 2
	I0920 18:22:37.482532   75264 node_conditions.go:105] duration metric: took 3.125658ms to run NodePressure ...
	I0920 18:22:37.482543   75264 start.go:241] waiting for startup goroutines ...
	I0920 18:22:37.482549   75264 start.go:246] waiting for cluster config update ...
	I0920 18:22:37.482561   75264 start.go:255] writing updated cluster config ...
	I0920 18:22:37.482817   75264 ssh_runner.go:195] Run: rm -f paused
	I0920 18:22:37.532982   75264 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 18:22:37.534936   75264 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-553719" cluster and "default" namespace by default
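	[editor's sketch] Just before declaring this cluster done, the run above confirms the kubelet unit is up with "sudo systemctl is-active --quiet service kubelet", relying on the command's exit status rather than its output. A standalone Go sketch of the same check, assuming systemd on the target host and non-interactive sudo; it uses the plain unit name "kubelet".

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// "systemctl is-active --quiet <unit>" prints nothing and signals the
	// state via its exit code: 0 means active, non-zero means anything else.
	err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run()
	if err != nil {
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			fmt.Println("kubelet is not active, exit code:", exitErr.ExitCode())
			return
		}
		panic(err)
	}
	fmt.Println("kubelet is active")
}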
	I0920 18:22:34.302204   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:36.802991   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:36.586809   75086 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 18:22:36.599904   75086 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0920 18:22:36.620050   75086 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 18:22:36.620133   75086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:22:36.620149   75086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-768431 minikube.k8s.io/updated_at=2024_09_20T18_22_36_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=0626f22cf0d915d75e291a5bce701f94395056e1 minikube.k8s.io/name=embed-certs-768431 minikube.k8s.io/primary=true
	I0920 18:22:36.636625   75086 ops.go:34] apiserver oom_adj: -16
	I0920 18:22:36.817434   75086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:22:37.318133   75086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:22:37.818515   75086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:22:38.318005   75086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:22:38.817759   75086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:22:39.318481   75086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:22:39.817943   75086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:22:40.317957   75086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:22:40.423342   75086 kubeadm.go:1113] duration metric: took 3.803277177s to wait for elevateKubeSystemPrivileges
	I0920 18:22:40.423389   75086 kubeadm.go:394] duration metric: took 4m59.290098215s to StartCluster
	I0920 18:22:40.423413   75086 settings.go:142] acquiring lock: {Name:mk2a13c58cdc0faf8cddca5d6716175d45db9bfd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:22:40.423516   75086 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19672-8777/kubeconfig
	I0920 18:22:40.426054   75086 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/kubeconfig: {Name:mkf32a4c736808e023459b2f0e40188618a38db1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:22:40.426360   75086 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.202 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 18:22:40.426456   75086 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0920 18:22:40.426546   75086 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-768431"
	I0920 18:22:40.426566   75086 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-768431"
	W0920 18:22:40.426578   75086 addons.go:243] addon storage-provisioner should already be in state true
	I0920 18:22:40.426576   75086 addons.go:69] Setting default-storageclass=true in profile "embed-certs-768431"
	I0920 18:22:40.426608   75086 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-768431"
	I0920 18:22:40.426622   75086 host.go:66] Checking if "embed-certs-768431" exists ...
	I0920 18:22:40.426621   75086 addons.go:69] Setting metrics-server=true in profile "embed-certs-768431"
	I0920 18:22:40.426662   75086 config.go:182] Loaded profile config "embed-certs-768431": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:22:40.426669   75086 addons.go:234] Setting addon metrics-server=true in "embed-certs-768431"
	W0920 18:22:40.426699   75086 addons.go:243] addon metrics-server should already be in state true
	I0920 18:22:40.426739   75086 host.go:66] Checking if "embed-certs-768431" exists ...
	I0920 18:22:40.427075   75086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:22:40.427116   75086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:22:40.427120   75086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:22:40.427155   75086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:22:40.427161   75086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:22:40.427251   75086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:22:40.427977   75086 out.go:177] * Verifying Kubernetes components...
	I0920 18:22:40.429647   75086 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:22:40.443468   75086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40103
	I0920 18:22:40.443663   75086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37871
	I0920 18:22:40.444055   75086 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:22:40.444062   75086 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:22:40.444562   75086 main.go:141] libmachine: Using API Version  1
	I0920 18:22:40.444586   75086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:22:40.444732   75086 main.go:141] libmachine: Using API Version  1
	I0920 18:22:40.444750   75086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:22:40.445016   75086 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:22:40.445110   75086 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:22:40.445584   75086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:22:40.445634   75086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:22:40.445652   75086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:22:40.445654   75086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38771
	I0920 18:22:40.445684   75086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:22:40.446227   75086 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:22:40.446872   75086 main.go:141] libmachine: Using API Version  1
	I0920 18:22:40.446892   75086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:22:40.447280   75086 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:22:40.447692   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetState
	I0920 18:22:40.451332   75086 addons.go:234] Setting addon default-storageclass=true in "embed-certs-768431"
	W0920 18:22:40.451355   75086 addons.go:243] addon default-storageclass should already be in state true
	I0920 18:22:40.451382   75086 host.go:66] Checking if "embed-certs-768431" exists ...
	I0920 18:22:40.451753   75086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:22:40.451803   75086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:22:40.463317   75086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39353
	I0920 18:22:40.463816   75086 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:22:40.464544   75086 main.go:141] libmachine: Using API Version  1
	I0920 18:22:40.464577   75086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:22:40.464961   75086 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:22:40.467445   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetState
	I0920 18:22:40.469290   75086 main.go:141] libmachine: (embed-certs-768431) Calling .DriverName
	I0920 18:22:40.471212   75086 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0920 18:22:40.471862   75086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40689
	I0920 18:22:40.472254   75086 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:22:40.472515   75086 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0920 18:22:40.472535   75086 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0920 18:22:40.472557   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHHostname
	I0920 18:22:40.472718   75086 main.go:141] libmachine: Using API Version  1
	I0920 18:22:40.472741   75086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:22:40.473259   75086 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:22:40.473477   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetState
	I0920 18:22:40.473753   75086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36495
	I0920 18:22:40.474173   75086 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:22:40.474778   75086 main.go:141] libmachine: Using API Version  1
	I0920 18:22:40.474796   75086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:22:40.475166   75086 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:22:40.475726   75086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:22:40.475766   75086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:22:40.475984   75086 main.go:141] libmachine: (embed-certs-768431) Calling .DriverName
	I0920 18:22:40.476244   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:22:40.476583   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:22:40.476605   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:22:40.476773   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHPort
	I0920 18:22:40.476953   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHKeyPath
	I0920 18:22:40.477053   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHUsername
	I0920 18:22:40.477196   75086 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/embed-certs-768431/id_rsa Username:docker}
	I0920 18:22:40.478244   75086 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:22:40.479443   75086 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 18:22:40.479459   75086 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 18:22:40.479476   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHHostname
	I0920 18:22:40.482119   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:22:40.482400   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:22:40.482423   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:22:40.482639   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHPort
	I0920 18:22:40.482840   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHKeyPath
	I0920 18:22:40.482968   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHUsername
	I0920 18:22:40.483145   75086 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/embed-certs-768431/id_rsa Username:docker}
	I0920 18:22:40.494454   75086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35971
	I0920 18:22:40.494948   75086 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:22:40.495425   75086 main.go:141] libmachine: Using API Version  1
	I0920 18:22:40.495444   75086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:22:40.495798   75086 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:22:40.495990   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetState
	I0920 18:22:40.497591   75086 main.go:141] libmachine: (embed-certs-768431) Calling .DriverName
	I0920 18:22:40.497878   75086 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 18:22:40.497894   75086 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 18:22:40.497913   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHHostname
	I0920 18:22:40.500755   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:22:40.501130   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:22:40.501153   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:22:40.501355   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHPort
	I0920 18:22:40.501548   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHKeyPath
	I0920 18:22:40.501725   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHUsername
	I0920 18:22:40.502091   75086 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/embed-certs-768431/id_rsa Username:docker}
	I0920 18:22:40.646738   75086 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:22:40.673285   75086 node_ready.go:35] waiting up to 6m0s for node "embed-certs-768431" to be "Ready" ...
	I0920 18:22:40.684618   75086 node_ready.go:49] node "embed-certs-768431" has status "Ready":"True"
	I0920 18:22:40.684643   75086 node_ready.go:38] duration metric: took 11.298825ms for node "embed-certs-768431" to be "Ready" ...
	I0920 18:22:40.684653   75086 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:22:40.689151   75086 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-768431" in "kube-system" namespace to be "Ready" ...
	I0920 18:22:40.734882   75086 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 18:22:40.744479   75086 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 18:22:40.884534   75086 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0920 18:22:40.884556   75086 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0920 18:22:41.018287   75086 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0920 18:22:41.018313   75086 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0920 18:22:41.091216   75086 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 18:22:41.091247   75086 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0920 18:22:41.132887   75086 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 18:22:41.538445   75086 main.go:141] libmachine: Making call to close driver server
	I0920 18:22:41.538472   75086 main.go:141] libmachine: (embed-certs-768431) Calling .Close
	I0920 18:22:41.538774   75086 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:22:41.538795   75086 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:22:41.538806   75086 main.go:141] libmachine: Making call to close driver server
	I0920 18:22:41.538815   75086 main.go:141] libmachine: (embed-certs-768431) Calling .Close
	I0920 18:22:41.539010   75086 main.go:141] libmachine: Making call to close driver server
	I0920 18:22:41.539027   75086 main.go:141] libmachine: (embed-certs-768431) Calling .Close
	I0920 18:22:41.539118   75086 main.go:141] libmachine: (embed-certs-768431) DBG | Closing plugin on server side
	I0920 18:22:41.539137   75086 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:22:41.539205   75086 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:22:41.539273   75086 main.go:141] libmachine: (embed-certs-768431) DBG | Closing plugin on server side
	I0920 18:22:41.539286   75086 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:22:41.539300   75086 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:22:41.539309   75086 main.go:141] libmachine: Making call to close driver server
	I0920 18:22:41.539348   75086 main.go:141] libmachine: (embed-certs-768431) Calling .Close
	I0920 18:22:41.539629   75086 main.go:141] libmachine: (embed-certs-768431) DBG | Closing plugin on server side
	I0920 18:22:41.539693   75086 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:22:41.539712   75086 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:22:41.559164   75086 main.go:141] libmachine: Making call to close driver server
	I0920 18:22:41.559191   75086 main.go:141] libmachine: (embed-certs-768431) Calling .Close
	I0920 18:22:41.559517   75086 main.go:141] libmachine: (embed-certs-768431) DBG | Closing plugin on server side
	I0920 18:22:41.559580   75086 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:22:41.559596   75086 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:22:41.880601   75086 main.go:141] libmachine: Making call to close driver server
	I0920 18:22:41.880640   75086 main.go:141] libmachine: (embed-certs-768431) Calling .Close
	I0920 18:22:41.880998   75086 main.go:141] libmachine: (embed-certs-768431) DBG | Closing plugin on server side
	I0920 18:22:41.881026   75086 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:22:41.881039   75086 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:22:41.881047   75086 main.go:141] libmachine: Making call to close driver server
	I0920 18:22:41.881052   75086 main.go:141] libmachine: (embed-certs-768431) Calling .Close
	I0920 18:22:41.881304   75086 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:22:41.881322   75086 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:22:41.881336   75086 addons.go:475] Verifying addon metrics-server=true in "embed-certs-768431"
	I0920 18:22:41.883315   75086 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
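	[editor's sketch] The addon path above stages the metrics-server manifests under /etc/kubernetes/addons/ and applies them in a single kubectl apply with several -f flags, pointing kubectl at /var/lib/minikube/kubeconfig. The Go sketch below reproduces that invocation with the binary and manifest paths copied from the log; it drops the sudo wrapper and passes the kubeconfig through the KUBECONFIG environment variable as a simplification.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.31.1/kubectl"
	manifests := []string{
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml",
	}
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command(kubectl, args...)
	// Point kubectl at the kubeconfig minikube keeps on the node.
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "apply failed:", err)
		os.Exit(1)
	}
}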
	I0920 18:22:39.302453   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:41.802428   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:41.885367   75086 addons.go:510] duration metric: took 1.45891561s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0920 18:22:42.696266   75086 pod_ready.go:103] pod "etcd-embed-certs-768431" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:44.300965   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:46.301969   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:45.195160   75086 pod_ready.go:103] pod "etcd-embed-certs-768431" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:46.200142   75086 pod_ready.go:93] pod "etcd-embed-certs-768431" in "kube-system" namespace has status "Ready":"True"
	I0920 18:22:46.200166   75086 pod_ready.go:82] duration metric: took 5.510989222s for pod "etcd-embed-certs-768431" in "kube-system" namespace to be "Ready" ...
	I0920 18:22:46.200175   75086 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-768431" in "kube-system" namespace to be "Ready" ...
	I0920 18:22:46.205619   75086 pod_ready.go:93] pod "kube-apiserver-embed-certs-768431" in "kube-system" namespace has status "Ready":"True"
	I0920 18:22:46.205647   75086 pod_ready.go:82] duration metric: took 5.464252ms for pod "kube-apiserver-embed-certs-768431" in "kube-system" namespace to be "Ready" ...
	I0920 18:22:46.205659   75086 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-768431" in "kube-system" namespace to be "Ready" ...
	I0920 18:22:46.210040   75086 pod_ready.go:93] pod "kube-controller-manager-embed-certs-768431" in "kube-system" namespace has status "Ready":"True"
	I0920 18:22:46.210066   75086 pod_ready.go:82] duration metric: took 4.397776ms for pod "kube-controller-manager-embed-certs-768431" in "kube-system" namespace to be "Ready" ...
	I0920 18:22:46.210079   75086 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-768431" in "kube-system" namespace to be "Ready" ...
	I0920 18:22:48.216587   75086 pod_ready.go:103] pod "kube-scheduler-embed-certs-768431" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:49.217996   75086 pod_ready.go:93] pod "kube-scheduler-embed-certs-768431" in "kube-system" namespace has status "Ready":"True"
	I0920 18:22:49.218019   75086 pod_ready.go:82] duration metric: took 3.007931797s for pod "kube-scheduler-embed-certs-768431" in "kube-system" namespace to be "Ready" ...
	I0920 18:22:49.218025   75086 pod_ready.go:39] duration metric: took 8.533358652s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:22:49.218039   75086 api_server.go:52] waiting for apiserver process to appear ...
	I0920 18:22:49.218100   75086 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:22:49.235456   75086 api_server.go:72] duration metric: took 8.809056301s to wait for apiserver process to appear ...
	I0920 18:22:49.235480   75086 api_server.go:88] waiting for apiserver healthz status ...
	I0920 18:22:49.235499   75086 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0920 18:22:49.239562   75086 api_server.go:279] https://192.168.61.202:8443/healthz returned 200:
	ok
	I0920 18:22:49.240999   75086 api_server.go:141] control plane version: v1.31.1
	I0920 18:22:49.241024   75086 api_server.go:131] duration metric: took 5.537177ms to wait for apiserver health ...
	I0920 18:22:49.241033   75086 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 18:22:49.246297   75086 system_pods.go:59] 9 kube-system pods found
	I0920 18:22:49.246335   75086 system_pods.go:61] "coredns-7c65d6cfc9-g5tkc" [8877c0e8-6c8f-4a62-94bd-508982faee3c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0920 18:22:49.246342   75086 system_pods.go:61] "coredns-7c65d6cfc9-jkkdn" [f64288e4-009c-4ba8-93e7-30ca5296af46] Running
	I0920 18:22:49.246348   75086 system_pods.go:61] "etcd-embed-certs-768431" [f0ba7c9a-cc8b-43d4-aa48-a8f923d3c515] Running
	I0920 18:22:49.246352   75086 system_pods.go:61] "kube-apiserver-embed-certs-768431" [dba3b6cc-95db-47a0-ba41-3aa3ed3e8adb] Running
	I0920 18:22:49.246356   75086 system_pods.go:61] "kube-controller-manager-embed-certs-768431" [ff933e33-4c5c-4a8c-8ee4-6f0cb3ea4c3a] Running
	I0920 18:22:49.246359   75086 system_pods.go:61] "kube-proxy-c4527" [2e2d5102-0c42-4b87-8a27-dd53b8eb41f9] Running
	I0920 18:22:49.246362   75086 system_pods.go:61] "kube-scheduler-embed-certs-768431" [8cca3406-a395-4dcd-97ac-d8ce3e8deac6] Running
	I0920 18:22:49.246367   75086 system_pods.go:61] "metrics-server-6867b74b74-9snmf" [5fb654f5-5e73-436e-bc9d-04ef5077deb4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 18:22:49.246373   75086 system_pods.go:61] "storage-provisioner" [09227b45-ea2a-4fcc-b082-9978e5f00a4b] Running
	I0920 18:22:49.246379   75086 system_pods.go:74] duration metric: took 5.340615ms to wait for pod list to return data ...
	I0920 18:22:49.246388   75086 default_sa.go:34] waiting for default service account to be created ...
	I0920 18:22:49.249593   75086 default_sa.go:45] found service account: "default"
	I0920 18:22:49.249617   75086 default_sa.go:55] duration metric: took 3.222486ms for default service account to be created ...
	I0920 18:22:49.249626   75086 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 18:22:49.255747   75086 system_pods.go:86] 9 kube-system pods found
	I0920 18:22:49.255780   75086 system_pods.go:89] "coredns-7c65d6cfc9-g5tkc" [8877c0e8-6c8f-4a62-94bd-508982faee3c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0920 18:22:49.255790   75086 system_pods.go:89] "coredns-7c65d6cfc9-jkkdn" [f64288e4-009c-4ba8-93e7-30ca5296af46] Running
	I0920 18:22:49.255799   75086 system_pods.go:89] "etcd-embed-certs-768431" [f0ba7c9a-cc8b-43d4-aa48-a8f923d3c515] Running
	I0920 18:22:49.255805   75086 system_pods.go:89] "kube-apiserver-embed-certs-768431" [dba3b6cc-95db-47a0-ba41-3aa3ed3e8adb] Running
	I0920 18:22:49.255811   75086 system_pods.go:89] "kube-controller-manager-embed-certs-768431" [ff933e33-4c5c-4a8c-8ee4-6f0cb3ea4c3a] Running
	I0920 18:22:49.255817   75086 system_pods.go:89] "kube-proxy-c4527" [2e2d5102-0c42-4b87-8a27-dd53b8eb41f9] Running
	I0920 18:22:49.255826   75086 system_pods.go:89] "kube-scheduler-embed-certs-768431" [8cca3406-a395-4dcd-97ac-d8ce3e8deac6] Running
	I0920 18:22:49.255834   75086 system_pods.go:89] "metrics-server-6867b74b74-9snmf" [5fb654f5-5e73-436e-bc9d-04ef5077deb4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 18:22:49.255845   75086 system_pods.go:89] "storage-provisioner" [09227b45-ea2a-4fcc-b082-9978e5f00a4b] Running
	I0920 18:22:49.255852   75086 system_pods.go:126] duration metric: took 6.220419ms to wait for k8s-apps to be running ...
	I0920 18:22:49.255863   75086 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 18:22:49.255918   75086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:22:49.271916   75086 system_svc.go:56] duration metric: took 16.046857ms WaitForService to wait for kubelet
	I0920 18:22:49.271946   75086 kubeadm.go:582] duration metric: took 8.845551531s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 18:22:49.271962   75086 node_conditions.go:102] verifying NodePressure condition ...
	I0920 18:22:49.277150   75086 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 18:22:49.277190   75086 node_conditions.go:123] node cpu capacity is 2
	I0920 18:22:49.277203   75086 node_conditions.go:105] duration metric: took 5.234938ms to run NodePressure ...
	I0920 18:22:49.277216   75086 start.go:241] waiting for startup goroutines ...
	I0920 18:22:49.277226   75086 start.go:246] waiting for cluster config update ...
	I0920 18:22:49.277240   75086 start.go:255] writing updated cluster config ...
	I0920 18:22:49.278062   75086 ssh_runner.go:195] Run: rm -f paused
	I0920 18:22:49.329596   75086 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 18:22:49.331593   75086 out.go:177] * Done! kubectl is now configured to use "embed-certs-768431" cluster and "default" namespace by default
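	[editor's sketch] Both the run that just finished and the other run interleaved here (process 74753, still reporting "Ready":"False" for metrics-server-6867b74b74-tfsff) derive that status from the pod's Ready condition. The sketch below shows the same check with standard client-go, not minikube's pod_ready.go; the kubeconfig path, namespace, and pod name are taken from the log and assume the sketch runs where that kubeconfig is reachable.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True, the same
// signal the pod_ready log lines poll for.
func podReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := client.CoreV1().Pods("kube-system").Get(context.Background(), "metrics-server-6867b74b74-tfsff", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("pod %s Ready=%v\n", pod.Name, podReady(pod))
}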
	I0920 18:22:48.802304   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:51.301288   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:53.801957   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:56.301310   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:58.302165   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:23:00.801931   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:23:03.301289   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:23:05.801777   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:23:08.301539   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:23:08.801990   74753 pod_ready.go:82] duration metric: took 4m0.006972987s for pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace to be "Ready" ...
	E0920 18:23:08.802010   74753 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0920 18:23:08.802019   74753 pod_ready.go:39] duration metric: took 4m2.40678087s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
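The 4m0s wait above expires because the metrics-server pod never reports Ready. A hedged manual equivalent of the condition being polled, using the pod and profile names from the log (the jsonpath expression is an illustration, not what minikube itself runs):

	kubectl --context no-preload-956403 -n kube-system get pod metrics-server-6867b74b74-tfsff \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'

A healthy pod prints True; here it stays False until the deadline is hit, so WaitExtra exits with context deadline exceeded.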
	I0920 18:23:08.802035   74753 api_server.go:52] waiting for apiserver process to appear ...
	I0920 18:23:08.802062   74753 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:23:08.802110   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:23:08.848687   74753 cri.go:89] found id: "334e4df5baa4f8f7d58af2308fab7f78de22f7e07d6dfbcbeaa0df2a52f6b207"
	I0920 18:23:08.848709   74753 cri.go:89] found id: ""
	I0920 18:23:08.848716   74753 logs.go:276] 1 containers: [334e4df5baa4f8f7d58af2308fab7f78de22f7e07d6dfbcbeaa0df2a52f6b207]
	I0920 18:23:08.848764   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:08.853180   74753 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:23:08.853239   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:23:08.889768   74753 cri.go:89] found id: "98aa96314cf8f863121b9953ad6335628481f536fcab49b7c00505a17eb35ff2"
	I0920 18:23:08.889793   74753 cri.go:89] found id: ""
	I0920 18:23:08.889803   74753 logs.go:276] 1 containers: [98aa96314cf8f863121b9953ad6335628481f536fcab49b7c00505a17eb35ff2]
	I0920 18:23:08.889869   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:08.893865   74753 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:23:08.893937   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:23:08.936592   74753 cri.go:89] found id: "35f0d8dd053d4c376ccf21215492d8509ff701e7d09b345fdbe06d505ba2d6b4"
	I0920 18:23:08.936619   74753 cri.go:89] found id: ""
	I0920 18:23:08.936629   74753 logs.go:276] 1 containers: [35f0d8dd053d4c376ccf21215492d8509ff701e7d09b345fdbe06d505ba2d6b4]
	I0920 18:23:08.936768   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:08.941222   74753 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:23:08.941311   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:23:08.978894   74753 cri.go:89] found id: "8153479cebb05353bec31c41778746c485cb294bf41c3f98e30559ad7da64c64"
	I0920 18:23:08.978918   74753 cri.go:89] found id: ""
	I0920 18:23:08.978929   74753 logs.go:276] 1 containers: [8153479cebb05353bec31c41778746c485cb294bf41c3f98e30559ad7da64c64]
	I0920 18:23:08.978977   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:08.982783   74753 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:23:08.982845   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:23:09.018400   74753 cri.go:89] found id: "6df198ca54e8004a532de9b15cfe7a48a659ae06623c00a78cad528762944497"
	I0920 18:23:09.018422   74753 cri.go:89] found id: ""
	I0920 18:23:09.018430   74753 logs.go:276] 1 containers: [6df198ca54e8004a532de9b15cfe7a48a659ae06623c00a78cad528762944497]
	I0920 18:23:09.018485   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:09.022403   74753 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:23:09.022470   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:23:09.062315   74753 cri.go:89] found id: "3ebf4c520d684127cd0b6146d44e863090b9c2db3ac866e9ce3ef2d10d2d6da1"
	I0920 18:23:09.062385   74753 cri.go:89] found id: ""
	I0920 18:23:09.062397   74753 logs.go:276] 1 containers: [3ebf4c520d684127cd0b6146d44e863090b9c2db3ac866e9ce3ef2d10d2d6da1]
	I0920 18:23:09.062453   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:09.066976   74753 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:23:09.067049   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:23:09.107467   74753 cri.go:89] found id: ""
	I0920 18:23:09.107504   74753 logs.go:276] 0 containers: []
	W0920 18:23:09.107515   74753 logs.go:278] No container was found matching "kindnet"
	I0920 18:23:09.107523   74753 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0920 18:23:09.107583   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0920 18:23:09.150489   74753 cri.go:89] found id: "179e4a02f3459a613d3307806b477d54581555b18babca62d0c5e553b9562442"
	I0920 18:23:09.150519   74753 cri.go:89] found id: "3eb9abdf57de51979118cfa77b613c22e28122fcb1da9e72fbf6f3a946ed3531"
	I0920 18:23:09.150526   74753 cri.go:89] found id: ""
	I0920 18:23:09.150536   74753 logs.go:276] 2 containers: [179e4a02f3459a613d3307806b477d54581555b18babca62d0c5e553b9562442 3eb9abdf57de51979118cfa77b613c22e28122fcb1da9e72fbf6f3a946ed3531]
	I0920 18:23:09.150589   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:09.154618   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:09.158682   74753 logs.go:123] Gathering logs for container status ...
	I0920 18:23:09.158708   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:23:09.198393   74753 logs.go:123] Gathering logs for kubelet ...
	I0920 18:23:09.198420   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:23:09.266161   74753 logs.go:123] Gathering logs for kube-apiserver [334e4df5baa4f8f7d58af2308fab7f78de22f7e07d6dfbcbeaa0df2a52f6b207] ...
	I0920 18:23:09.266197   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 334e4df5baa4f8f7d58af2308fab7f78de22f7e07d6dfbcbeaa0df2a52f6b207"
	I0920 18:23:09.311406   74753 logs.go:123] Gathering logs for etcd [98aa96314cf8f863121b9953ad6335628481f536fcab49b7c00505a17eb35ff2] ...
	I0920 18:23:09.311438   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 98aa96314cf8f863121b9953ad6335628481f536fcab49b7c00505a17eb35ff2"
	I0920 18:23:09.353233   74753 logs.go:123] Gathering logs for kube-controller-manager [3ebf4c520d684127cd0b6146d44e863090b9c2db3ac866e9ce3ef2d10d2d6da1] ...
	I0920 18:23:09.353271   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ebf4c520d684127cd0b6146d44e863090b9c2db3ac866e9ce3ef2d10d2d6da1"
	I0920 18:23:09.415966   74753 logs.go:123] Gathering logs for storage-provisioner [3eb9abdf57de51979118cfa77b613c22e28122fcb1da9e72fbf6f3a946ed3531] ...
	I0920 18:23:09.416010   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3eb9abdf57de51979118cfa77b613c22e28122fcb1da9e72fbf6f3a946ed3531"
	I0920 18:23:09.462018   74753 logs.go:123] Gathering logs for storage-provisioner [179e4a02f3459a613d3307806b477d54581555b18babca62d0c5e553b9562442] ...
	I0920 18:23:09.462054   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 179e4a02f3459a613d3307806b477d54581555b18babca62d0c5e553b9562442"
	I0920 18:23:09.499851   74753 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:23:09.499883   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:23:10.018536   74753 logs.go:123] Gathering logs for dmesg ...
	I0920 18:23:10.018586   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:23:10.033607   74753 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:23:10.033639   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 18:23:10.167598   74753 logs.go:123] Gathering logs for coredns [35f0d8dd053d4c376ccf21215492d8509ff701e7d09b345fdbe06d505ba2d6b4] ...
	I0920 18:23:10.167645   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35f0d8dd053d4c376ccf21215492d8509ff701e7d09b345fdbe06d505ba2d6b4"
	I0920 18:23:10.201819   74753 logs.go:123] Gathering logs for kube-scheduler [8153479cebb05353bec31c41778746c485cb294bf41c3f98e30559ad7da64c64] ...
	I0920 18:23:10.201856   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8153479cebb05353bec31c41778746c485cb294bf41c3f98e30559ad7da64c64"
	I0920 18:23:10.237079   74753 logs.go:123] Gathering logs for kube-proxy [6df198ca54e8004a532de9b15cfe7a48a659ae06623c00a78cad528762944497] ...
	I0920 18:23:10.237108   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6df198ca54e8004a532de9b15cfe7a48a659ae06623c00a78cad528762944497"
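The log-gathering pass above follows one pattern per control-plane component: resolve the container ID with crictl, then tail its logs. The two underlying commands, exactly as they appear in the Run: lines (the placeholder ID is illustrative):

	sudo crictl ps -a --quiet --name=kube-apiserver
	sudo /usr/bin/crictl logs --tail 400 <container-id>

kubelet, CRI-O and the kernel ring buffer are collected the same way via journalctl -u kubelet, journalctl -u crio and dmesg.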
	I0920 18:23:12.778360   74753 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:23:12.796411   74753 api_server.go:72] duration metric: took 4m13.647076581s to wait for apiserver process to appear ...
	I0920 18:23:12.796438   74753 api_server.go:88] waiting for apiserver healthz status ...
	I0920 18:23:12.796475   74753 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:23:12.796525   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:23:12.833259   74753 cri.go:89] found id: "334e4df5baa4f8f7d58af2308fab7f78de22f7e07d6dfbcbeaa0df2a52f6b207"
	I0920 18:23:12.833279   74753 cri.go:89] found id: ""
	I0920 18:23:12.833287   74753 logs.go:276] 1 containers: [334e4df5baa4f8f7d58af2308fab7f78de22f7e07d6dfbcbeaa0df2a52f6b207]
	I0920 18:23:12.833334   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:12.837497   74753 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:23:12.837568   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:23:12.877602   74753 cri.go:89] found id: "98aa96314cf8f863121b9953ad6335628481f536fcab49b7c00505a17eb35ff2"
	I0920 18:23:12.877628   74753 cri.go:89] found id: ""
	I0920 18:23:12.877639   74753 logs.go:276] 1 containers: [98aa96314cf8f863121b9953ad6335628481f536fcab49b7c00505a17eb35ff2]
	I0920 18:23:12.877688   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:12.881869   74753 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:23:12.881951   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:23:12.924617   74753 cri.go:89] found id: "35f0d8dd053d4c376ccf21215492d8509ff701e7d09b345fdbe06d505ba2d6b4"
	I0920 18:23:12.924641   74753 cri.go:89] found id: ""
	I0920 18:23:12.924650   74753 logs.go:276] 1 containers: [35f0d8dd053d4c376ccf21215492d8509ff701e7d09b345fdbe06d505ba2d6b4]
	I0920 18:23:12.924710   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:12.929317   74753 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:23:12.929381   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:23:12.968642   74753 cri.go:89] found id: "8153479cebb05353bec31c41778746c485cb294bf41c3f98e30559ad7da64c64"
	I0920 18:23:12.968672   74753 cri.go:89] found id: ""
	I0920 18:23:12.968682   74753 logs.go:276] 1 containers: [8153479cebb05353bec31c41778746c485cb294bf41c3f98e30559ad7da64c64]
	I0920 18:23:12.968745   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:12.974588   74753 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:23:12.974665   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:23:13.010036   74753 cri.go:89] found id: "6df198ca54e8004a532de9b15cfe7a48a659ae06623c00a78cad528762944497"
	I0920 18:23:13.010067   74753 cri.go:89] found id: ""
	I0920 18:23:13.010078   74753 logs.go:276] 1 containers: [6df198ca54e8004a532de9b15cfe7a48a659ae06623c00a78cad528762944497]
	I0920 18:23:13.010136   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:13.014300   74753 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:23:13.014358   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:23:13.055560   74753 cri.go:89] found id: "3ebf4c520d684127cd0b6146d44e863090b9c2db3ac866e9ce3ef2d10d2d6da1"
	I0920 18:23:13.055591   74753 cri.go:89] found id: ""
	I0920 18:23:13.055601   74753 logs.go:276] 1 containers: [3ebf4c520d684127cd0b6146d44e863090b9c2db3ac866e9ce3ef2d10d2d6da1]
	I0920 18:23:13.055663   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:13.059828   74753 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:23:13.059887   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:23:13.100161   74753 cri.go:89] found id: ""
	I0920 18:23:13.100189   74753 logs.go:276] 0 containers: []
	W0920 18:23:13.100199   74753 logs.go:278] No container was found matching "kindnet"
	I0920 18:23:13.100207   74753 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0920 18:23:13.100269   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0920 18:23:13.136788   74753 cri.go:89] found id: "179e4a02f3459a613d3307806b477d54581555b18babca62d0c5e553b9562442"
	I0920 18:23:13.136818   74753 cri.go:89] found id: "3eb9abdf57de51979118cfa77b613c22e28122fcb1da9e72fbf6f3a946ed3531"
	I0920 18:23:13.136824   74753 cri.go:89] found id: ""
	I0920 18:23:13.136831   74753 logs.go:276] 2 containers: [179e4a02f3459a613d3307806b477d54581555b18babca62d0c5e553b9562442 3eb9abdf57de51979118cfa77b613c22e28122fcb1da9e72fbf6f3a946ed3531]
	I0920 18:23:13.136894   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:13.140956   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:13.145991   74753 logs.go:123] Gathering logs for container status ...
	I0920 18:23:13.146023   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:23:13.187274   74753 logs.go:123] Gathering logs for kubelet ...
	I0920 18:23:13.187303   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:23:13.257111   74753 logs.go:123] Gathering logs for dmesg ...
	I0920 18:23:13.257147   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:23:13.271678   74753 logs.go:123] Gathering logs for kube-controller-manager [3ebf4c520d684127cd0b6146d44e863090b9c2db3ac866e9ce3ef2d10d2d6da1] ...
	I0920 18:23:13.271704   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ebf4c520d684127cd0b6146d44e863090b9c2db3ac866e9ce3ef2d10d2d6da1"
	I0920 18:23:13.327163   74753 logs.go:123] Gathering logs for storage-provisioner [179e4a02f3459a613d3307806b477d54581555b18babca62d0c5e553b9562442] ...
	I0920 18:23:13.327191   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 179e4a02f3459a613d3307806b477d54581555b18babca62d0c5e553b9562442"
	I0920 18:23:13.362002   74753 logs.go:123] Gathering logs for storage-provisioner [3eb9abdf57de51979118cfa77b613c22e28122fcb1da9e72fbf6f3a946ed3531] ...
	I0920 18:23:13.362039   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3eb9abdf57de51979118cfa77b613c22e28122fcb1da9e72fbf6f3a946ed3531"
	I0920 18:23:13.409907   74753 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:23:13.409938   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:23:13.840233   74753 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:23:13.840275   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 18:23:13.956466   74753 logs.go:123] Gathering logs for kube-apiserver [334e4df5baa4f8f7d58af2308fab7f78de22f7e07d6dfbcbeaa0df2a52f6b207] ...
	I0920 18:23:13.956499   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 334e4df5baa4f8f7d58af2308fab7f78de22f7e07d6dfbcbeaa0df2a52f6b207"
	I0920 18:23:14.006028   74753 logs.go:123] Gathering logs for etcd [98aa96314cf8f863121b9953ad6335628481f536fcab49b7c00505a17eb35ff2] ...
	I0920 18:23:14.006063   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 98aa96314cf8f863121b9953ad6335628481f536fcab49b7c00505a17eb35ff2"
	I0920 18:23:14.051354   74753 logs.go:123] Gathering logs for coredns [35f0d8dd053d4c376ccf21215492d8509ff701e7d09b345fdbe06d505ba2d6b4] ...
	I0920 18:23:14.051382   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35f0d8dd053d4c376ccf21215492d8509ff701e7d09b345fdbe06d505ba2d6b4"
	I0920 18:23:14.086486   74753 logs.go:123] Gathering logs for kube-scheduler [8153479cebb05353bec31c41778746c485cb294bf41c3f98e30559ad7da64c64] ...
	I0920 18:23:14.086518   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8153479cebb05353bec31c41778746c485cb294bf41c3f98e30559ad7da64c64"
	I0920 18:23:14.128119   74753 logs.go:123] Gathering logs for kube-proxy [6df198ca54e8004a532de9b15cfe7a48a659ae06623c00a78cad528762944497] ...
	I0920 18:23:14.128149   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6df198ca54e8004a532de9b15cfe7a48a659ae06623c00a78cad528762944497"
	I0920 18:23:16.668901   74753 api_server.go:253] Checking apiserver healthz at https://192.168.50.47:8443/healthz ...
	I0920 18:23:16.674235   74753 api_server.go:279] https://192.168.50.47:8443/healthz returned 200:
	ok
	I0920 18:23:16.675431   74753 api_server.go:141] control plane version: v1.31.1
	I0920 18:23:16.675449   74753 api_server.go:131] duration metric: took 3.879005012s to wait for apiserver health ...
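The healthz probe above simply expects an HTTP 200 with body "ok" from the apiserver endpoint shown in the log. A hedged curl equivalent, assuming the usual certificate layout under ~/.minikube (these paths are standard minikube locations, not taken from this log):

	curl --cacert ~/.minikube/ca.crt \
	     --cert   ~/.minikube/profiles/no-preload-956403/client.crt \
	     --key    ~/.minikube/profiles/no-preload-956403/client.key \
	     https://192.168.50.47:8443/healthz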
	I0920 18:23:16.675456   74753 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 18:23:16.675481   74753 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:23:16.675527   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:23:16.711645   74753 cri.go:89] found id: "334e4df5baa4f8f7d58af2308fab7f78de22f7e07d6dfbcbeaa0df2a52f6b207"
	I0920 18:23:16.711673   74753 cri.go:89] found id: ""
	I0920 18:23:16.711683   74753 logs.go:276] 1 containers: [334e4df5baa4f8f7d58af2308fab7f78de22f7e07d6dfbcbeaa0df2a52f6b207]
	I0920 18:23:16.711743   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:16.716466   74753 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:23:16.716536   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:23:16.763061   74753 cri.go:89] found id: "98aa96314cf8f863121b9953ad6335628481f536fcab49b7c00505a17eb35ff2"
	I0920 18:23:16.763085   74753 cri.go:89] found id: ""
	I0920 18:23:16.763095   74753 logs.go:276] 1 containers: [98aa96314cf8f863121b9953ad6335628481f536fcab49b7c00505a17eb35ff2]
	I0920 18:23:16.763155   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:16.767018   74753 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:23:16.767075   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:23:16.801111   74753 cri.go:89] found id: "35f0d8dd053d4c376ccf21215492d8509ff701e7d09b345fdbe06d505ba2d6b4"
	I0920 18:23:16.801131   74753 cri.go:89] found id: ""
	I0920 18:23:16.801138   74753 logs.go:276] 1 containers: [35f0d8dd053d4c376ccf21215492d8509ff701e7d09b345fdbe06d505ba2d6b4]
	I0920 18:23:16.801192   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:16.805372   74753 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:23:16.805436   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:23:16.843368   74753 cri.go:89] found id: "8153479cebb05353bec31c41778746c485cb294bf41c3f98e30559ad7da64c64"
	I0920 18:23:16.843399   74753 cri.go:89] found id: ""
	I0920 18:23:16.843410   74753 logs.go:276] 1 containers: [8153479cebb05353bec31c41778746c485cb294bf41c3f98e30559ad7da64c64]
	I0920 18:23:16.843461   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:16.847284   74753 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:23:16.847341   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:23:16.882749   74753 cri.go:89] found id: "6df198ca54e8004a532de9b15cfe7a48a659ae06623c00a78cad528762944497"
	I0920 18:23:16.882770   74753 cri.go:89] found id: ""
	I0920 18:23:16.882777   74753 logs.go:276] 1 containers: [6df198ca54e8004a532de9b15cfe7a48a659ae06623c00a78cad528762944497]
	I0920 18:23:16.882838   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:16.887003   74753 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:23:16.887083   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:23:16.923261   74753 cri.go:89] found id: "3ebf4c520d684127cd0b6146d44e863090b9c2db3ac866e9ce3ef2d10d2d6da1"
	I0920 18:23:16.923289   74753 cri.go:89] found id: ""
	I0920 18:23:16.923303   74753 logs.go:276] 1 containers: [3ebf4c520d684127cd0b6146d44e863090b9c2db3ac866e9ce3ef2d10d2d6da1]
	I0920 18:23:16.923357   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:16.927722   74753 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:23:16.927782   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:23:16.963753   74753 cri.go:89] found id: ""
	I0920 18:23:16.963780   74753 logs.go:276] 0 containers: []
	W0920 18:23:16.963791   74753 logs.go:278] No container was found matching "kindnet"
	I0920 18:23:16.963799   74753 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0920 18:23:16.963866   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0920 18:23:17.000548   74753 cri.go:89] found id: "179e4a02f3459a613d3307806b477d54581555b18babca62d0c5e553b9562442"
	I0920 18:23:17.000566   74753 cri.go:89] found id: "3eb9abdf57de51979118cfa77b613c22e28122fcb1da9e72fbf6f3a946ed3531"
	I0920 18:23:17.000571   74753 cri.go:89] found id: ""
	I0920 18:23:17.000578   74753 logs.go:276] 2 containers: [179e4a02f3459a613d3307806b477d54581555b18babca62d0c5e553b9562442 3eb9abdf57de51979118cfa77b613c22e28122fcb1da9e72fbf6f3a946ed3531]
	I0920 18:23:17.000627   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:17.004708   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:17.008489   74753 logs.go:123] Gathering logs for coredns [35f0d8dd053d4c376ccf21215492d8509ff701e7d09b345fdbe06d505ba2d6b4] ...
	I0920 18:23:17.008518   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35f0d8dd053d4c376ccf21215492d8509ff701e7d09b345fdbe06d505ba2d6b4"
	I0920 18:23:17.045971   74753 logs.go:123] Gathering logs for kube-scheduler [8153479cebb05353bec31c41778746c485cb294bf41c3f98e30559ad7da64c64] ...
	I0920 18:23:17.046005   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8153479cebb05353bec31c41778746c485cb294bf41c3f98e30559ad7da64c64"
	I0920 18:23:17.083976   74753 logs.go:123] Gathering logs for kube-controller-manager [3ebf4c520d684127cd0b6146d44e863090b9c2db3ac866e9ce3ef2d10d2d6da1] ...
	I0920 18:23:17.084005   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ebf4c520d684127cd0b6146d44e863090b9c2db3ac866e9ce3ef2d10d2d6da1"
	I0920 18:23:17.137192   74753 logs.go:123] Gathering logs for storage-provisioner [179e4a02f3459a613d3307806b477d54581555b18babca62d0c5e553b9562442] ...
	I0920 18:23:17.137228   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 179e4a02f3459a613d3307806b477d54581555b18babca62d0c5e553b9562442"
	I0920 18:23:17.177203   74753 logs.go:123] Gathering logs for dmesg ...
	I0920 18:23:17.177234   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:23:17.192612   74753 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:23:17.192639   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 18:23:17.300681   74753 logs.go:123] Gathering logs for kube-apiserver [334e4df5baa4f8f7d58af2308fab7f78de22f7e07d6dfbcbeaa0df2a52f6b207] ...
	I0920 18:23:17.300714   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 334e4df5baa4f8f7d58af2308fab7f78de22f7e07d6dfbcbeaa0df2a52f6b207"
	I0920 18:23:17.346835   74753 logs.go:123] Gathering logs for storage-provisioner [3eb9abdf57de51979118cfa77b613c22e28122fcb1da9e72fbf6f3a946ed3531] ...
	I0920 18:23:17.346866   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3eb9abdf57de51979118cfa77b613c22e28122fcb1da9e72fbf6f3a946ed3531"
	I0920 18:23:17.382625   74753 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:23:17.382650   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:23:17.750803   74753 logs.go:123] Gathering logs for container status ...
	I0920 18:23:17.750853   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:23:17.801114   74753 logs.go:123] Gathering logs for kubelet ...
	I0920 18:23:17.801142   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:23:17.866547   74753 logs.go:123] Gathering logs for etcd [98aa96314cf8f863121b9953ad6335628481f536fcab49b7c00505a17eb35ff2] ...
	I0920 18:23:17.866589   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 98aa96314cf8f863121b9953ad6335628481f536fcab49b7c00505a17eb35ff2"
	I0920 18:23:17.909489   74753 logs.go:123] Gathering logs for kube-proxy [6df198ca54e8004a532de9b15cfe7a48a659ae06623c00a78cad528762944497] ...
	I0920 18:23:17.909519   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6df198ca54e8004a532de9b15cfe7a48a659ae06623c00a78cad528762944497"
	I0920 18:23:20.456014   74753 system_pods.go:59] 8 kube-system pods found
	I0920 18:23:20.456055   74753 system_pods.go:61] "coredns-7c65d6cfc9-j2t5h" [dd2636f1-3200-4f22-957c-046277c9be8c] Running
	I0920 18:23:20.456063   74753 system_pods.go:61] "etcd-no-preload-956403" [daf84be0-e673-4685-a998-47ec64664484] Running
	I0920 18:23:20.456068   74753 system_pods.go:61] "kube-apiserver-no-preload-956403" [416217e6-3c91-49ec-bd69-812a49b4afd9] Running
	I0920 18:23:20.456074   74753 system_pods.go:61] "kube-controller-manager-no-preload-956403" [30fae740-b073-4619-bca5-010ca90b2667] Running
	I0920 18:23:20.456079   74753 system_pods.go:61] "kube-proxy-sz4bm" [269600fb-ef65-4b17-8c07-76c79e35f5a8] Running
	I0920 18:23:20.456085   74753 system_pods.go:61] "kube-scheduler-no-preload-956403" [46432927-e506-4665-8825-c5f6a2ed0458] Running
	I0920 18:23:20.456095   74753 system_pods.go:61] "metrics-server-6867b74b74-tfsff" [599ba06a-6d4d-483b-b390-a3595a814757] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 18:23:20.456101   74753 system_pods.go:61] "storage-provisioner" [df661627-7d32-4467-9805-1ae65d4fa35c] Running
	I0920 18:23:20.456109   74753 system_pods.go:74] duration metric: took 3.780646412s to wait for pod list to return data ...
	I0920 18:23:20.456116   74753 default_sa.go:34] waiting for default service account to be created ...
	I0920 18:23:20.459571   74753 default_sa.go:45] found service account: "default"
	I0920 18:23:20.459598   74753 default_sa.go:55] duration metric: took 3.475906ms for default service account to be created ...
	I0920 18:23:20.459607   74753 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 18:23:20.464453   74753 system_pods.go:86] 8 kube-system pods found
	I0920 18:23:20.464489   74753 system_pods.go:89] "coredns-7c65d6cfc9-j2t5h" [dd2636f1-3200-4f22-957c-046277c9be8c] Running
	I0920 18:23:20.464495   74753 system_pods.go:89] "etcd-no-preload-956403" [daf84be0-e673-4685-a998-47ec64664484] Running
	I0920 18:23:20.464499   74753 system_pods.go:89] "kube-apiserver-no-preload-956403" [416217e6-3c91-49ec-bd69-812a49b4afd9] Running
	I0920 18:23:20.464544   74753 system_pods.go:89] "kube-controller-manager-no-preload-956403" [30fae740-b073-4619-bca5-010ca90b2667] Running
	I0920 18:23:20.464559   74753 system_pods.go:89] "kube-proxy-sz4bm" [269600fb-ef65-4b17-8c07-76c79e35f5a8] Running
	I0920 18:23:20.464563   74753 system_pods.go:89] "kube-scheduler-no-preload-956403" [46432927-e506-4665-8825-c5f6a2ed0458] Running
	I0920 18:23:20.464570   74753 system_pods.go:89] "metrics-server-6867b74b74-tfsff" [599ba06a-6d4d-483b-b390-a3595a814757] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 18:23:20.464577   74753 system_pods.go:89] "storage-provisioner" [df661627-7d32-4467-9805-1ae65d4fa35c] Running
	I0920 18:23:20.464583   74753 system_pods.go:126] duration metric: took 4.971732ms to wait for k8s-apps to be running ...
	I0920 18:23:20.464592   74753 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 18:23:20.464637   74753 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:23:20.483425   74753 system_svc.go:56] duration metric: took 18.826162ms WaitForService to wait for kubelet
	I0920 18:23:20.483452   74753 kubeadm.go:582] duration metric: took 4m21.334124064s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 18:23:20.483469   74753 node_conditions.go:102] verifying NodePressure condition ...
	I0920 18:23:20.486703   74753 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 18:23:20.486725   74753 node_conditions.go:123] node cpu capacity is 2
	I0920 18:23:20.486736   74753 node_conditions.go:105] duration metric: took 3.263035ms to run NodePressure ...
	I0920 18:23:20.486745   74753 start.go:241] waiting for startup goroutines ...
	I0920 18:23:20.486752   74753 start.go:246] waiting for cluster config update ...
	I0920 18:23:20.486763   74753 start.go:255] writing updated cluster config ...
	I0920 18:23:20.487023   74753 ssh_runner.go:195] Run: rm -f paused
	I0920 18:23:20.538569   74753 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 18:23:20.540783   74753 out.go:177] * Done! kubectl is now configured to use "no-preload-956403" cluster and "default" namespace by default
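With the no-preload cluster reported ready, the same end state can be confirmed from the host; a minimal sketch with standard kubectl commands (the context name matches the profile in the Done! line above):

	kubectl --context no-preload-956403 -n kube-system get pods
	kubectl --context no-preload-956403 -n default get serviceaccount default

The first listing should show the eight pods enumerated above, with metrics-server-6867b74b74-tfsff still Pending; the second confirms the default service account the start-up waited for.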
	I0920 18:24:23.891635   75577 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0920 18:24:23.891735   75577 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0920 18:24:23.893591   75577 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0920 18:24:23.893681   75577 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 18:24:23.893785   75577 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 18:24:23.893913   75577 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 18:24:23.894056   75577 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0920 18:24:23.894134   75577 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 18:24:23.895929   75577 out.go:235]   - Generating certificates and keys ...
	I0920 18:24:23.896024   75577 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 18:24:23.896127   75577 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 18:24:23.896245   75577 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0920 18:24:23.896329   75577 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0920 18:24:23.896426   75577 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0920 18:24:23.896510   75577 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0920 18:24:23.896603   75577 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0920 18:24:23.896686   75577 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0920 18:24:23.896783   75577 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0920 18:24:23.896865   75577 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0920 18:24:23.896916   75577 kubeadm.go:310] [certs] Using the existing "sa" key
	I0920 18:24:23.896984   75577 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 18:24:23.897054   75577 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 18:24:23.897112   75577 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 18:24:23.897174   75577 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 18:24:23.897237   75577 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 18:24:23.897367   75577 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 18:24:23.897488   75577 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 18:24:23.897554   75577 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 18:24:23.897660   75577 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 18:24:23.899056   75577 out.go:235]   - Booting up control plane ...
	I0920 18:24:23.899173   75577 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 18:24:23.899287   75577 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 18:24:23.899371   75577 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 18:24:23.899460   75577 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 18:24:23.899639   75577 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0920 18:24:23.899719   75577 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0920 18:24:23.899823   75577 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:24:23.900047   75577 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:24:23.900124   75577 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:24:23.900322   75577 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:24:23.900392   75577 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:24:23.900556   75577 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:24:23.900634   75577 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:24:23.900826   75577 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:24:23.900884   75577 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:24:23.901044   75577 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:24:23.901052   75577 kubeadm.go:310] 
	I0920 18:24:23.901094   75577 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0920 18:24:23.901129   75577 kubeadm.go:310] 		timed out waiting for the condition
	I0920 18:24:23.901135   75577 kubeadm.go:310] 
	I0920 18:24:23.901179   75577 kubeadm.go:310] 	This error is likely caused by:
	I0920 18:24:23.901209   75577 kubeadm.go:310] 		- The kubelet is not running
	I0920 18:24:23.901314   75577 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0920 18:24:23.901335   75577 kubeadm.go:310] 
	I0920 18:24:23.901458   75577 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0920 18:24:23.901515   75577 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0920 18:24:23.901547   75577 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0920 18:24:23.901556   75577 kubeadm.go:310] 
	I0920 18:24:23.901719   75577 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0920 18:24:23.901859   75577 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0920 18:24:23.901870   75577 kubeadm.go:310] 
	I0920 18:24:23.902030   75577 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0920 18:24:23.902122   75577 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0920 18:24:23.902186   75577 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0920 18:24:23.902264   75577 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0920 18:24:23.902299   75577 kubeadm.go:310] 
	W0920 18:24:23.902388   75577 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
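The kubelet never came up, so kubeadm's wait-control-plane phase timed out. The troubleshooting steps quoted in the error are the practical next move on this CRI-O node; restated as plain commands taken directly from the kubeadm output above (CONTAINERID is a placeholder):

	systemctl status kubelet
	journalctl -xeu kubelet
	crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID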
	
	I0920 18:24:23.902435   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0920 18:24:24.370490   75577 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:24:24.385383   75577 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 18:24:24.395443   75577 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 18:24:24.395463   75577 kubeadm.go:157] found existing configuration files:
	
	I0920 18:24:24.395505   75577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 18:24:24.404947   75577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 18:24:24.405021   75577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 18:24:24.415145   75577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 18:24:24.424199   75577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 18:24:24.424279   75577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 18:24:24.435108   75577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 18:24:24.446684   75577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 18:24:24.446742   75577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 18:24:24.457199   75577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 18:24:24.467240   75577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 18:24:24.467315   75577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
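After the kubeadm reset above, none of the four kubeconfigs exist (the ls check fails), so each grep exits non-zero and the matching rm -f removes nothing. The stale-config cleanup condenses to a small loop; a sketch of the same pattern with the paths from the log (an illustration of the logic, not code minikube runs):

	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/$f.conf \
	    || sudo rm -f /etc/kubernetes/$f.conf
	done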
	I0920 18:24:24.479293   75577 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 18:24:24.704601   75577 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 18:26:21.167677   75577 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0920 18:26:21.167760   75577 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0920 18:26:21.170176   75577 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0920 18:26:21.170249   75577 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 18:26:21.170353   75577 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 18:26:21.170485   75577 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 18:26:21.170653   75577 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0920 18:26:21.170778   75577 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 18:26:21.172479   75577 out.go:235]   - Generating certificates and keys ...
	I0920 18:26:21.172559   75577 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 18:26:21.172623   75577 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 18:26:21.172734   75577 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0920 18:26:21.172820   75577 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0920 18:26:21.172901   75577 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0920 18:26:21.172948   75577 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0920 18:26:21.173054   75577 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0920 18:26:21.173145   75577 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0920 18:26:21.173238   75577 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0920 18:26:21.173325   75577 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0920 18:26:21.173381   75577 kubeadm.go:310] [certs] Using the existing "sa" key
	I0920 18:26:21.173468   75577 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 18:26:21.173517   75577 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 18:26:21.173562   75577 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 18:26:21.173657   75577 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 18:26:21.173753   75577 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 18:26:21.173931   75577 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 18:26:21.174042   75577 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 18:26:21.174120   75577 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 18:26:21.174226   75577 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 18:26:21.175785   75577 out.go:235]   - Booting up control plane ...
	I0920 18:26:21.175897   75577 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 18:26:21.176062   75577 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 18:26:21.176179   75577 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 18:26:21.176283   75577 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 18:26:21.176482   75577 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0920 18:26:21.176567   75577 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0920 18:26:21.176654   75577 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:26:21.176850   75577 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:26:21.176952   75577 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:26:21.177146   75577 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:26:21.177255   75577 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:26:21.177460   75577 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:26:21.177527   75577 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:26:21.177681   75577 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:26:21.177758   75577 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:26:21.177952   75577 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:26:21.177966   75577 kubeadm.go:310] 
	I0920 18:26:21.178001   75577 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0920 18:26:21.178035   75577 kubeadm.go:310] 		timed out waiting for the condition
	I0920 18:26:21.178041   75577 kubeadm.go:310] 
	I0920 18:26:21.178070   75577 kubeadm.go:310] 	This error is likely caused by:
	I0920 18:26:21.178099   75577 kubeadm.go:310] 		- The kubelet is not running
	I0920 18:26:21.178239   75577 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0920 18:26:21.178251   75577 kubeadm.go:310] 
	I0920 18:26:21.178348   75577 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0920 18:26:21.178378   75577 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0920 18:26:21.178415   75577 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0920 18:26:21.178421   75577 kubeadm.go:310] 
	I0920 18:26:21.178505   75577 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0920 18:26:21.178581   75577 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0920 18:26:21.178590   75577 kubeadm.go:310] 
	I0920 18:26:21.178695   75577 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0920 18:26:21.178770   75577 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0920 18:26:21.178832   75577 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0920 18:26:21.178893   75577 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0920 18:26:21.178950   75577 kubeadm.go:394] duration metric: took 7m57.80706374s to StartCluster
	I0920 18:26:21.178961   75577 kubeadm.go:310] 
	I0920 18:26:21.178989   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:26:21.179047   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:26:21.229528   75577 cri.go:89] found id: ""
	I0920 18:26:21.229554   75577 logs.go:276] 0 containers: []
	W0920 18:26:21.229564   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:26:21.229572   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:26:21.229644   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:26:21.265997   75577 cri.go:89] found id: ""
	I0920 18:26:21.266021   75577 logs.go:276] 0 containers: []
	W0920 18:26:21.266029   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:26:21.266034   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:26:21.266081   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:26:21.309358   75577 cri.go:89] found id: ""
	I0920 18:26:21.309391   75577 logs.go:276] 0 containers: []
	W0920 18:26:21.309402   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:26:21.309409   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:26:21.309469   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:26:21.349418   75577 cri.go:89] found id: ""
	I0920 18:26:21.349442   75577 logs.go:276] 0 containers: []
	W0920 18:26:21.349451   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:26:21.349457   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:26:21.349537   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:26:21.388067   75577 cri.go:89] found id: ""
	I0920 18:26:21.388092   75577 logs.go:276] 0 containers: []
	W0920 18:26:21.388105   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:26:21.388111   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:26:21.388165   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:26:21.428474   75577 cri.go:89] found id: ""
	I0920 18:26:21.428501   75577 logs.go:276] 0 containers: []
	W0920 18:26:21.428512   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:26:21.428519   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:26:21.428580   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:26:21.466188   75577 cri.go:89] found id: ""
	I0920 18:26:21.466215   75577 logs.go:276] 0 containers: []
	W0920 18:26:21.466225   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:26:21.466232   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:26:21.466291   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:26:21.504396   75577 cri.go:89] found id: ""
	I0920 18:26:21.504422   75577 logs.go:276] 0 containers: []
	W0920 18:26:21.504432   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:26:21.504443   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:26:21.504457   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:26:21.608057   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:26:21.608098   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:26:21.672536   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:26:21.672564   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:26:21.723219   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:26:21.723251   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:26:21.736436   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:26:21.736463   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:26:21.809055   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0920 18:26:21.809079   75577 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0920 18:26:21.809128   75577 out.go:270] * 
	W0920 18:26:21.809197   75577 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0920 18:26:21.809217   75577 out.go:270] * 
	W0920 18:26:21.810193   75577 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 18:26:21.814385   75577 out.go:201] 
	W0920 18:26:21.816118   75577 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0920 18:26:21.816186   75577 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0920 18:26:21.816225   75577 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0920 18:26:21.818478   75577 out.go:201] 
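	Based on the suggestion and troubleshooting hints in the output above, a minimal diagnostic sequence on the affected node would look roughly like the following. This is only a sketch assembled from the commands the log itself recommends; it assumes shell access to the minikube VM (for example via `minikube ssh`), and it is not a verified fix for this particular failure.
	
		# Check whether the kubelet service is running and inspect its recent log output
		sudo systemctl status kubelet
		sudo journalctl -xeu kubelet
	
		# List any control-plane containers the runtime managed to start (command taken from the kubeadm output)
		sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	
		# If a cgroup-driver mismatch is the cause, retry the start with the override minikube suggests
		minikube start --extra-config=kubelet.cgroup-driver=systemd
	
	If the kubelet still fails its /healthz check after the override, the kubelet journal output is the most likely place to show the underlying misconfiguration.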
	
	
	==> CRI-O <==
	Sep 20 18:31:51 embed-certs-768431 crio[711]: time="2024-09-20 18:31:51.586998784Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857111586967519,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dbd38d8f-0e9d-4f16-a99c-9e3ca6295b2b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:31:51 embed-certs-768431 crio[711]: time="2024-09-20 18:31:51.590302202Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6620c446-c31c-47ac-b6f4-439b031ccd06 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:31:51 embed-certs-768431 crio[711]: time="2024-09-20 18:31:51.590373335Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6620c446-c31c-47ac-b6f4-439b031ccd06 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:31:51 embed-certs-768431 crio[711]: time="2024-09-20 18:31:51.590556562Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:94ae620077d52dd151fad67882e6e730211f6413dee3d748910740faa4183eac,PodSandboxId:eb56e696cc808b397bd7b0318a5de8e65c328a6f94d327f20bec50c7d414fffb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726856563038879412,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jkkdn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f64288e4-009c-4ba8-93e7-30ca5296af46,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79bcf7932ed8f05d3557ec43dc4b30eb901ae603c23ec95cdb26fcff309f8bf0,PodSandboxId:94fd14f7e592bbf392bd3d41fb6cc0d0712ba25d0f849314481ee6df7e221be6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726856562997339560,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-g5tkc,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 8877c0e8-6c8f-4a62-94bd-508982faee3c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7de5f69693ad1e4952f1f37d279b3885941bdfe611a578c407409e1f682c233d,PodSandboxId:c6470c7fa90ed125ade07fc6f5a938d9d40009cf95e18feca7f18fe38f1229fa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAI
NER_RUNNING,CreatedAt:1726856561945979039,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09227b45-ea2a-4fcc-b082-9978e5f00a4b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0b5138d088184ba891b7358eb30006ce43b5efaaf9d1bdd2683bffd77d6f7a3,PodSandboxId:6998435c3f51dc14a100845aeaf9dfa662b47774b2295de912617af08804ba63,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1726856561286972333,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c4527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e2d5102-0c42-4b87-8a27-dd53b8eb41f9,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95d89e4642aec6217d5404f63a2c94888e755e70ff067dca9c79dbd4d8ff401f,PodSandboxId:1da58d69dc8baa1a260a18189c4e6103d9c9c8511ede32eceea749c375d1f4b8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726856550114256246,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-768431,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dff06b5ea7279b1405d52ac7628b7439,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34a824c120f70ecac74b6f896fb9ee5b8c6212488ff5593479b43fd125e17797,PodSandboxId:d11b4cd4f6d8e14717fd212f4b2d15ca8acef5c22f1ac534c092b1b8de59fa91,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726856550115101066,Labels:map[
string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-768431,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d4e18090a0918ea03f4cdf2c5f86d9c,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2f83bd27b1b0afdbcf06db3ba261f996796b94d246b84e0c8a0155a8e789c0b,PodSandboxId:338d7a9191196f3ced0ba70ed6fd9e5039e63752cdeb27ece11219098ded4e8b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726856550084803387,Label
s:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-768431,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71c26280473a70f3d3118a2c74ae46b8,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f67d435a1e16fdbe13c5f48c2f8afb534654e4177178bb9d93b86881fc88d98d,PodSandboxId:436834be7dfc5eb64d8cb3c0af7fc2e5fdab7e70dd273d6dafabf6bf4cccd47e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726856550036607735,Labels:map[string]string{io
.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-768431,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bcaa81b6e93977e61634073ae28c68a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4a6e3230e7a6913391ee20bb77bdd367ee8233dfbe564b36049d0b4fdae6268,PodSandboxId:db2e24821ff52de88228e023487aa88fbdfd03452708024d27425c33cf87f9ab,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726856263910944947,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-768431,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71c26280473a70f3d3118a2c74ae46b8,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6620c446-c31c-47ac-b6f4-439b031ccd06 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:31:51 embed-certs-768431 crio[711]: time="2024-09-20 18:31:51.628896167Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=afda4edf-397f-4bf7-971b-37e08f9086c7 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:31:51 embed-certs-768431 crio[711]: time="2024-09-20 18:31:51.628982932Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=afda4edf-397f-4bf7-971b-37e08f9086c7 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:31:51 embed-certs-768431 crio[711]: time="2024-09-20 18:31:51.630016510Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e1f3850d-a5b7-4ecf-a884-7441a5e70caf name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:31:51 embed-certs-768431 crio[711]: time="2024-09-20 18:31:51.630637857Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857111630614807,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e1f3850d-a5b7-4ecf-a884-7441a5e70caf name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:31:51 embed-certs-768431 crio[711]: time="2024-09-20 18:31:51.631075973Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d21d22ba-312d-4086-a129-64f7c51e034d name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:31:51 embed-certs-768431 crio[711]: time="2024-09-20 18:31:51.631149573Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d21d22ba-312d-4086-a129-64f7c51e034d name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:31:51 embed-certs-768431 crio[711]: time="2024-09-20 18:31:51.631497442Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:94ae620077d52dd151fad67882e6e730211f6413dee3d748910740faa4183eac,PodSandboxId:eb56e696cc808b397bd7b0318a5de8e65c328a6f94d327f20bec50c7d414fffb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726856563038879412,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jkkdn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f64288e4-009c-4ba8-93e7-30ca5296af46,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79bcf7932ed8f05d3557ec43dc4b30eb901ae603c23ec95cdb26fcff309f8bf0,PodSandboxId:94fd14f7e592bbf392bd3d41fb6cc0d0712ba25d0f849314481ee6df7e221be6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726856562997339560,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-g5tkc,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 8877c0e8-6c8f-4a62-94bd-508982faee3c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7de5f69693ad1e4952f1f37d279b3885941bdfe611a578c407409e1f682c233d,PodSandboxId:c6470c7fa90ed125ade07fc6f5a938d9d40009cf95e18feca7f18fe38f1229fa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAI
NER_RUNNING,CreatedAt:1726856561945979039,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09227b45-ea2a-4fcc-b082-9978e5f00a4b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0b5138d088184ba891b7358eb30006ce43b5efaaf9d1bdd2683bffd77d6f7a3,PodSandboxId:6998435c3f51dc14a100845aeaf9dfa662b47774b2295de912617af08804ba63,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1726856561286972333,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c4527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e2d5102-0c42-4b87-8a27-dd53b8eb41f9,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95d89e4642aec6217d5404f63a2c94888e755e70ff067dca9c79dbd4d8ff401f,PodSandboxId:1da58d69dc8baa1a260a18189c4e6103d9c9c8511ede32eceea749c375d1f4b8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726856550114256246,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-768431,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dff06b5ea7279b1405d52ac7628b7439,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34a824c120f70ecac74b6f896fb9ee5b8c6212488ff5593479b43fd125e17797,PodSandboxId:d11b4cd4f6d8e14717fd212f4b2d15ca8acef5c22f1ac534c092b1b8de59fa91,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726856550115101066,Labels:map[
string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-768431,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d4e18090a0918ea03f4cdf2c5f86d9c,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2f83bd27b1b0afdbcf06db3ba261f996796b94d246b84e0c8a0155a8e789c0b,PodSandboxId:338d7a9191196f3ced0ba70ed6fd9e5039e63752cdeb27ece11219098ded4e8b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726856550084803387,Label
s:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-768431,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71c26280473a70f3d3118a2c74ae46b8,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f67d435a1e16fdbe13c5f48c2f8afb534654e4177178bb9d93b86881fc88d98d,PodSandboxId:436834be7dfc5eb64d8cb3c0af7fc2e5fdab7e70dd273d6dafabf6bf4cccd47e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726856550036607735,Labels:map[string]string{io
.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-768431,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bcaa81b6e93977e61634073ae28c68a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4a6e3230e7a6913391ee20bb77bdd367ee8233dfbe564b36049d0b4fdae6268,PodSandboxId:db2e24821ff52de88228e023487aa88fbdfd03452708024d27425c33cf87f9ab,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726856263910944947,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-768431,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71c26280473a70f3d3118a2c74ae46b8,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d21d22ba-312d-4086-a129-64f7c51e034d name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:31:51 embed-certs-768431 crio[711]: time="2024-09-20 18:31:51.669121975Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=29823fa4-1bbc-4992-88c2-d164432302e1 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:31:51 embed-certs-768431 crio[711]: time="2024-09-20 18:31:51.669267598Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=29823fa4-1bbc-4992-88c2-d164432302e1 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:31:51 embed-certs-768431 crio[711]: time="2024-09-20 18:31:51.670401619Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=df47a3c1-2e2b-4eb9-8690-642050eca255 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:31:51 embed-certs-768431 crio[711]: time="2024-09-20 18:31:51.671260547Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857111671209396,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=df47a3c1-2e2b-4eb9-8690-642050eca255 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:31:51 embed-certs-768431 crio[711]: time="2024-09-20 18:31:51.671790027Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=161bb3fd-a201-4698-83a3-b2a1d31d3109 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:31:51 embed-certs-768431 crio[711]: time="2024-09-20 18:31:51.671843570Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=161bb3fd-a201-4698-83a3-b2a1d31d3109 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:31:51 embed-certs-768431 crio[711]: time="2024-09-20 18:31:51.672053397Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:94ae620077d52dd151fad67882e6e730211f6413dee3d748910740faa4183eac,PodSandboxId:eb56e696cc808b397bd7b0318a5de8e65c328a6f94d327f20bec50c7d414fffb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726856563038879412,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jkkdn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f64288e4-009c-4ba8-93e7-30ca5296af46,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79bcf7932ed8f05d3557ec43dc4b30eb901ae603c23ec95cdb26fcff309f8bf0,PodSandboxId:94fd14f7e592bbf392bd3d41fb6cc0d0712ba25d0f849314481ee6df7e221be6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726856562997339560,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-g5tkc,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 8877c0e8-6c8f-4a62-94bd-508982faee3c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7de5f69693ad1e4952f1f37d279b3885941bdfe611a578c407409e1f682c233d,PodSandboxId:c6470c7fa90ed125ade07fc6f5a938d9d40009cf95e18feca7f18fe38f1229fa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAI
NER_RUNNING,CreatedAt:1726856561945979039,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09227b45-ea2a-4fcc-b082-9978e5f00a4b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0b5138d088184ba891b7358eb30006ce43b5efaaf9d1bdd2683bffd77d6f7a3,PodSandboxId:6998435c3f51dc14a100845aeaf9dfa662b47774b2295de912617af08804ba63,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1726856561286972333,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c4527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e2d5102-0c42-4b87-8a27-dd53b8eb41f9,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95d89e4642aec6217d5404f63a2c94888e755e70ff067dca9c79dbd4d8ff401f,PodSandboxId:1da58d69dc8baa1a260a18189c4e6103d9c9c8511ede32eceea749c375d1f4b8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726856550114256246,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-768431,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dff06b5ea7279b1405d52ac7628b7439,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34a824c120f70ecac74b6f896fb9ee5b8c6212488ff5593479b43fd125e17797,PodSandboxId:d11b4cd4f6d8e14717fd212f4b2d15ca8acef5c22f1ac534c092b1b8de59fa91,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726856550115101066,Labels:map[
string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-768431,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d4e18090a0918ea03f4cdf2c5f86d9c,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2f83bd27b1b0afdbcf06db3ba261f996796b94d246b84e0c8a0155a8e789c0b,PodSandboxId:338d7a9191196f3ced0ba70ed6fd9e5039e63752cdeb27ece11219098ded4e8b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726856550084803387,Label
s:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-768431,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71c26280473a70f3d3118a2c74ae46b8,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f67d435a1e16fdbe13c5f48c2f8afb534654e4177178bb9d93b86881fc88d98d,PodSandboxId:436834be7dfc5eb64d8cb3c0af7fc2e5fdab7e70dd273d6dafabf6bf4cccd47e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726856550036607735,Labels:map[string]string{io
.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-768431,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bcaa81b6e93977e61634073ae28c68a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4a6e3230e7a6913391ee20bb77bdd367ee8233dfbe564b36049d0b4fdae6268,PodSandboxId:db2e24821ff52de88228e023487aa88fbdfd03452708024d27425c33cf87f9ab,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726856263910944947,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-768431,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71c26280473a70f3d3118a2c74ae46b8,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=161bb3fd-a201-4698-83a3-b2a1d31d3109 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:31:51 embed-certs-768431 crio[711]: time="2024-09-20 18:31:51.711355840Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ce903aa3-7f83-4baa-9c9b-61ea9f06afb8 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:31:51 embed-certs-768431 crio[711]: time="2024-09-20 18:31:51.711443198Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ce903aa3-7f83-4baa-9c9b-61ea9f06afb8 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:31:51 embed-certs-768431 crio[711]: time="2024-09-20 18:31:51.712713818Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ead36534-437c-4f7c-9f38-947ebfbd19c7 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:31:51 embed-certs-768431 crio[711]: time="2024-09-20 18:31:51.713119104Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857111713095414,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ead36534-437c-4f7c-9f38-947ebfbd19c7 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:31:51 embed-certs-768431 crio[711]: time="2024-09-20 18:31:51.714072693Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=045213a2-30f4-4af3-adef-954512f221ea name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:31:51 embed-certs-768431 crio[711]: time="2024-09-20 18:31:51.714210569Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=045213a2-30f4-4af3-adef-954512f221ea name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:31:51 embed-certs-768431 crio[711]: time="2024-09-20 18:31:51.714533476Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:94ae620077d52dd151fad67882e6e730211f6413dee3d748910740faa4183eac,PodSandboxId:eb56e696cc808b397bd7b0318a5de8e65c328a6f94d327f20bec50c7d414fffb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726856563038879412,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jkkdn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f64288e4-009c-4ba8-93e7-30ca5296af46,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79bcf7932ed8f05d3557ec43dc4b30eb901ae603c23ec95cdb26fcff309f8bf0,PodSandboxId:94fd14f7e592bbf392bd3d41fb6cc0d0712ba25d0f849314481ee6df7e221be6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726856562997339560,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-g5tkc,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 8877c0e8-6c8f-4a62-94bd-508982faee3c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7de5f69693ad1e4952f1f37d279b3885941bdfe611a578c407409e1f682c233d,PodSandboxId:c6470c7fa90ed125ade07fc6f5a938d9d40009cf95e18feca7f18fe38f1229fa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAI
NER_RUNNING,CreatedAt:1726856561945979039,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09227b45-ea2a-4fcc-b082-9978e5f00a4b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0b5138d088184ba891b7358eb30006ce43b5efaaf9d1bdd2683bffd77d6f7a3,PodSandboxId:6998435c3f51dc14a100845aeaf9dfa662b47774b2295de912617af08804ba63,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1726856561286972333,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c4527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e2d5102-0c42-4b87-8a27-dd53b8eb41f9,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95d89e4642aec6217d5404f63a2c94888e755e70ff067dca9c79dbd4d8ff401f,PodSandboxId:1da58d69dc8baa1a260a18189c4e6103d9c9c8511ede32eceea749c375d1f4b8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726856550114256246,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-768431,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dff06b5ea7279b1405d52ac7628b7439,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34a824c120f70ecac74b6f896fb9ee5b8c6212488ff5593479b43fd125e17797,PodSandboxId:d11b4cd4f6d8e14717fd212f4b2d15ca8acef5c22f1ac534c092b1b8de59fa91,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726856550115101066,Labels:map[
string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-768431,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d4e18090a0918ea03f4cdf2c5f86d9c,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2f83bd27b1b0afdbcf06db3ba261f996796b94d246b84e0c8a0155a8e789c0b,PodSandboxId:338d7a9191196f3ced0ba70ed6fd9e5039e63752cdeb27ece11219098ded4e8b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726856550084803387,Label
s:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-768431,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71c26280473a70f3d3118a2c74ae46b8,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f67d435a1e16fdbe13c5f48c2f8afb534654e4177178bb9d93b86881fc88d98d,PodSandboxId:436834be7dfc5eb64d8cb3c0af7fc2e5fdab7e70dd273d6dafabf6bf4cccd47e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726856550036607735,Labels:map[string]string{io
.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-768431,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bcaa81b6e93977e61634073ae28c68a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4a6e3230e7a6913391ee20bb77bdd367ee8233dfbe564b36049d0b4fdae6268,PodSandboxId:db2e24821ff52de88228e023487aa88fbdfd03452708024d27425c33cf87f9ab,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726856263910944947,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-768431,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71c26280473a70f3d3118a2c74ae46b8,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=045213a2-30f4-4af3-adef-954512f221ea name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	94ae620077d52       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   eb56e696cc808       coredns-7c65d6cfc9-jkkdn
	79bcf7932ed8f       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   94fd14f7e592b       coredns-7c65d6cfc9-g5tkc
	7de5f69693ad1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   c6470c7fa90ed       storage-provisioner
	f0b5138d08818       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   9 minutes ago       Running             kube-proxy                0                   6998435c3f51d       kube-proxy-c4527
	34a824c120f70       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   9 minutes ago       Running             kube-controller-manager   2                   d11b4cd4f6d8e       kube-controller-manager-embed-certs-768431
	95d89e4642aec       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   9 minutes ago       Running             kube-scheduler            2                   1da58d69dc8ba       kube-scheduler-embed-certs-768431
	d2f83bd27b1b0       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   9 minutes ago       Running             kube-apiserver            2                   338d7a9191196       kube-apiserver-embed-certs-768431
	f67d435a1e16f       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 minutes ago       Running             etcd                      2                   436834be7dfc5       etcd-embed-certs-768431
	d4a6e3230e7a6       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   14 minutes ago      Exited              kube-apiserver            1                   db2e24821ff52       kube-apiserver-embed-certs-768431
	
	
	==> coredns [79bcf7932ed8f05d3557ec43dc4b30eb901ae603c23ec95cdb26fcff309f8bf0] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [94ae620077d52dd151fad67882e6e730211f6413dee3d748910740faa4183eac] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               embed-certs-768431
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-768431
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0626f22cf0d915d75e291a5bce701f94395056e1
	                    minikube.k8s.io/name=embed-certs-768431
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_20T18_22_36_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 18:22:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-768431
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 18:31:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 18:27:51 +0000   Fri, 20 Sep 2024 18:22:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 18:27:51 +0000   Fri, 20 Sep 2024 18:22:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 18:27:51 +0000   Fri, 20 Sep 2024 18:22:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 18:27:51 +0000   Fri, 20 Sep 2024 18:22:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.202
	  Hostname:    embed-certs-768431
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 2cf4f703da584eb5a439ae2a45e7a9e9
	  System UUID:                2cf4f703-da58-4eb5-a439-ae2a45e7a9e9
	  Boot ID:                    e3ce8ed5-feb9-44fd-a7f0-77f81b6c7830
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-g5tkc                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m12s
	  kube-system                 coredns-7c65d6cfc9-jkkdn                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m12s
	  kube-system                 etcd-embed-certs-768431                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m16s
	  kube-system                 kube-apiserver-embed-certs-768431             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m16s
	  kube-system                 kube-controller-manager-embed-certs-768431    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m16s
	  kube-system                 kube-proxy-c4527                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m12s
	  kube-system                 kube-scheduler-embed-certs-768431             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m16s
	  kube-system                 metrics-server-6867b74b74-9snmf               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m11s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m10s                  kube-proxy       
	  Normal  Starting                 9m23s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m23s (x8 over 9m23s)  kubelet          Node embed-certs-768431 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m23s (x8 over 9m23s)  kubelet          Node embed-certs-768431 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m23s (x7 over 9m23s)  kubelet          Node embed-certs-768431 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m17s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m16s                  kubelet          Node embed-certs-768431 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m16s                  kubelet          Node embed-certs-768431 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m16s                  kubelet          Node embed-certs-768431 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m13s                  node-controller  Node embed-certs-768431 event: Registered Node embed-certs-768431 in Controller
	
	
	==> dmesg <==
	[  +0.053776] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036914] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.023671] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.943145] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.537384] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.243094] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +0.061102] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058421] systemd-fstab-generator[645]: Ignoring "noauto" option for root device
	[  +0.201206] systemd-fstab-generator[659]: Ignoring "noauto" option for root device
	[  +0.129814] systemd-fstab-generator[671]: Ignoring "noauto" option for root device
	[  +0.289179] systemd-fstab-generator[701]: Ignoring "noauto" option for root device
	[  +4.224186] systemd-fstab-generator[794]: Ignoring "noauto" option for root device
	[  +2.382925] systemd-fstab-generator[913]: Ignoring "noauto" option for root device
	[  +0.070571] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.052501] kauditd_printk_skb: 92 callbacks suppressed
	[  +6.312397] kauditd_printk_skb: 62 callbacks suppressed
	[Sep20 18:22] kauditd_printk_skb: 6 callbacks suppressed
	[  +1.026058] systemd-fstab-generator[2566]: Ignoring "noauto" option for root device
	[  +4.418753] kauditd_printk_skb: 54 callbacks suppressed
	[  +2.136478] systemd-fstab-generator[2890]: Ignoring "noauto" option for root device
	[  +4.889484] systemd-fstab-generator[3017]: Ignoring "noauto" option for root device
	[  +0.114917] kauditd_printk_skb: 14 callbacks suppressed
	[  +9.211258] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [f67d435a1e16fdbe13c5f48c2f8afb534654e4177178bb9d93b86881fc88d98d] <==
	{"level":"info","ts":"2024-09-20T18:22:30.412572Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-20T18:22:30.412935Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.61.202:2380"}
	{"level":"info","ts":"2024-09-20T18:22:30.412973Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.61.202:2380"}
	{"level":"info","ts":"2024-09-20T18:22:30.415118Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"db33251a0b9c6fb3 switched to configuration voters=(15795009111912640435)"}
	{"level":"info","ts":"2024-09-20T18:22:30.420404Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"834577a0a9e3ba88","local-member-id":"db33251a0b9c6fb3","added-peer-id":"db33251a0b9c6fb3","added-peer-peer-urls":["https://192.168.61.202:2380"]}
	{"level":"info","ts":"2024-09-20T18:22:30.663249Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"db33251a0b9c6fb3 is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-20T18:22:30.663325Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"db33251a0b9c6fb3 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-20T18:22:30.663348Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"db33251a0b9c6fb3 received MsgPreVoteResp from db33251a0b9c6fb3 at term 1"}
	{"level":"info","ts":"2024-09-20T18:22:30.663361Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"db33251a0b9c6fb3 became candidate at term 2"}
	{"level":"info","ts":"2024-09-20T18:22:30.663367Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"db33251a0b9c6fb3 received MsgVoteResp from db33251a0b9c6fb3 at term 2"}
	{"level":"info","ts":"2024-09-20T18:22:30.663375Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"db33251a0b9c6fb3 became leader at term 2"}
	{"level":"info","ts":"2024-09-20T18:22:30.663382Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: db33251a0b9c6fb3 elected leader db33251a0b9c6fb3 at term 2"}
	{"level":"info","ts":"2024-09-20T18:22:30.667635Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T18:22:30.670603Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"db33251a0b9c6fb3","local-member-attributes":"{Name:embed-certs-768431 ClientURLs:[https://192.168.61.202:2379]}","request-path":"/0/members/db33251a0b9c6fb3/attributes","cluster-id":"834577a0a9e3ba88","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-20T18:22:30.670768Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T18:22:30.671482Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T18:22:30.671599Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"834577a0a9e3ba88","local-member-id":"db33251a0b9c6fb3","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T18:22:30.671766Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T18:22:30.671812Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T18:22:30.674217Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-20T18:22:30.674248Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-20T18:22:30.674582Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T18:22:30.677957Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.202:2379"}
	{"level":"info","ts":"2024-09-20T18:22:30.674798Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T18:22:30.680982Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 18:31:52 up 14 min,  0 users,  load average: 0.08, 0.18, 0.14
	Linux embed-certs-768431 5.10.207 #1 SMP Fri Sep 20 03:13:51 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [d2f83bd27b1b0afdbcf06db3ba261f996796b94d246b84e0c8a0155a8e789c0b] <==
	W0920 18:27:34.029891       1 handler_proxy.go:99] no RequestInfo found in the context
	E0920 18:27:34.030230       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0920 18:27:34.031410       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0920 18:27:34.031464       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0920 18:28:34.032358       1 handler_proxy.go:99] no RequestInfo found in the context
	W0920 18:28:34.032618       1 handler_proxy.go:99] no RequestInfo found in the context
	E0920 18:28:34.032783       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0920 18:28:34.032784       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0920 18:28:34.033986       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0920 18:28:34.034092       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0920 18:30:34.035319       1 handler_proxy.go:99] no RequestInfo found in the context
	W0920 18:30:34.035740       1 handler_proxy.go:99] no RequestInfo found in the context
	E0920 18:30:34.035931       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0920 18:30:34.036005       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0920 18:30:34.037216       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0920 18:30:34.037225       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [d4a6e3230e7a6913391ee20bb77bdd367ee8233dfbe564b36049d0b4fdae6268] <==
	W0920 18:22:23.994436       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:22:24.037644       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:22:24.045245       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:22:24.052931       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:22:24.069361       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:22:24.110357       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:22:24.113224       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:22:24.126889       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:22:24.162604       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:22:24.192594       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:22:24.268655       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:22:24.281644       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:22:24.303566       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:22:24.322714       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:22:24.353294       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:22:24.365938       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:22:24.385661       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:22:24.403391       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:22:24.403394       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:22:24.496921       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:22:24.663767       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:22:24.846566       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:22:24.991989       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:22:25.117547       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:22:25.207107       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [34a824c120f70ecac74b6f896fb9ee5b8c6212488ff5593479b43fd125e17797] <==
	E0920 18:26:40.057510       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 18:26:40.497640       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 18:27:10.064807       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 18:27:10.506784       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 18:27:40.072001       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 18:27:40.515428       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0920 18:27:51.898330       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-768431"
	E0920 18:28:10.078400       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 18:28:10.531081       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0920 18:28:27.928846       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="690.932µs"
	I0920 18:28:38.928198       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="118.18µs"
	E0920 18:28:40.085737       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 18:28:40.539092       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 18:29:10.092692       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 18:29:10.547771       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 18:29:40.100248       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 18:29:40.556404       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 18:30:10.108065       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 18:30:10.571645       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 18:30:40.115693       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 18:30:40.580847       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 18:31:10.122599       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 18:31:10.589825       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 18:31:40.131141       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 18:31:40.598112       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [f0b5138d088184ba891b7358eb30006ce43b5efaaf9d1bdd2683bffd77d6f7a3] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0920 18:22:41.618782       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0920 18:22:41.632584       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.202"]
	E0920 18:22:41.632673       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 18:22:41.707326       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0920 18:22:41.707379       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0920 18:22:41.707406       1 server_linux.go:169] "Using iptables Proxier"
	I0920 18:22:41.756034       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 18:22:41.766034       1 server.go:483] "Version info" version="v1.31.1"
	I0920 18:22:41.766067       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 18:22:41.768942       1 config.go:199] "Starting service config controller"
	I0920 18:22:41.769038       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 18:22:41.769118       1 config.go:105] "Starting endpoint slice config controller"
	I0920 18:22:41.769135       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 18:22:41.771399       1 config.go:328] "Starting node config controller"
	I0920 18:22:41.771801       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 18:22:41.869610       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0920 18:22:41.869698       1 shared_informer.go:320] Caches are synced for service config
	I0920 18:22:41.873272       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [95d89e4642aec6217d5404f63a2c94888e755e70ff067dca9c79dbd4d8ff401f] <==
	W0920 18:22:34.030430       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0920 18:22:34.030488       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 18:22:34.140465       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0920 18:22:34.140515       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 18:22:34.194928       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0920 18:22:34.195227       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 18:22:34.229504       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0920 18:22:34.230300       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 18:22:34.258599       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0920 18:22:34.258703       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0920 18:22:34.360720       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0920 18:22:34.360808       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 18:22:34.400548       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0920 18:22:34.400594       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 18:22:34.433905       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0920 18:22:34.433956       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 18:22:34.434011       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0920 18:22:34.434021       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 18:22:34.434038       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0920 18:22:34.434049       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 18:22:34.465961       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0920 18:22:34.466868       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0920 18:22:34.498968       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0920 18:22:34.499017       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0920 18:22:37.282805       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 20 18:30:36 embed-certs-768431 kubelet[2897]: E0920 18:30:36.918208    2897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-9snmf" podUID="5fb654f5-5e73-436e-bc9d-04ef5077deb4"
	Sep 20 18:30:46 embed-certs-768431 kubelet[2897]: E0920 18:30:46.065708    2897 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857046061273500,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:30:46 embed-certs-768431 kubelet[2897]: E0920 18:30:46.065760    2897 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857046061273500,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:30:51 embed-certs-768431 kubelet[2897]: E0920 18:30:51.911324    2897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-9snmf" podUID="5fb654f5-5e73-436e-bc9d-04ef5077deb4"
	Sep 20 18:30:56 embed-certs-768431 kubelet[2897]: E0920 18:30:56.067108    2897 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857056066876150,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:30:56 embed-certs-768431 kubelet[2897]: E0920 18:30:56.067213    2897 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857056066876150,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:31:03 embed-certs-768431 kubelet[2897]: E0920 18:31:03.915708    2897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-9snmf" podUID="5fb654f5-5e73-436e-bc9d-04ef5077deb4"
	Sep 20 18:31:06 embed-certs-768431 kubelet[2897]: E0920 18:31:06.068687    2897 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857066068413428,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:31:06 embed-certs-768431 kubelet[2897]: E0920 18:31:06.068734    2897 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857066068413428,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:31:16 embed-certs-768431 kubelet[2897]: E0920 18:31:16.070409    2897 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857076069811744,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:31:16 embed-certs-768431 kubelet[2897]: E0920 18:31:16.070759    2897 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857076069811744,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:31:16 embed-certs-768431 kubelet[2897]: E0920 18:31:16.911259    2897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-9snmf" podUID="5fb654f5-5e73-436e-bc9d-04ef5077deb4"
	Sep 20 18:31:26 embed-certs-768431 kubelet[2897]: E0920 18:31:26.072821    2897 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857086072051002,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:31:26 embed-certs-768431 kubelet[2897]: E0920 18:31:26.073368    2897 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857086072051002,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:31:30 embed-certs-768431 kubelet[2897]: E0920 18:31:30.911487    2897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-9snmf" podUID="5fb654f5-5e73-436e-bc9d-04ef5077deb4"
	Sep 20 18:31:35 embed-certs-768431 kubelet[2897]: E0920 18:31:35.952495    2897 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 20 18:31:35 embed-certs-768431 kubelet[2897]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 20 18:31:35 embed-certs-768431 kubelet[2897]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 20 18:31:35 embed-certs-768431 kubelet[2897]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 20 18:31:35 embed-certs-768431 kubelet[2897]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 20 18:31:36 embed-certs-768431 kubelet[2897]: E0920 18:31:36.075811    2897 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857096074901010,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:31:36 embed-certs-768431 kubelet[2897]: E0920 18:31:36.075846    2897 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857096074901010,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:31:43 embed-certs-768431 kubelet[2897]: E0920 18:31:43.915786    2897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-9snmf" podUID="5fb654f5-5e73-436e-bc9d-04ef5077deb4"
	Sep 20 18:31:46 embed-certs-768431 kubelet[2897]: E0920 18:31:46.077271    2897 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857106076843462,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:31:46 embed-certs-768431 kubelet[2897]: E0920 18:31:46.077636    2897 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857106076843462,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [7de5f69693ad1e4952f1f37d279b3885941bdfe611a578c407409e1f682c233d] <==
	I0920 18:22:42.070964       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0920 18:22:42.091225       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0920 18:22:42.091432       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0920 18:22:42.112357       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0920 18:22:42.113365       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"495a6000-21f4-4e58-bb3e-d8c4065c9026", APIVersion:"v1", ResourceVersion:"425", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-768431_34d83d57-5df8-4e34-b342-b5bf32fafb69 became leader
	I0920 18:22:42.121048       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-768431_34d83d57-5df8-4e34-b342-b5bf32fafb69!
	I0920 18:22:42.221966       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-768431_34d83d57-5df8-4e34-b342-b5bf32fafb69!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-768431 -n embed-certs-768431
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-768431 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-9snmf
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-768431 describe pod metrics-server-6867b74b74-9snmf
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-768431 describe pod metrics-server-6867b74b74-9snmf: exit status 1 (65.855733ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-9snmf" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-768431 describe pod metrics-server-6867b74b74-9snmf: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.58s)
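For reference, the step that times out above is a wait of up to 9m0s for a Running pod matching the label selector "k8s-app=kubernetes-dashboard" in the "kubernetes-dashboard" namespace. The following is only an illustrative client-go sketch of that kind of check, not the actual helpers_test.go implementation; the kubeconfig path and function names are placeholders.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForRunningPod polls the API server until a pod matching selector is
// Running in namespace ns, or returns an error once the timeout elapses.
func waitForRunningPod(ctx context.Context, cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err == nil {
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return nil
				}
			}
		}
		time.Sleep(5 * time.Second) // poll interval; the real helper's interval may differ
	}
	return fmt.Errorf("no Running pod matching %q in namespace %q within %s", selector, ns, timeout)
}

func main() {
	// Placeholder kubeconfig path; minikube writes per-profile credentials itself.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	err = waitForRunningPod(context.Background(), cs, "kubernetes-dashboard", "k8s-app=kubernetes-dashboard", 9*time.Minute)
	fmt.Println(err) // in the failing runs above this would report the timeout
}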

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.3s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0920 18:23:30.234271   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/bridge-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:23:39.706889   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/flannel-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:25:02.780045   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/kindnet-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:25:04.942755   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/auto-833505/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-956403 -n no-preload-956403
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-09-20 18:32:21.063699037 +0000 UTC m=+6522.707780582
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
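The post-mortem that follows shells out to the built minikube binary and to kubectl with the test's profile/context (status, logs -n 25, listing non-Running pods). A minimal sketch of that pattern is below, assuming os/exec; it simply mirrors the commands visible in this report and is not the helper code itself.

package main

import (
	"fmt"
	"os/exec"
)

// runCmd runs a command and returns its combined stdout/stderr plus any error,
// so a non-zero exit (e.g. "pods ... not found") can still be recorded.
func runCmd(name string, args ...string) (string, error) {
	out, err := exec.Command(name, args...).CombinedOutput()
	return string(out), err
}

func main() {
	profile := "no-preload-956403" // profile/context name taken from the log above

	// out/minikube-linux-amd64 status --format={{.APIServer}} -p <profile> -n <profile>
	status, _ := runCmd("out/minikube-linux-amd64", "status", "--format={{.APIServer}}", "-p", profile, "-n", profile)
	fmt.Print(status)

	// out/minikube-linux-amd64 -p <profile> logs -n 25
	logs, _ := runCmd("out/minikube-linux-amd64", "-p", profile, "logs", "-n", "25")
	fmt.Print(logs)

	// kubectl --context <profile> get po -o=jsonpath=... -A --field-selector=status.phase!=Running
	pods, _ := runCmd("kubectl", "--context", profile, "get", "po",
		"-o=jsonpath={.items[*].metadata.name}", "-A", "--field-selector=status.phase!=Running")
	fmt.Println(pods)
}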
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-956403 -n no-preload-956403
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-956403 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-956403 logs -n 25: (2.146902629s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p flannel-833505 sudo cat                             | flannel-833505               | jenkins | v1.34.0 | 20 Sep 24 18:09 UTC | 20 Sep 24 18:09 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p flannel-833505 sudo                                 | flannel-833505               | jenkins | v1.34.0 | 20 Sep 24 18:09 UTC | 20 Sep 24 18:09 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p flannel-833505 sudo                                 | flannel-833505               | jenkins | v1.34.0 | 20 Sep 24 18:09 UTC | 20 Sep 24 18:09 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p flannel-833505 sudo                                 | flannel-833505               | jenkins | v1.34.0 | 20 Sep 24 18:09 UTC | 20 Sep 24 18:09 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p flannel-833505 sudo find                            | flannel-833505               | jenkins | v1.34.0 | 20 Sep 24 18:09 UTC | 20 Sep 24 18:09 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p flannel-833505 sudo crio                            | flannel-833505               | jenkins | v1.34.0 | 20 Sep 24 18:09 UTC | 20 Sep 24 18:09 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p flannel-833505                                      | flannel-833505               | jenkins | v1.34.0 | 20 Sep 24 18:09 UTC | 20 Sep 24 18:09 UTC |
	| delete  | -p                                                     | disable-driver-mounts-739804 | jenkins | v1.34.0 | 20 Sep 24 18:09 UTC | 20 Sep 24 18:09 UTC |
	|         | disable-driver-mounts-739804                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-553719 | jenkins | v1.34.0 | 20 Sep 24 18:09 UTC | 20 Sep 24 18:10 UTC |
	|         | default-k8s-diff-port-553719                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-956403             | no-preload-956403            | jenkins | v1.34.0 | 20 Sep 24 18:10 UTC | 20 Sep 24 18:10 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-956403                                   | no-preload-956403            | jenkins | v1.34.0 | 20 Sep 24 18:10 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-768431            | embed-certs-768431           | jenkins | v1.34.0 | 20 Sep 24 18:10 UTC | 20 Sep 24 18:10 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-768431                                  | embed-certs-768431           | jenkins | v1.34.0 | 20 Sep 24 18:10 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-553719  | default-k8s-diff-port-553719 | jenkins | v1.34.0 | 20 Sep 24 18:11 UTC | 20 Sep 24 18:11 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-553719 | jenkins | v1.34.0 | 20 Sep 24 18:11 UTC |                     |
	|         | default-k8s-diff-port-553719                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-956403                  | no-preload-956403            | jenkins | v1.34.0 | 20 Sep 24 18:12 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-744025        | old-k8s-version-744025       | jenkins | v1.34.0 | 20 Sep 24 18:12 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p no-preload-956403                                   | no-preload-956403            | jenkins | v1.34.0 | 20 Sep 24 18:12 UTC | 20 Sep 24 18:23 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-768431                 | embed-certs-768431           | jenkins | v1.34.0 | 20 Sep 24 18:13 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-768431                                  | embed-certs-768431           | jenkins | v1.34.0 | 20 Sep 24 18:13 UTC | 20 Sep 24 18:22 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-553719       | default-k8s-diff-port-553719 | jenkins | v1.34.0 | 20 Sep 24 18:13 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-553719 | jenkins | v1.34.0 | 20 Sep 24 18:13 UTC | 20 Sep 24 18:22 UTC |
	|         | default-k8s-diff-port-553719                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-744025                              | old-k8s-version-744025       | jenkins | v1.34.0 | 20 Sep 24 18:14 UTC | 20 Sep 24 18:14 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-744025             | old-k8s-version-744025       | jenkins | v1.34.0 | 20 Sep 24 18:14 UTC | 20 Sep 24 18:14 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-744025                              | old-k8s-version-744025       | jenkins | v1.34.0 | 20 Sep 24 18:14 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 18:14:12
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 18:14:12.735020   75577 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:14:12.735385   75577 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:14:12.735400   75577 out.go:358] Setting ErrFile to fd 2...
	I0920 18:14:12.735407   75577 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:14:12.735610   75577 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-8777/.minikube/bin
	I0920 18:14:12.736161   75577 out.go:352] Setting JSON to false
	I0920 18:14:12.737033   75577 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6996,"bootTime":1726849057,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 18:14:12.737131   75577 start.go:139] virtualization: kvm guest
	I0920 18:14:12.739127   75577 out.go:177] * [old-k8s-version-744025] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 18:14:12.740288   75577 notify.go:220] Checking for updates...
	I0920 18:14:12.740310   75577 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 18:14:12.741542   75577 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 18:14:12.742727   75577 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19672-8777/kubeconfig
	I0920 18:14:12.743806   75577 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-8777/.minikube
	I0920 18:14:12.744863   75577 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 18:14:12.745877   75577 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 18:14:12.747201   75577 config.go:182] Loaded profile config "old-k8s-version-744025": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0920 18:14:12.747636   75577 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:14:12.747678   75577 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:14:12.762352   75577 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42889
	I0920 18:14:12.762734   75577 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:14:12.763321   75577 main.go:141] libmachine: Using API Version  1
	I0920 18:14:12.763348   75577 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:14:12.763742   75577 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:14:12.763968   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .DriverName
	I0920 18:14:12.765767   75577 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0920 18:14:12.767036   75577 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 18:14:12.767377   75577 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:14:12.767449   75577 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:14:12.782473   75577 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40777
	I0920 18:14:12.782971   75577 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:14:12.783455   75577 main.go:141] libmachine: Using API Version  1
	I0920 18:14:12.783477   75577 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:14:12.783807   75577 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:14:12.784035   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .DriverName
	I0920 18:14:12.820265   75577 out.go:177] * Using the kvm2 driver based on existing profile
	I0920 18:14:12.821397   75577 start.go:297] selected driver: kvm2
	I0920 18:14:12.821409   75577 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-744025 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-744025 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.207 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:14:12.821519   75577 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 18:14:12.822217   75577 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 18:14:12.822309   75577 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19672-8777/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 18:14:12.837641   75577 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 18:14:12.838107   75577 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 18:14:12.838143   75577 cni.go:84] Creating CNI manager for ""
	I0920 18:14:12.838193   75577 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 18:14:12.838231   75577 start.go:340] cluster config:
	{Name:old-k8s-version-744025 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-744025 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.207 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:14:12.838329   75577 iso.go:125] acquiring lock: {Name:mkba95ef0488e46f622333e9f317f43def93040b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 18:14:12.840059   75577 out.go:177] * Starting "old-k8s-version-744025" primary control-plane node in "old-k8s-version-744025" cluster
	I0920 18:14:15.230099   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:14:12.841339   75577 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0920 18:14:12.841384   75577 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0920 18:14:12.841394   75577 cache.go:56] Caching tarball of preloaded images
	I0920 18:14:12.841473   75577 preload.go:172] Found /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 18:14:12.841482   75577 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0920 18:14:12.841594   75577 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/old-k8s-version-744025/config.json ...
	I0920 18:14:12.841781   75577 start.go:360] acquireMachinesLock for old-k8s-version-744025: {Name:mkfeedb385cf08b5d2aa00913e85815d02a180c2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 18:14:18.302232   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:14:24.382087   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:14:27.454110   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:14:33.534076   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:14:36.606161   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:14:42.686057   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:14:45.758152   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:14:51.838087   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:14:54.910159   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:15:00.990183   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:15:04.062105   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:15:10.142153   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:15:13.218090   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:15:19.294132   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:15:22.366139   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:15:28.446103   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:15:31.518062   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:15:37.598126   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:15:40.670145   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:15:46.750116   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:15:49.822142   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:15:55.902120   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:15:58.974169   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:16:05.054139   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:16:08.126122   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:16:14.206089   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:16:17.278137   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:16:23.358156   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:16:26.430180   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:16:32.510115   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:16:35.582114   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:16:41.662126   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:16:44.734140   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:16:50.814123   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:16:53.886188   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:16:59.966139   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:17:03.038067   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:17:09.118169   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:17:12.190091   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:17:15.194165   75086 start.go:364] duration metric: took 4m1.091611985s to acquireMachinesLock for "embed-certs-768431"
	I0920 18:17:15.194223   75086 start.go:96] Skipping create...Using existing machine configuration
	I0920 18:17:15.194231   75086 fix.go:54] fixHost starting: 
	I0920 18:17:15.194737   75086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:17:15.194794   75086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:17:15.210558   75086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43191
	I0920 18:17:15.211084   75086 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:17:15.211527   75086 main.go:141] libmachine: Using API Version  1
	I0920 18:17:15.211550   75086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:17:15.211882   75086 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:17:15.212099   75086 main.go:141] libmachine: (embed-certs-768431) Calling .DriverName
	I0920 18:17:15.212217   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetState
	I0920 18:17:15.213955   75086 fix.go:112] recreateIfNeeded on embed-certs-768431: state=Stopped err=<nil>
	I0920 18:17:15.213995   75086 main.go:141] libmachine: (embed-certs-768431) Calling .DriverName
	W0920 18:17:15.214129   75086 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 18:17:15.215650   75086 out.go:177] * Restarting existing kvm2 VM for "embed-certs-768431" ...
	I0920 18:17:15.191760   74753 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 18:17:15.191794   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetMachineName
	I0920 18:17:15.192105   74753 buildroot.go:166] provisioning hostname "no-preload-956403"
	I0920 18:17:15.192133   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetMachineName
	I0920 18:17:15.192325   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHHostname
	I0920 18:17:15.194039   74753 machine.go:96] duration metric: took 4m37.421116123s to provisionDockerMachine
	I0920 18:17:15.194077   74753 fix.go:56] duration metric: took 4m37.444193238s for fixHost
	I0920 18:17:15.194087   74753 start.go:83] releasing machines lock for "no-preload-956403", held for 4m37.444234787s
	W0920 18:17:15.194109   74753 start.go:714] error starting host: provision: host is not running
	W0920 18:17:15.194214   74753 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0920 18:17:15.194222   74753 start.go:729] Will try again in 5 seconds ...
	I0920 18:17:15.216590   75086 main.go:141] libmachine: (embed-certs-768431) Calling .Start
	I0920 18:17:15.216773   75086 main.go:141] libmachine: (embed-certs-768431) Ensuring networks are active...
	I0920 18:17:15.217439   75086 main.go:141] libmachine: (embed-certs-768431) Ensuring network default is active
	I0920 18:17:15.217769   75086 main.go:141] libmachine: (embed-certs-768431) Ensuring network mk-embed-certs-768431 is active
	I0920 18:17:15.218058   75086 main.go:141] libmachine: (embed-certs-768431) Getting domain xml...
	I0920 18:17:15.218674   75086 main.go:141] libmachine: (embed-certs-768431) Creating domain...
	I0920 18:17:16.454922   75086 main.go:141] libmachine: (embed-certs-768431) Waiting to get IP...
	I0920 18:17:16.456011   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:16.456437   75086 main.go:141] libmachine: (embed-certs-768431) DBG | unable to find current IP address of domain embed-certs-768431 in network mk-embed-certs-768431
	I0920 18:17:16.456518   75086 main.go:141] libmachine: (embed-certs-768431) DBG | I0920 18:17:16.456416   76214 retry.go:31] will retry after 282.081004ms: waiting for machine to come up
	I0920 18:17:16.740062   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:16.740455   75086 main.go:141] libmachine: (embed-certs-768431) DBG | unable to find current IP address of domain embed-certs-768431 in network mk-embed-certs-768431
	I0920 18:17:16.740505   75086 main.go:141] libmachine: (embed-certs-768431) DBG | I0920 18:17:16.740420   76214 retry.go:31] will retry after 335.306847ms: waiting for machine to come up
	I0920 18:17:17.077169   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:17.077564   75086 main.go:141] libmachine: (embed-certs-768431) DBG | unable to find current IP address of domain embed-certs-768431 in network mk-embed-certs-768431
	I0920 18:17:17.077594   75086 main.go:141] libmachine: (embed-certs-768431) DBG | I0920 18:17:17.077513   76214 retry.go:31] will retry after 455.255315ms: waiting for machine to come up
	I0920 18:17:17.534166   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:17.534545   75086 main.go:141] libmachine: (embed-certs-768431) DBG | unable to find current IP address of domain embed-certs-768431 in network mk-embed-certs-768431
	I0920 18:17:17.534570   75086 main.go:141] libmachine: (embed-certs-768431) DBG | I0920 18:17:17.534511   76214 retry.go:31] will retry after 476.184378ms: waiting for machine to come up
	I0920 18:17:18.011940   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:18.012405   75086 main.go:141] libmachine: (embed-certs-768431) DBG | unable to find current IP address of domain embed-certs-768431 in network mk-embed-certs-768431
	I0920 18:17:18.012436   75086 main.go:141] libmachine: (embed-certs-768431) DBG | I0920 18:17:18.012359   76214 retry.go:31] will retry after 597.678215ms: waiting for machine to come up
	I0920 18:17:18.611179   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:18.611560   75086 main.go:141] libmachine: (embed-certs-768431) DBG | unable to find current IP address of domain embed-certs-768431 in network mk-embed-certs-768431
	I0920 18:17:18.611588   75086 main.go:141] libmachine: (embed-certs-768431) DBG | I0920 18:17:18.611521   76214 retry.go:31] will retry after 696.074491ms: waiting for machine to come up
	I0920 18:17:20.194460   74753 start.go:360] acquireMachinesLock for no-preload-956403: {Name:mkfeedb385cf08b5d2aa00913e85815d02a180c2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 18:17:19.309493   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:19.309940   75086 main.go:141] libmachine: (embed-certs-768431) DBG | unable to find current IP address of domain embed-certs-768431 in network mk-embed-certs-768431
	I0920 18:17:19.309958   75086 main.go:141] libmachine: (embed-certs-768431) DBG | I0920 18:17:19.309909   76214 retry.go:31] will retry after 825.638908ms: waiting for machine to come up
	I0920 18:17:20.137018   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:20.137380   75086 main.go:141] libmachine: (embed-certs-768431) DBG | unable to find current IP address of domain embed-certs-768431 in network mk-embed-certs-768431
	I0920 18:17:20.137407   75086 main.go:141] libmachine: (embed-certs-768431) DBG | I0920 18:17:20.137338   76214 retry.go:31] will retry after 997.909719ms: waiting for machine to come up
	I0920 18:17:21.137154   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:21.137608   75086 main.go:141] libmachine: (embed-certs-768431) DBG | unable to find current IP address of domain embed-certs-768431 in network mk-embed-certs-768431
	I0920 18:17:21.137630   75086 main.go:141] libmachine: (embed-certs-768431) DBG | I0920 18:17:21.137552   76214 retry.go:31] will retry after 1.368594293s: waiting for machine to come up
	I0920 18:17:22.507834   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:22.508202   75086 main.go:141] libmachine: (embed-certs-768431) DBG | unable to find current IP address of domain embed-certs-768431 in network mk-embed-certs-768431
	I0920 18:17:22.508228   75086 main.go:141] libmachine: (embed-certs-768431) DBG | I0920 18:17:22.508152   76214 retry.go:31] will retry after 1.922265011s: waiting for machine to come up
	I0920 18:17:24.431977   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:24.432422   75086 main.go:141] libmachine: (embed-certs-768431) DBG | unable to find current IP address of domain embed-certs-768431 in network mk-embed-certs-768431
	I0920 18:17:24.432452   75086 main.go:141] libmachine: (embed-certs-768431) DBG | I0920 18:17:24.432370   76214 retry.go:31] will retry after 2.875158038s: waiting for machine to come up
	I0920 18:17:27.309993   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:27.310512   75086 main.go:141] libmachine: (embed-certs-768431) DBG | unable to find current IP address of domain embed-certs-768431 in network mk-embed-certs-768431
	I0920 18:17:27.310539   75086 main.go:141] libmachine: (embed-certs-768431) DBG | I0920 18:17:27.310459   76214 retry.go:31] will retry after 3.089759463s: waiting for machine to come up
	I0920 18:17:30.402254   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:30.402718   75086 main.go:141] libmachine: (embed-certs-768431) DBG | unable to find current IP address of domain embed-certs-768431 in network mk-embed-certs-768431
	I0920 18:17:30.402757   75086 main.go:141] libmachine: (embed-certs-768431) DBG | I0920 18:17:30.402671   76214 retry.go:31] will retry after 3.42897838s: waiting for machine to come up
	I0920 18:17:33.835196   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:33.835576   75086 main.go:141] libmachine: (embed-certs-768431) Found IP for machine: 192.168.61.202
	I0920 18:17:33.835606   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has current primary IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:33.835615   75086 main.go:141] libmachine: (embed-certs-768431) Reserving static IP address...
	I0920 18:17:33.835974   75086 main.go:141] libmachine: (embed-certs-768431) Reserved static IP address: 192.168.61.202
	I0920 18:17:33.836010   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "embed-certs-768431", mac: "52:54:00:d2:f2:e2", ip: "192.168.61.202"} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:17:33.836024   75086 main.go:141] libmachine: (embed-certs-768431) Waiting for SSH to be available...
	I0920 18:17:33.836053   75086 main.go:141] libmachine: (embed-certs-768431) DBG | skip adding static IP to network mk-embed-certs-768431 - found existing host DHCP lease matching {name: "embed-certs-768431", mac: "52:54:00:d2:f2:e2", ip: "192.168.61.202"}
	I0920 18:17:33.836065   75086 main.go:141] libmachine: (embed-certs-768431) DBG | Getting to WaitForSSH function...
	I0920 18:17:33.838215   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:33.838561   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:17:33.838593   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:33.838735   75086 main.go:141] libmachine: (embed-certs-768431) DBG | Using SSH client type: external
	I0920 18:17:33.838768   75086 main.go:141] libmachine: (embed-certs-768431) DBG | Using SSH private key: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/embed-certs-768431/id_rsa (-rw-------)
	I0920 18:17:33.838809   75086 main.go:141] libmachine: (embed-certs-768431) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.202 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19672-8777/.minikube/machines/embed-certs-768431/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 18:17:33.838826   75086 main.go:141] libmachine: (embed-certs-768431) DBG | About to run SSH command:
	I0920 18:17:33.838838   75086 main.go:141] libmachine: (embed-certs-768431) DBG | exit 0
	I0920 18:17:33.962092   75086 main.go:141] libmachine: (embed-certs-768431) DBG | SSH cmd err, output: <nil>: 
	I0920 18:17:33.962513   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetConfigRaw
	I0920 18:17:33.963115   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetIP
	I0920 18:17:33.965714   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:33.966036   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:17:33.966056   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:33.966391   75086 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/embed-certs-768431/config.json ...
	I0920 18:17:33.966621   75086 machine.go:93] provisionDockerMachine start ...
	I0920 18:17:33.966641   75086 main.go:141] libmachine: (embed-certs-768431) Calling .DriverName
	I0920 18:17:33.966848   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHHostname
	I0920 18:17:33.968954   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:33.969235   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:17:33.969266   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:33.969358   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHPort
	I0920 18:17:33.969542   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHKeyPath
	I0920 18:17:33.969694   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHKeyPath
	I0920 18:17:33.969854   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHUsername
	I0920 18:17:33.970001   75086 main.go:141] libmachine: Using SSH client type: native
	I0920 18:17:33.970194   75086 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.202 22 <nil> <nil>}
	I0920 18:17:33.970204   75086 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 18:17:35.178887   75264 start.go:364] duration metric: took 3m58.041934266s to acquireMachinesLock for "default-k8s-diff-port-553719"
	I0920 18:17:35.178955   75264 start.go:96] Skipping create...Using existing machine configuration
	I0920 18:17:35.178968   75264 fix.go:54] fixHost starting: 
	I0920 18:17:35.179541   75264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:17:35.179604   75264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:17:35.199776   75264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39643
	I0920 18:17:35.200255   75264 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:17:35.200832   75264 main.go:141] libmachine: Using API Version  1
	I0920 18:17:35.200858   75264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:17:35.201213   75264 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:17:35.201427   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .DriverName
	I0920 18:17:35.201575   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetState
	I0920 18:17:35.203239   75264 fix.go:112] recreateIfNeeded on default-k8s-diff-port-553719: state=Stopped err=<nil>
	I0920 18:17:35.203279   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .DriverName
	W0920 18:17:35.203415   75264 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 18:17:35.205529   75264 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-553719" ...
	I0920 18:17:34.078412   75086 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0920 18:17:34.078447   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetMachineName
	I0920 18:17:34.078648   75086 buildroot.go:166] provisioning hostname "embed-certs-768431"
	I0920 18:17:34.078676   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetMachineName
	I0920 18:17:34.078854   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHHostname
	I0920 18:17:34.081703   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:34.082042   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:17:34.082079   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:34.082175   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHPort
	I0920 18:17:34.082371   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHKeyPath
	I0920 18:17:34.082525   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHKeyPath
	I0920 18:17:34.082656   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHUsername
	I0920 18:17:34.082778   75086 main.go:141] libmachine: Using SSH client type: native
	I0920 18:17:34.082937   75086 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.202 22 <nil> <nil>}
	I0920 18:17:34.082949   75086 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-768431 && echo "embed-certs-768431" | sudo tee /etc/hostname
	I0920 18:17:34.199661   75086 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-768431
	
	I0920 18:17:34.199726   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHHostname
	I0920 18:17:34.202468   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:34.202875   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:17:34.202903   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:34.203084   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHPort
	I0920 18:17:34.203328   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHKeyPath
	I0920 18:17:34.203477   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHKeyPath
	I0920 18:17:34.203630   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHUsername
	I0920 18:17:34.203769   75086 main.go:141] libmachine: Using SSH client type: native
	I0920 18:17:34.203936   75086 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.202 22 <nil> <nil>}
	I0920 18:17:34.203952   75086 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-768431' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-768431/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-768431' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 18:17:34.314604   75086 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 18:17:34.314632   75086 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19672-8777/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-8777/.minikube}
	I0920 18:17:34.314656   75086 buildroot.go:174] setting up certificates
	I0920 18:17:34.314666   75086 provision.go:84] configureAuth start
	I0920 18:17:34.314674   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetMachineName
	I0920 18:17:34.314940   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetIP
	I0920 18:17:34.317999   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:34.318306   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:17:34.318326   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:34.318538   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHHostname
	I0920 18:17:34.320715   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:34.321046   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:17:34.321073   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:34.321187   75086 provision.go:143] copyHostCerts
	I0920 18:17:34.321256   75086 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem, removing ...
	I0920 18:17:34.321276   75086 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem
	I0920 18:17:34.321359   75086 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem (1082 bytes)
	I0920 18:17:34.321452   75086 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem, removing ...
	I0920 18:17:34.321459   75086 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem
	I0920 18:17:34.321493   75086 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem (1123 bytes)
	I0920 18:17:34.321549   75086 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem, removing ...
	I0920 18:17:34.321558   75086 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem
	I0920 18:17:34.321580   75086 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem (1675 bytes)
	I0920 18:17:34.321692   75086 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem org=jenkins.embed-certs-768431 san=[127.0.0.1 192.168.61.202 embed-certs-768431 localhost minikube]
	I0920 18:17:34.547266   75086 provision.go:177] copyRemoteCerts
	I0920 18:17:34.547343   75086 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 18:17:34.547373   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHHostname
	I0920 18:17:34.550265   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:34.550648   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:17:34.550680   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:34.550895   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHPort
	I0920 18:17:34.551084   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHKeyPath
	I0920 18:17:34.551227   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHUsername
	I0920 18:17:34.551359   75086 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/embed-certs-768431/id_rsa Username:docker}
	I0920 18:17:34.632404   75086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0920 18:17:34.656468   75086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 18:17:34.681704   75086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0920 18:17:34.706486   75086 provision.go:87] duration metric: took 391.807931ms to configureAuth
	I0920 18:17:34.706514   75086 buildroot.go:189] setting minikube options for container-runtime
	I0920 18:17:34.706681   75086 config.go:182] Loaded profile config "embed-certs-768431": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:17:34.706750   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHHostname
	I0920 18:17:34.709500   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:34.709851   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:17:34.709915   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:34.709972   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHPort
	I0920 18:17:34.710199   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHKeyPath
	I0920 18:17:34.710337   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHKeyPath
	I0920 18:17:34.710511   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHUsername
	I0920 18:17:34.710669   75086 main.go:141] libmachine: Using SSH client type: native
	I0920 18:17:34.710854   75086 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.202 22 <nil> <nil>}
	I0920 18:17:34.710875   75086 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 18:17:34.941246   75086 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 18:17:34.941277   75086 machine.go:96] duration metric: took 974.641764ms to provisionDockerMachine
	I0920 18:17:34.941293   75086 start.go:293] postStartSetup for "embed-certs-768431" (driver="kvm2")
	I0920 18:17:34.941341   75086 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 18:17:34.941370   75086 main.go:141] libmachine: (embed-certs-768431) Calling .DriverName
	I0920 18:17:34.941757   75086 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 18:17:34.941792   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHHostname
	I0920 18:17:34.944366   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:34.944712   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:17:34.944749   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:34.944861   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHPort
	I0920 18:17:34.945051   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHKeyPath
	I0920 18:17:34.945198   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHUsername
	I0920 18:17:34.945320   75086 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/embed-certs-768431/id_rsa Username:docker}
	I0920 18:17:35.028570   75086 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 18:17:35.032711   75086 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 18:17:35.032751   75086 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8777/.minikube/addons for local assets ...
	I0920 18:17:35.032859   75086 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8777/.minikube/files for local assets ...
	I0920 18:17:35.032973   75086 filesync.go:149] local asset: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem -> 159732.pem in /etc/ssl/certs
	I0920 18:17:35.033127   75086 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 18:17:35.042212   75086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem --> /etc/ssl/certs/159732.pem (1708 bytes)
	I0920 18:17:35.066379   75086 start.go:296] duration metric: took 125.0653ms for postStartSetup
	I0920 18:17:35.066429   75086 fix.go:56] duration metric: took 19.872197784s for fixHost
	I0920 18:17:35.066456   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHHostname
	I0920 18:17:35.069888   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:35.070261   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:17:35.070286   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:35.070472   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHPort
	I0920 18:17:35.070693   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHKeyPath
	I0920 18:17:35.070836   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHKeyPath
	I0920 18:17:35.070989   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHUsername
	I0920 18:17:35.071204   75086 main.go:141] libmachine: Using SSH client type: native
	I0920 18:17:35.071396   75086 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.202 22 <nil> <nil>}
	I0920 18:17:35.071407   75086 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 18:17:35.178674   75086 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726856255.154624802
	
	I0920 18:17:35.178704   75086 fix.go:216] guest clock: 1726856255.154624802
	I0920 18:17:35.178712   75086 fix.go:229] Guest: 2024-09-20 18:17:35.154624802 +0000 UTC Remote: 2024-09-20 18:17:35.066435161 +0000 UTC m=+261.108982198 (delta=88.189641ms)
	I0920 18:17:35.178734   75086 fix.go:200] guest clock delta is within tolerance: 88.189641ms
	I0920 18:17:35.178760   75086 start.go:83] releasing machines lock for "embed-certs-768431", held for 19.984535459s
	I0920 18:17:35.178801   75086 main.go:141] libmachine: (embed-certs-768431) Calling .DriverName
	I0920 18:17:35.179101   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetIP
	I0920 18:17:35.181807   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:35.182179   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:17:35.182208   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:35.182353   75086 main.go:141] libmachine: (embed-certs-768431) Calling .DriverName
	I0920 18:17:35.182850   75086 main.go:141] libmachine: (embed-certs-768431) Calling .DriverName
	I0920 18:17:35.182993   75086 main.go:141] libmachine: (embed-certs-768431) Calling .DriverName
	I0920 18:17:35.183097   75086 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 18:17:35.183171   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHHostname
	I0920 18:17:35.183171   75086 ssh_runner.go:195] Run: cat /version.json
	I0920 18:17:35.183230   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHHostname
	I0920 18:17:35.185905   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:35.185942   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:35.186249   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:17:35.186279   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:35.186317   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:17:35.186331   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:35.186476   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHPort
	I0920 18:17:35.186597   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHPort
	I0920 18:17:35.186687   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHKeyPath
	I0920 18:17:35.186791   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHKeyPath
	I0920 18:17:35.186816   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHUsername
	I0920 18:17:35.186974   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHUsername
	I0920 18:17:35.186992   75086 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/embed-certs-768431/id_rsa Username:docker}
	I0920 18:17:35.187127   75086 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/embed-certs-768431/id_rsa Username:docker}
	I0920 18:17:35.311021   75086 ssh_runner.go:195] Run: systemctl --version
	I0920 18:17:35.317587   75086 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 18:17:35.462634   75086 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 18:17:35.469331   75086 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 18:17:35.469444   75086 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 18:17:35.485752   75086 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 18:17:35.485780   75086 start.go:495] detecting cgroup driver to use...
	I0920 18:17:35.485903   75086 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 18:17:35.507004   75086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 18:17:35.525868   75086 docker.go:217] disabling cri-docker service (if available) ...
	I0920 18:17:35.525934   75086 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 18:17:35.542533   75086 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 18:17:35.557622   75086 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 18:17:35.669845   75086 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 18:17:35.827784   75086 docker.go:233] disabling docker service ...
	I0920 18:17:35.827855   75086 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 18:17:35.842877   75086 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 18:17:35.859468   75086 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 18:17:35.994094   75086 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 18:17:36.123353   75086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 18:17:36.137941   75086 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 18:17:36.157781   75086 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 18:17:36.157871   75086 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:17:36.168857   75086 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 18:17:36.168929   75086 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:17:36.179807   75086 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:17:36.192260   75086 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:17:36.203220   75086 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 18:17:36.214388   75086 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:17:36.224686   75086 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:17:36.242143   75086 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:17:36.257047   75086 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 18:17:36.272014   75086 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 18:17:36.272130   75086 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 18:17:36.286083   75086 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 18:17:36.296624   75086 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:17:36.414196   75086 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 18:17:36.511654   75086 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 18:17:36.511720   75086 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 18:17:36.516593   75086 start.go:563] Will wait 60s for crictl version
	I0920 18:17:36.516640   75086 ssh_runner.go:195] Run: which crictl
	I0920 18:17:36.520360   75086 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 18:17:36.556846   75086 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 18:17:36.556949   75086 ssh_runner.go:195] Run: crio --version
	I0920 18:17:36.584699   75086 ssh_runner.go:195] Run: crio --version
	I0920 18:17:36.615179   75086 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 18:17:35.206839   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .Start
	I0920 18:17:35.207055   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Ensuring networks are active...
	I0920 18:17:35.207928   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Ensuring network default is active
	I0920 18:17:35.208260   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Ensuring network mk-default-k8s-diff-port-553719 is active
	I0920 18:17:35.208679   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Getting domain xml...
	I0920 18:17:35.209488   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Creating domain...
	I0920 18:17:36.500751   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting to get IP...
	I0920 18:17:36.501630   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:36.502033   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | unable to find current IP address of domain default-k8s-diff-port-553719 in network mk-default-k8s-diff-port-553719
	I0920 18:17:36.502137   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | I0920 18:17:36.502044   76346 retry.go:31] will retry after 216.050114ms: waiting for machine to come up
	I0920 18:17:36.719559   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:36.720002   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | unable to find current IP address of domain default-k8s-diff-port-553719 in network mk-default-k8s-diff-port-553719
	I0920 18:17:36.720026   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | I0920 18:17:36.719977   76346 retry.go:31] will retry after 303.832728ms: waiting for machine to come up
	I0920 18:17:37.025606   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:37.026096   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | unable to find current IP address of domain default-k8s-diff-port-553719 in network mk-default-k8s-diff-port-553719
	I0920 18:17:37.026137   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | I0920 18:17:37.026050   76346 retry.go:31] will retry after 316.827461ms: waiting for machine to come up
	I0920 18:17:36.616640   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetIP
	I0920 18:17:36.619927   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:36.620377   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:17:36.620407   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:36.620642   75086 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0920 18:17:36.627189   75086 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 18:17:36.639842   75086 kubeadm.go:883] updating cluster {Name:embed-certs-768431 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-768431 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.202 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 18:17:36.639953   75086 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 18:17:36.640019   75086 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:17:36.676519   75086 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0920 18:17:36.676599   75086 ssh_runner.go:195] Run: which lz4
	I0920 18:17:36.680472   75086 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 18:17:36.684650   75086 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 18:17:36.684683   75086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0920 18:17:38.090996   75086 crio.go:462] duration metric: took 1.410558742s to copy over tarball
	I0920 18:17:38.091068   75086 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0920 18:17:37.344880   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:37.345393   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | unable to find current IP address of domain default-k8s-diff-port-553719 in network mk-default-k8s-diff-port-553719
	I0920 18:17:37.345419   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | I0920 18:17:37.345323   76346 retry.go:31] will retry after 456.966571ms: waiting for machine to come up
	I0920 18:17:37.804436   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:37.804919   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | unable to find current IP address of domain default-k8s-diff-port-553719 in network mk-default-k8s-diff-port-553719
	I0920 18:17:37.804954   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | I0920 18:17:37.804891   76346 retry.go:31] will retry after 582.103589ms: waiting for machine to come up
	I0920 18:17:38.388738   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:38.389266   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | unable to find current IP address of domain default-k8s-diff-port-553719 in network mk-default-k8s-diff-port-553719
	I0920 18:17:38.389297   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | I0920 18:17:38.389217   76346 retry.go:31] will retry after 884.882678ms: waiting for machine to come up
	I0920 18:17:39.276048   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:39.276478   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | unable to find current IP address of domain default-k8s-diff-port-553719 in network mk-default-k8s-diff-port-553719
	I0920 18:17:39.276504   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | I0920 18:17:39.276453   76346 retry.go:31] will retry after 807.001285ms: waiting for machine to come up
	I0920 18:17:40.085749   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:40.086228   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | unable to find current IP address of domain default-k8s-diff-port-553719 in network mk-default-k8s-diff-port-553719
	I0920 18:17:40.086256   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | I0920 18:17:40.086190   76346 retry.go:31] will retry after 1.283354255s: waiting for machine to come up
	I0920 18:17:41.370861   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:41.371314   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | unable to find current IP address of domain default-k8s-diff-port-553719 in network mk-default-k8s-diff-port-553719
	I0920 18:17:41.371343   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | I0920 18:17:41.371265   76346 retry.go:31] will retry after 1.756084886s: waiting for machine to come up
	I0920 18:17:40.301535   75086 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.210437306s)
	I0920 18:17:40.301567   75086 crio.go:469] duration metric: took 2.210539553s to extract the tarball
	I0920 18:17:40.301578   75086 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0920 18:17:40.338638   75086 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:17:40.381753   75086 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 18:17:40.381780   75086 cache_images.go:84] Images are preloaded, skipping loading
	I0920 18:17:40.381788   75086 kubeadm.go:934] updating node { 192.168.61.202 8443 v1.31.1 crio true true} ...
	I0920 18:17:40.381914   75086 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-768431 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.202
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-768431 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 18:17:40.381998   75086 ssh_runner.go:195] Run: crio config
	I0920 18:17:40.428457   75086 cni.go:84] Creating CNI manager for ""
	I0920 18:17:40.428482   75086 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 18:17:40.428491   75086 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 18:17:40.428512   75086 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.202 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-768431 NodeName:embed-certs-768431 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.202"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.202 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 18:17:40.428681   75086 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.202
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-768431"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.202
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.202"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 18:17:40.428800   75086 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 18:17:40.439049   75086 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 18:17:40.439123   75086 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 18:17:40.449220   75086 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0920 18:17:40.466012   75086 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 18:17:40.482158   75086 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0920 18:17:40.499225   75086 ssh_runner.go:195] Run: grep 192.168.61.202	control-plane.minikube.internal$ /etc/hosts
	I0920 18:17:40.502817   75086 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.202	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 18:17:40.515241   75086 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:17:40.638615   75086 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:17:40.655790   75086 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/embed-certs-768431 for IP: 192.168.61.202
	I0920 18:17:40.655814   75086 certs.go:194] generating shared ca certs ...
	I0920 18:17:40.655834   75086 certs.go:226] acquiring lock for ca certs: {Name:mkc7ef6c737c6bdc3fdd9dcff8f57029c020d8f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:17:40.656006   75086 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key
	I0920 18:17:40.656070   75086 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key
	I0920 18:17:40.656084   75086 certs.go:256] generating profile certs ...
	I0920 18:17:40.656193   75086 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/embed-certs-768431/client.key
	I0920 18:17:40.656281   75086 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/embed-certs-768431/apiserver.key.a3dbf377
	I0920 18:17:40.656354   75086 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/embed-certs-768431/proxy-client.key
	I0920 18:17:40.656508   75086 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973.pem (1338 bytes)
	W0920 18:17:40.656560   75086 certs.go:480] ignoring /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973_empty.pem, impossibly tiny 0 bytes
	I0920 18:17:40.656573   75086 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 18:17:40.656620   75086 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem (1082 bytes)
	I0920 18:17:40.656654   75086 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem (1123 bytes)
	I0920 18:17:40.656679   75086 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem (1675 bytes)
	I0920 18:17:40.656733   75086 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem (1708 bytes)
	I0920 18:17:40.657567   75086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 18:17:40.693111   75086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 18:17:40.733664   75086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 18:17:40.774367   75086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0920 18:17:40.800930   75086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/embed-certs-768431/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0920 18:17:40.827793   75086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/embed-certs-768431/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 18:17:40.851189   75086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/embed-certs-768431/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 18:17:40.875092   75086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/embed-certs-768431/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0920 18:17:40.898639   75086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973.pem --> /usr/share/ca-certificates/15973.pem (1338 bytes)
	I0920 18:17:40.921569   75086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem --> /usr/share/ca-certificates/159732.pem (1708 bytes)
	I0920 18:17:40.945739   75086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 18:17:40.969942   75086 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 18:17:40.987563   75086 ssh_runner.go:195] Run: openssl version
	I0920 18:17:40.993363   75086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15973.pem && ln -fs /usr/share/ca-certificates/15973.pem /etc/ssl/certs/15973.pem"
	I0920 18:17:41.004673   75086 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15973.pem
	I0920 18:17:41.008985   75086 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 17:04 /usr/share/ca-certificates/15973.pem
	I0920 18:17:41.009032   75086 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15973.pem
	I0920 18:17:41.014950   75086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15973.pem /etc/ssl/certs/51391683.0"
	I0920 18:17:41.027944   75086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/159732.pem && ln -fs /usr/share/ca-certificates/159732.pem /etc/ssl/certs/159732.pem"
	I0920 18:17:41.040711   75086 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/159732.pem
	I0920 18:17:41.045304   75086 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 17:04 /usr/share/ca-certificates/159732.pem
	I0920 18:17:41.045361   75086 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/159732.pem
	I0920 18:17:41.050891   75086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/159732.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 18:17:41.061542   75086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 18:17:41.072622   75086 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:17:41.076916   75086 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:17:41.076965   75086 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:17:41.082644   75086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
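
The three test/ln pairs above recreate OpenSSL's hashed-name lookup for the copied CA certificates: each file is linked as /etc/ssl/certs/<subject-hash>.0, where the hash is whatever `openssl x509 -hash -noout` prints for it. A small Go sketch that derives the same link name by shelling out to openssl (illustrative only, using the minikubeCA.pem path from the log; this is not minikube's certs.go):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        certPath := "/usr/share/ca-certificates/minikubeCA.pem"
        // Same command the log runs over SSH: print the subject hash of the cert.
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            panic(err)
        }
        hash := strings.TrimSpace(string(out))
        // OpenSSL looks certificates up via <subject-hash>.0 symlinks in /etc/ssl/certs.
        fmt.Printf("ln -fs %s /etc/ssl/certs/%s.0\n", certPath, hash)
    }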
	I0920 18:17:41.093136   75086 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 18:17:41.097496   75086 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 18:17:41.103393   75086 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 18:17:41.109444   75086 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 18:17:41.115844   75086 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 18:17:41.121578   75086 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 18:17:41.127292   75086 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
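
The six `-checkend 86400` probes above ask openssl whether each control-plane certificate is still valid for at least another 24 hours; a failing check would force regeneration before the cluster restart. A minimal Go sketch of the same validity test, assuming one of the paths shown in the log (illustrative, not minikube's own implementation):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        // One of the certs checked in the log; any of the /var/lib/minikube/certs paths works.
        data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // Equivalent of `openssl x509 -checkend 86400`: still valid 24 hours from now?
        if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
            fmt.Println("certificate expires within 24h; would regenerate")
        } else {
            fmt.Println("certificate ok")
        }
    }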
	I0920 18:17:41.133310   75086 kubeadm.go:392] StartCluster: {Name:embed-certs-768431 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-768431 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.202 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:17:41.133393   75086 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 18:17:41.133439   75086 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 18:17:41.177284   75086 cri.go:89] found id: ""
	I0920 18:17:41.177380   75086 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 18:17:41.187289   75086 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0920 18:17:41.187311   75086 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0920 18:17:41.187362   75086 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0920 18:17:41.196445   75086 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0920 18:17:41.197939   75086 kubeconfig.go:125] found "embed-certs-768431" server: "https://192.168.61.202:8443"
	I0920 18:17:41.201030   75086 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0920 18:17:41.210356   75086 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.202
	I0920 18:17:41.210394   75086 kubeadm.go:1160] stopping kube-system containers ...
	I0920 18:17:41.210422   75086 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0920 18:17:41.210499   75086 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 18:17:41.244659   75086 cri.go:89] found id: ""
	I0920 18:17:41.244721   75086 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0920 18:17:41.264341   75086 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 18:17:41.275229   75086 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 18:17:41.275252   75086 kubeadm.go:157] found existing configuration files:
	
	I0920 18:17:41.275314   75086 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 18:17:41.285420   75086 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 18:17:41.285502   75086 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 18:17:41.295902   75086 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 18:17:41.305951   75086 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 18:17:41.306015   75086 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 18:17:41.316567   75086 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 18:17:41.325623   75086 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 18:17:41.325691   75086 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 18:17:41.336405   75086 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 18:17:41.346501   75086 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 18:17:41.346574   75086 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
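
The grep/rm pairs above are minikube's stale-kubeconfig cleanup: each file under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443, and otherwise deleted (here all four are simply absent) so the following `kubeadm init phase kubeconfig all` can regenerate them. Roughly the same loop in Go (a sketch under those assumptions, not the actual kubeadm.go code):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        const endpoint = "https://control-plane.minikube.internal:8443"
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            data, err := os.ReadFile(f)
            // Missing file or wrong endpoint: remove it so kubeadm can regenerate it.
            if err != nil || !strings.Contains(string(data), endpoint) {
                _ = os.Remove(f)
                fmt.Println("removed (or absent):", f)
                continue
            }
            fmt.Println("kept:", f)
        }
    }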
	I0920 18:17:41.357340   75086 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 18:17:41.367956   75086 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:17:41.479022   75086 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:17:42.796122   75086 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.317061067s)
	I0920 18:17:42.796155   75086 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:17:43.053135   75086 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:17:43.139770   75086 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:17:43.244066   75086 api_server.go:52] waiting for apiserver process to appear ...
	I0920 18:17:43.244166   75086 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:17:43.744622   75086 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:17:43.129383   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:43.129827   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | unable to find current IP address of domain default-k8s-diff-port-553719 in network mk-default-k8s-diff-port-553719
	I0920 18:17:43.129864   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | I0920 18:17:43.129798   76346 retry.go:31] will retry after 2.259775905s: waiting for machine to come up
	I0920 18:17:45.391508   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:45.391915   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | unable to find current IP address of domain default-k8s-diff-port-553719 in network mk-default-k8s-diff-port-553719
	I0920 18:17:45.391941   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | I0920 18:17:45.391885   76346 retry.go:31] will retry after 1.767770692s: waiting for machine to come up
	I0920 18:17:44.244723   75086 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:17:44.265463   75086 api_server.go:72] duration metric: took 1.02139562s to wait for apiserver process to appear ...
	I0920 18:17:44.265500   75086 api_server.go:88] waiting for apiserver healthz status ...
	I0920 18:17:44.265528   75086 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0920 18:17:46.935510   75086 api_server.go:279] https://192.168.61.202:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 18:17:46.935571   75086 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 18:17:46.935589   75086 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0920 18:17:46.947553   75086 api_server.go:279] https://192.168.61.202:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 18:17:46.947586   75086 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 18:17:47.265919   75086 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0920 18:17:47.272986   75086 api_server.go:279] https://192.168.61.202:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 18:17:47.273020   75086 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 18:17:47.766662   75086 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0920 18:17:47.783700   75086 api_server.go:279] https://192.168.61.202:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 18:17:47.783749   75086 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 18:17:48.266012   75086 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0920 18:17:48.270573   75086 api_server.go:279] https://192.168.61.202:8443/healthz returned 200:
	ok
	I0920 18:17:48.277290   75086 api_server.go:141] control plane version: v1.31.1
	I0920 18:17:48.277331   75086 api_server.go:131] duration metric: took 4.011813655s to wait for apiserver health ...
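
The 403 → 500 → 200 sequence above is the expected restart pattern: anonymous /healthz requests are Forbidden until the RBAC bootstrap roles exist, then the endpoint reports 500 while the rbac/bootstrap-roles and scheduling post-start hooks finish, and finally returns 200. A hedged sketch of such a polling loop (TLS verification is skipped only because the probe is anonymous; this is illustrative, not minikube's api_server.go):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Anonymous probe, so certificate verification is skipped in this sketch.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        url := "https://192.168.61.202:8443/healthz" // apiserver address from the log above
        for start := time.Now(); time.Since(start) < 4*time.Minute; time.Sleep(500 * time.Millisecond) {
            resp, err := client.Get(url)
            if err != nil {
                continue // apiserver not listening yet
            }
            code := resp.StatusCode
            resp.Body.Close()
            if code == http.StatusOK {
                fmt.Println("apiserver healthy")
                return
            }
            fmt.Println("healthz returned", code, "- retrying")
        }
        fmt.Println("timed out waiting for apiserver health")
    }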
	I0920 18:17:48.277342   75086 cni.go:84] Creating CNI manager for ""
	I0920 18:17:48.277351   75086 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 18:17:48.278863   75086 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 18:17:48.279934   75086 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 18:17:48.304037   75086 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0920 18:17:48.337082   75086 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 18:17:48.349485   75086 system_pods.go:59] 8 kube-system pods found
	I0920 18:17:48.349541   75086 system_pods.go:61] "coredns-7c65d6cfc9-cskt4" [2ce6fa43-bdf9-4625-a198-8f59ee9fcf0b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0920 18:17:48.349554   75086 system_pods.go:61] "etcd-embed-certs-768431" [656af9fd-b380-4934-a0b5-5d39a755de44] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0920 18:17:48.349565   75086 system_pods.go:61] "kube-apiserver-embed-certs-768431" [e128f4ff-3432-4d27-83de-ed76b686403d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0920 18:17:48.349573   75086 system_pods.go:61] "kube-controller-manager-embed-certs-768431" [a2c7e6d7-e351-4e6a-ab95-638aecdaab28] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0920 18:17:48.349579   75086 system_pods.go:61] "kube-proxy-cshjm" [86a82f1b-d8d5-4ab7-89fb-84b836dc0470] Running
	I0920 18:17:48.349586   75086 system_pods.go:61] "kube-scheduler-embed-certs-768431" [64bc50a6-2e99-4992-b249-9ffdf463539a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0920 18:17:48.349601   75086 system_pods.go:61] "metrics-server-6867b74b74-dwnt6" [bb0a120f-ca9e-4b54-826b-55aa20b2f6a4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 18:17:48.349606   75086 system_pods.go:61] "storage-provisioner" [083c6964-a774-4b85-8e6d-acd04ededb3c] Running
	I0920 18:17:48.349617   75086 system_pods.go:74] duration metric: took 12.507925ms to wait for pod list to return data ...
	I0920 18:17:48.349630   75086 node_conditions.go:102] verifying NodePressure condition ...
	I0920 18:17:48.353604   75086 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 18:17:48.353635   75086 node_conditions.go:123] node cpu capacity is 2
	I0920 18:17:48.353648   75086 node_conditions.go:105] duration metric: took 4.012436ms to run NodePressure ...
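
Before running the addon phase, minikube lists the kube-system pods and reads the node's CPU and ephemeral-storage capacity, as logged above. Roughly the same checks with client-go (an illustrative sketch; the kubeconfig path is a placeholder, not necessarily what minikube uses internally):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Placeholder kubeconfig path for this sketch; any admin kubeconfig works.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println(len(pods.Items), "kube-system pods found")
        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            fmt.Println(n.Name, "cpu:", n.Status.Capacity.Cpu().String(),
                "ephemeral-storage:", n.Status.Capacity.StorageEphemeral().String())
        }
    }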
	I0920 18:17:48.353668   75086 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:17:48.623666   75086 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0920 18:17:48.634113   75086 kubeadm.go:739] kubelet initialised
	I0920 18:17:48.634184   75086 kubeadm.go:740] duration metric: took 10.458173ms waiting for restarted kubelet to initialise ...
	I0920 18:17:48.634205   75086 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:17:48.642334   75086 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-cskt4" in "kube-system" namespace to be "Ready" ...
	I0920 18:17:47.161670   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:47.162064   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | unable to find current IP address of domain default-k8s-diff-port-553719 in network mk-default-k8s-diff-port-553719
	I0920 18:17:47.162094   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | I0920 18:17:47.162020   76346 retry.go:31] will retry after 2.299554771s: waiting for machine to come up
	I0920 18:17:49.463022   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:49.463483   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | unable to find current IP address of domain default-k8s-diff-port-553719 in network mk-default-k8s-diff-port-553719
	I0920 18:17:49.463515   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | I0920 18:17:49.463442   76346 retry.go:31] will retry after 4.372569793s: waiting for machine to come up
	I0920 18:17:50.650781   75086 pod_ready.go:103] pod "coredns-7c65d6cfc9-cskt4" in "kube-system" namespace has status "Ready":"False"
	I0920 18:17:53.149156   75086 pod_ready.go:103] pod "coredns-7c65d6cfc9-cskt4" in "kube-system" namespace has status "Ready":"False"
	I0920 18:17:55.087078   75577 start.go:364] duration metric: took 3m42.245259516s to acquireMachinesLock for "old-k8s-version-744025"
	I0920 18:17:55.087155   75577 start.go:96] Skipping create...Using existing machine configuration
	I0920 18:17:55.087166   75577 fix.go:54] fixHost starting: 
	I0920 18:17:55.087618   75577 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:17:55.087671   75577 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:17:55.107336   75577 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34989
	I0920 18:17:55.107839   75577 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:17:55.108462   75577 main.go:141] libmachine: Using API Version  1
	I0920 18:17:55.108496   75577 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:17:55.108855   75577 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:17:55.109072   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .DriverName
	I0920 18:17:55.109222   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetState
	I0920 18:17:55.110831   75577 fix.go:112] recreateIfNeeded on old-k8s-version-744025: state=Stopped err=<nil>
	I0920 18:17:55.110857   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .DriverName
	W0920 18:17:55.110973   75577 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 18:17:55.113408   75577 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-744025" ...
	I0920 18:17:53.840215   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:53.840817   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has current primary IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:53.840844   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Found IP for machine: 192.168.72.190
	I0920 18:17:53.840859   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Reserving static IP address...
	I0920 18:17:53.841438   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Reserved static IP address: 192.168.72.190
	I0920 18:17:53.841460   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for SSH to be available...
	I0920 18:17:53.841475   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-553719", mac: "52:54:00:dd:93:60", ip: "192.168.72.190"} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:17:53.841503   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | skip adding static IP to network mk-default-k8s-diff-port-553719 - found existing host DHCP lease matching {name: "default-k8s-diff-port-553719", mac: "52:54:00:dd:93:60", ip: "192.168.72.190"}
	I0920 18:17:53.841583   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | Getting to WaitForSSH function...
	I0920 18:17:53.843772   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:53.844082   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:17:53.844115   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:53.844201   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | Using SSH client type: external
	I0920 18:17:53.844238   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | Using SSH private key: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/default-k8s-diff-port-553719/id_rsa (-rw-------)
	I0920 18:17:53.844282   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.190 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19672-8777/.minikube/machines/default-k8s-diff-port-553719/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 18:17:53.844296   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | About to run SSH command:
	I0920 18:17:53.844309   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | exit 0
	I0920 18:17:53.965938   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | SSH cmd err, output: <nil>: 
	I0920 18:17:53.966354   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetConfigRaw
	I0920 18:17:53.967105   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetIP
	I0920 18:17:53.969715   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:53.970112   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:17:53.970140   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:53.970429   75264 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/default-k8s-diff-port-553719/config.json ...
	I0920 18:17:53.970686   75264 machine.go:93] provisionDockerMachine start ...
	I0920 18:17:53.970707   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .DriverName
	I0920 18:17:53.970924   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHHostname
	I0920 18:17:53.973207   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:53.973541   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:17:53.973571   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:53.973716   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHPort
	I0920 18:17:53.973901   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHKeyPath
	I0920 18:17:53.974041   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHKeyPath
	I0920 18:17:53.974155   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHUsername
	I0920 18:17:53.974298   75264 main.go:141] libmachine: Using SSH client type: native
	I0920 18:17:53.974518   75264 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.190 22 <nil> <nil>}
	I0920 18:17:53.974533   75264 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 18:17:54.078095   75264 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0920 18:17:54.078121   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetMachineName
	I0920 18:17:54.078377   75264 buildroot.go:166] provisioning hostname "default-k8s-diff-port-553719"
	I0920 18:17:54.078397   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetMachineName
	I0920 18:17:54.078540   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHHostname
	I0920 18:17:54.081589   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:54.081998   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:17:54.082032   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:54.082179   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHPort
	I0920 18:17:54.082375   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHKeyPath
	I0920 18:17:54.082539   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHKeyPath
	I0920 18:17:54.082743   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHUsername
	I0920 18:17:54.082949   75264 main.go:141] libmachine: Using SSH client type: native
	I0920 18:17:54.083153   75264 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.190 22 <nil> <nil>}
	I0920 18:17:54.083173   75264 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-553719 && echo "default-k8s-diff-port-553719" | sudo tee /etc/hostname
	I0920 18:17:54.199155   75264 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-553719
	
	I0920 18:17:54.199192   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHHostname
	I0920 18:17:54.202231   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:54.202650   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:17:54.202685   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:54.202835   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHPort
	I0920 18:17:54.203112   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHKeyPath
	I0920 18:17:54.203289   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHKeyPath
	I0920 18:17:54.203458   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHUsername
	I0920 18:17:54.203696   75264 main.go:141] libmachine: Using SSH client type: native
	I0920 18:17:54.203944   75264 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.190 22 <nil> <nil>}
	I0920 18:17:54.203969   75264 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-553719' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-553719/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-553719' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 18:17:54.312356   75264 main.go:141] libmachine: SSH cmd err, output: <nil>: 
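
The inline shell script above pins the machine name in /etc/hosts: if no line already mentions default-k8s-diff-port-553719, it rewrites an existing 127.0.1.1 entry or appends a new one. The same idea expressed in Go, operating on the file contents directly (a sketch only, not the provisioner's code):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // ensureHostsEntry rewrites an existing 127.0.1.1 line or appends one for the hostname.
    func ensureHostsEntry(contents, hostname string) string {
        lines := strings.Split(contents, "\n")
        for _, l := range lines {
            if strings.Contains(l, hostname) {
                return contents // already present, nothing to do
            }
        }
        for i, l := range lines {
            if strings.HasPrefix(l, "127.0.1.1") {
                lines[i] = "127.0.1.1 " + hostname
                return strings.Join(lines, "\n")
            }
        }
        return contents + "\n127.0.1.1 " + hostname + "\n"
    }

    func main() {
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            panic(err)
        }
        fmt.Print(ensureHostsEntry(string(data), "default-k8s-diff-port-553719"))
    }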
	I0920 18:17:54.312386   75264 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19672-8777/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-8777/.minikube}
	I0920 18:17:54.312432   75264 buildroot.go:174] setting up certificates
	I0920 18:17:54.312450   75264 provision.go:84] configureAuth start
	I0920 18:17:54.312464   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetMachineName
	I0920 18:17:54.312758   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetIP
	I0920 18:17:54.315327   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:54.315697   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:17:54.315725   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:54.315870   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHHostname
	I0920 18:17:54.318203   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:54.318653   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:17:54.318679   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:54.318824   75264 provision.go:143] copyHostCerts
	I0920 18:17:54.318885   75264 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem, removing ...
	I0920 18:17:54.318906   75264 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem
	I0920 18:17:54.318978   75264 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem (1675 bytes)
	I0920 18:17:54.319096   75264 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem, removing ...
	I0920 18:17:54.319107   75264 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem
	I0920 18:17:54.319141   75264 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem (1082 bytes)
	I0920 18:17:54.319384   75264 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem, removing ...
	I0920 18:17:54.319401   75264 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem
	I0920 18:17:54.319465   75264 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem (1123 bytes)
	I0920 18:17:54.319551   75264 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-553719 san=[127.0.0.1 192.168.72.190 default-k8s-diff-port-553719 localhost minikube]
	I0920 18:17:54.453062   75264 provision.go:177] copyRemoteCerts
	I0920 18:17:54.453131   75264 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 18:17:54.453160   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHHostname
	I0920 18:17:54.456047   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:54.456402   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:17:54.456431   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:54.456598   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHPort
	I0920 18:17:54.456796   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHKeyPath
	I0920 18:17:54.456970   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHUsername
	I0920 18:17:54.457092   75264 sshutil.go:53] new ssh client: &{IP:192.168.72.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/default-k8s-diff-port-553719/id_rsa Username:docker}
	I0920 18:17:54.544567   75264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0920 18:17:54.570009   75264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0920 18:17:54.594249   75264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0920 18:17:54.617652   75264 provision.go:87] duration metric: took 305.186554ms to configureAuth
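
configureAuth regenerates the Docker-machine server certificate with the SANs listed above (127.0.0.1, 192.168.72.190, the hostname, localhost, minikube) and copies the CA, server cert, and server key into /etc/docker. A self-contained Go sketch of issuing a certificate with those SANs (self-signed here for brevity; minikube signs with its machine CA instead):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-553719"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs as shown in the provision.go log line above.
            DNSNames:    []string{"default-k8s-diff-port-553719", "localhost", "minikube"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.190")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }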
	I0920 18:17:54.617686   75264 buildroot.go:189] setting minikube options for container-runtime
	I0920 18:17:54.617971   75264 config.go:182] Loaded profile config "default-k8s-diff-port-553719": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:17:54.618064   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHHostname
	I0920 18:17:54.621408   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:54.621882   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:17:54.621915   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:54.622140   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHPort
	I0920 18:17:54.622368   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHKeyPath
	I0920 18:17:54.622592   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHKeyPath
	I0920 18:17:54.622833   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHUsername
	I0920 18:17:54.623047   75264 main.go:141] libmachine: Using SSH client type: native
	I0920 18:17:54.623287   75264 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.190 22 <nil> <nil>}
	I0920 18:17:54.623305   75264 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 18:17:54.859786   75264 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 18:17:54.859810   75264 machine.go:96] duration metric: took 889.110491ms to provisionDockerMachine
	I0920 18:17:54.859820   75264 start.go:293] postStartSetup for "default-k8s-diff-port-553719" (driver="kvm2")
	I0920 18:17:54.859831   75264 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 18:17:54.859850   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .DriverName
	I0920 18:17:54.860209   75264 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 18:17:54.860258   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHHostname
	I0920 18:17:54.863576   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:54.863933   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:17:54.863966   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:54.864168   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHPort
	I0920 18:17:54.864345   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHKeyPath
	I0920 18:17:54.864494   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHUsername
	I0920 18:17:54.864640   75264 sshutil.go:53] new ssh client: &{IP:192.168.72.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/default-k8s-diff-port-553719/id_rsa Username:docker}
	I0920 18:17:54.944717   75264 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 18:17:54.948836   75264 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 18:17:54.948870   75264 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8777/.minikube/addons for local assets ...
	I0920 18:17:54.948937   75264 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8777/.minikube/files for local assets ...
	I0920 18:17:54.949014   75264 filesync.go:149] local asset: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem -> 159732.pem in /etc/ssl/certs
	I0920 18:17:54.949116   75264 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 18:17:54.958726   75264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem --> /etc/ssl/certs/159732.pem (1708 bytes)
	I0920 18:17:54.982792   75264 start.go:296] duration metric: took 122.958621ms for postStartSetup
	I0920 18:17:54.982831   75264 fix.go:56] duration metric: took 19.803863913s for fixHost
	I0920 18:17:54.982856   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHHostname
	I0920 18:17:54.985588   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:54.985924   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:17:54.985966   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:54.986145   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHPort
	I0920 18:17:54.986407   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHKeyPath
	I0920 18:17:54.986586   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHKeyPath
	I0920 18:17:54.986784   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHUsername
	I0920 18:17:54.986982   75264 main.go:141] libmachine: Using SSH client type: native
	I0920 18:17:54.987233   75264 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.190 22 <nil> <nil>}
	I0920 18:17:54.987248   75264 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 18:17:55.086859   75264 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726856275.038902431
	
	I0920 18:17:55.086888   75264 fix.go:216] guest clock: 1726856275.038902431
	I0920 18:17:55.086900   75264 fix.go:229] Guest: 2024-09-20 18:17:55.038902431 +0000 UTC Remote: 2024-09-20 18:17:54.98283641 +0000 UTC m=+257.985357778 (delta=56.066021ms)
	I0920 18:17:55.086959   75264 fix.go:200] guest clock delta is within tolerance: 56.066021ms
	I0920 18:17:55.086968   75264 start.go:83] releasing machines lock for "default-k8s-diff-port-553719", held for 19.908037967s
	I0920 18:17:55.087009   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .DriverName
	I0920 18:17:55.087303   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetIP
	I0920 18:17:55.090396   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:55.090777   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:17:55.090805   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:55.090973   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .DriverName
	I0920 18:17:55.091481   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .DriverName
	I0920 18:17:55.091684   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .DriverName
	I0920 18:17:55.091772   75264 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 18:17:55.091827   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHHostname
	I0920 18:17:55.091951   75264 ssh_runner.go:195] Run: cat /version.json
	I0920 18:17:55.091976   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHHostname
	I0920 18:17:55.094742   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:55.094878   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:55.095102   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:17:55.095133   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:55.095265   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHPort
	I0920 18:17:55.095362   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:17:55.095400   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:55.095443   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHKeyPath
	I0920 18:17:55.095550   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHPort
	I0920 18:17:55.095619   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHUsername
	I0920 18:17:55.095689   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHKeyPath
	I0920 18:17:55.095763   75264 sshutil.go:53] new ssh client: &{IP:192.168.72.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/default-k8s-diff-port-553719/id_rsa Username:docker}
	I0920 18:17:55.095795   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHUsername
	I0920 18:17:55.095950   75264 sshutil.go:53] new ssh client: &{IP:192.168.72.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/default-k8s-diff-port-553719/id_rsa Username:docker}
	I0920 18:17:55.213223   75264 ssh_runner.go:195] Run: systemctl --version
	I0920 18:17:55.220952   75264 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 18:17:55.370747   75264 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 18:17:55.377509   75264 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 18:17:55.377595   75264 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 18:17:55.395830   75264 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 18:17:55.395854   75264 start.go:495] detecting cgroup driver to use...
	I0920 18:17:55.395920   75264 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 18:17:55.412885   75264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 18:17:55.428380   75264 docker.go:217] disabling cri-docker service (if available) ...
	I0920 18:17:55.428433   75264 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 18:17:55.444371   75264 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 18:17:55.459485   75264 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 18:17:55.583649   75264 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 18:17:55.768003   75264 docker.go:233] disabling docker service ...
	I0920 18:17:55.768065   75264 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 18:17:55.787062   75264 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 18:17:55.802662   75264 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 18:17:55.967892   75264 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 18:17:56.105744   75264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 18:17:56.120499   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 18:17:56.140527   75264 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 18:17:56.140613   75264 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:17:56.151282   75264 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 18:17:56.151355   75264 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:17:56.164680   75264 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:17:56.176142   75264 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:17:56.187120   75264 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 18:17:56.198384   75264 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:17:56.209298   75264 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:17:56.226714   75264 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:17:56.237886   75264 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 18:17:56.253664   75264 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 18:17:56.253778   75264 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 18:17:56.267429   75264 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 18:17:56.279118   75264 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:17:56.412692   75264 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 18:17:56.513349   75264 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 18:17:56.513438   75264 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 18:17:56.517974   75264 start.go:563] Will wait 60s for crictl version
	I0920 18:17:56.518042   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:17:56.521966   75264 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 18:17:56.561446   75264 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 18:17:56.561520   75264 ssh_runner.go:195] Run: crio --version
	I0920 18:17:56.592009   75264 ssh_runner.go:195] Run: crio --version
	I0920 18:17:56.626555   75264 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 18:17:56.627668   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetIP
	I0920 18:17:56.630995   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:56.631450   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:17:56.631480   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:56.631751   75264 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0920 18:17:56.636473   75264 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 18:17:56.648824   75264 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-553719 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-553719 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.190 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 18:17:56.648937   75264 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 18:17:56.648980   75264 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:17:56.691029   75264 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0920 18:17:56.691104   75264 ssh_runner.go:195] Run: which lz4
	I0920 18:17:56.695538   75264 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 18:17:56.699703   75264 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 18:17:56.699735   75264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0920 18:17:55.114625   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .Start
	I0920 18:17:55.114814   75577 main.go:141] libmachine: (old-k8s-version-744025) Ensuring networks are active...
	I0920 18:17:55.115808   75577 main.go:141] libmachine: (old-k8s-version-744025) Ensuring network default is active
	I0920 18:17:55.116209   75577 main.go:141] libmachine: (old-k8s-version-744025) Ensuring network mk-old-k8s-version-744025 is active
	I0920 18:17:55.116615   75577 main.go:141] libmachine: (old-k8s-version-744025) Getting domain xml...
	I0920 18:17:55.117403   75577 main.go:141] libmachine: (old-k8s-version-744025) Creating domain...
	I0920 18:17:56.411359   75577 main.go:141] libmachine: (old-k8s-version-744025) Waiting to get IP...
	I0920 18:17:56.412507   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:17:56.413010   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:17:56.413094   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:17:56.412996   76990 retry.go:31] will retry after 300.110477ms: waiting for machine to come up
	I0920 18:17:56.714648   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:17:56.715163   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:17:56.715183   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:17:56.715114   76990 retry.go:31] will retry after 352.948637ms: waiting for machine to come up
	I0920 18:17:57.069760   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:17:57.070449   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:17:57.070474   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:17:57.070404   76990 retry.go:31] will retry after 335.58281ms: waiting for machine to come up
	I0920 18:17:57.408023   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:17:57.408521   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:17:57.408546   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:17:57.408469   76990 retry.go:31] will retry after 498.040148ms: waiting for machine to come up
	I0920 18:17:54.653346   75086 pod_ready.go:93] pod "coredns-7c65d6cfc9-cskt4" in "kube-system" namespace has status "Ready":"True"
	I0920 18:17:54.653373   75086 pod_ready.go:82] duration metric: took 6.011015799s for pod "coredns-7c65d6cfc9-cskt4" in "kube-system" namespace to be "Ready" ...
	I0920 18:17:54.653383   75086 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-768431" in "kube-system" namespace to be "Ready" ...
	I0920 18:17:56.660606   75086 pod_ready.go:103] pod "etcd-embed-certs-768431" in "kube-system" namespace has status "Ready":"False"
	I0920 18:17:58.162253   75086 pod_ready.go:93] pod "etcd-embed-certs-768431" in "kube-system" namespace has status "Ready":"True"
	I0920 18:17:58.162282   75086 pod_ready.go:82] duration metric: took 3.508893207s for pod "etcd-embed-certs-768431" in "kube-system" namespace to be "Ready" ...
	I0920 18:17:58.162292   75086 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-768431" in "kube-system" namespace to be "Ready" ...
	I0920 18:17:58.109419   75264 crio.go:462] duration metric: took 1.413913626s to copy over tarball
	I0920 18:17:58.109504   75264 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0920 18:18:00.360623   75264 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.25108327s)
	I0920 18:18:00.360655   75264 crio.go:469] duration metric: took 2.251201466s to extract the tarball
	I0920 18:18:00.360703   75264 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0920 18:18:00.400822   75264 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:18:00.451366   75264 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 18:18:00.451402   75264 cache_images.go:84] Images are preloaded, skipping loading
	I0920 18:18:00.451413   75264 kubeadm.go:934] updating node { 192.168.72.190 8444 v1.31.1 crio true true} ...
	I0920 18:18:00.451590   75264 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-553719 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.190
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-553719 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 18:18:00.451688   75264 ssh_runner.go:195] Run: crio config
	I0920 18:18:00.502703   75264 cni.go:84] Creating CNI manager for ""
	I0920 18:18:00.502729   75264 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 18:18:00.502740   75264 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 18:18:00.502778   75264 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.190 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-553719 NodeName:default-k8s-diff-port-553719 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.190"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.190 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 18:18:00.502927   75264 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.190
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-553719"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.190
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.190"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 18:18:00.502996   75264 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 18:18:00.513768   75264 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 18:18:00.513870   75264 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 18:18:00.523631   75264 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0920 18:18:00.540428   75264 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 18:18:00.556858   75264 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0920 18:18:00.574954   75264 ssh_runner.go:195] Run: grep 192.168.72.190	control-plane.minikube.internal$ /etc/hosts
	I0920 18:18:00.578951   75264 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.190	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 18:18:00.592592   75264 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:18:00.712035   75264 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:18:00.728657   75264 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/default-k8s-diff-port-553719 for IP: 192.168.72.190
	I0920 18:18:00.728684   75264 certs.go:194] generating shared ca certs ...
	I0920 18:18:00.728706   75264 certs.go:226] acquiring lock for ca certs: {Name:mkc7ef6c737c6bdc3fdd9dcff8f57029c020d8f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:18:00.728877   75264 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key
	I0920 18:18:00.728937   75264 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key
	I0920 18:18:00.728948   75264 certs.go:256] generating profile certs ...
	I0920 18:18:00.729055   75264 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/default-k8s-diff-port-553719/client.key
	I0920 18:18:00.729151   75264 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/default-k8s-diff-port-553719/apiserver.key.cc1f66e7
	I0920 18:18:00.729205   75264 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/default-k8s-diff-port-553719/proxy-client.key
	I0920 18:18:00.729368   75264 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973.pem (1338 bytes)
	W0920 18:18:00.729415   75264 certs.go:480] ignoring /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973_empty.pem, impossibly tiny 0 bytes
	I0920 18:18:00.729425   75264 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 18:18:00.729453   75264 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem (1082 bytes)
	I0920 18:18:00.729474   75264 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem (1123 bytes)
	I0920 18:18:00.729501   75264 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem (1675 bytes)
	I0920 18:18:00.729538   75264 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem (1708 bytes)
	I0920 18:18:00.730192   75264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 18:18:00.775634   75264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 18:18:00.812006   75264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 18:18:00.846904   75264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0920 18:18:00.877908   75264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/default-k8s-diff-port-553719/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0920 18:18:00.910057   75264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/default-k8s-diff-port-553719/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0920 18:18:00.935377   75264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/default-k8s-diff-port-553719/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 18:18:00.960143   75264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/default-k8s-diff-port-553719/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0920 18:18:00.983906   75264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 18:18:01.007105   75264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973.pem --> /usr/share/ca-certificates/15973.pem (1338 bytes)
	I0920 18:18:01.030331   75264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem --> /usr/share/ca-certificates/159732.pem (1708 bytes)
	I0920 18:18:01.055976   75264 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 18:18:01.073892   75264 ssh_runner.go:195] Run: openssl version
	I0920 18:18:01.081246   75264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 18:18:01.092174   75264 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:18:01.096541   75264 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:18:01.096595   75264 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:18:01.103564   75264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 18:18:01.117697   75264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15973.pem && ln -fs /usr/share/ca-certificates/15973.pem /etc/ssl/certs/15973.pem"
	I0920 18:18:01.130349   75264 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15973.pem
	I0920 18:18:01.135397   75264 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 17:04 /usr/share/ca-certificates/15973.pem
	I0920 18:18:01.135471   75264 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15973.pem
	I0920 18:18:01.141399   75264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15973.pem /etc/ssl/certs/51391683.0"
	I0920 18:18:01.153123   75264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/159732.pem && ln -fs /usr/share/ca-certificates/159732.pem /etc/ssl/certs/159732.pem"
	I0920 18:18:01.165157   75264 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/159732.pem
	I0920 18:18:01.170596   75264 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 17:04 /usr/share/ca-certificates/159732.pem
	I0920 18:18:01.170678   75264 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/159732.pem
	I0920 18:18:01.176534   75264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/159732.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 18:18:01.188576   75264 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 18:18:01.193401   75264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 18:18:01.199557   75264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 18:18:01.205410   75264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 18:18:01.211296   75264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 18:18:01.216953   75264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 18:18:01.223417   75264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0920 18:18:01.229172   75264 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-553719 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-553719 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.190 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:18:01.229428   75264 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 18:18:01.229488   75264 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 18:18:01.270537   75264 cri.go:89] found id: ""
	I0920 18:18:01.270610   75264 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 18:18:01.284566   75264 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0920 18:18:01.284588   75264 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0920 18:18:01.284638   75264 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0920 18:18:01.297884   75264 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0920 18:18:01.298879   75264 kubeconfig.go:125] found "default-k8s-diff-port-553719" server: "https://192.168.72.190:8444"
	I0920 18:18:01.300996   75264 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0920 18:18:01.312183   75264 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.190
	I0920 18:18:01.312251   75264 kubeadm.go:1160] stopping kube-system containers ...
	I0920 18:18:01.312266   75264 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0920 18:18:01.312324   75264 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 18:18:01.358873   75264 cri.go:89] found id: ""
	I0920 18:18:01.358959   75264 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0920 18:18:01.377027   75264 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 18:18:01.386872   75264 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 18:18:01.386890   75264 kubeadm.go:157] found existing configuration files:
	
	I0920 18:18:01.386931   75264 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0920 18:18:01.396044   75264 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 18:18:01.396118   75264 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 18:18:01.405783   75264 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0920 18:18:01.415296   75264 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 18:18:01.415367   75264 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 18:18:01.427782   75264 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0920 18:18:01.438838   75264 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 18:18:01.438897   75264 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 18:18:01.450255   75264 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0920 18:18:01.461237   75264 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 18:18:01.461313   75264 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 18:18:01.472543   75264 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 18:18:01.483456   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:18:01.607155   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:17:57.908137   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:17:57.908561   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:17:57.908589   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:17:57.908522   76990 retry.go:31] will retry after 749.044696ms: waiting for machine to come up
	I0920 18:17:58.658869   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:17:58.659393   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:17:58.659424   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:17:58.659342   76990 retry.go:31] will retry after 949.936088ms: waiting for machine to come up
	I0920 18:17:59.610936   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:17:59.611467   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:17:59.611513   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:17:59.611430   76990 retry.go:31] will retry after 762.437104ms: waiting for machine to come up
	I0920 18:18:00.375768   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:00.376207   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:18:00.376235   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:18:00.376152   76990 retry.go:31] will retry after 1.228102027s: waiting for machine to come up
	I0920 18:18:01.606490   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:01.606958   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:18:01.606982   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:18:01.606907   76990 retry.go:31] will retry after 1.383186524s: waiting for machine to come up
	I0920 18:18:00.169862   75086 pod_ready.go:103] pod "kube-apiserver-embed-certs-768431" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:01.669351   75086 pod_ready.go:93] pod "kube-apiserver-embed-certs-768431" in "kube-system" namespace has status "Ready":"True"
	I0920 18:18:01.669378   75086 pod_ready.go:82] duration metric: took 3.507078668s for pod "kube-apiserver-embed-certs-768431" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:01.669392   75086 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-768431" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:01.674981   75086 pod_ready.go:93] pod "kube-controller-manager-embed-certs-768431" in "kube-system" namespace has status "Ready":"True"
	I0920 18:18:01.675006   75086 pod_ready.go:82] duration metric: took 5.604682ms for pod "kube-controller-manager-embed-certs-768431" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:01.675018   75086 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-cshjm" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:01.680522   75086 pod_ready.go:93] pod "kube-proxy-cshjm" in "kube-system" namespace has status "Ready":"True"
	I0920 18:18:01.680547   75086 pod_ready.go:82] duration metric: took 5.521295ms for pod "kube-proxy-cshjm" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:01.680559   75086 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-768431" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:01.685097   75086 pod_ready.go:93] pod "kube-scheduler-embed-certs-768431" in "kube-system" namespace has status "Ready":"True"
	I0920 18:18:01.685120   75086 pod_ready.go:82] duration metric: took 4.551724ms for pod "kube-scheduler-embed-certs-768431" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:01.685131   75086 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:03.693234   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:02.268120   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:18:02.471363   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:18:02.530979   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:18:02.580339   75264 api_server.go:52] waiting for apiserver process to appear ...
	I0920 18:18:02.580455   75264 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:03.081156   75264 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:03.580564   75264 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:04.080991   75264 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:04.095108   75264 api_server.go:72] duration metric: took 1.514770034s to wait for apiserver process to appear ...
	I0920 18:18:04.095139   75264 api_server.go:88] waiting for apiserver healthz status ...
	I0920 18:18:04.095164   75264 api_server.go:253] Checking apiserver healthz at https://192.168.72.190:8444/healthz ...
	I0920 18:18:04.095704   75264 api_server.go:269] stopped: https://192.168.72.190:8444/healthz: Get "https://192.168.72.190:8444/healthz": dial tcp 192.168.72.190:8444: connect: connection refused
	I0920 18:18:04.595270   75264 api_server.go:253] Checking apiserver healthz at https://192.168.72.190:8444/healthz ...
	I0920 18:18:06.378496   75264 api_server.go:279] https://192.168.72.190:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 18:18:06.378532   75264 api_server.go:103] status: https://192.168.72.190:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 18:18:06.378550   75264 api_server.go:253] Checking apiserver healthz at https://192.168.72.190:8444/healthz ...
	I0920 18:18:06.425353   75264 api_server.go:279] https://192.168.72.190:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 18:18:06.425386   75264 api_server.go:103] status: https://192.168.72.190:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 18:18:06.595786   75264 api_server.go:253] Checking apiserver healthz at https://192.168.72.190:8444/healthz ...
	I0920 18:18:06.602114   75264 api_server.go:279] https://192.168.72.190:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 18:18:06.602148   75264 api_server.go:103] status: https://192.168.72.190:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 18:18:07.096095   75264 api_server.go:253] Checking apiserver healthz at https://192.168.72.190:8444/healthz ...
	I0920 18:18:07.102320   75264 api_server.go:279] https://192.168.72.190:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 18:18:07.102348   75264 api_server.go:103] status: https://192.168.72.190:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 18:18:07.595804   75264 api_server.go:253] Checking apiserver healthz at https://192.168.72.190:8444/healthz ...
	I0920 18:18:07.600025   75264 api_server.go:279] https://192.168.72.190:8444/healthz returned 200:
	ok
	I0920 18:18:07.606598   75264 api_server.go:141] control plane version: v1.31.1
	I0920 18:18:07.606626   75264 api_server.go:131] duration metric: took 3.511479158s to wait for apiserver health ...
	I0920 18:18:07.606637   75264 cni.go:84] Creating CNI manager for ""
	I0920 18:18:07.606645   75264 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 18:18:07.608423   75264 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 18:18:02.992412   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:02.992887   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:18:02.992919   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:18:02.992822   76990 retry.go:31] will retry after 2.100326569s: waiting for machine to come up
	I0920 18:18:05.095088   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:05.095546   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:18:05.095569   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:18:05.095507   76990 retry.go:31] will retry after 1.758181729s: waiting for machine to come up
	I0920 18:18:06.855172   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:06.855654   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:18:06.855678   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:18:06.855606   76990 retry.go:31] will retry after 2.478116743s: waiting for machine to come up
	I0920 18:18:05.694350   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:08.191644   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:07.609444   75264 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 18:18:07.620420   75264 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0920 18:18:07.637796   75264 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 18:18:07.652676   75264 system_pods.go:59] 8 kube-system pods found
	I0920 18:18:07.652710   75264 system_pods.go:61] "coredns-7c65d6cfc9-dmdfb" [8e4ffdb3-37e0-4498-bdf5-d6e7dbcad020] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0920 18:18:07.652721   75264 system_pods.go:61] "etcd-default-k8s-diff-port-553719" [e8de0e96-5f3a-4f3f-ae69-c5da7d3c2eb7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0920 18:18:07.652727   75264 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-553719" [5164e26c-7fcd-41dc-865f-429046a9ad61] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0920 18:18:07.652735   75264 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-553719" [b2c00a76-9645-4c63-9812-43e6ed9f1d4f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0920 18:18:07.652741   75264 system_pods.go:61] "kube-proxy-p9crq" [83e0f53d-6960-42c4-904d-ea85ba9160f4] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0920 18:18:07.652746   75264 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-553719" [42155f7b-95bb-467b-acf5-89eb4f80cf76] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0920 18:18:07.652751   75264 system_pods.go:61] "metrics-server-6867b74b74-vtl79" [29e0b6eb-22a9-4e37-97f9-83b48cc38193] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 18:18:07.652760   75264 system_pods.go:61] "storage-provisioner" [6fad2d07-f99e-45ac-9657-bce6d73d7fce] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0920 18:18:07.652769   75264 system_pods.go:74] duration metric: took 14.950839ms to wait for pod list to return data ...
	I0920 18:18:07.652780   75264 node_conditions.go:102] verifying NodePressure condition ...
	I0920 18:18:07.658890   75264 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 18:18:07.658927   75264 node_conditions.go:123] node cpu capacity is 2
	I0920 18:18:07.658941   75264 node_conditions.go:105] duration metric: took 6.15496ms to run NodePressure ...
	I0920 18:18:07.658964   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:18:07.932231   75264 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0920 18:18:07.936474   75264 kubeadm.go:739] kubelet initialised
	I0920 18:18:07.936500   75264 kubeadm.go:740] duration metric: took 4.236071ms waiting for restarted kubelet to initialise ...
	I0920 18:18:07.936507   75264 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:18:07.941918   75264 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-dmdfb" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:07.947609   75264 pod_ready.go:98] node "default-k8s-diff-port-553719" hosting pod "coredns-7c65d6cfc9-dmdfb" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-553719" has status "Ready":"False"
	I0920 18:18:07.947642   75264 pod_ready.go:82] duration metric: took 5.693968ms for pod "coredns-7c65d6cfc9-dmdfb" in "kube-system" namespace to be "Ready" ...
	E0920 18:18:07.947653   75264 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-553719" hosting pod "coredns-7c65d6cfc9-dmdfb" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-553719" has status "Ready":"False"
	I0920 18:18:07.947660   75264 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-553719" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:07.954317   75264 pod_ready.go:98] node "default-k8s-diff-port-553719" hosting pod "etcd-default-k8s-diff-port-553719" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-553719" has status "Ready":"False"
	I0920 18:18:07.954358   75264 pod_ready.go:82] duration metric: took 6.689603ms for pod "etcd-default-k8s-diff-port-553719" in "kube-system" namespace to be "Ready" ...
	E0920 18:18:07.954373   75264 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-553719" hosting pod "etcd-default-k8s-diff-port-553719" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-553719" has status "Ready":"False"
	I0920 18:18:07.954382   75264 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-553719" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:07.966099   75264 pod_ready.go:98] node "default-k8s-diff-port-553719" hosting pod "kube-apiserver-default-k8s-diff-port-553719" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-553719" has status "Ready":"False"
	I0920 18:18:07.966126   75264 pod_ready.go:82] duration metric: took 11.735727ms for pod "kube-apiserver-default-k8s-diff-port-553719" in "kube-system" namespace to be "Ready" ...
	E0920 18:18:07.966141   75264 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-553719" hosting pod "kube-apiserver-default-k8s-diff-port-553719" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-553719" has status "Ready":"False"
	I0920 18:18:07.966154   75264 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-553719" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:08.041768   75264 pod_ready.go:98] node "default-k8s-diff-port-553719" hosting pod "kube-controller-manager-default-k8s-diff-port-553719" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-553719" has status "Ready":"False"
	I0920 18:18:08.041794   75264 pod_ready.go:82] duration metric: took 75.630916ms for pod "kube-controller-manager-default-k8s-diff-port-553719" in "kube-system" namespace to be "Ready" ...
	E0920 18:18:08.041805   75264 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-553719" hosting pod "kube-controller-manager-default-k8s-diff-port-553719" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-553719" has status "Ready":"False"
	I0920 18:18:08.041816   75264 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-p9crq" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:08.440423   75264 pod_ready.go:98] node "default-k8s-diff-port-553719" hosting pod "kube-proxy-p9crq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-553719" has status "Ready":"False"
	I0920 18:18:08.440459   75264 pod_ready.go:82] duration metric: took 398.635469ms for pod "kube-proxy-p9crq" in "kube-system" namespace to be "Ready" ...
	E0920 18:18:08.440471   75264 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-553719" hosting pod "kube-proxy-p9crq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-553719" has status "Ready":"False"
	I0920 18:18:08.440480   75264 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-553719" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:08.841917   75264 pod_ready.go:98] node "default-k8s-diff-port-553719" hosting pod "kube-scheduler-default-k8s-diff-port-553719" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-553719" has status "Ready":"False"
	I0920 18:18:08.841953   75264 pod_ready.go:82] duration metric: took 401.459059ms for pod "kube-scheduler-default-k8s-diff-port-553719" in "kube-system" namespace to be "Ready" ...
	E0920 18:18:08.841968   75264 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-553719" hosting pod "kube-scheduler-default-k8s-diff-port-553719" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-553719" has status "Ready":"False"
	I0920 18:18:08.841977   75264 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:09.241198   75264 pod_ready.go:98] node "default-k8s-diff-port-553719" hosting pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-553719" has status "Ready":"False"
	I0920 18:18:09.241225   75264 pod_ready.go:82] duration metric: took 399.238784ms for pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace to be "Ready" ...
	E0920 18:18:09.241237   75264 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-553719" hosting pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-553719" has status "Ready":"False"
	I0920 18:18:09.241245   75264 pod_ready.go:39] duration metric: took 1.304729447s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:18:09.241259   75264 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 18:18:09.253519   75264 ops.go:34] apiserver oom_adj: -16
	I0920 18:18:09.253546   75264 kubeadm.go:597] duration metric: took 7.96895197s to restartPrimaryControlPlane
	I0920 18:18:09.253558   75264 kubeadm.go:394] duration metric: took 8.024395552s to StartCluster
	I0920 18:18:09.253586   75264 settings.go:142] acquiring lock: {Name:mk2a13c58cdc0faf8cddca5d6716175d45db9bfd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:18:09.253682   75264 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19672-8777/kubeconfig
	I0920 18:18:09.255907   75264 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/kubeconfig: {Name:mkf32a4c736808e023459b2f0e40188618a38db1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:18:09.256208   75264 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.190 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 18:18:09.256322   75264 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0920 18:18:09.256394   75264 config.go:182] Loaded profile config "default-k8s-diff-port-553719": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:18:09.256420   75264 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-553719"
	I0920 18:18:09.256428   75264 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-553719"
	I0920 18:18:09.256440   75264 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-553719"
	I0920 18:18:09.256448   75264 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-553719"
	I0920 18:18:09.256457   75264 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-553719"
	W0920 18:18:09.256468   75264 addons.go:243] addon metrics-server should already be in state true
	I0920 18:18:09.256496   75264 host.go:66] Checking if "default-k8s-diff-port-553719" exists ...
	I0920 18:18:09.256441   75264 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-553719"
	W0920 18:18:09.256569   75264 addons.go:243] addon storage-provisioner should already be in state true
	I0920 18:18:09.256602   75264 host.go:66] Checking if "default-k8s-diff-port-553719" exists ...
	I0920 18:18:09.256814   75264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:18:09.256844   75264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:18:09.256882   75264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:18:09.256926   75264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:18:09.256930   75264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:18:09.256957   75264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:18:09.258738   75264 out.go:177] * Verifying Kubernetes components...
	I0920 18:18:09.259893   75264 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:18:09.272294   75264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45635
	I0920 18:18:09.272357   75264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39993
	I0920 18:18:09.272304   75264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32923
	I0920 18:18:09.272766   75264 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:18:09.272889   75264 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:18:09.272918   75264 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:18:09.273279   75264 main.go:141] libmachine: Using API Version  1
	I0920 18:18:09.273294   75264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:18:09.273416   75264 main.go:141] libmachine: Using API Version  1
	I0920 18:18:09.273438   75264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:18:09.273457   75264 main.go:141] libmachine: Using API Version  1
	I0920 18:18:09.273478   75264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:18:09.273657   75264 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:18:09.273850   75264 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:18:09.273922   75264 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:18:09.274017   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetState
	I0920 18:18:09.274244   75264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:18:09.274287   75264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:18:09.274399   75264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:18:09.274430   75264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:18:09.277535   75264 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-553719"
	W0920 18:18:09.277556   75264 addons.go:243] addon default-storageclass should already be in state true
	I0920 18:18:09.277586   75264 host.go:66] Checking if "default-k8s-diff-port-553719" exists ...
	I0920 18:18:09.277990   75264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:18:09.278041   75264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:18:09.290955   75264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39845
	I0920 18:18:09.291487   75264 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:18:09.292058   75264 main.go:141] libmachine: Using API Version  1
	I0920 18:18:09.292087   75264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:18:09.292409   75264 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:18:09.292607   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetState
	I0920 18:18:09.293351   75264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44575
	I0920 18:18:09.293790   75264 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:18:09.294056   75264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44517
	I0920 18:18:09.294412   75264 main.go:141] libmachine: Using API Version  1
	I0920 18:18:09.294438   75264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:18:09.294540   75264 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:18:09.294854   75264 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:18:09.294902   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .DriverName
	I0920 18:18:09.295362   75264 main.go:141] libmachine: Using API Version  1
	I0920 18:18:09.295387   75264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:18:09.295521   75264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:18:09.295569   75264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:18:09.295783   75264 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:18:09.295993   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetState
	I0920 18:18:09.297231   75264 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0920 18:18:09.298214   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .DriverName
	I0920 18:18:09.298825   75264 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0920 18:18:09.298849   75264 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0920 18:18:09.298870   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHHostname
	I0920 18:18:09.299849   75264 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:18:09.301018   75264 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 18:18:09.301034   75264 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 18:18:09.301048   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHHostname
	I0920 18:18:09.302841   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:18:09.303141   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:18:09.303165   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:18:09.303335   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHPort
	I0920 18:18:09.303491   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHKeyPath
	I0920 18:18:09.303627   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHUsername
	I0920 18:18:09.303772   75264 sshutil.go:53] new ssh client: &{IP:192.168.72.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/default-k8s-diff-port-553719/id_rsa Username:docker}
	I0920 18:18:09.304104   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:18:09.304507   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:18:09.304552   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:18:09.304623   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHPort
	I0920 18:18:09.304786   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHKeyPath
	I0920 18:18:09.304946   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHUsername
	I0920 18:18:09.305085   75264 sshutil.go:53] new ssh client: &{IP:192.168.72.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/default-k8s-diff-port-553719/id_rsa Username:docker}
	I0920 18:18:09.312912   75264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35085
	I0920 18:18:09.313277   75264 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:18:09.313780   75264 main.go:141] libmachine: Using API Version  1
	I0920 18:18:09.313801   75264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:18:09.314355   75264 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:18:09.314525   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetState
	I0920 18:18:09.316088   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .DriverName
	I0920 18:18:09.316415   75264 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 18:18:09.316429   75264 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 18:18:09.316448   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHHostname
	I0920 18:18:09.319116   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:18:09.319484   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:18:09.319512   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:18:09.319664   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHPort
	I0920 18:18:09.319832   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHKeyPath
	I0920 18:18:09.319984   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHUsername
	I0920 18:18:09.320098   75264 sshutil.go:53] new ssh client: &{IP:192.168.72.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/default-k8s-diff-port-553719/id_rsa Username:docker}
	I0920 18:18:09.457570   75264 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:18:09.476315   75264 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-553719" to be "Ready" ...
	I0920 18:18:09.599420   75264 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0920 18:18:09.599442   75264 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0920 18:18:09.600777   75264 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 18:18:09.621891   75264 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 18:18:09.626217   75264 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0920 18:18:09.626251   75264 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0920 18:18:09.675537   75264 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 18:18:09.675569   75264 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0920 18:18:09.723167   75264 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 18:18:10.813006   75264 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.191083617s)
	I0920 18:18:10.813080   75264 main.go:141] libmachine: Making call to close driver server
	I0920 18:18:10.813095   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .Close
	I0920 18:18:10.813403   75264 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:18:10.813452   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | Closing plugin on server side
	I0920 18:18:10.813455   75264 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:18:10.813497   75264 main.go:141] libmachine: Making call to close driver server
	I0920 18:18:10.813518   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .Close
	I0920 18:18:10.813748   75264 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:18:10.813764   75264 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:18:10.813783   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | Closing plugin on server side
	I0920 18:18:10.813975   75264 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.213167075s)
	I0920 18:18:10.814018   75264 main.go:141] libmachine: Making call to close driver server
	I0920 18:18:10.814034   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .Close
	I0920 18:18:10.814323   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | Closing plugin on server side
	I0920 18:18:10.814325   75264 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:18:10.814345   75264 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:18:10.814358   75264 main.go:141] libmachine: Making call to close driver server
	I0920 18:18:10.814368   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .Close
	I0920 18:18:10.814653   75264 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:18:10.814673   75264 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:18:10.821768   75264 main.go:141] libmachine: Making call to close driver server
	I0920 18:18:10.821785   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .Close
	I0920 18:18:10.822016   75264 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:18:10.822034   75264 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:18:10.846062   75264 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.122849108s)
	I0920 18:18:10.846122   75264 main.go:141] libmachine: Making call to close driver server
	I0920 18:18:10.846139   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .Close
	I0920 18:18:10.846441   75264 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:18:10.846469   75264 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:18:10.846469   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | Closing plugin on server side
	I0920 18:18:10.846479   75264 main.go:141] libmachine: Making call to close driver server
	I0920 18:18:10.846488   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .Close
	I0920 18:18:10.846715   75264 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:18:10.846737   75264 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:18:10.846748   75264 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-553719"
	I0920 18:18:10.846749   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | Closing plugin on server side
	I0920 18:18:10.848702   75264 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0920 18:18:10.849990   75264 addons.go:510] duration metric: took 1.593667614s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0920 18:18:11.480490   75264 node_ready.go:53] node "default-k8s-diff-port-553719" has status "Ready":"False"
	I0920 18:18:09.334928   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:09.335367   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:18:09.335402   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:18:09.335314   76990 retry.go:31] will retry after 4.194120768s: waiting for machine to come up
	I0920 18:18:10.192078   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:12.192186   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:14.778926   74753 start.go:364] duration metric: took 54.584395971s to acquireMachinesLock for "no-preload-956403"
	I0920 18:18:14.778979   74753 start.go:96] Skipping create...Using existing machine configuration
	I0920 18:18:14.778987   74753 fix.go:54] fixHost starting: 
	I0920 18:18:14.779392   74753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:18:14.779430   74753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:18:14.796004   74753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45761
	I0920 18:18:14.796509   74753 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:18:14.797006   74753 main.go:141] libmachine: Using API Version  1
	I0920 18:18:14.797028   74753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:18:14.797330   74753 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:18:14.797499   74753 main.go:141] libmachine: (no-preload-956403) Calling .DriverName
	I0920 18:18:14.797650   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetState
	I0920 18:18:14.799376   74753 fix.go:112] recreateIfNeeded on no-preload-956403: state=Stopped err=<nil>
	I0920 18:18:14.799400   74753 main.go:141] libmachine: (no-preload-956403) Calling .DriverName
	W0920 18:18:14.799564   74753 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 18:18:14.802451   74753 out.go:177] * Restarting existing kvm2 VM for "no-preload-956403" ...
	I0920 18:18:13.533702   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:13.534281   75577 main.go:141] libmachine: (old-k8s-version-744025) Found IP for machine: 192.168.39.207
	I0920 18:18:13.534309   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has current primary IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:13.534314   75577 main.go:141] libmachine: (old-k8s-version-744025) Reserving static IP address...
	I0920 18:18:13.534725   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "old-k8s-version-744025", mac: "52:54:00:e5:57:41", ip: "192.168.39.207"} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:13.534761   75577 main.go:141] libmachine: (old-k8s-version-744025) Reserved static IP address: 192.168.39.207
	I0920 18:18:13.534783   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | skip adding static IP to network mk-old-k8s-version-744025 - found existing host DHCP lease matching {name: "old-k8s-version-744025", mac: "52:54:00:e5:57:41", ip: "192.168.39.207"}
	I0920 18:18:13.534800   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | Getting to WaitForSSH function...
	I0920 18:18:13.534816   75577 main.go:141] libmachine: (old-k8s-version-744025) Waiting for SSH to be available...
	I0920 18:18:13.536879   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:13.537271   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:13.537301   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:13.537391   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | Using SSH client type: external
	I0920 18:18:13.537439   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | Using SSH private key: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/old-k8s-version-744025/id_rsa (-rw-------)
	I0920 18:18:13.537476   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.207 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19672-8777/.minikube/machines/old-k8s-version-744025/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 18:18:13.537486   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | About to run SSH command:
	I0920 18:18:13.537498   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | exit 0
	I0920 18:18:13.662214   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | SSH cmd err, output: <nil>: 
	I0920 18:18:13.662565   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetConfigRaw
	I0920 18:18:13.663269   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetIP
	I0920 18:18:13.666068   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:13.666530   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:13.666561   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:13.666899   75577 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/old-k8s-version-744025/config.json ...
	I0920 18:18:13.667111   75577 machine.go:93] provisionDockerMachine start ...
	I0920 18:18:13.667129   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .DriverName
	I0920 18:18:13.667331   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHHostname
	I0920 18:18:13.669347   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:13.669717   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:13.669743   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:13.669908   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHPort
	I0920 18:18:13.670167   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:18:13.670334   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:18:13.670453   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHUsername
	I0920 18:18:13.670583   75577 main.go:141] libmachine: Using SSH client type: native
	I0920 18:18:13.670759   75577 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.207 22 <nil> <nil>}
	I0920 18:18:13.670770   75577 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 18:18:13.774059   75577 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0920 18:18:13.774093   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetMachineName
	I0920 18:18:13.774406   75577 buildroot.go:166] provisioning hostname "old-k8s-version-744025"
	I0920 18:18:13.774434   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetMachineName
	I0920 18:18:13.774618   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHHostname
	I0920 18:18:13.777175   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:13.777482   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:13.777513   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:13.777633   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHPort
	I0920 18:18:13.777803   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:18:13.777966   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:18:13.778082   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHUsername
	I0920 18:18:13.778235   75577 main.go:141] libmachine: Using SSH client type: native
	I0920 18:18:13.778404   75577 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.207 22 <nil> <nil>}
	I0920 18:18:13.778417   75577 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-744025 && echo "old-k8s-version-744025" | sudo tee /etc/hostname
	I0920 18:18:13.900180   75577 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-744025
	
	I0920 18:18:13.900224   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHHostname
	I0920 18:18:13.902958   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:13.903309   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:13.903340   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:13.903543   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHPort
	I0920 18:18:13.903762   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:18:13.903931   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:18:13.904051   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHUsername
	I0920 18:18:13.904214   75577 main.go:141] libmachine: Using SSH client type: native
	I0920 18:18:13.904393   75577 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.207 22 <nil> <nil>}
	I0920 18:18:13.904409   75577 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-744025' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-744025/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-744025' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 18:18:14.023748   75577 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 18:18:14.023781   75577 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19672-8777/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-8777/.minikube}
	I0920 18:18:14.023834   75577 buildroot.go:174] setting up certificates
	I0920 18:18:14.023851   75577 provision.go:84] configureAuth start
	I0920 18:18:14.023866   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetMachineName
	I0920 18:18:14.024154   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetIP
	I0920 18:18:14.027240   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.027640   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:14.027778   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.027867   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHHostname
	I0920 18:18:14.030383   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.030741   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:14.030765   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.030915   75577 provision.go:143] copyHostCerts
	I0920 18:18:14.030979   75577 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem, removing ...
	I0920 18:18:14.030999   75577 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem
	I0920 18:18:14.031072   75577 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem (1082 bytes)
	I0920 18:18:14.031188   75577 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem, removing ...
	I0920 18:18:14.031205   75577 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem
	I0920 18:18:14.031240   75577 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem (1123 bytes)
	I0920 18:18:14.031340   75577 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem, removing ...
	I0920 18:18:14.031351   75577 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem
	I0920 18:18:14.031378   75577 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem (1675 bytes)
	I0920 18:18:14.031455   75577 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-744025 san=[127.0.0.1 192.168.39.207 localhost minikube old-k8s-version-744025]
	I0920 18:18:14.140775   75577 provision.go:177] copyRemoteCerts
	I0920 18:18:14.140847   75577 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 18:18:14.140883   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHHostname
	I0920 18:18:14.143599   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.144016   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:14.144062   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.144199   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHPort
	I0920 18:18:14.144377   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:18:14.144526   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHUsername
	I0920 18:18:14.144656   75577 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/old-k8s-version-744025/id_rsa Username:docker}
	I0920 18:18:14.228286   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 18:18:14.257293   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0920 18:18:14.281335   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0920 18:18:14.305736   75577 provision.go:87] duration metric: took 281.853458ms to configureAuth
	I0920 18:18:14.305762   75577 buildroot.go:189] setting minikube options for container-runtime
	I0920 18:18:14.305983   75577 config.go:182] Loaded profile config "old-k8s-version-744025": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0920 18:18:14.306076   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHHostname
	I0920 18:18:14.308833   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.309220   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:14.309244   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.309535   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHPort
	I0920 18:18:14.309779   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:18:14.309974   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:18:14.310124   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHUsername
	I0920 18:18:14.310316   75577 main.go:141] libmachine: Using SSH client type: native
	I0920 18:18:14.310535   75577 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.207 22 <nil> <nil>}
	I0920 18:18:14.310551   75577 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 18:18:14.536416   75577 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 18:18:14.536442   75577 machine.go:96] duration metric: took 869.318772ms to provisionDockerMachine
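The SSH command a few lines above drops a small override into /etc/sysconfig so CRI-O treats the in-cluster service CIDR as an insecure registry. A minimal by-hand equivalent, sketched using only the path, variable and value visible in the log:

    sudo mkdir -p /etc/sysconfig
    printf '%s\n' "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" | sudo tee /etc/sysconfig/crio.minikube
    sudo systemctl restart crio              # the log restarts CRI-O in the same command
    cat /etc/sysconfig/crio.minikube         # verify the written option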
	I0920 18:18:14.536456   75577 start.go:293] postStartSetup for "old-k8s-version-744025" (driver="kvm2")
	I0920 18:18:14.536468   75577 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 18:18:14.536510   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .DriverName
	I0920 18:18:14.536803   75577 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 18:18:14.536831   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHHostname
	I0920 18:18:14.539503   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.539923   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:14.539951   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.540126   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHPort
	I0920 18:18:14.540303   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:18:14.540463   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHUsername
	I0920 18:18:14.540649   75577 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/old-k8s-version-744025/id_rsa Username:docker}
	I0920 18:18:14.625866   75577 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 18:18:14.630527   75577 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 18:18:14.630551   75577 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8777/.minikube/addons for local assets ...
	I0920 18:18:14.630637   75577 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8777/.minikube/files for local assets ...
	I0920 18:18:14.630727   75577 filesync.go:149] local asset: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem -> 159732.pem in /etc/ssl/certs
	I0920 18:18:14.630840   75577 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 18:18:14.640965   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem --> /etc/ssl/certs/159732.pem (1708 bytes)
	I0920 18:18:14.668227   75577 start.go:296] duration metric: took 131.756559ms for postStartSetup
	I0920 18:18:14.668268   75577 fix.go:56] duration metric: took 19.581104117s for fixHost
	I0920 18:18:14.668295   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHHostname
	I0920 18:18:14.671138   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.671520   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:14.671549   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.671777   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHPort
	I0920 18:18:14.671981   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:18:14.672141   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:18:14.672280   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHUsername
	I0920 18:18:14.672436   75577 main.go:141] libmachine: Using SSH client type: native
	I0920 18:18:14.672606   75577 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.207 22 <nil> <nil>}
	I0920 18:18:14.672616   75577 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 18:18:14.778752   75577 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726856294.753230182
	
	I0920 18:18:14.778785   75577 fix.go:216] guest clock: 1726856294.753230182
	I0920 18:18:14.778796   75577 fix.go:229] Guest: 2024-09-20 18:18:14.753230182 +0000 UTC Remote: 2024-09-20 18:18:14.668273351 +0000 UTC m=+241.967312055 (delta=84.956831ms)
	I0920 18:18:14.778827   75577 fix.go:200] guest clock delta is within tolerance: 84.956831ms
	I0920 18:18:14.778836   75577 start.go:83] releasing machines lock for "old-k8s-version-744025", held for 19.691716285s
	I0920 18:18:14.778874   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .DriverName
	I0920 18:18:14.779135   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetIP
	I0920 18:18:14.781932   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.782357   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:14.782386   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.782572   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .DriverName
	I0920 18:18:14.783124   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .DriverName
	I0920 18:18:14.783328   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .DriverName
	I0920 18:18:14.783401   75577 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 18:18:14.783465   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHHostname
	I0920 18:18:14.783519   75577 ssh_runner.go:195] Run: cat /version.json
	I0920 18:18:14.783552   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHHostname
	I0920 18:18:14.786645   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.786817   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.786994   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:14.787023   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.787207   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:14.787259   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.787264   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHPort
	I0920 18:18:14.787404   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHPort
	I0920 18:18:14.787469   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:18:14.787561   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:18:14.787612   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHUsername
	I0920 18:18:14.787711   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHUsername
	I0920 18:18:14.787845   75577 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/old-k8s-version-744025/id_rsa Username:docker}
	I0920 18:18:14.787845   75577 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/old-k8s-version-744025/id_rsa Username:docker}
	I0920 18:18:14.872183   75577 ssh_runner.go:195] Run: systemctl --version
	I0920 18:18:14.913275   75577 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 18:18:15.071299   75577 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 18:18:15.078725   75577 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 18:18:15.078806   75577 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 18:18:15.101497   75577 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 18:18:15.101522   75577 start.go:495] detecting cgroup driver to use...
	I0920 18:18:15.101579   75577 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 18:18:15.120626   75577 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 18:18:15.137297   75577 docker.go:217] disabling cri-docker service (if available) ...
	I0920 18:18:15.137401   75577 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 18:18:15.152152   75577 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 18:18:15.167359   75577 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 18:18:15.288763   75577 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 18:18:15.475429   75577 docker.go:233] disabling docker service ...
	I0920 18:18:15.475512   75577 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 18:18:15.493331   75577 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 18:18:15.510326   75577 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 18:18:15.629119   75577 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 18:18:15.758923   75577 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 18:18:15.776778   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 18:18:15.798980   75577 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0920 18:18:15.799042   75577 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:18:15.810264   75577 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 18:18:15.810322   75577 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:18:15.821060   75577 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:18:15.832685   75577 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:18:15.843026   75577 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 18:18:15.853996   75577 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 18:18:15.864126   75577 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 18:18:15.864183   75577 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 18:18:15.878216   75577 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 18:18:15.889820   75577 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:18:16.015555   75577 ssh_runner.go:195] Run: sudo systemctl restart crio
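The sed edits and kernel preparation in the block above reduce to a handful of settings. A condensed sketch of their effect, limited to the keys and values actually shown in the log (the rest of 02-crio.conf is not visible here and is not reproduced):

    # /etc/crio/crio.conf.d/02-crio.conf, only the keys touched above
    pause_image = "registry.k8s.io/pause:3.2"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"

    # kernel prerequisites applied above
    sudo modprobe br_netfilter                        # bridge-nf-call-iptables was missing until this
    echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
    sudo systemctl daemon-reload && sudo systemctl restart crio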
	I0920 18:18:16.118797   75577 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 18:18:16.118880   75577 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 18:18:16.124430   75577 start.go:563] Will wait 60s for crictl version
	I0920 18:18:16.124485   75577 ssh_runner.go:195] Run: which crictl
	I0920 18:18:16.128632   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 18:18:16.165596   75577 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 18:18:16.165707   75577 ssh_runner.go:195] Run: crio --version
	I0920 18:18:16.198772   75577 ssh_runner.go:195] Run: crio --version
	I0920 18:18:16.238511   75577 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0920 18:18:13.980331   75264 node_ready.go:53] node "default-k8s-diff-port-553719" has status "Ready":"False"
	I0920 18:18:15.981435   75264 node_ready.go:53] node "default-k8s-diff-port-553719" has status "Ready":"False"
	I0920 18:18:16.982412   75264 node_ready.go:49] node "default-k8s-diff-port-553719" has status "Ready":"True"
	I0920 18:18:16.982441   75264 node_ready.go:38] duration metric: took 7.506086526s for node "default-k8s-diff-port-553719" to be "Ready" ...
	I0920 18:18:16.982457   75264 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:18:16.989870   75264 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-dmdfb" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:14.803646   74753 main.go:141] libmachine: (no-preload-956403) Calling .Start
	I0920 18:18:14.803856   74753 main.go:141] libmachine: (no-preload-956403) Ensuring networks are active...
	I0920 18:18:14.804594   74753 main.go:141] libmachine: (no-preload-956403) Ensuring network default is active
	I0920 18:18:14.804920   74753 main.go:141] libmachine: (no-preload-956403) Ensuring network mk-no-preload-956403 is active
	I0920 18:18:14.805341   74753 main.go:141] libmachine: (no-preload-956403) Getting domain xml...
	I0920 18:18:14.806068   74753 main.go:141] libmachine: (no-preload-956403) Creating domain...
	I0920 18:18:16.196663   74753 main.go:141] libmachine: (no-preload-956403) Waiting to get IP...
	I0920 18:18:16.197381   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:16.197762   74753 main.go:141] libmachine: (no-preload-956403) DBG | unable to find current IP address of domain no-preload-956403 in network mk-no-preload-956403
	I0920 18:18:16.197885   74753 main.go:141] libmachine: (no-preload-956403) DBG | I0920 18:18:16.197764   77223 retry.go:31] will retry after 214.087819ms: waiting for machine to come up
	I0920 18:18:16.413365   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:16.413951   74753 main.go:141] libmachine: (no-preload-956403) DBG | unable to find current IP address of domain no-preload-956403 in network mk-no-preload-956403
	I0920 18:18:16.413977   74753 main.go:141] libmachine: (no-preload-956403) DBG | I0920 18:18:16.413916   77223 retry.go:31] will retry after 249.35647ms: waiting for machine to come up
	I0920 18:18:16.665587   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:16.666168   74753 main.go:141] libmachine: (no-preload-956403) DBG | unable to find current IP address of domain no-preload-956403 in network mk-no-preload-956403
	I0920 18:18:16.666203   74753 main.go:141] libmachine: (no-preload-956403) DBG | I0920 18:18:16.666132   77223 retry.go:31] will retry after 374.598012ms: waiting for machine to come up
	I0920 18:18:17.042911   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:17.043594   74753 main.go:141] libmachine: (no-preload-956403) DBG | unable to find current IP address of domain no-preload-956403 in network mk-no-preload-956403
	I0920 18:18:17.043618   74753 main.go:141] libmachine: (no-preload-956403) DBG | I0920 18:18:17.043550   77223 retry.go:31] will retry after 536.252353ms: waiting for machine to come up
	I0920 18:18:17.581141   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:17.581582   74753 main.go:141] libmachine: (no-preload-956403) DBG | unable to find current IP address of domain no-preload-956403 in network mk-no-preload-956403
	I0920 18:18:17.581616   74753 main.go:141] libmachine: (no-preload-956403) DBG | I0920 18:18:17.581528   77223 retry.go:31] will retry after 459.241867ms: waiting for machine to come up
	I0920 18:18:16.239946   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetIP
	I0920 18:18:16.242727   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:16.243147   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:16.243184   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:16.243561   75577 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0920 18:18:16.247928   75577 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 18:18:16.262165   75577 kubeadm.go:883] updating cluster {Name:old-k8s-version-744025 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-744025 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.207 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 18:18:16.262310   75577 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0920 18:18:16.262358   75577 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:18:16.313771   75577 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0920 18:18:16.313854   75577 ssh_runner.go:195] Run: which lz4
	I0920 18:18:16.318361   75577 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 18:18:16.322529   75577 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 18:18:16.322570   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0920 18:18:14.192319   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:16.194161   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:18.699498   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:18.999985   75264 pod_ready.go:103] pod "coredns-7c65d6cfc9-dmdfb" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:21.497825   75264 pod_ready.go:103] pod "coredns-7c65d6cfc9-dmdfb" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:18.042075   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:18.042566   74753 main.go:141] libmachine: (no-preload-956403) DBG | unable to find current IP address of domain no-preload-956403 in network mk-no-preload-956403
	I0920 18:18:18.042603   74753 main.go:141] libmachine: (no-preload-956403) DBG | I0920 18:18:18.042533   77223 retry.go:31] will retry after 833.585895ms: waiting for machine to come up
	I0920 18:18:18.877534   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:18.878023   74753 main.go:141] libmachine: (no-preload-956403) DBG | unable to find current IP address of domain no-preload-956403 in network mk-no-preload-956403
	I0920 18:18:18.878047   74753 main.go:141] libmachine: (no-preload-956403) DBG | I0920 18:18:18.877988   77223 retry.go:31] will retry after 1.035805905s: waiting for machine to come up
	I0920 18:18:19.915316   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:19.915735   74753 main.go:141] libmachine: (no-preload-956403) DBG | unable to find current IP address of domain no-preload-956403 in network mk-no-preload-956403
	I0920 18:18:19.915859   74753 main.go:141] libmachine: (no-preload-956403) DBG | I0920 18:18:19.915787   77223 retry.go:31] will retry after 978.827371ms: waiting for machine to come up
	I0920 18:18:20.896532   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:20.897185   74753 main.go:141] libmachine: (no-preload-956403) DBG | unable to find current IP address of domain no-preload-956403 in network mk-no-preload-956403
	I0920 18:18:20.897216   74753 main.go:141] libmachine: (no-preload-956403) DBG | I0920 18:18:20.897122   77223 retry.go:31] will retry after 1.808853939s: waiting for machine to come up
	I0920 18:18:17.953626   75577 crio.go:462] duration metric: took 1.635321078s to copy over tarball
	I0920 18:18:17.953717   75577 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0920 18:18:21.049561   75577 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.095809102s)
	I0920 18:18:21.049588   75577 crio.go:469] duration metric: took 3.095926273s to extract the tarball
	I0920 18:18:21.049596   75577 ssh_runner.go:146] rm: /preloaded.tar.lz4
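For reference, the preload handling above is two plain shell steps once the tarball is on the guest: unpack it into /var with extended attributes preserved, then delete it. A sketch using exactly the flags from the log:

    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm /preloaded.tar.lz4    # removed once the images are unpacked, as the rm step above shows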
	I0920 18:18:21.093521   75577 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:18:21.132318   75577 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0920 18:18:21.132361   75577 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0920 18:18:21.132426   75577 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:18:21.132449   75577 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0920 18:18:21.132432   75577 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0920 18:18:21.132551   75577 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 18:18:21.132587   75577 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0920 18:18:21.132708   75577 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0920 18:18:21.132819   75577 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0920 18:18:21.132557   75577 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0920 18:18:21.134217   75577 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0920 18:18:21.134256   75577 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0920 18:18:21.134277   75577 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0920 18:18:21.134313   75577 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0920 18:18:21.134327   75577 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 18:18:21.134341   75577 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0920 18:18:21.134266   75577 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0920 18:18:21.134356   75577 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:18:21.421546   75577 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0920 18:18:21.467311   75577 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0920 18:18:21.467349   75577 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0920 18:18:21.467400   75577 ssh_runner.go:195] Run: which crictl
	I0920 18:18:21.471158   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0920 18:18:21.474543   75577 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0920 18:18:21.480501   75577 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 18:18:21.499826   75577 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0920 18:18:21.499835   75577 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0920 18:18:21.505712   75577 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0920 18:18:21.514377   75577 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0920 18:18:21.514897   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0920 18:18:21.601531   75577 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0920 18:18:21.601582   75577 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0920 18:18:21.601635   75577 ssh_runner.go:195] Run: which crictl
	I0920 18:18:21.660417   75577 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0920 18:18:21.660457   75577 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 18:18:21.660504   75577 ssh_runner.go:195] Run: which crictl
	I0920 18:18:21.660528   75577 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0920 18:18:21.660571   75577 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0920 18:18:21.660625   75577 ssh_runner.go:195] Run: which crictl
	I0920 18:18:21.667538   75577 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0920 18:18:21.667558   75577 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0920 18:18:21.667583   75577 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0920 18:18:21.667595   75577 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0920 18:18:21.667633   75577 ssh_runner.go:195] Run: which crictl
	I0920 18:18:21.667652   75577 ssh_runner.go:195] Run: which crictl
	I0920 18:18:21.676529   75577 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0920 18:18:21.676580   75577 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0920 18:18:21.676602   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0920 18:18:21.676633   75577 ssh_runner.go:195] Run: which crictl
	I0920 18:18:21.676643   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0920 18:18:21.680041   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 18:18:21.681078   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0920 18:18:21.681180   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0920 18:18:21.683409   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0920 18:18:21.804439   75577 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0920 18:18:21.804492   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0920 18:18:21.804530   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 18:18:21.804585   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0920 18:18:21.823434   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0920 18:18:21.823497   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0920 18:18:21.823539   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0920 18:18:21.932568   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 18:18:21.932619   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0920 18:18:21.932639   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0920 18:18:21.958001   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0920 18:18:21.971089   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0920 18:18:21.975643   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0920 18:18:22.100118   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0920 18:18:22.100173   75577 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0920 18:18:22.100197   75577 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0920 18:18:22.100276   75577 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0920 18:18:22.100333   75577 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0920 18:18:22.109895   75577 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0920 18:18:22.137798   75577 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0920 18:18:22.331884   75577 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:18:22.476470   75577 cache_images.go:92] duration metric: took 1.344086626s to LoadCachedImages
	W0920 18:18:22.476575   75577 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0920 18:18:22.476595   75577 kubeadm.go:934] updating node { 192.168.39.207 8443 v1.20.0 crio true true} ...
	I0920 18:18:22.476742   75577 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-744025 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.207
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-744025 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
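The [Unit]/[Service] snippet above is the kubelet drop-in minikube renders for this profile; per the scp further down in this log, it lands at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. To inspect the merged unit on the node one could, for example, run:

    sudo systemctl cat kubelet                                         # kubelet.service plus the 10-kubeadm.conf drop-in
    sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf     # the rendered flags shown above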
	I0920 18:18:22.476833   75577 ssh_runner.go:195] Run: crio config
	I0920 18:18:22.528964   75577 cni.go:84] Creating CNI manager for ""
	I0920 18:18:22.528990   75577 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 18:18:22.528999   75577 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 18:18:22.529016   75577 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.207 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-744025 NodeName:old-k8s-version-744025 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.207"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.207 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0920 18:18:22.529173   75577 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.207
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-744025"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.207
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.207"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 18:18:22.529233   75577 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0920 18:18:22.540849   75577 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 18:18:22.540935   75577 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 18:18:22.551148   75577 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0920 18:18:22.569199   75577 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 18:18:22.585909   75577 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
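The kubeadm.yaml.new copied here is the configuration rendered a few lines above. As a hedged aside, such a config can be exercised without touching the cluster via kubeadm's --config and --dry-run flags, assuming kubeadm sits next to kubelet under /var/lib/minikube/binaries/v1.20.0 (the log only lists that directory; it does not show kubeadm being invoked this way):

    # sketch only, run on the node that received the file
    sudo /var/lib/minikube/binaries/v1.20.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run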
	I0920 18:18:22.604987   75577 ssh_runner.go:195] Run: grep 192.168.39.207	control-plane.minikube.internal$ /etc/hosts
	I0920 18:18:22.609152   75577 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.207	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 18:18:22.628042   75577 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:18:21.191366   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:23.193152   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:23.914443   75264 pod_ready.go:103] pod "coredns-7c65d6cfc9-dmdfb" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:25.002152   75264 pod_ready.go:93] pod "coredns-7c65d6cfc9-dmdfb" in "kube-system" namespace has status "Ready":"True"
	I0920 18:18:25.002185   75264 pod_ready.go:82] duration metric: took 8.012280973s for pod "coredns-7c65d6cfc9-dmdfb" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:25.002198   75264 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-553719" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:25.010541   75264 pod_ready.go:93] pod "etcd-default-k8s-diff-port-553719" in "kube-system" namespace has status "Ready":"True"
	I0920 18:18:25.010570   75264 pod_ready.go:82] duration metric: took 8.362504ms for pod "etcd-default-k8s-diff-port-553719" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:25.010591   75264 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-553719" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:25.023701   75264 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-553719" in "kube-system" namespace has status "Ready":"True"
	I0920 18:18:25.023741   75264 pod_ready.go:82] duration metric: took 13.139423ms for pod "kube-apiserver-default-k8s-diff-port-553719" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:25.023758   75264 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-553719" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:25.031247   75264 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-553719" in "kube-system" namespace has status "Ready":"True"
	I0920 18:18:25.031282   75264 pod_ready.go:82] duration metric: took 7.515474ms for pod "kube-controller-manager-default-k8s-diff-port-553719" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:25.031307   75264 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-p9crq" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:25.119242   75264 pod_ready.go:93] pod "kube-proxy-p9crq" in "kube-system" namespace has status "Ready":"True"
	I0920 18:18:25.119267   75264 pod_ready.go:82] duration metric: took 87.951791ms for pod "kube-proxy-p9crq" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:25.119280   75264 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-553719" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:25.518243   75264 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-553719" in "kube-system" namespace has status "Ready":"True"
	I0920 18:18:25.518267   75264 pod_ready.go:82] duration metric: took 398.979314ms for pod "kube-scheduler-default-k8s-diff-port-553719" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:25.518281   75264 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:22.707778   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:22.708271   74753 main.go:141] libmachine: (no-preload-956403) DBG | unable to find current IP address of domain no-preload-956403 in network mk-no-preload-956403
	I0920 18:18:22.708333   74753 main.go:141] libmachine: (no-preload-956403) DBG | I0920 18:18:22.708161   77223 retry.go:31] will retry after 1.516042611s: waiting for machine to come up
	I0920 18:18:24.225560   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:24.225987   74753 main.go:141] libmachine: (no-preload-956403) DBG | unable to find current IP address of domain no-preload-956403 in network mk-no-preload-956403
	I0920 18:18:24.226041   74753 main.go:141] libmachine: (no-preload-956403) DBG | I0920 18:18:24.225968   77223 retry.go:31] will retry after 2.135874415s: waiting for machine to come up
	I0920 18:18:26.363371   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:26.363861   74753 main.go:141] libmachine: (no-preload-956403) DBG | unable to find current IP address of domain no-preload-956403 in network mk-no-preload-956403
	I0920 18:18:26.363895   74753 main.go:141] libmachine: (no-preload-956403) DBG | I0920 18:18:26.363777   77223 retry.go:31] will retry after 3.29383193s: waiting for machine to come up
	I0920 18:18:22.796651   75577 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:18:22.813789   75577 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/old-k8s-version-744025 for IP: 192.168.39.207
	I0920 18:18:22.813817   75577 certs.go:194] generating shared ca certs ...
	I0920 18:18:22.813848   75577 certs.go:226] acquiring lock for ca certs: {Name:mkc7ef6c737c6bdc3fdd9dcff8f57029c020d8f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:18:22.814045   75577 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key
	I0920 18:18:22.814101   75577 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key
	I0920 18:18:22.814118   75577 certs.go:256] generating profile certs ...
	I0920 18:18:22.814290   75577 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/old-k8s-version-744025/client.key
	I0920 18:18:22.814383   75577 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/old-k8s-version-744025/apiserver.key.3105b99d
	I0920 18:18:22.814445   75577 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/old-k8s-version-744025/proxy-client.key
	I0920 18:18:22.814626   75577 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973.pem (1338 bytes)
	W0920 18:18:22.814660   75577 certs.go:480] ignoring /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973_empty.pem, impossibly tiny 0 bytes
	I0920 18:18:22.814666   75577 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 18:18:22.814691   75577 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem (1082 bytes)
	I0920 18:18:22.814717   75577 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem (1123 bytes)
	I0920 18:18:22.814749   75577 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem (1675 bytes)
	I0920 18:18:22.814813   75577 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem (1708 bytes)
	I0920 18:18:22.815736   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 18:18:22.863832   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 18:18:22.921747   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 18:18:22.959556   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0920 18:18:22.992097   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/old-k8s-version-744025/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0920 18:18:23.027565   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/old-k8s-version-744025/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0920 18:18:23.057374   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/old-k8s-version-744025/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 18:18:23.094290   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/old-k8s-version-744025/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 18:18:23.120095   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973.pem --> /usr/share/ca-certificates/15973.pem (1338 bytes)
	I0920 18:18:23.144374   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem --> /usr/share/ca-certificates/159732.pem (1708 bytes)
	I0920 18:18:23.169431   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 18:18:23.195779   75577 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 18:18:23.212820   75577 ssh_runner.go:195] Run: openssl version
	I0920 18:18:23.218876   75577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 18:18:23.229684   75577 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:18:23.234533   75577 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:18:23.234603   75577 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:18:23.240460   75577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 18:18:23.251940   75577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15973.pem && ln -fs /usr/share/ca-certificates/15973.pem /etc/ssl/certs/15973.pem"
	I0920 18:18:23.263308   75577 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15973.pem
	I0920 18:18:23.268059   75577 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 17:04 /usr/share/ca-certificates/15973.pem
	I0920 18:18:23.268128   75577 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15973.pem
	I0920 18:18:23.274199   75577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15973.pem /etc/ssl/certs/51391683.0"
	I0920 18:18:23.286362   75577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/159732.pem && ln -fs /usr/share/ca-certificates/159732.pem /etc/ssl/certs/159732.pem"
	I0920 18:18:23.303962   75577 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/159732.pem
	I0920 18:18:23.310907   75577 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 17:04 /usr/share/ca-certificates/159732.pem
	I0920 18:18:23.310981   75577 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/159732.pem
	I0920 18:18:23.317881   75577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/159732.pem /etc/ssl/certs/3ec20f2e.0"
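The openssl/ln sequence above follows the standard OpenSSL hashed-symlink layout: each CA certificate is linked into /etc/ssl/certs both under its own name and under <subject-hash>.0, which is where the hashes b5213941, 51391683 and 3ec20f2e in the log come from. A generic sketch of the same pattern for one of the certificates:

    CERT=/usr/share/ca-certificates/minikubeCA.pem
    sudo ln -fs "$CERT" /etc/ssl/certs/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")                      # e.g. b5213941 for this CA, per the log
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"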
	I0920 18:18:23.329247   75577 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 18:18:23.334223   75577 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 18:18:23.340565   75577 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 18:18:23.346929   75577 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 18:18:23.353681   75577 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 18:18:23.359699   75577 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 18:18:23.365749   75577 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
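The run above copies the profile and CA certificates onto the node, creates the /etc/ssl/certs/<subject-hash>.0 symlinks that OpenSSL uses for CA lookup, and then checks that each control-plane certificate stays valid for at least another 24 hours. As a rough sketch (only the certificate path is taken from the log; the rest is illustrative, not minikube's code), the two openssl invocations amount to:

    # Derive the subject-hash name used for the /etc/ssl/certs/<hash>.0 symlink
    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"
    # Exit non-zero if the certificate expires within the next 86400 seconds (24h)
    openssl x509 -noout -in "$CERT" -checkend 86400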
	I0920 18:18:23.371899   75577 kubeadm.go:392] StartCluster: {Name:old-k8s-version-744025 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-744025 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.207 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:18:23.371981   75577 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 18:18:23.372027   75577 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 18:18:23.419904   75577 cri.go:89] found id: ""
	I0920 18:18:23.419982   75577 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 18:18:23.431761   75577 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0920 18:18:23.431782   75577 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0920 18:18:23.431833   75577 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0920 18:18:23.444358   75577 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0920 18:18:23.445545   75577 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-744025" does not appear in /home/jenkins/minikube-integration/19672-8777/kubeconfig
	I0920 18:18:23.446531   75577 kubeconfig.go:62] /home/jenkins/minikube-integration/19672-8777/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-744025" cluster setting kubeconfig missing "old-k8s-version-744025" context setting]
	I0920 18:18:23.447808   75577 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/kubeconfig: {Name:mkf32a4c736808e023459b2f0e40188618a38db1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:18:23.568927   75577 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0920 18:18:23.579991   75577 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.207
	I0920 18:18:23.580025   75577 kubeadm.go:1160] stopping kube-system containers ...
	I0920 18:18:23.580038   75577 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0920 18:18:23.580097   75577 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 18:18:23.625568   75577 cri.go:89] found id: ""
	I0920 18:18:23.625648   75577 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0920 18:18:23.643938   75577 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 18:18:23.654375   75577 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 18:18:23.654398   75577 kubeadm.go:157] found existing configuration files:
	
	I0920 18:18:23.654453   75577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 18:18:23.664335   75577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 18:18:23.664409   75577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 18:18:23.674996   75577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 18:18:23.685310   75577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 18:18:23.685401   75577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 18:18:23.696241   75577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 18:18:23.706386   75577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 18:18:23.706465   75577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 18:18:23.716491   75577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 18:18:23.726566   75577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 18:18:23.726626   75577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 18:18:23.738576   75577 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 18:18:23.749510   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:18:23.877503   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:18:24.789322   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:18:25.054969   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:18:25.152117   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
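Instead of a full kubeadm init, the restart path above replays the individual init phases against the generated config. Condensed into a sketch (binary path, Kubernetes version, and config path as they appear in the log; the loop itself is illustrative, not minikube's implementation):

    CFG=/var/tmp/minikube/kubeadm.yaml
    KUBE_BIN=/var/lib/minikube/binaries/v1.20.0
    # $phase is deliberately left unquoted so "certs all" expands to two arguments
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
        sudo env PATH="$KUBE_BIN:$PATH" kubeadm init phase $phase --config "$CFG"
    done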
	I0920 18:18:25.249140   75577 api_server.go:52] waiting for apiserver process to appear ...
	I0920 18:18:25.249245   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:25.749427   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:26.249360   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:26.749895   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:27.249636   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:25.194678   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:27.693680   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:27.524378   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:29.524998   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:32.025310   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:29.661278   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:29.661743   74753 main.go:141] libmachine: (no-preload-956403) DBG | unable to find current IP address of domain no-preload-956403 in network mk-no-preload-956403
	I0920 18:18:29.661762   74753 main.go:141] libmachine: (no-preload-956403) DBG | I0920 18:18:29.661714   77223 retry.go:31] will retry after 3.154777794s: waiting for machine to come up
	I0920 18:18:27.749629   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:28.250139   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:28.749691   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:29.249709   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:29.749550   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:30.250186   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:30.749562   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:31.249622   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:31.749925   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:32.250324   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:30.191419   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:32.191873   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:32.820331   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:32.820870   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has current primary IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:32.820895   74753 main.go:141] libmachine: (no-preload-956403) Found IP for machine: 192.168.50.47
	I0920 18:18:32.820908   74753 main.go:141] libmachine: (no-preload-956403) Reserving static IP address...
	I0920 18:18:32.821313   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "no-preload-956403", mac: "52:54:00:b6:13:30", ip: "192.168.50.47"} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:32.821339   74753 main.go:141] libmachine: (no-preload-956403) Reserved static IP address: 192.168.50.47
	I0920 18:18:32.821355   74753 main.go:141] libmachine: (no-preload-956403) DBG | skip adding static IP to network mk-no-preload-956403 - found existing host DHCP lease matching {name: "no-preload-956403", mac: "52:54:00:b6:13:30", ip: "192.168.50.47"}
	I0920 18:18:32.821372   74753 main.go:141] libmachine: (no-preload-956403) DBG | Getting to WaitForSSH function...
	I0920 18:18:32.821389   74753 main.go:141] libmachine: (no-preload-956403) Waiting for SSH to be available...
	I0920 18:18:32.823590   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:32.823894   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:32.823928   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:32.824019   74753 main.go:141] libmachine: (no-preload-956403) DBG | Using SSH client type: external
	I0920 18:18:32.824046   74753 main.go:141] libmachine: (no-preload-956403) DBG | Using SSH private key: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/no-preload-956403/id_rsa (-rw-------)
	I0920 18:18:32.824077   74753 main.go:141] libmachine: (no-preload-956403) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.47 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19672-8777/.minikube/machines/no-preload-956403/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 18:18:32.824090   74753 main.go:141] libmachine: (no-preload-956403) DBG | About to run SSH command:
	I0920 18:18:32.824102   74753 main.go:141] libmachine: (no-preload-956403) DBG | exit 0
	I0920 18:18:32.950122   74753 main.go:141] libmachine: (no-preload-956403) DBG | SSH cmd err, output: <nil>: 
	I0920 18:18:32.950537   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetConfigRaw
	I0920 18:18:32.951187   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetIP
	I0920 18:18:32.953731   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:32.954073   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:32.954102   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:32.954365   74753 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/no-preload-956403/config.json ...
	I0920 18:18:32.954603   74753 machine.go:93] provisionDockerMachine start ...
	I0920 18:18:32.954621   74753 main.go:141] libmachine: (no-preload-956403) Calling .DriverName
	I0920 18:18:32.954814   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHHostname
	I0920 18:18:32.957049   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:32.957379   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:32.957425   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:32.957538   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHPort
	I0920 18:18:32.957717   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHKeyPath
	I0920 18:18:32.957920   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHKeyPath
	I0920 18:18:32.958094   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHUsername
	I0920 18:18:32.958282   74753 main.go:141] libmachine: Using SSH client type: native
	I0920 18:18:32.958494   74753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.47 22 <nil> <nil>}
	I0920 18:18:32.958506   74753 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 18:18:33.058187   74753 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0920 18:18:33.058222   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetMachineName
	I0920 18:18:33.058477   74753 buildroot.go:166] provisioning hostname "no-preload-956403"
	I0920 18:18:33.058508   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetMachineName
	I0920 18:18:33.058668   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHHostname
	I0920 18:18:33.061310   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.061616   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:33.061649   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.061783   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHPort
	I0920 18:18:33.061961   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHKeyPath
	I0920 18:18:33.062089   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHKeyPath
	I0920 18:18:33.062197   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHUsername
	I0920 18:18:33.062340   74753 main.go:141] libmachine: Using SSH client type: native
	I0920 18:18:33.062536   74753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.47 22 <nil> <nil>}
	I0920 18:18:33.062553   74753 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-956403 && echo "no-preload-956403" | sudo tee /etc/hostname
	I0920 18:18:33.175825   74753 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-956403
	
	I0920 18:18:33.175860   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHHostname
	I0920 18:18:33.178483   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.178749   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:33.178769   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.178904   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHPort
	I0920 18:18:33.179077   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHKeyPath
	I0920 18:18:33.179226   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHKeyPath
	I0920 18:18:33.179366   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHUsername
	I0920 18:18:33.179545   74753 main.go:141] libmachine: Using SSH client type: native
	I0920 18:18:33.179710   74753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.47 22 <nil> <nil>}
	I0920 18:18:33.179726   74753 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-956403' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-956403/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-956403' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 18:18:33.290675   74753 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 18:18:33.290701   74753 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19672-8777/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-8777/.minikube}
	I0920 18:18:33.290722   74753 buildroot.go:174] setting up certificates
	I0920 18:18:33.290735   74753 provision.go:84] configureAuth start
	I0920 18:18:33.290747   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetMachineName
	I0920 18:18:33.291015   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetIP
	I0920 18:18:33.293810   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.294244   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:33.294282   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.294337   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHHostname
	I0920 18:18:33.296376   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.296749   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:33.296776   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.296930   74753 provision.go:143] copyHostCerts
	I0920 18:18:33.296985   74753 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem, removing ...
	I0920 18:18:33.296995   74753 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem
	I0920 18:18:33.297048   74753 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem (1082 bytes)
	I0920 18:18:33.297166   74753 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem, removing ...
	I0920 18:18:33.297177   74753 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem
	I0920 18:18:33.297211   74753 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem (1123 bytes)
	I0920 18:18:33.297287   74753 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem, removing ...
	I0920 18:18:33.297296   74753 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem
	I0920 18:18:33.297320   74753 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem (1675 bytes)
	I0920 18:18:33.297387   74753 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem org=jenkins.no-preload-956403 san=[127.0.0.1 192.168.50.47 localhost minikube no-preload-956403]
	I0920 18:18:33.366768   74753 provision.go:177] copyRemoteCerts
	I0920 18:18:33.366830   74753 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 18:18:33.366852   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHHostname
	I0920 18:18:33.369441   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.369755   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:33.369787   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.369958   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHPort
	I0920 18:18:33.370127   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHKeyPath
	I0920 18:18:33.370293   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHUsername
	I0920 18:18:33.370490   74753 sshutil.go:53] new ssh client: &{IP:192.168.50.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/no-preload-956403/id_rsa Username:docker}
	I0920 18:18:33.452070   74753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0920 18:18:33.477564   74753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0920 18:18:33.501002   74753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 18:18:33.526725   74753 provision.go:87] duration metric: took 235.975552ms to configureAuth
	I0920 18:18:33.526755   74753 buildroot.go:189] setting minikube options for container-runtime
	I0920 18:18:33.526943   74753 config.go:182] Loaded profile config "no-preload-956403": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:18:33.527011   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHHostname
	I0920 18:18:33.529870   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.530338   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:33.530371   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.530485   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHPort
	I0920 18:18:33.530708   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHKeyPath
	I0920 18:18:33.530897   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHKeyPath
	I0920 18:18:33.531057   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHUsername
	I0920 18:18:33.531276   74753 main.go:141] libmachine: Using SSH client type: native
	I0920 18:18:33.531497   74753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.47 22 <nil> <nil>}
	I0920 18:18:33.531519   74753 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 18:18:33.758711   74753 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 18:18:33.758739   74753 machine.go:96] duration metric: took 804.122904ms to provisionDockerMachine
	I0920 18:18:33.758753   74753 start.go:293] postStartSetup for "no-preload-956403" (driver="kvm2")
	I0920 18:18:33.758771   74753 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 18:18:33.758796   74753 main.go:141] libmachine: (no-preload-956403) Calling .DriverName
	I0920 18:18:33.759207   74753 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 18:18:33.759262   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHHostname
	I0920 18:18:33.762524   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.762991   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:33.763027   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.763207   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHPort
	I0920 18:18:33.763471   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHKeyPath
	I0920 18:18:33.763643   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHUsername
	I0920 18:18:33.763861   74753 sshutil.go:53] new ssh client: &{IP:192.168.50.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/no-preload-956403/id_rsa Username:docker}
	I0920 18:18:33.844910   74753 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 18:18:33.849111   74753 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 18:18:33.849137   74753 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8777/.minikube/addons for local assets ...
	I0920 18:18:33.849205   74753 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8777/.minikube/files for local assets ...
	I0920 18:18:33.849280   74753 filesync.go:149] local asset: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem -> 159732.pem in /etc/ssl/certs
	I0920 18:18:33.849367   74753 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 18:18:33.858757   74753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem --> /etc/ssl/certs/159732.pem (1708 bytes)
	I0920 18:18:33.883430   74753 start.go:296] duration metric: took 124.663757ms for postStartSetup
	I0920 18:18:33.883468   74753 fix.go:56] duration metric: took 19.104481875s for fixHost
	I0920 18:18:33.883488   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHHostname
	I0920 18:18:33.886703   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.887047   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:33.887076   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.887276   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHPort
	I0920 18:18:33.887502   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHKeyPath
	I0920 18:18:33.887685   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHKeyPath
	I0920 18:18:33.887816   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHUsername
	I0920 18:18:33.887962   74753 main.go:141] libmachine: Using SSH client type: native
	I0920 18:18:33.888137   74753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.47 22 <nil> <nil>}
	I0920 18:18:33.888148   74753 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 18:18:33.994635   74753 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726856313.964367484
	
	I0920 18:18:33.994660   74753 fix.go:216] guest clock: 1726856313.964367484
	I0920 18:18:33.994670   74753 fix.go:229] Guest: 2024-09-20 18:18:33.964367484 +0000 UTC Remote: 2024-09-20 18:18:33.883472234 +0000 UTC m=+356.278282007 (delta=80.89525ms)
	I0920 18:18:33.994695   74753 fix.go:200] guest clock delta is within tolerance: 80.89525ms
	I0920 18:18:33.994701   74753 start.go:83] releasing machines lock for "no-preload-956403", held for 19.215743841s
	I0920 18:18:33.994726   74753 main.go:141] libmachine: (no-preload-956403) Calling .DriverName
	I0920 18:18:33.994976   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetIP
	I0920 18:18:33.997685   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.998089   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:33.998114   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.998291   74753 main.go:141] libmachine: (no-preload-956403) Calling .DriverName
	I0920 18:18:33.998765   74753 main.go:141] libmachine: (no-preload-956403) Calling .DriverName
	I0920 18:18:33.998923   74753 main.go:141] libmachine: (no-preload-956403) Calling .DriverName
	I0920 18:18:33.999007   74753 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 18:18:33.999054   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHHostname
	I0920 18:18:33.999142   74753 ssh_runner.go:195] Run: cat /version.json
	I0920 18:18:33.999168   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHHostname
	I0920 18:18:34.001725   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:34.001891   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:34.002197   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:34.002225   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:34.002369   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHPort
	I0920 18:18:34.002424   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:34.002452   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:34.002533   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHKeyPath
	I0920 18:18:34.002625   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHPort
	I0920 18:18:34.002695   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHUsername
	I0920 18:18:34.002744   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHKeyPath
	I0920 18:18:34.002813   74753 sshutil.go:53] new ssh client: &{IP:192.168.50.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/no-preload-956403/id_rsa Username:docker}
	I0920 18:18:34.002888   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHUsername
	I0920 18:18:34.003032   74753 sshutil.go:53] new ssh client: &{IP:192.168.50.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/no-preload-956403/id_rsa Username:docker}
	I0920 18:18:34.079092   74753 ssh_runner.go:195] Run: systemctl --version
	I0920 18:18:34.112066   74753 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 18:18:34.257205   74753 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 18:18:34.265774   74753 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 18:18:34.265871   74753 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 18:18:34.285222   74753 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 18:18:34.285247   74753 start.go:495] detecting cgroup driver to use...
	I0920 18:18:34.285320   74753 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 18:18:34.302192   74753 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 18:18:34.316624   74753 docker.go:217] disabling cri-docker service (if available) ...
	I0920 18:18:34.316695   74753 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 18:18:34.331098   74753 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 18:18:34.345433   74753 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 18:18:34.481886   74753 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 18:18:34.644442   74753 docker.go:233] disabling docker service ...
	I0920 18:18:34.644530   74753 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 18:18:34.658714   74753 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 18:18:34.671506   74753 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 18:18:34.809548   74753 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 18:18:34.957438   74753 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 18:18:34.972102   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 18:18:34.993129   74753 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 18:18:34.993199   74753 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:18:35.004196   74753 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 18:18:35.004273   74753 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:18:35.015258   74753 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:18:35.026240   74753 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:18:35.037658   74753 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 18:18:35.048637   74753 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:18:35.059494   74753 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:18:35.077264   74753 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:18:35.087777   74753 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 18:18:35.097812   74753 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 18:18:35.097947   74753 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 18:18:35.112200   74753 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 18:18:35.123381   74753 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:18:35.262386   74753 ssh_runner.go:195] Run: sudo systemctl restart crio
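The block above repoints CRI-O at the registry.k8s.io/pause:3.10 pause image, switches it to the cgroupfs cgroup manager, allows unprivileged low ports via a default sysctl, loads br_netfilter, enables IPv4 forwarding, and restarts the service. A condensed sketch of the same steps (file path and values as logged; illustrative only, not minikube's exact implementation):

    CONF=/etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' "$CONF"
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
    sudo modprobe br_netfilter                       # provides /proc/sys/net/bridge/bridge-nf-call-iptables
    echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward  # enable IPv4 forwarding
    sudo systemctl daemon-reload && sudo systemctl restart crio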
	I0920 18:18:35.361152   74753 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 18:18:35.361257   74753 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 18:18:35.366321   74753 start.go:563] Will wait 60s for crictl version
	I0920 18:18:35.366390   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:18:35.370379   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 18:18:35.415080   74753 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 18:18:35.415182   74753 ssh_runner.go:195] Run: crio --version
	I0920 18:18:35.446934   74753 ssh_runner.go:195] Run: crio --version
	I0920 18:18:35.478823   74753 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 18:18:34.026138   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:36.525133   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:35.479956   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetIP
	I0920 18:18:35.483428   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:35.483820   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:35.483848   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:35.484079   74753 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0920 18:18:35.488686   74753 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 18:18:35.501184   74753 kubeadm.go:883] updating cluster {Name:no-preload-956403 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-956403 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.47 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 18:18:35.501348   74753 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 18:18:35.501391   74753 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:18:35.535377   74753 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0920 18:18:35.535400   74753 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0920 18:18:35.535466   74753 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 18:18:35.535499   74753 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0920 18:18:35.535510   74753 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0920 18:18:35.535534   74753 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0920 18:18:35.535538   74753 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0920 18:18:35.535513   74753 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0920 18:18:35.535625   74753 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0920 18:18:35.535483   74753 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:18:35.537100   74753 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 18:18:35.537104   74753 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0920 18:18:35.537126   74753 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0920 18:18:35.537091   74753 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0920 18:18:35.537164   74753 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0920 18:18:35.537180   74753 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0920 18:18:35.537191   74753 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0920 18:18:35.537105   74753 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:18:35.770046   74753 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0920 18:18:35.796364   74753 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I0920 18:18:35.800776   74753 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I0920 18:18:35.802291   74753 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I0920 18:18:35.845969   74753 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0920 18:18:35.860263   74753 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I0920 18:18:35.892349   74753 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 18:18:35.919269   74753 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I0920 18:18:35.919323   74753 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I0920 18:18:35.919375   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:18:35.929045   74753 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I0920 18:18:35.929096   74753 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I0920 18:18:35.929143   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:18:35.929181   74753 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I0920 18:18:35.929235   74753 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I0920 18:18:35.929299   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:18:35.968513   74753 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0920 18:18:35.968557   74753 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0920 18:18:35.968553   74753 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I0920 18:18:35.968589   74753 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I0920 18:18:35.968612   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:18:35.968636   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:18:35.984335   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0920 18:18:35.984366   74753 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I0920 18:18:35.984379   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0920 18:18:35.984403   74753 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 18:18:35.984433   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0920 18:18:35.984450   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:18:35.984474   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0920 18:18:35.984505   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0920 18:18:36.106947   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0920 18:18:36.106964   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 18:18:36.106954   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0920 18:18:36.107025   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0920 18:18:36.107112   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0920 18:18:36.107142   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0920 18:18:36.231892   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0920 18:18:36.231989   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0920 18:18:36.232026   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0920 18:18:36.232143   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0920 18:18:36.232193   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0920 18:18:36.232281   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 18:18:36.355534   74753 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I0920 18:18:36.355632   74753 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0920 18:18:36.355650   74753 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I0920 18:18:36.355730   74753 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I0920 18:18:36.358584   74753 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I0920 18:18:36.358653   74753 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I0920 18:18:36.358678   74753 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0920 18:18:36.358677   74753 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0920 18:18:36.358723   74753 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0920 18:18:36.358751   74753 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0920 18:18:36.367463   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 18:18:36.372389   74753 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I0920 18:18:36.372412   74753 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I0920 18:18:36.372457   74753 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I0920 18:18:36.372580   74753 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I0920 18:18:36.374496   74753 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0920 18:18:36.374538   74753 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I0920 18:18:36.374638   74753 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I0920 18:18:36.410860   74753 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0920 18:18:36.410961   74753 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0920 18:18:36.738003   74753 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:18:32.749877   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:33.249391   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:33.749510   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:34.250003   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:34.749967   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:35.249953   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:35.749992   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:36.250161   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:36.750339   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:37.249600   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:34.192782   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:36.692485   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:38.692626   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:39.024337   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:41.025364   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:38.480788   74753 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.108302901s)
	I0920 18:18:38.480819   74753 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I0920 18:18:38.480839   74753 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I0920 18:18:38.480869   74753 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1: (2.069867717s)
	I0920 18:18:38.480893   74753 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I0920 18:18:38.480896   74753 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I0920 18:18:38.480922   74753 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.742864605s)
	I0920 18:18:38.480970   74753 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0920 18:18:38.480993   74753 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:18:38.481031   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:18:40.340748   74753 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (1.85982838s)
	I0920 18:18:40.340782   74753 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I0920 18:18:40.340790   74753 ssh_runner.go:235] Completed: which crictl: (1.859743083s)
	I0920 18:18:40.340812   74753 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0920 18:18:40.340850   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:18:40.340869   74753 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0920 18:18:37.750269   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:38.249855   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:38.749845   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:39.249342   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:39.750306   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:40.250103   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:40.750346   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:41.250134   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:41.750136   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:42.249945   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:41.191300   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:43.193536   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:43.025860   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:45.527119   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:44.427309   74753 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (4.086436233s)
	I0920 18:18:44.427365   74753 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (4.086473186s)
	I0920 18:18:44.427395   74753 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0920 18:18:44.427400   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:18:44.427430   74753 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0920 18:18:44.427510   74753 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0920 18:18:46.393094   74753 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (1.965557326s)
	I0920 18:18:46.393133   74753 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I0920 18:18:46.393163   74753 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0920 18:18:46.393170   74753 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.965750119s)
	I0920 18:18:46.393216   74753 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0920 18:18:46.393241   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:18:42.749980   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:43.249416   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:43.750087   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:44.249972   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:44.749649   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:45.250128   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:45.750346   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:46.249611   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:46.749566   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:47.249814   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:45.193713   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:47.691882   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:48.027047   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:50.527076   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:47.771559   74753 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (1.37829872s)
	I0920 18:18:47.771595   74753 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I0920 18:18:47.771607   74753 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.378351302s)
	I0920 18:18:47.771618   74753 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0920 18:18:47.771645   74753 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0920 18:18:47.771668   74753 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0920 18:18:47.771726   74753 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0920 18:18:49.544117   74753 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.772423068s)
	I0920 18:18:49.544147   74753 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I0920 18:18:49.544159   74753 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.772401848s)
	I0920 18:18:49.544187   74753 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0920 18:18:49.544199   74753 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0920 18:18:49.544275   74753 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0920 18:18:50.198691   74753 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0920 18:18:50.198744   74753 cache_images.go:123] Successfully loaded all cached images
	I0920 18:18:50.198752   74753 cache_images.go:92] duration metric: took 14.66333409s to LoadCachedImages
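
The sequence above is the cache-image load path for a no-preload profile: each required image is probed with `sudo podman image inspect`, any stale copy is removed with `crictl rmi`, and the tarball kept under `.minikube/cache/images/amd64` is loaded into the CRI-O store with `sudo podman load -i /var/lib/minikube/images/<name>`. Below is a minimal Go sketch of that check-remove-load loop; it runs the commands locally rather than over SSH, and the helper name, image, and paths are illustrative, not minikube's actual implementation.

package main

import (
	"fmt"
	"os/exec"
)

// loadCachedImage mirrors the check -> remove -> load pattern in the log above:
// if the image is not already present in the runtime, drop any stale tag and
// load the cached tarball with podman. Names and paths are illustrative.
func loadCachedImage(image, tarball string) error {
	// Probe the runtime for the image (equivalent of `podman image inspect --format {{.Id}}`).
	if err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Run(); err == nil {
		return nil // already present, nothing to transfer
	}
	// Remove a possibly stale tag so the load below wins (equivalent of `crictl rmi`).
	_ = exec.Command("sudo", "crictl", "rmi", image).Run()
	// Load the cached tarball into the runtime (equivalent of `podman load -i`).
	if out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput(); err != nil {
		return fmt.Errorf("load %s: %v: %s", image, err, out)
	}
	return nil
}

func main() {
	// Example pairing of an image and its cached tarball, as seen in the log.
	if err := loadCachedImage("registry.k8s.io/kube-proxy:v1.31.1",
		"/var/lib/minikube/images/kube-proxy_v1.31.1"); err != nil {
		fmt.Println(err)
	}
}
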
	I0920 18:18:50.198766   74753 kubeadm.go:934] updating node { 192.168.50.47 8443 v1.31.1 crio true true} ...
	I0920 18:18:50.198900   74753 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-956403 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.47
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-956403 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 18:18:50.198988   74753 ssh_runner.go:195] Run: crio config
	I0920 18:18:50.249876   74753 cni.go:84] Creating CNI manager for ""
	I0920 18:18:50.249901   74753 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 18:18:50.249915   74753 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 18:18:50.249942   74753 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.47 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-956403 NodeName:no-preload-956403 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.47"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.47 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 18:18:50.250150   74753 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.47
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-956403"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.47
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.47"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 18:18:50.250264   74753 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 18:18:50.262805   74753 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 18:18:50.262886   74753 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 18:18:50.272958   74753 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0920 18:18:50.290269   74753 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 18:18:50.306981   74753 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0920 18:18:50.324360   74753 ssh_runner.go:195] Run: grep 192.168.50.47	control-plane.minikube.internal$ /etc/hosts
	I0920 18:18:50.328382   74753 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.47	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
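
The bash one-liner above makes the /etc/hosts update idempotent: it filters out any existing line ending in the control-plane hostname, appends a fresh "IP<TAB>control-plane.minikube.internal" entry, and copies the temp file back with sudo. A small Go sketch of the same rewrite follows, under the assumption it is pointed at a scratch copy of the hosts file; writing /etc/hosts itself needs root.

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry rewrites a hosts file so exactly one line maps host to ip,
// mirroring the grep -v / echo / cp pipeline in the log above.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		// Drop any existing mapping for the hostname (the `grep -v` step).
		if strings.HasSuffix(line, "\t"+host) {
			continue
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host)) // the `echo` step
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// Illustrative values taken from the log; the path is a scratch file.
	if err := ensureHostsEntry("/tmp/hosts.example", "192.168.50.47", "control-plane.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}
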
	I0920 18:18:50.341021   74753 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:18:50.462105   74753 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:18:50.478780   74753 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/no-preload-956403 for IP: 192.168.50.47
	I0920 18:18:50.478804   74753 certs.go:194] generating shared ca certs ...
	I0920 18:18:50.478824   74753 certs.go:226] acquiring lock for ca certs: {Name:mkc7ef6c737c6bdc3fdd9dcff8f57029c020d8f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:18:50.479010   74753 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key
	I0920 18:18:50.479069   74753 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key
	I0920 18:18:50.479082   74753 certs.go:256] generating profile certs ...
	I0920 18:18:50.479188   74753 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/no-preload-956403/client.key
	I0920 18:18:50.479270   74753 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/no-preload-956403/apiserver.key.859cf278
	I0920 18:18:50.479335   74753 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/no-preload-956403/proxy-client.key
	I0920 18:18:50.479491   74753 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973.pem (1338 bytes)
	W0920 18:18:50.479534   74753 certs.go:480] ignoring /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973_empty.pem, impossibly tiny 0 bytes
	I0920 18:18:50.479549   74753 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 18:18:50.479596   74753 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem (1082 bytes)
	I0920 18:18:50.479633   74753 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem (1123 bytes)
	I0920 18:18:50.479668   74753 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem (1675 bytes)
	I0920 18:18:50.479771   74753 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem (1708 bytes)
	I0920 18:18:50.480696   74753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 18:18:50.514559   74753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 18:18:50.548790   74753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 18:18:50.581384   74753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0920 18:18:50.609138   74753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/no-preload-956403/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0920 18:18:50.641098   74753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/no-preload-956403/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 18:18:50.680479   74753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/no-preload-956403/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 18:18:50.705168   74753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/no-preload-956403/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0920 18:18:50.727603   74753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem --> /usr/share/ca-certificates/159732.pem (1708 bytes)
	I0920 18:18:50.750272   74753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 18:18:50.776117   74753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973.pem --> /usr/share/ca-certificates/15973.pem (1338 bytes)
	I0920 18:18:50.799799   74753 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 18:18:50.816250   74753 ssh_runner.go:195] Run: openssl version
	I0920 18:18:50.821680   74753 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/159732.pem && ln -fs /usr/share/ca-certificates/159732.pem /etc/ssl/certs/159732.pem"
	I0920 18:18:50.832295   74753 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/159732.pem
	I0920 18:18:50.836833   74753 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 17:04 /usr/share/ca-certificates/159732.pem
	I0920 18:18:50.836896   74753 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/159732.pem
	I0920 18:18:50.842626   74753 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/159732.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 18:18:50.853297   74753 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 18:18:50.864400   74753 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:18:50.868951   74753 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:18:50.869011   74753 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:18:50.874547   74753 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 18:18:50.885615   74753 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15973.pem && ln -fs /usr/share/ca-certificates/15973.pem /etc/ssl/certs/15973.pem"
	I0920 18:18:50.896960   74753 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15973.pem
	I0920 18:18:50.901798   74753 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 17:04 /usr/share/ca-certificates/15973.pem
	I0920 18:18:50.901879   74753 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15973.pem
	I0920 18:18:50.907920   74753 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15973.pem /etc/ssl/certs/51391683.0"
	I0920 18:18:50.919345   74753 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 18:18:50.923671   74753 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 18:18:50.929701   74753 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 18:18:50.935649   74753 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 18:18:50.942162   74753 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 18:18:50.948012   74753 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 18:18:50.954097   74753 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
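
The `openssl x509 -noout -in <cert> -checkend 86400` runs above verify that each control-plane certificate remains valid for at least another 24 hours before the existing certs are reused. A minimal Go equivalent of that expiry check is sketched below; the certificate path is illustrative and the check is the same condition `-checkend` tests for.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in a PEM file expires
// within d, i.e. what `openssl x509 -checkend <seconds>` checks.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// Illustrative path from the log; 86400 seconds = 24 hours.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
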
	I0920 18:18:50.960442   74753 kubeadm.go:392] StartCluster: {Name:no-preload-956403 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-956403 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.47 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:18:50.960535   74753 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 18:18:50.960600   74753 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 18:18:50.998528   74753 cri.go:89] found id: ""
	I0920 18:18:50.998608   74753 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 18:18:51.008964   74753 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0920 18:18:51.008985   74753 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0920 18:18:51.009043   74753 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0920 18:18:51.018457   74753 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0920 18:18:51.019553   74753 kubeconfig.go:125] found "no-preload-956403" server: "https://192.168.50.47:8443"
	I0920 18:18:51.021712   74753 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0920 18:18:51.033439   74753 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.47
	I0920 18:18:51.033470   74753 kubeadm.go:1160] stopping kube-system containers ...
	I0920 18:18:51.033481   74753 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0920 18:18:51.033538   74753 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 18:18:51.070049   74753 cri.go:89] found id: ""
	I0920 18:18:51.070137   74753 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0920 18:18:51.087472   74753 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 18:18:51.098582   74753 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 18:18:51.098606   74753 kubeadm.go:157] found existing configuration files:
	
	I0920 18:18:51.098654   74753 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 18:18:51.107201   74753 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 18:18:51.107276   74753 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 18:18:51.116058   74753 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 18:18:51.124563   74753 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 18:18:51.124630   74753 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 18:18:51.134174   74753 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 18:18:51.142880   74753 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 18:18:51.142944   74753 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 18:18:51.152181   74753 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 18:18:51.161942   74753 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 18:18:51.162012   74753 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 18:18:51.171615   74753 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 18:18:51.180728   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:18:51.292140   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:18:52.019018   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:18:52.239327   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:18:52.317900   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:18:52.433910   74753 api_server.go:52] waiting for apiserver process to appear ...
	I0920 18:18:52.433991   74753 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:47.749487   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:48.249612   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:48.750324   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:49.250006   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:49.749667   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:50.249802   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:50.749597   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:51.250236   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:51.750203   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:52.250132   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:49.693075   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:52.192547   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:53.024979   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:55.025225   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:52.934956   74753 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:53.434681   74753 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:53.451797   74753 api_server.go:72] duration metric: took 1.017867426s to wait for apiserver process to appear ...
	I0920 18:18:53.451828   74753 api_server.go:88] waiting for apiserver healthz status ...
	I0920 18:18:53.451851   74753 api_server.go:253] Checking apiserver healthz at https://192.168.50.47:8443/healthz ...
	I0920 18:18:53.452366   74753 api_server.go:269] stopped: https://192.168.50.47:8443/healthz: Get "https://192.168.50.47:8443/healthz": dial tcp 192.168.50.47:8443: connect: connection refused
	I0920 18:18:53.952175   74753 api_server.go:253] Checking apiserver healthz at https://192.168.50.47:8443/healthz ...
	I0920 18:18:55.972801   74753 api_server.go:279] https://192.168.50.47:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 18:18:55.972835   74753 api_server.go:103] status: https://192.168.50.47:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 18:18:55.972854   74753 api_server.go:253] Checking apiserver healthz at https://192.168.50.47:8443/healthz ...
	I0920 18:18:56.018532   74753 api_server.go:279] https://192.168.50.47:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 18:18:56.018563   74753 api_server.go:103] status: https://192.168.50.47:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 18:18:56.452127   74753 api_server.go:253] Checking apiserver healthz at https://192.168.50.47:8443/healthz ...
	I0920 18:18:56.459752   74753 api_server.go:279] https://192.168.50.47:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 18:18:56.459796   74753 api_server.go:103] status: https://192.168.50.47:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 18:18:56.952284   74753 api_server.go:253] Checking apiserver healthz at https://192.168.50.47:8443/healthz ...
	I0920 18:18:56.959496   74753 api_server.go:279] https://192.168.50.47:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 18:18:56.959553   74753 api_server.go:103] status: https://192.168.50.47:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 18:18:57.452049   74753 api_server.go:253] Checking apiserver healthz at https://192.168.50.47:8443/healthz ...
	I0920 18:18:57.456586   74753 api_server.go:279] https://192.168.50.47:8443/healthz returned 200:
	ok
	I0920 18:18:57.463754   74753 api_server.go:141] control plane version: v1.31.1
	I0920 18:18:57.463785   74753 api_server.go:131] duration metric: took 4.011948835s to wait for apiserver health ...
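
The healthz wait above polls https://192.168.50.47:8443/healthz roughly every 500ms until it returns 200. The early 403s show the request being treated as anonymous and rejected, and the 500s list named post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) that have not finished yet. A minimal polling-loop sketch is below; it skips TLS verification purely to stay short, whereas the real client authenticates with the cluster's certificates.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200
// or the deadline passes, roughly matching the retry cadence in the log.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // "healthz returned 200: ok"
			}
			fmt.Println("healthz returned", resp.StatusCode) // e.g. 403 or 500 during startup
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	// Node IP and port taken from the log above.
	if err := waitForHealthz("https://192.168.50.47:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
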
	I0920 18:18:57.463795   74753 cni.go:84] Creating CNI manager for ""
	I0920 18:18:57.463804   74753 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 18:18:57.465916   74753 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 18:18:57.467416   74753 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 18:18:57.485487   74753 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0920 18:18:57.529426   74753 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 18:18:57.540046   74753 system_pods.go:59] 8 kube-system pods found
	I0920 18:18:57.540100   74753 system_pods.go:61] "coredns-7c65d6cfc9-j2t5h" [dd2636f1-3200-4f22-957c-046277c9be8c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0920 18:18:57.540113   74753 system_pods.go:61] "etcd-no-preload-956403" [daf84be0-e673-4685-a998-47ec64664484] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0920 18:18:57.540127   74753 system_pods.go:61] "kube-apiserver-no-preload-956403" [416217e6-3c91-49ec-bd69-812a49b4afd9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0920 18:18:57.540139   74753 system_pods.go:61] "kube-controller-manager-no-preload-956403" [30fae740-b073-4619-bca5-010ca90b2667] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0920 18:18:57.540148   74753 system_pods.go:61] "kube-proxy-sz4bm" [269600fb-ef65-4b17-8c07-76c79e35f5a8] Running
	I0920 18:18:57.540160   74753 system_pods.go:61] "kube-scheduler-no-preload-956403" [46432927-e506-4665-8825-c5f6a2ed0458] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0920 18:18:57.540174   74753 system_pods.go:61] "metrics-server-6867b74b74-tfsff" [599ba06a-6d4d-483b-b390-a3595a814757] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 18:18:57.540184   74753 system_pods.go:61] "storage-provisioner" [df661627-7d32-4467-9805-1ae65d4fa35c] Running
	I0920 18:18:57.540196   74753 system_pods.go:74] duration metric: took 10.749595ms to wait for pod list to return data ...
	I0920 18:18:57.540208   74753 node_conditions.go:102] verifying NodePressure condition ...
	I0920 18:18:57.544401   74753 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 18:18:57.544438   74753 node_conditions.go:123] node cpu capacity is 2
	I0920 18:18:57.544451   74753 node_conditions.go:105] duration metric: took 4.237618ms to run NodePressure ...
	I0920 18:18:57.544471   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
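
Once the apiserver is healthy, the log lists the kube-system pods and then (in the pod_ready.go lines further below) waits for each system-critical pod to report the Ready condition, skipping pods whose node is itself NotReady after the restart. A short client-go sketch of that readiness check follows; the kubeconfig path is illustrative and this is not minikube's own wait loop.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether a pod's Ready condition is True, which is the
// condition the pod_ready.go waits in this log are checking for.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Illustrative kubeconfig path; any kubeconfig for the cluster works.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s ready=%v\n", p.Name, isPodReady(&p))
	}
}
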
	I0920 18:18:52.749468   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:53.249519   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:53.750056   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:54.250286   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:54.750240   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:55.249980   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:55.750192   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:56.249940   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:56.749289   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:57.249615   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:54.193519   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:56.693152   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:57.818736   74753 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0920 18:18:57.823058   74753 kubeadm.go:739] kubelet initialised
	I0920 18:18:57.823082   74753 kubeadm.go:740] duration metric: took 4.316691ms waiting for restarted kubelet to initialise ...
	I0920 18:18:57.823091   74753 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:18:57.827594   74753 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-j2t5h" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:57.833207   74753 pod_ready.go:98] node "no-preload-956403" hosting pod "coredns-7c65d6cfc9-j2t5h" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956403" has status "Ready":"False"
	I0920 18:18:57.833240   74753 pod_ready.go:82] duration metric: took 5.620335ms for pod "coredns-7c65d6cfc9-j2t5h" in "kube-system" namespace to be "Ready" ...
	E0920 18:18:57.833252   74753 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-956403" hosting pod "coredns-7c65d6cfc9-j2t5h" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956403" has status "Ready":"False"
	I0920 18:18:57.833269   74753 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-956403" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:57.838687   74753 pod_ready.go:98] node "no-preload-956403" hosting pod "etcd-no-preload-956403" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956403" has status "Ready":"False"
	I0920 18:18:57.838723   74753 pod_ready.go:82] duration metric: took 5.441795ms for pod "etcd-no-preload-956403" in "kube-system" namespace to be "Ready" ...
	E0920 18:18:57.838737   74753 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-956403" hosting pod "etcd-no-preload-956403" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956403" has status "Ready":"False"
	I0920 18:18:57.838747   74753 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-956403" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:57.848481   74753 pod_ready.go:98] node "no-preload-956403" hosting pod "kube-apiserver-no-preload-956403" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956403" has status "Ready":"False"
	I0920 18:18:57.848510   74753 pod_ready.go:82] duration metric: took 9.754053ms for pod "kube-apiserver-no-preload-956403" in "kube-system" namespace to be "Ready" ...
	E0920 18:18:57.848520   74753 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-956403" hosting pod "kube-apiserver-no-preload-956403" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956403" has status "Ready":"False"
	I0920 18:18:57.848530   74753 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-956403" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:57.934356   74753 pod_ready.go:98] node "no-preload-956403" hosting pod "kube-controller-manager-no-preload-956403" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956403" has status "Ready":"False"
	I0920 18:18:57.934383   74753 pod_ready.go:82] duration metric: took 85.842265ms for pod "kube-controller-manager-no-preload-956403" in "kube-system" namespace to be "Ready" ...
	E0920 18:18:57.934392   74753 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-956403" hosting pod "kube-controller-manager-no-preload-956403" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956403" has status "Ready":"False"
	I0920 18:18:57.934397   74753 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-sz4bm" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:58.332467   74753 pod_ready.go:98] node "no-preload-956403" hosting pod "kube-proxy-sz4bm" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956403" has status "Ready":"False"
	I0920 18:18:58.332495   74753 pod_ready.go:82] duration metric: took 398.088821ms for pod "kube-proxy-sz4bm" in "kube-system" namespace to be "Ready" ...
	E0920 18:18:58.332504   74753 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-956403" hosting pod "kube-proxy-sz4bm" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956403" has status "Ready":"False"
	I0920 18:18:58.332510   74753 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-956403" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:58.732836   74753 pod_ready.go:98] node "no-preload-956403" hosting pod "kube-scheduler-no-preload-956403" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956403" has status "Ready":"False"
	I0920 18:18:58.732859   74753 pod_ready.go:82] duration metric: took 400.343274ms for pod "kube-scheduler-no-preload-956403" in "kube-system" namespace to be "Ready" ...
	E0920 18:18:58.732868   74753 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-956403" hosting pod "kube-scheduler-no-preload-956403" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956403" has status "Ready":"False"
	I0920 18:18:58.732874   74753 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:59.132465   74753 pod_ready.go:98] node "no-preload-956403" hosting pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956403" has status "Ready":"False"
	I0920 18:18:59.132489   74753 pod_ready.go:82] duration metric: took 399.606625ms for pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace to be "Ready" ...
	E0920 18:18:59.132503   74753 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-956403" hosting pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956403" has status "Ready":"False"
	I0920 18:18:59.132510   74753 pod_ready.go:39] duration metric: took 1.309409155s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:18:59.132526   74753 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 18:18:59.147163   74753 ops.go:34] apiserver oom_adj: -16
	I0920 18:18:59.147190   74753 kubeadm.go:597] duration metric: took 8.138198351s to restartPrimaryControlPlane
	I0920 18:18:59.147200   74753 kubeadm.go:394] duration metric: took 8.186768244s to StartCluster
	I0920 18:18:59.147214   74753 settings.go:142] acquiring lock: {Name:mk2a13c58cdc0faf8cddca5d6716175d45db9bfd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:18:59.147287   74753 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19672-8777/kubeconfig
	I0920 18:18:59.149036   74753 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/kubeconfig: {Name:mkf32a4c736808e023459b2f0e40188618a38db1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:18:59.149303   74753 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.47 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 18:18:59.149381   74753 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0920 18:18:59.149479   74753 addons.go:69] Setting storage-provisioner=true in profile "no-preload-956403"
	I0920 18:18:59.149503   74753 addons.go:234] Setting addon storage-provisioner=true in "no-preload-956403"
	I0920 18:18:59.149501   74753 addons.go:69] Setting default-storageclass=true in profile "no-preload-956403"
	W0920 18:18:59.149515   74753 addons.go:243] addon storage-provisioner should already be in state true
	I0920 18:18:59.149526   74753 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-956403"
	I0920 18:18:59.149520   74753 addons.go:69] Setting metrics-server=true in profile "no-preload-956403"
	I0920 18:18:59.149546   74753 host.go:66] Checking if "no-preload-956403" exists ...
	I0920 18:18:59.149558   74753 addons.go:234] Setting addon metrics-server=true in "no-preload-956403"
	W0920 18:18:59.149575   74753 addons.go:243] addon metrics-server should already be in state true
	I0920 18:18:59.149588   74753 config.go:182] Loaded profile config "no-preload-956403": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:18:59.149610   74753 host.go:66] Checking if "no-preload-956403" exists ...
	I0920 18:18:59.149847   74753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:18:59.149889   74753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:18:59.149944   74753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:18:59.149954   74753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:18:59.150003   74753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:18:59.150075   74753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:18:59.151276   74753 out.go:177] * Verifying Kubernetes components...
	I0920 18:18:59.152577   74753 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:18:59.178294   74753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42565
	I0920 18:18:59.178917   74753 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:18:59.179414   74753 main.go:141] libmachine: Using API Version  1
	I0920 18:18:59.179435   74753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:18:59.179869   74753 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:18:59.180059   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetState
	I0920 18:18:59.183556   74753 addons.go:234] Setting addon default-storageclass=true in "no-preload-956403"
	W0920 18:18:59.183575   74753 addons.go:243] addon default-storageclass should already be in state true
	I0920 18:18:59.183606   74753 host.go:66] Checking if "no-preload-956403" exists ...
	I0920 18:18:59.183970   74753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:18:59.184011   74753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:18:59.195338   74753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43759
	I0920 18:18:59.195739   74753 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:18:59.196290   74753 main.go:141] libmachine: Using API Version  1
	I0920 18:18:59.196316   74753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:18:59.196742   74753 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:18:59.197211   74753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33531
	I0920 18:18:59.197327   74753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:18:59.197375   74753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:18:59.197552   74753 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:18:59.198055   74753 main.go:141] libmachine: Using API Version  1
	I0920 18:18:59.198080   74753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:18:59.198420   74753 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:18:59.198997   74753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:18:59.199037   74753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:18:59.203164   74753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40055
	I0920 18:18:59.203700   74753 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:18:59.204320   74753 main.go:141] libmachine: Using API Version  1
	I0920 18:18:59.204341   74753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:18:59.204745   74753 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:18:59.205399   74753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:18:59.205440   74753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:18:59.215457   74753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37991
	I0920 18:18:59.215953   74753 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:18:59.216312   74753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35461
	I0920 18:18:59.216521   74753 main.go:141] libmachine: Using API Version  1
	I0920 18:18:59.216723   74753 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:18:59.216826   74753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:18:59.217221   74753 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:18:59.217403   74753 main.go:141] libmachine: Using API Version  1
	I0920 18:18:59.217414   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetState
	I0920 18:18:59.217452   74753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:18:59.217942   74753 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:18:59.218403   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetState
	I0920 18:18:59.219340   74753 main.go:141] libmachine: (no-preload-956403) Calling .DriverName
	I0920 18:18:59.220536   74753 main.go:141] libmachine: (no-preload-956403) Calling .DriverName
	I0920 18:18:59.221934   74753 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0920 18:18:59.222788   74753 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:18:59.223744   74753 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0920 18:18:59.223766   74753 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0920 18:18:59.223788   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHHostname
	I0920 18:18:59.224567   74753 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 18:18:59.224588   74753 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 18:18:59.224607   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHHostname
	I0920 18:18:59.225333   74753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45207
	I0920 18:18:59.225992   74753 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:18:59.226975   74753 main.go:141] libmachine: Using API Version  1
	I0920 18:18:59.226991   74753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:18:59.227472   74753 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:18:59.227788   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetState
	I0920 18:18:59.227818   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:59.228234   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:59.228282   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:59.228441   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHPort
	I0920 18:18:59.228636   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHKeyPath
	I0920 18:18:59.228671   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:59.228821   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHUsername
	I0920 18:18:59.228960   74753 sshutil.go:53] new ssh client: &{IP:192.168.50.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/no-preload-956403/id_rsa Username:docker}
	I0920 18:18:59.229280   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:59.229297   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:59.229478   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHPort
	I0920 18:18:59.229651   74753 main.go:141] libmachine: (no-preload-956403) Calling .DriverName
	I0920 18:18:59.229807   74753 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 18:18:59.229817   74753 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 18:18:59.229843   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHHostname
	I0920 18:18:59.230195   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHKeyPath
	I0920 18:18:59.230729   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHUsername
	I0920 18:18:59.230899   74753 sshutil.go:53] new ssh client: &{IP:192.168.50.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/no-preload-956403/id_rsa Username:docker}
	I0920 18:18:59.232419   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:59.232844   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:59.232877   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:59.233020   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHPort
	I0920 18:18:59.233201   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHKeyPath
	I0920 18:18:59.233311   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHUsername
	I0920 18:18:59.233424   74753 sshutil.go:53] new ssh client: &{IP:192.168.50.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/no-preload-956403/id_rsa Username:docker}
	I0920 18:18:59.373284   74753 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:18:59.391114   74753 node_ready.go:35] waiting up to 6m0s for node "no-preload-956403" to be "Ready" ...
	I0920 18:18:59.475607   74753 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 18:18:59.530301   74753 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0920 18:18:59.530320   74753 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0920 18:18:59.530804   74753 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 18:18:59.607246   74753 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0920 18:18:59.607272   74753 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0920 18:18:59.685234   74753 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 18:18:59.685273   74753 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0920 18:18:59.753902   74753 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 18:19:00.029605   74753 main.go:141] libmachine: Making call to close driver server
	I0920 18:19:00.029630   74753 main.go:141] libmachine: (no-preload-956403) Calling .Close
	I0920 18:19:00.029968   74753 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:19:00.030016   74753 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:19:00.030032   74753 main.go:141] libmachine: Making call to close driver server
	I0920 18:19:00.030039   74753 main.go:141] libmachine: (no-preload-956403) Calling .Close
	I0920 18:19:00.030285   74753 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:19:00.030299   74753 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:19:00.049472   74753 main.go:141] libmachine: Making call to close driver server
	I0920 18:19:00.049500   74753 main.go:141] libmachine: (no-preload-956403) Calling .Close
	I0920 18:19:00.049765   74753 main.go:141] libmachine: (no-preload-956403) DBG | Closing plugin on server side
	I0920 18:19:00.049781   74753 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:19:00.049797   74753 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:19:00.790635   74753 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.259791892s)
	I0920 18:19:00.790694   74753 main.go:141] libmachine: Making call to close driver server
	I0920 18:19:00.790703   74753 main.go:141] libmachine: (no-preload-956403) Calling .Close
	I0920 18:19:00.791004   74753 main.go:141] libmachine: (no-preload-956403) DBG | Closing plugin on server side
	I0920 18:19:00.791007   74753 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:19:00.791075   74753 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:19:00.791100   74753 main.go:141] libmachine: Making call to close driver server
	I0920 18:19:00.791110   74753 main.go:141] libmachine: (no-preload-956403) Calling .Close
	I0920 18:19:00.791339   74753 main.go:141] libmachine: (no-preload-956403) DBG | Closing plugin on server side
	I0920 18:19:00.791347   74753 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:19:00.791358   74753 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:19:00.829250   74753 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.075301587s)
	I0920 18:19:00.829297   74753 main.go:141] libmachine: Making call to close driver server
	I0920 18:19:00.829311   74753 main.go:141] libmachine: (no-preload-956403) Calling .Close
	I0920 18:19:00.829607   74753 main.go:141] libmachine: (no-preload-956403) DBG | Closing plugin on server side
	I0920 18:19:00.829612   74753 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:19:00.829632   74753 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:19:00.829641   74753 main.go:141] libmachine: Making call to close driver server
	I0920 18:19:00.829647   74753 main.go:141] libmachine: (no-preload-956403) Calling .Close
	I0920 18:19:00.829905   74753 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:19:00.829927   74753 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:19:00.829926   74753 main.go:141] libmachine: (no-preload-956403) DBG | Closing plugin on server side
	I0920 18:19:00.829938   74753 addons.go:475] Verifying addon metrics-server=true in "no-preload-956403"
	I0920 18:19:00.831897   74753 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0920 18:18:57.524979   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:59.526500   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:02.024579   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:00.833093   74753 addons.go:510] duration metric: took 1.683722999s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0920 18:19:01.394507   74753 node_ready.go:53] node "no-preload-956403" has status "Ready":"False"
	I0920 18:18:57.750113   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:58.250100   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:58.750133   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:59.249427   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:59.749350   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:00.250267   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:00.749723   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:01.249549   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:01.749698   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:02.250043   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:59.195097   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:01.692391   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:03.692510   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:04.024713   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:06.525591   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:03.395035   74753 node_ready.go:53] node "no-preload-956403" has status "Ready":"False"
	I0920 18:19:05.895477   74753 node_ready.go:53] node "no-preload-956403" has status "Ready":"False"
	I0920 18:19:06.395177   74753 node_ready.go:49] node "no-preload-956403" has status "Ready":"True"
	I0920 18:19:06.395211   74753 node_ready.go:38] duration metric: took 7.0040677s for node "no-preload-956403" to be "Ready" ...
	I0920 18:19:06.395224   74753 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:19:06.400929   74753 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-j2t5h" in "kube-system" namespace to be "Ready" ...
	I0920 18:19:06.406052   74753 pod_ready.go:93] pod "coredns-7c65d6cfc9-j2t5h" in "kube-system" namespace has status "Ready":"True"
	I0920 18:19:06.406076   74753 pod_ready.go:82] duration metric: took 5.118178ms for pod "coredns-7c65d6cfc9-j2t5h" in "kube-system" namespace to be "Ready" ...
	I0920 18:19:06.406088   74753 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-956403" in "kube-system" namespace to be "Ready" ...
	I0920 18:19:07.412930   74753 pod_ready.go:93] pod "etcd-no-preload-956403" in "kube-system" namespace has status "Ready":"True"
	I0920 18:19:07.412946   74753 pod_ready.go:82] duration metric: took 1.006851075s for pod "etcd-no-preload-956403" in "kube-system" namespace to be "Ready" ...
	I0920 18:19:07.412955   74753 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-956403" in "kube-system" namespace to be "Ready" ...
	I0920 18:19:07.420312   74753 pod_ready.go:93] pod "kube-apiserver-no-preload-956403" in "kube-system" namespace has status "Ready":"True"
	I0920 18:19:07.420342   74753 pod_ready.go:82] duration metric: took 7.380815ms for pod "kube-apiserver-no-preload-956403" in "kube-system" namespace to be "Ready" ...
	I0920 18:19:07.420354   74753 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-956403" in "kube-system" namespace to be "Ready" ...
	I0920 18:19:02.750082   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:03.249861   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:03.749400   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:04.250213   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:04.749806   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:05.249569   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:05.750115   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:06.250182   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:06.750188   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:07.250041   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:06.191328   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:08.192222   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:09.024510   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:11.025023   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:08.426149   74753 pod_ready.go:93] pod "kube-controller-manager-no-preload-956403" in "kube-system" namespace has status "Ready":"True"
	I0920 18:19:08.426173   74753 pod_ready.go:82] duration metric: took 1.005811855s for pod "kube-controller-manager-no-preload-956403" in "kube-system" namespace to be "Ready" ...
	I0920 18:19:08.426182   74753 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-sz4bm" in "kube-system" namespace to be "Ready" ...
	I0920 18:19:08.431446   74753 pod_ready.go:93] pod "kube-proxy-sz4bm" in "kube-system" namespace has status "Ready":"True"
	I0920 18:19:08.431474   74753 pod_ready.go:82] duration metric: took 5.284488ms for pod "kube-proxy-sz4bm" in "kube-system" namespace to be "Ready" ...
	I0920 18:19:08.431486   74753 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-956403" in "kube-system" namespace to be "Ready" ...
	I0920 18:19:08.794973   74753 pod_ready.go:93] pod "kube-scheduler-no-preload-956403" in "kube-system" namespace has status "Ready":"True"
	I0920 18:19:08.794997   74753 pod_ready.go:82] duration metric: took 363.504269ms for pod "kube-scheduler-no-preload-956403" in "kube-system" namespace to be "Ready" ...
	I0920 18:19:08.795005   74753 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace to be "Ready" ...
	I0920 18:19:10.802181   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:07.749479   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:08.249641   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:08.749637   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:09.249803   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:09.749534   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:10.249365   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:10.749366   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:11.250045   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:11.750298   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:12.249766   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:10.192631   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:12.692470   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:13.525217   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:16.025516   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:13.300755   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:15.301345   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:12.749447   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:13.249356   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:13.749514   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:14.249942   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:14.749988   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:15.249405   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:15.749620   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:16.249962   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:16.750325   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:17.250198   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:15.191379   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:17.692235   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:18.525176   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:21.023910   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:17.801315   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:20.302369   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:22.302578   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:17.749509   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:18.250076   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:18.749518   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:19.249440   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:19.750156   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:20.249686   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:20.750315   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:21.249755   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:21.749370   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:22.249767   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:20.192027   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:22.192925   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:23.024677   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:25.025216   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:24.801558   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:26.802796   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:22.749289   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:23.249386   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:23.749870   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:24.249424   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:24.750349   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:25.249582   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:19:25.249657   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:19:25.287604   75577 cri.go:89] found id: ""
	I0920 18:19:25.287633   75577 logs.go:276] 0 containers: []
	W0920 18:19:25.287641   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:19:25.287647   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:19:25.287692   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:19:25.322093   75577 cri.go:89] found id: ""
	I0920 18:19:25.322127   75577 logs.go:276] 0 containers: []
	W0920 18:19:25.322137   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:19:25.322144   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:19:25.322194   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:19:25.354812   75577 cri.go:89] found id: ""
	I0920 18:19:25.354840   75577 logs.go:276] 0 containers: []
	W0920 18:19:25.354851   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:19:25.354863   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:19:25.354928   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:19:25.387777   75577 cri.go:89] found id: ""
	I0920 18:19:25.387804   75577 logs.go:276] 0 containers: []
	W0920 18:19:25.387813   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:19:25.387819   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:19:25.387867   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:19:25.420325   75577 cri.go:89] found id: ""
	I0920 18:19:25.420354   75577 logs.go:276] 0 containers: []
	W0920 18:19:25.420364   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:19:25.420372   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:19:25.420437   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:19:25.454824   75577 cri.go:89] found id: ""
	I0920 18:19:25.454853   75577 logs.go:276] 0 containers: []
	W0920 18:19:25.454864   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:19:25.454871   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:19:25.454933   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:19:25.493520   75577 cri.go:89] found id: ""
	I0920 18:19:25.493548   75577 logs.go:276] 0 containers: []
	W0920 18:19:25.493559   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:19:25.493566   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:19:25.493639   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:19:25.528795   75577 cri.go:89] found id: ""
	I0920 18:19:25.528820   75577 logs.go:276] 0 containers: []
	W0920 18:19:25.528828   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:19:25.528836   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:19:25.528847   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:19:25.571411   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:19:25.571442   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:19:25.625648   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:19:25.625693   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:19:25.639891   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:19:25.639920   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:19:25.769263   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:19:25.769289   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:19:25.769303   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:19:24.691757   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:26.695197   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:27.524327   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:29.525192   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:32.024450   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:29.301400   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:31.301988   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:28.346894   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:28.359891   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:19:28.359967   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:19:28.396117   75577 cri.go:89] found id: ""
	I0920 18:19:28.396144   75577 logs.go:276] 0 containers: []
	W0920 18:19:28.396152   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:19:28.396158   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:19:28.396206   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:19:28.447888   75577 cri.go:89] found id: ""
	I0920 18:19:28.447923   75577 logs.go:276] 0 containers: []
	W0920 18:19:28.447933   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:19:28.447941   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:19:28.447998   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:19:28.485018   75577 cri.go:89] found id: ""
	I0920 18:19:28.485046   75577 logs.go:276] 0 containers: []
	W0920 18:19:28.485054   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:19:28.485060   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:19:28.485108   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:19:28.520904   75577 cri.go:89] found id: ""
	I0920 18:19:28.520934   75577 logs.go:276] 0 containers: []
	W0920 18:19:28.520942   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:19:28.520948   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:19:28.520996   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:19:28.561375   75577 cri.go:89] found id: ""
	I0920 18:19:28.561407   75577 logs.go:276] 0 containers: []
	W0920 18:19:28.561415   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:19:28.561421   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:19:28.561467   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:19:28.597643   75577 cri.go:89] found id: ""
	I0920 18:19:28.597671   75577 logs.go:276] 0 containers: []
	W0920 18:19:28.597679   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:19:28.597685   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:19:28.597747   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:19:28.632885   75577 cri.go:89] found id: ""
	I0920 18:19:28.632912   75577 logs.go:276] 0 containers: []
	W0920 18:19:28.632921   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:19:28.632926   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:19:28.632973   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:19:28.670639   75577 cri.go:89] found id: ""
	I0920 18:19:28.670665   75577 logs.go:276] 0 containers: []
	W0920 18:19:28.670675   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:19:28.670699   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:19:28.670714   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:19:28.749623   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:19:28.749659   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:19:28.790808   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:19:28.790831   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:19:28.842027   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:19:28.842070   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:19:28.855927   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:19:28.855954   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:19:28.935807   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:19:31.436321   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:31.450159   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:19:31.450221   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:19:31.485444   75577 cri.go:89] found id: ""
	I0920 18:19:31.485483   75577 logs.go:276] 0 containers: []
	W0920 18:19:31.485494   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:19:31.485502   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:19:31.485562   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:19:31.520888   75577 cri.go:89] found id: ""
	I0920 18:19:31.520918   75577 logs.go:276] 0 containers: []
	W0920 18:19:31.520927   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:19:31.520941   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:19:31.521007   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:19:31.555001   75577 cri.go:89] found id: ""
	I0920 18:19:31.555029   75577 logs.go:276] 0 containers: []
	W0920 18:19:31.555040   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:19:31.555047   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:19:31.555111   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:19:31.588766   75577 cri.go:89] found id: ""
	I0920 18:19:31.588794   75577 logs.go:276] 0 containers: []
	W0920 18:19:31.588802   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:19:31.588808   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:19:31.588872   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:19:31.622006   75577 cri.go:89] found id: ""
	I0920 18:19:31.622037   75577 logs.go:276] 0 containers: []
	W0920 18:19:31.622048   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:19:31.622056   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:19:31.622110   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:19:31.659475   75577 cri.go:89] found id: ""
	I0920 18:19:31.659508   75577 logs.go:276] 0 containers: []
	W0920 18:19:31.659519   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:19:31.659527   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:19:31.659589   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:19:31.695380   75577 cri.go:89] found id: ""
	I0920 18:19:31.695415   75577 logs.go:276] 0 containers: []
	W0920 18:19:31.695426   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:19:31.695436   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:19:31.695521   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:19:31.729507   75577 cri.go:89] found id: ""
	I0920 18:19:31.729540   75577 logs.go:276] 0 containers: []
	W0920 18:19:31.729550   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:19:31.729561   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:19:31.729574   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:19:31.781857   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:19:31.781908   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:19:31.795325   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:19:31.795356   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:19:31.868684   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:19:31.868708   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:19:31.868722   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:19:31.945334   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:19:31.945371   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:19:29.191780   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:31.192712   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:33.693996   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:34.024591   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:36.024999   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:33.801443   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:36.300715   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:34.485723   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:34.499039   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:19:34.499106   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:19:34.534664   75577 cri.go:89] found id: ""
	I0920 18:19:34.534697   75577 logs.go:276] 0 containers: []
	W0920 18:19:34.534709   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:19:34.534717   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:19:34.534777   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:19:34.567515   75577 cri.go:89] found id: ""
	I0920 18:19:34.567545   75577 logs.go:276] 0 containers: []
	W0920 18:19:34.567556   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:19:34.567564   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:19:34.567624   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:19:34.601653   75577 cri.go:89] found id: ""
	I0920 18:19:34.601693   75577 logs.go:276] 0 containers: []
	W0920 18:19:34.601704   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:19:34.601712   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:19:34.601775   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:19:34.635238   75577 cri.go:89] found id: ""
	I0920 18:19:34.635271   75577 logs.go:276] 0 containers: []
	W0920 18:19:34.635282   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:19:34.635291   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:19:34.635361   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:19:34.669665   75577 cri.go:89] found id: ""
	I0920 18:19:34.669689   75577 logs.go:276] 0 containers: []
	W0920 18:19:34.669697   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:19:34.669703   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:19:34.669751   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:19:34.705984   75577 cri.go:89] found id: ""
	I0920 18:19:34.706012   75577 logs.go:276] 0 containers: []
	W0920 18:19:34.706022   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:19:34.706032   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:19:34.706110   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:19:34.740052   75577 cri.go:89] found id: ""
	I0920 18:19:34.740079   75577 logs.go:276] 0 containers: []
	W0920 18:19:34.740087   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:19:34.740092   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:19:34.740139   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:19:34.774930   75577 cri.go:89] found id: ""
	I0920 18:19:34.774962   75577 logs.go:276] 0 containers: []
	W0920 18:19:34.774973   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:19:34.774984   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:19:34.775081   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:19:34.791674   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:19:34.791714   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:19:34.894988   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:19:34.895019   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:19:34.895035   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:19:34.973785   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:19:34.973823   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:19:35.010928   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:19:35.010964   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:19:37.561543   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:37.574865   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:19:37.574925   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:19:37.609145   75577 cri.go:89] found id: ""
	I0920 18:19:37.609170   75577 logs.go:276] 0 containers: []
	W0920 18:19:37.609178   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:19:37.609183   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:19:37.609247   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:19:37.645080   75577 cri.go:89] found id: ""
	I0920 18:19:37.645107   75577 logs.go:276] 0 containers: []
	W0920 18:19:37.645115   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:19:37.645121   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:19:37.645168   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:19:37.680060   75577 cri.go:89] found id: ""
	I0920 18:19:37.680094   75577 logs.go:276] 0 containers: []
	W0920 18:19:37.680105   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:19:37.680113   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:19:37.680173   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:19:37.713586   75577 cri.go:89] found id: ""
	I0920 18:19:37.713618   75577 logs.go:276] 0 containers: []
	W0920 18:19:37.713629   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:19:37.713636   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:19:37.713694   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:19:36.192483   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:38.692017   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:38.025974   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:40.524671   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:38.302403   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:40.802748   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:37.750249   75577 cri.go:89] found id: ""
	I0920 18:19:37.750274   75577 logs.go:276] 0 containers: []
	W0920 18:19:37.750282   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:19:37.750289   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:19:37.750351   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:19:37.785613   75577 cri.go:89] found id: ""
	I0920 18:19:37.785642   75577 logs.go:276] 0 containers: []
	W0920 18:19:37.785650   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:19:37.785656   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:19:37.785705   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:19:37.824240   75577 cri.go:89] found id: ""
	I0920 18:19:37.824267   75577 logs.go:276] 0 containers: []
	W0920 18:19:37.824278   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:19:37.824286   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:19:37.824348   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:19:37.858522   75577 cri.go:89] found id: ""
	I0920 18:19:37.858547   75577 logs.go:276] 0 containers: []
	W0920 18:19:37.858556   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:19:37.858564   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:19:37.858575   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:19:37.939852   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:19:37.939891   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:19:37.981029   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:19:37.981062   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:19:38.030606   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:19:38.030633   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:19:38.043914   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:19:38.043953   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:19:38.124846   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:19:40.625196   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:40.640638   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:19:40.640708   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:19:40.687107   75577 cri.go:89] found id: ""
	I0920 18:19:40.687131   75577 logs.go:276] 0 containers: []
	W0920 18:19:40.687140   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:19:40.687148   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:19:40.687206   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:19:40.725727   75577 cri.go:89] found id: ""
	I0920 18:19:40.725858   75577 logs.go:276] 0 containers: []
	W0920 18:19:40.725875   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:19:40.725889   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:19:40.725967   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:19:40.762965   75577 cri.go:89] found id: ""
	I0920 18:19:40.762988   75577 logs.go:276] 0 containers: []
	W0920 18:19:40.762996   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:19:40.763002   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:19:40.763049   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:19:40.797387   75577 cri.go:89] found id: ""
	I0920 18:19:40.797415   75577 logs.go:276] 0 containers: []
	W0920 18:19:40.797427   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:19:40.797439   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:19:40.797498   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:19:40.834482   75577 cri.go:89] found id: ""
	I0920 18:19:40.834513   75577 logs.go:276] 0 containers: []
	W0920 18:19:40.834521   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:19:40.834528   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:19:40.834578   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:19:40.870062   75577 cri.go:89] found id: ""
	I0920 18:19:40.870090   75577 logs.go:276] 0 containers: []
	W0920 18:19:40.870099   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:19:40.870106   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:19:40.870164   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:19:40.906613   75577 cri.go:89] found id: ""
	I0920 18:19:40.906642   75577 logs.go:276] 0 containers: []
	W0920 18:19:40.906653   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:19:40.906662   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:19:40.906721   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:19:40.953033   75577 cri.go:89] found id: ""
	I0920 18:19:40.953056   75577 logs.go:276] 0 containers: []
	W0920 18:19:40.953065   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:19:40.953073   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:19:40.953083   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:19:40.998490   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:19:40.998523   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:19:41.051488   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:19:41.051525   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:19:41.067908   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:19:41.067937   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:19:41.144854   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:19:41.144878   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:19:41.144890   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:19:40.692456   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:43.192532   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:43.024238   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:45.024998   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:47.025427   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:43.302334   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:45.801415   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:43.723613   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:43.736857   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:19:43.736924   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:19:43.772588   75577 cri.go:89] found id: ""
	I0920 18:19:43.772624   75577 logs.go:276] 0 containers: []
	W0920 18:19:43.772635   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:19:43.772643   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:19:43.772712   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:19:43.808580   75577 cri.go:89] found id: ""
	I0920 18:19:43.808611   75577 logs.go:276] 0 containers: []
	W0920 18:19:43.808622   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:19:43.808629   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:19:43.808695   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:19:43.847167   75577 cri.go:89] found id: ""
	I0920 18:19:43.847207   75577 logs.go:276] 0 containers: []
	W0920 18:19:43.847218   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:19:43.847243   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:19:43.847302   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:19:43.883614   75577 cri.go:89] found id: ""
	I0920 18:19:43.883646   75577 logs.go:276] 0 containers: []
	W0920 18:19:43.883658   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:19:43.883667   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:19:43.883738   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:19:43.917602   75577 cri.go:89] found id: ""
	I0920 18:19:43.917627   75577 logs.go:276] 0 containers: []
	W0920 18:19:43.917635   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:19:43.917641   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:19:43.917694   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:19:43.953232   75577 cri.go:89] found id: ""
	I0920 18:19:43.953259   75577 logs.go:276] 0 containers: []
	W0920 18:19:43.953268   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:19:43.953273   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:19:43.953325   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:19:43.991204   75577 cri.go:89] found id: ""
	I0920 18:19:43.991234   75577 logs.go:276] 0 containers: []
	W0920 18:19:43.991246   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:19:43.991253   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:19:43.991334   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:19:44.028117   75577 cri.go:89] found id: ""
	I0920 18:19:44.028140   75577 logs.go:276] 0 containers: []
	W0920 18:19:44.028147   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:19:44.028154   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:19:44.028164   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:19:44.068175   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:19:44.068203   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:19:44.119953   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:19:44.119993   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:19:44.134127   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:19:44.134154   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:19:44.211563   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:19:44.211592   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:19:44.211604   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:19:46.787328   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:46.803429   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:19:46.803516   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:19:46.844813   75577 cri.go:89] found id: ""
	I0920 18:19:46.844839   75577 logs.go:276] 0 containers: []
	W0920 18:19:46.844850   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:19:46.844856   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:19:46.844914   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:19:46.878441   75577 cri.go:89] found id: ""
	I0920 18:19:46.878483   75577 logs.go:276] 0 containers: []
	W0920 18:19:46.878497   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:19:46.878506   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:19:46.878580   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:19:46.913933   75577 cri.go:89] found id: ""
	I0920 18:19:46.913976   75577 logs.go:276] 0 containers: []
	W0920 18:19:46.913986   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:19:46.913993   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:19:46.914066   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:19:46.948577   75577 cri.go:89] found id: ""
	I0920 18:19:46.948609   75577 logs.go:276] 0 containers: []
	W0920 18:19:46.948618   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:19:46.948625   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:19:46.948689   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:19:46.982742   75577 cri.go:89] found id: ""
	I0920 18:19:46.982770   75577 logs.go:276] 0 containers: []
	W0920 18:19:46.982778   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:19:46.982785   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:19:46.982836   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:19:47.017075   75577 cri.go:89] found id: ""
	I0920 18:19:47.017107   75577 logs.go:276] 0 containers: []
	W0920 18:19:47.017120   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:19:47.017128   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:19:47.017190   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:19:47.055494   75577 cri.go:89] found id: ""
	I0920 18:19:47.055520   75577 logs.go:276] 0 containers: []
	W0920 18:19:47.055528   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:19:47.055534   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:19:47.055586   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:19:47.091966   75577 cri.go:89] found id: ""
	I0920 18:19:47.091998   75577 logs.go:276] 0 containers: []
	W0920 18:19:47.092006   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:19:47.092019   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:19:47.092035   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:19:47.142916   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:19:47.142955   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:19:47.158471   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:19:47.158502   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:19:47.241530   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:19:47.241553   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:19:47.241567   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:19:47.316958   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:19:47.316992   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:19:45.192724   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:47.692046   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:49.524915   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:52.026403   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:48.302289   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:50.303276   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:49.854403   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:49.867317   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:19:49.867431   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:19:49.903627   75577 cri.go:89] found id: ""
	I0920 18:19:49.903661   75577 logs.go:276] 0 containers: []
	W0920 18:19:49.903674   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:19:49.903682   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:19:49.903745   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:19:49.936857   75577 cri.go:89] found id: ""
	I0920 18:19:49.936892   75577 logs.go:276] 0 containers: []
	W0920 18:19:49.936902   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:19:49.936909   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:19:49.936975   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:19:49.974403   75577 cri.go:89] found id: ""
	I0920 18:19:49.974433   75577 logs.go:276] 0 containers: []
	W0920 18:19:49.974441   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:19:49.974447   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:19:49.974505   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:19:50.011810   75577 cri.go:89] found id: ""
	I0920 18:19:50.011839   75577 logs.go:276] 0 containers: []
	W0920 18:19:50.011850   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:19:50.011857   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:19:50.011921   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:19:50.050491   75577 cri.go:89] found id: ""
	I0920 18:19:50.050527   75577 logs.go:276] 0 containers: []
	W0920 18:19:50.050538   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:19:50.050546   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:19:50.050610   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:19:50.088270   75577 cri.go:89] found id: ""
	I0920 18:19:50.088299   75577 logs.go:276] 0 containers: []
	W0920 18:19:50.088308   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:19:50.088314   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:19:50.088375   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:19:50.126355   75577 cri.go:89] found id: ""
	I0920 18:19:50.126382   75577 logs.go:276] 0 containers: []
	W0920 18:19:50.126392   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:19:50.126399   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:19:50.126460   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:19:50.160770   75577 cri.go:89] found id: ""
	I0920 18:19:50.160798   75577 logs.go:276] 0 containers: []
	W0920 18:19:50.160808   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:19:50.160819   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:19:50.160834   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:19:50.212866   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:19:50.212914   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:19:50.227269   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:19:50.227295   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:19:50.301363   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:19:50.301392   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:19:50.301406   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:19:50.376293   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:19:50.376330   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:19:50.192002   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:52.192488   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:54.524436   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:56.526032   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:52.801396   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:54.802393   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:57.301066   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:52.916567   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:52.930445   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:19:52.930525   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:19:52.965360   75577 cri.go:89] found id: ""
	I0920 18:19:52.965392   75577 logs.go:276] 0 containers: []
	W0920 18:19:52.965403   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:19:52.965411   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:19:52.965468   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:19:53.000439   75577 cri.go:89] found id: ""
	I0920 18:19:53.000480   75577 logs.go:276] 0 containers: []
	W0920 18:19:53.000495   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:19:53.000503   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:19:53.000577   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:19:53.034598   75577 cri.go:89] found id: ""
	I0920 18:19:53.034630   75577 logs.go:276] 0 containers: []
	W0920 18:19:53.034640   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:19:53.034647   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:19:53.034744   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:19:53.068636   75577 cri.go:89] found id: ""
	I0920 18:19:53.068664   75577 logs.go:276] 0 containers: []
	W0920 18:19:53.068673   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:19:53.068678   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:19:53.068750   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:19:53.104381   75577 cri.go:89] found id: ""
	I0920 18:19:53.104408   75577 logs.go:276] 0 containers: []
	W0920 18:19:53.104416   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:19:53.104421   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:19:53.104474   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:19:53.137865   75577 cri.go:89] found id: ""
	I0920 18:19:53.137897   75577 logs.go:276] 0 containers: []
	W0920 18:19:53.137909   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:19:53.137922   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:19:53.137983   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:19:53.171825   75577 cri.go:89] found id: ""
	I0920 18:19:53.171861   75577 logs.go:276] 0 containers: []
	W0920 18:19:53.171874   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:19:53.171883   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:19:53.171952   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:19:53.210742   75577 cri.go:89] found id: ""
	I0920 18:19:53.210774   75577 logs.go:276] 0 containers: []
	W0920 18:19:53.210784   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:19:53.210796   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:19:53.210811   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:19:53.285597   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:19:53.285637   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:19:53.329821   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:19:53.329871   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:19:53.381362   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:19:53.381415   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:19:53.396044   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:19:53.396074   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:19:53.471582   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:19:55.972369   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:55.987205   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:19:55.987281   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:19:56.031322   75577 cri.go:89] found id: ""
	I0920 18:19:56.031355   75577 logs.go:276] 0 containers: []
	W0920 18:19:56.031363   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:19:56.031368   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:19:56.031420   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:19:56.066442   75577 cri.go:89] found id: ""
	I0920 18:19:56.066487   75577 logs.go:276] 0 containers: []
	W0920 18:19:56.066576   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:19:56.066617   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:19:56.066695   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:19:56.101818   75577 cri.go:89] found id: ""
	I0920 18:19:56.101864   75577 logs.go:276] 0 containers: []
	W0920 18:19:56.101876   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:19:56.101883   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:19:56.101947   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:19:56.139513   75577 cri.go:89] found id: ""
	I0920 18:19:56.139545   75577 logs.go:276] 0 containers: []
	W0920 18:19:56.139557   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:19:56.139565   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:19:56.139618   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:19:56.173638   75577 cri.go:89] found id: ""
	I0920 18:19:56.173669   75577 logs.go:276] 0 containers: []
	W0920 18:19:56.173680   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:19:56.173688   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:19:56.173752   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:19:56.210657   75577 cri.go:89] found id: ""
	I0920 18:19:56.210689   75577 logs.go:276] 0 containers: []
	W0920 18:19:56.210700   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:19:56.210709   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:19:56.210768   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:19:56.246035   75577 cri.go:89] found id: ""
	I0920 18:19:56.246063   75577 logs.go:276] 0 containers: []
	W0920 18:19:56.246071   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:19:56.246077   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:19:56.246123   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:19:56.280766   75577 cri.go:89] found id: ""
	I0920 18:19:56.280796   75577 logs.go:276] 0 containers: []
	W0920 18:19:56.280807   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:19:56.280818   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:19:56.280834   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:19:56.320511   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:19:56.320540   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:19:56.373746   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:19:56.373785   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:19:56.389294   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:19:56.389322   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:19:56.460079   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:19:56.460100   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:19:56.460112   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:19:54.692781   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:57.192706   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:59.025015   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:01.025196   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:59.302190   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:01.801923   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:59.044541   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:59.058395   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:19:59.058464   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:19:59.097442   75577 cri.go:89] found id: ""
	I0920 18:19:59.097482   75577 logs.go:276] 0 containers: []
	W0920 18:19:59.097495   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:19:59.097512   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:19:59.097593   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:19:59.133091   75577 cri.go:89] found id: ""
	I0920 18:19:59.133116   75577 logs.go:276] 0 containers: []
	W0920 18:19:59.133128   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:19:59.133135   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:19:59.133264   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:19:59.166902   75577 cri.go:89] found id: ""
	I0920 18:19:59.166927   75577 logs.go:276] 0 containers: []
	W0920 18:19:59.166938   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:19:59.166945   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:19:59.167008   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:19:59.202551   75577 cri.go:89] found id: ""
	I0920 18:19:59.202573   75577 logs.go:276] 0 containers: []
	W0920 18:19:59.202581   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:19:59.202586   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:19:59.202633   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:19:59.235197   75577 cri.go:89] found id: ""
	I0920 18:19:59.235222   75577 logs.go:276] 0 containers: []
	W0920 18:19:59.235230   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:19:59.235236   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:19:59.235286   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:19:59.269148   75577 cri.go:89] found id: ""
	I0920 18:19:59.269176   75577 logs.go:276] 0 containers: []
	W0920 18:19:59.269187   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:19:59.269196   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:19:59.269262   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:19:59.307670   75577 cri.go:89] found id: ""
	I0920 18:19:59.307692   75577 logs.go:276] 0 containers: []
	W0920 18:19:59.307699   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:19:59.307705   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:19:59.307753   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:19:59.340554   75577 cri.go:89] found id: ""
	I0920 18:19:59.340589   75577 logs.go:276] 0 containers: []
	W0920 18:19:59.340599   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:19:59.340610   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:19:59.340623   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:19:59.382339   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:19:59.382470   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:19:59.432176   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:19:59.432211   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:19:59.445889   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:19:59.445922   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:19:59.516564   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:19:59.516590   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:19:59.516609   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:02.101918   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:02.115064   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:02.115128   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:02.148718   75577 cri.go:89] found id: ""
	I0920 18:20:02.148749   75577 logs.go:276] 0 containers: []
	W0920 18:20:02.148757   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:02.148764   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:02.148822   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:02.182673   75577 cri.go:89] found id: ""
	I0920 18:20:02.182711   75577 logs.go:276] 0 containers: []
	W0920 18:20:02.182722   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:02.182729   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:02.182797   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:02.218734   75577 cri.go:89] found id: ""
	I0920 18:20:02.218771   75577 logs.go:276] 0 containers: []
	W0920 18:20:02.218782   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:02.218791   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:02.218856   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:02.258956   75577 cri.go:89] found id: ""
	I0920 18:20:02.258981   75577 logs.go:276] 0 containers: []
	W0920 18:20:02.258989   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:02.258995   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:02.259066   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:02.294879   75577 cri.go:89] found id: ""
	I0920 18:20:02.294910   75577 logs.go:276] 0 containers: []
	W0920 18:20:02.294919   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:02.294925   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:02.294971   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:02.331010   75577 cri.go:89] found id: ""
	I0920 18:20:02.331039   75577 logs.go:276] 0 containers: []
	W0920 18:20:02.331049   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:02.331057   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:02.331122   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:02.365023   75577 cri.go:89] found id: ""
	I0920 18:20:02.365056   75577 logs.go:276] 0 containers: []
	W0920 18:20:02.365066   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:02.365072   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:02.365122   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:02.400501   75577 cri.go:89] found id: ""
	I0920 18:20:02.400528   75577 logs.go:276] 0 containers: []
	W0920 18:20:02.400537   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:02.400545   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:02.400556   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:02.450597   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:02.450636   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:02.467637   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:02.467671   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:02.540668   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:02.540690   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:02.540706   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:02.629706   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:02.629752   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:19:59.192799   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:01.691537   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:03.692613   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:03.524517   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:05.525418   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:04.300845   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:06.301250   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:05.168119   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:05.182954   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:05.183043   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:05.221938   75577 cri.go:89] found id: ""
	I0920 18:20:05.221971   75577 logs.go:276] 0 containers: []
	W0920 18:20:05.221980   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:05.221990   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:05.222051   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:05.258454   75577 cri.go:89] found id: ""
	I0920 18:20:05.258479   75577 logs.go:276] 0 containers: []
	W0920 18:20:05.258487   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:05.258492   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:05.258540   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:05.293091   75577 cri.go:89] found id: ""
	I0920 18:20:05.293125   75577 logs.go:276] 0 containers: []
	W0920 18:20:05.293138   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:05.293146   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:05.293196   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:05.332992   75577 cri.go:89] found id: ""
	I0920 18:20:05.333025   75577 logs.go:276] 0 containers: []
	W0920 18:20:05.333034   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:05.333040   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:05.333088   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:05.368743   75577 cri.go:89] found id: ""
	I0920 18:20:05.368778   75577 logs.go:276] 0 containers: []
	W0920 18:20:05.368790   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:05.368798   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:05.368859   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:05.404913   75577 cri.go:89] found id: ""
	I0920 18:20:05.404941   75577 logs.go:276] 0 containers: []
	W0920 18:20:05.404948   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:05.404954   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:05.405003   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:05.442111   75577 cri.go:89] found id: ""
	I0920 18:20:05.442143   75577 logs.go:276] 0 containers: []
	W0920 18:20:05.442154   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:05.442163   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:05.442228   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:05.478808   75577 cri.go:89] found id: ""
	I0920 18:20:05.478842   75577 logs.go:276] 0 containers: []
	W0920 18:20:05.478853   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:05.478865   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:05.478879   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:05.531653   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:05.531691   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:05.545181   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:05.545210   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:05.615009   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:05.615041   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:05.615059   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:05.690842   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:05.690871   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:06.193177   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:08.691596   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:08.034009   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:10.524419   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:08.301457   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:10.801788   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:08.230851   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:08.244539   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:08.244609   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:08.281123   75577 cri.go:89] found id: ""
	I0920 18:20:08.281155   75577 logs.go:276] 0 containers: []
	W0920 18:20:08.281167   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:08.281174   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:08.281226   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:08.319704   75577 cri.go:89] found id: ""
	I0920 18:20:08.319740   75577 logs.go:276] 0 containers: []
	W0920 18:20:08.319754   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:08.319763   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:08.319828   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:08.354589   75577 cri.go:89] found id: ""
	I0920 18:20:08.354619   75577 logs.go:276] 0 containers: []
	W0920 18:20:08.354631   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:08.354638   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:08.354703   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:08.391580   75577 cri.go:89] found id: ""
	I0920 18:20:08.391603   75577 logs.go:276] 0 containers: []
	W0920 18:20:08.391612   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:08.391617   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:08.391666   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:08.425596   75577 cri.go:89] found id: ""
	I0920 18:20:08.425622   75577 logs.go:276] 0 containers: []
	W0920 18:20:08.425630   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:08.425638   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:08.425704   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:08.458720   75577 cri.go:89] found id: ""
	I0920 18:20:08.458747   75577 logs.go:276] 0 containers: []
	W0920 18:20:08.458758   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:08.458764   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:08.458812   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:08.496104   75577 cri.go:89] found id: ""
	I0920 18:20:08.496137   75577 logs.go:276] 0 containers: []
	W0920 18:20:08.496148   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:08.496155   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:08.496210   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:08.530961   75577 cri.go:89] found id: ""
	I0920 18:20:08.530989   75577 logs.go:276] 0 containers: []
	W0920 18:20:08.531000   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:08.531010   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:08.531023   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:08.568512   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:08.568541   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:08.619716   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:08.619754   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:08.634358   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:08.634390   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:08.721465   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:08.721488   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:08.721501   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:11.303942   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:11.316686   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:11.316759   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:11.353137   75577 cri.go:89] found id: ""
	I0920 18:20:11.353161   75577 logs.go:276] 0 containers: []
	W0920 18:20:11.353169   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:11.353176   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:11.353229   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:11.388271   75577 cri.go:89] found id: ""
	I0920 18:20:11.388298   75577 logs.go:276] 0 containers: []
	W0920 18:20:11.388315   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:11.388322   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:11.388388   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:11.422673   75577 cri.go:89] found id: ""
	I0920 18:20:11.422700   75577 logs.go:276] 0 containers: []
	W0920 18:20:11.422708   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:11.422714   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:11.422768   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:11.458869   75577 cri.go:89] found id: ""
	I0920 18:20:11.458900   75577 logs.go:276] 0 containers: []
	W0920 18:20:11.458910   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:11.458917   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:11.459068   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:11.494128   75577 cri.go:89] found id: ""
	I0920 18:20:11.494161   75577 logs.go:276] 0 containers: []
	W0920 18:20:11.494172   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:11.494180   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:11.494246   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:11.529111   75577 cri.go:89] found id: ""
	I0920 18:20:11.529135   75577 logs.go:276] 0 containers: []
	W0920 18:20:11.529150   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:11.529157   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:11.529223   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:11.562287   75577 cri.go:89] found id: ""
	I0920 18:20:11.562314   75577 logs.go:276] 0 containers: []
	W0920 18:20:11.562323   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:11.562329   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:11.562381   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:11.600066   75577 cri.go:89] found id: ""
	I0920 18:20:11.600106   75577 logs.go:276] 0 containers: []
	W0920 18:20:11.600117   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:11.600128   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:11.600143   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:11.681628   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:11.681665   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:11.722173   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:11.722205   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:11.773132   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:11.773171   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:11.787183   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:11.787215   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:11.856304   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:10.695174   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:13.191743   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:13.025069   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:15.525020   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:12.803677   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:15.301431   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:14.356881   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:14.371658   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:14.371729   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:14.406983   75577 cri.go:89] found id: ""
	I0920 18:20:14.407009   75577 logs.go:276] 0 containers: []
	W0920 18:20:14.407017   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:14.407022   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:14.407075   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:14.481048   75577 cri.go:89] found id: ""
	I0920 18:20:14.481075   75577 logs.go:276] 0 containers: []
	W0920 18:20:14.481086   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:14.481094   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:14.481156   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:14.513683   75577 cri.go:89] found id: ""
	I0920 18:20:14.513711   75577 logs.go:276] 0 containers: []
	W0920 18:20:14.513719   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:14.513725   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:14.513797   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:14.553330   75577 cri.go:89] found id: ""
	I0920 18:20:14.553363   75577 logs.go:276] 0 containers: []
	W0920 18:20:14.553375   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:14.553381   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:14.553446   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:14.590803   75577 cri.go:89] found id: ""
	I0920 18:20:14.590837   75577 logs.go:276] 0 containers: []
	W0920 18:20:14.590848   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:14.590856   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:14.590927   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:14.625100   75577 cri.go:89] found id: ""
	I0920 18:20:14.625130   75577 logs.go:276] 0 containers: []
	W0920 18:20:14.625141   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:14.625151   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:14.625219   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:14.659313   75577 cri.go:89] found id: ""
	I0920 18:20:14.659342   75577 logs.go:276] 0 containers: []
	W0920 18:20:14.659351   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:14.659357   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:14.659418   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:14.694901   75577 cri.go:89] found id: ""
	I0920 18:20:14.694931   75577 logs.go:276] 0 containers: []
	W0920 18:20:14.694939   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:14.694951   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:14.694966   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:14.708406   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:14.708437   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:14.785174   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:14.785200   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:14.785214   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:14.873622   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:14.873666   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:14.919130   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:14.919166   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:17.468595   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:17.483397   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:17.483473   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:17.522415   75577 cri.go:89] found id: ""
	I0920 18:20:17.522445   75577 logs.go:276] 0 containers: []
	W0920 18:20:17.522455   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:17.522463   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:17.522523   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:17.558957   75577 cri.go:89] found id: ""
	I0920 18:20:17.558991   75577 logs.go:276] 0 containers: []
	W0920 18:20:17.559002   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:17.559010   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:17.559174   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:17.595183   75577 cri.go:89] found id: ""
	I0920 18:20:17.595217   75577 logs.go:276] 0 containers: []
	W0920 18:20:17.595229   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:17.595237   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:17.595302   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:17.631727   75577 cri.go:89] found id: ""
	I0920 18:20:17.631757   75577 logs.go:276] 0 containers: []
	W0920 18:20:17.631768   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:17.631775   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:17.631894   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:17.666374   75577 cri.go:89] found id: ""
	I0920 18:20:17.666409   75577 logs.go:276] 0 containers: []
	W0920 18:20:17.666420   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:17.666427   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:17.666488   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:17.702256   75577 cri.go:89] found id: ""
	I0920 18:20:17.702281   75577 logs.go:276] 0 containers: []
	W0920 18:20:17.702291   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:17.702299   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:17.702370   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:15.191971   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:17.192990   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:17.525552   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:20.025229   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:17.802045   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:20.302538   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:17.742133   75577 cri.go:89] found id: ""
	I0920 18:20:17.742161   75577 logs.go:276] 0 containers: []
	W0920 18:20:17.742172   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:17.742179   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:17.742249   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:17.781269   75577 cri.go:89] found id: ""
	I0920 18:20:17.781300   75577 logs.go:276] 0 containers: []
	W0920 18:20:17.781311   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:17.781321   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:17.781336   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:17.835886   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:17.835923   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:17.851365   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:17.851396   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:17.932682   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:17.932711   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:17.932726   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:18.013680   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:18.013731   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:20.555074   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:20.568007   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:20.568071   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:20.602250   75577 cri.go:89] found id: ""
	I0920 18:20:20.602278   75577 logs.go:276] 0 containers: []
	W0920 18:20:20.602287   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:20.602293   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:20.602353   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:20.636711   75577 cri.go:89] found id: ""
	I0920 18:20:20.636739   75577 logs.go:276] 0 containers: []
	W0920 18:20:20.636748   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:20.636753   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:20.636807   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:20.669529   75577 cri.go:89] found id: ""
	I0920 18:20:20.669570   75577 logs.go:276] 0 containers: []
	W0920 18:20:20.669583   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:20.669593   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:20.669651   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:20.710330   75577 cri.go:89] found id: ""
	I0920 18:20:20.710364   75577 logs.go:276] 0 containers: []
	W0920 18:20:20.710372   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:20.710378   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:20.710427   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:20.747664   75577 cri.go:89] found id: ""
	I0920 18:20:20.747690   75577 logs.go:276] 0 containers: []
	W0920 18:20:20.747699   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:20.747705   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:20.747760   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:20.785680   75577 cri.go:89] found id: ""
	I0920 18:20:20.785717   75577 logs.go:276] 0 containers: []
	W0920 18:20:20.785726   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:20.785732   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:20.785794   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:20.822570   75577 cri.go:89] found id: ""
	I0920 18:20:20.822599   75577 logs.go:276] 0 containers: []
	W0920 18:20:20.822608   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:20.822613   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:20.822659   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:20.856739   75577 cri.go:89] found id: ""
	I0920 18:20:20.856772   75577 logs.go:276] 0 containers: []
	W0920 18:20:20.856786   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:20.856806   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:20.856822   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:20.896606   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:20.896643   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:20.948275   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:20.948313   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:20.962576   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:20.962606   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:21.033511   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:21.033534   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:21.033547   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:19.691723   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:21.694099   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:22.524982   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:24.527065   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:27.026374   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:22.803300   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:25.302416   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:23.615731   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:23.628619   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:23.628698   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:23.664344   75577 cri.go:89] found id: ""
	I0920 18:20:23.664367   75577 logs.go:276] 0 containers: []
	W0920 18:20:23.664375   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:23.664381   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:23.664431   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:23.698791   75577 cri.go:89] found id: ""
	I0920 18:20:23.698823   75577 logs.go:276] 0 containers: []
	W0920 18:20:23.698832   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:23.698839   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:23.698889   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:23.732552   75577 cri.go:89] found id: ""
	I0920 18:20:23.732590   75577 logs.go:276] 0 containers: []
	W0920 18:20:23.732602   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:23.732610   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:23.732676   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:23.769463   75577 cri.go:89] found id: ""
	I0920 18:20:23.769490   75577 logs.go:276] 0 containers: []
	W0920 18:20:23.769501   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:23.769508   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:23.769567   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:23.807329   75577 cri.go:89] found id: ""
	I0920 18:20:23.807361   75577 logs.go:276] 0 containers: []
	W0920 18:20:23.807374   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:23.807382   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:23.807442   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:23.847889   75577 cri.go:89] found id: ""
	I0920 18:20:23.847913   75577 logs.go:276] 0 containers: []
	W0920 18:20:23.847920   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:23.847927   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:23.847985   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:23.883287   75577 cri.go:89] found id: ""
	I0920 18:20:23.883314   75577 logs.go:276] 0 containers: []
	W0920 18:20:23.883322   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:23.883335   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:23.883389   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:23.920006   75577 cri.go:89] found id: ""
	I0920 18:20:23.920034   75577 logs.go:276] 0 containers: []
	W0920 18:20:23.920045   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:23.920057   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:23.920070   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:23.995572   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:23.995608   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:24.035953   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:24.035983   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:24.085803   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:24.085860   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:24.100226   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:24.100255   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:24.173555   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:26.673726   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:26.687372   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:26.687449   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:26.726544   75577 cri.go:89] found id: ""
	I0920 18:20:26.726574   75577 logs.go:276] 0 containers: []
	W0920 18:20:26.726583   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:26.726590   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:26.726651   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:26.761542   75577 cri.go:89] found id: ""
	I0920 18:20:26.761571   75577 logs.go:276] 0 containers: []
	W0920 18:20:26.761580   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:26.761587   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:26.761639   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:26.803869   75577 cri.go:89] found id: ""
	I0920 18:20:26.803896   75577 logs.go:276] 0 containers: []
	W0920 18:20:26.803904   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:26.803910   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:26.803970   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:26.854835   75577 cri.go:89] found id: ""
	I0920 18:20:26.854876   75577 logs.go:276] 0 containers: []
	W0920 18:20:26.854888   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:26.854896   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:26.854958   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:26.896113   75577 cri.go:89] found id: ""
	I0920 18:20:26.896151   75577 logs.go:276] 0 containers: []
	W0920 18:20:26.896162   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:26.896170   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:26.896255   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:26.942828   75577 cri.go:89] found id: ""
	I0920 18:20:26.942853   75577 logs.go:276] 0 containers: []
	W0920 18:20:26.942863   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:26.942930   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:26.943007   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:26.977246   75577 cri.go:89] found id: ""
	I0920 18:20:26.977278   75577 logs.go:276] 0 containers: []
	W0920 18:20:26.977289   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:26.977296   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:26.977367   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:27.012408   75577 cri.go:89] found id: ""
	I0920 18:20:27.012440   75577 logs.go:276] 0 containers: []
	W0920 18:20:27.012451   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:27.012462   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:27.012477   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:27.063970   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:27.064017   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:27.078082   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:27.078119   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:27.148050   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:27.148079   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:27.148094   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:27.230836   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:27.230880   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:24.192842   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:26.196350   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:28.692682   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:29.525508   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:32.025275   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:27.802268   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:30.301519   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:29.771845   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:29.785479   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:29.785553   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:29.823101   75577 cri.go:89] found id: ""
	I0920 18:20:29.823132   75577 logs.go:276] 0 containers: []
	W0920 18:20:29.823143   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:29.823150   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:29.823228   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:29.858679   75577 cri.go:89] found id: ""
	I0920 18:20:29.858713   75577 logs.go:276] 0 containers: []
	W0920 18:20:29.858724   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:29.858732   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:29.858796   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:29.901044   75577 cri.go:89] found id: ""
	I0920 18:20:29.901073   75577 logs.go:276] 0 containers: []
	W0920 18:20:29.901083   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:29.901091   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:29.901160   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:29.937743   75577 cri.go:89] found id: ""
	I0920 18:20:29.937775   75577 logs.go:276] 0 containers: []
	W0920 18:20:29.937792   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:29.937800   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:29.937884   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:29.972813   75577 cri.go:89] found id: ""
	I0920 18:20:29.972838   75577 logs.go:276] 0 containers: []
	W0920 18:20:29.972846   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:29.972852   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:29.972901   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:30.008024   75577 cri.go:89] found id: ""
	I0920 18:20:30.008053   75577 logs.go:276] 0 containers: []
	W0920 18:20:30.008062   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:30.008068   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:30.008117   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:30.048544   75577 cri.go:89] found id: ""
	I0920 18:20:30.048577   75577 logs.go:276] 0 containers: []
	W0920 18:20:30.048585   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:30.048591   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:30.048643   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:30.084501   75577 cri.go:89] found id: ""
	I0920 18:20:30.084525   75577 logs.go:276] 0 containers: []
	W0920 18:20:30.084534   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:30.084545   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:30.084559   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:30.136234   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:30.136279   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:30.149557   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:30.149591   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:30.229765   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:30.229792   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:30.229806   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:30.307786   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:30.307825   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:31.191475   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:33.192864   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:34.025515   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:36.027276   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:32.801952   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:34.802033   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:37.301059   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:32.845220   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:32.859734   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:32.859810   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:32.896390   75577 cri.go:89] found id: ""
	I0920 18:20:32.896434   75577 logs.go:276] 0 containers: []
	W0920 18:20:32.896446   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:32.896464   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:32.896538   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:32.934548   75577 cri.go:89] found id: ""
	I0920 18:20:32.934572   75577 logs.go:276] 0 containers: []
	W0920 18:20:32.934580   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:32.934587   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:32.934640   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:32.982987   75577 cri.go:89] found id: ""
	I0920 18:20:32.983013   75577 logs.go:276] 0 containers: []
	W0920 18:20:32.983020   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:32.983026   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:32.983079   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:33.019059   75577 cri.go:89] found id: ""
	I0920 18:20:33.019085   75577 logs.go:276] 0 containers: []
	W0920 18:20:33.019093   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:33.019099   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:33.019148   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:33.057704   75577 cri.go:89] found id: ""
	I0920 18:20:33.057738   75577 logs.go:276] 0 containers: []
	W0920 18:20:33.057750   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:33.057759   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:33.057821   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:33.093702   75577 cri.go:89] found id: ""
	I0920 18:20:33.093732   75577 logs.go:276] 0 containers: []
	W0920 18:20:33.093743   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:33.093751   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:33.093809   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:33.128476   75577 cri.go:89] found id: ""
	I0920 18:20:33.128504   75577 logs.go:276] 0 containers: []
	W0920 18:20:33.128516   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:33.128523   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:33.128591   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:33.165513   75577 cri.go:89] found id: ""
	I0920 18:20:33.165542   75577 logs.go:276] 0 containers: []
	W0920 18:20:33.165550   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:33.165559   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:33.165569   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:33.219613   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:33.219650   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:33.232291   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:33.232317   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:33.304172   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:33.304197   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:33.304212   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:33.380057   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:33.380094   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:35.920842   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:35.935500   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:35.935593   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:35.974445   75577 cri.go:89] found id: ""
	I0920 18:20:35.974471   75577 logs.go:276] 0 containers: []
	W0920 18:20:35.974479   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:35.974485   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:35.974548   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:36.009487   75577 cri.go:89] found id: ""
	I0920 18:20:36.009518   75577 logs.go:276] 0 containers: []
	W0920 18:20:36.009530   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:36.009538   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:36.009706   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:36.049428   75577 cri.go:89] found id: ""
	I0920 18:20:36.049451   75577 logs.go:276] 0 containers: []
	W0920 18:20:36.049463   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:36.049469   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:36.049518   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:36.083973   75577 cri.go:89] found id: ""
	I0920 18:20:36.084011   75577 logs.go:276] 0 containers: []
	W0920 18:20:36.084022   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:36.084030   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:36.084119   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:36.117952   75577 cri.go:89] found id: ""
	I0920 18:20:36.117985   75577 logs.go:276] 0 containers: []
	W0920 18:20:36.117993   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:36.117998   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:36.118052   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:36.151222   75577 cri.go:89] found id: ""
	I0920 18:20:36.151256   75577 logs.go:276] 0 containers: []
	W0920 18:20:36.151265   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:36.151271   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:36.151319   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:36.184665   75577 cri.go:89] found id: ""
	I0920 18:20:36.184697   75577 logs.go:276] 0 containers: []
	W0920 18:20:36.184708   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:36.184716   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:36.184771   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:36.222974   75577 cri.go:89] found id: ""
	I0920 18:20:36.223003   75577 logs.go:276] 0 containers: []
	W0920 18:20:36.223012   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:36.223021   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:36.223033   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:36.300403   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:36.300424   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:36.300437   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:36.381220   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:36.381260   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:36.419010   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:36.419042   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:36.471758   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:36.471799   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:35.192949   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:37.693384   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:38.526219   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:41.024309   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:39.301511   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:41.801558   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:38.985359   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:38.998817   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:38.998898   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:39.036705   75577 cri.go:89] found id: ""
	I0920 18:20:39.036737   75577 logs.go:276] 0 containers: []
	W0920 18:20:39.036747   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:39.036755   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:39.036815   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:39.074254   75577 cri.go:89] found id: ""
	I0920 18:20:39.074285   75577 logs.go:276] 0 containers: []
	W0920 18:20:39.074294   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:39.074300   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:39.074367   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:39.108385   75577 cri.go:89] found id: ""
	I0920 18:20:39.108420   75577 logs.go:276] 0 containers: []
	W0920 18:20:39.108493   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:39.108506   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:39.108561   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:39.142283   75577 cri.go:89] found id: ""
	I0920 18:20:39.142313   75577 logs.go:276] 0 containers: []
	W0920 18:20:39.142325   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:39.142332   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:39.142396   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:39.176881   75577 cri.go:89] found id: ""
	I0920 18:20:39.176918   75577 logs.go:276] 0 containers: []
	W0920 18:20:39.176933   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:39.176941   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:39.177002   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:39.217734   75577 cri.go:89] found id: ""
	I0920 18:20:39.217759   75577 logs.go:276] 0 containers: []
	W0920 18:20:39.217767   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:39.217773   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:39.217852   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:39.250823   75577 cri.go:89] found id: ""
	I0920 18:20:39.250852   75577 logs.go:276] 0 containers: []
	W0920 18:20:39.250860   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:39.250868   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:39.250936   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:39.287494   75577 cri.go:89] found id: ""
	I0920 18:20:39.287519   75577 logs.go:276] 0 containers: []
	W0920 18:20:39.287528   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:39.287540   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:39.287552   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:39.343091   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:39.343126   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:39.359028   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:39.359063   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:39.425293   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:39.425321   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:39.425336   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:39.503303   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:39.503350   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
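Each gathering pass collects kubelet, dmesg, describe-nodes, CRI-O, and container-status output. A small sketch of the journalctl portion, assuming systemd units named kubelet and crio as in the commands logged above (a local approximation, not the test runner's own code):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Capture the last 400 lines of each unit's journal, as the passes above do.
	for _, unit := range []string{"kubelet", "crio"} {
		out, err := exec.Command("sudo", "journalctl", "-u", unit, "-n", "400").CombinedOutput()
		if err != nil {
			fmt.Printf("journalctl -u %s failed: %v\n", unit, err)
			continue
		}
		fmt.Printf("=== %s journal: %d bytes captured ===\n", unit, len(out))
	}
}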
	I0920 18:20:42.044784   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:42.057764   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:42.057876   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:42.091798   75577 cri.go:89] found id: ""
	I0920 18:20:42.091825   75577 logs.go:276] 0 containers: []
	W0920 18:20:42.091833   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:42.091839   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:42.091888   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:42.125325   75577 cri.go:89] found id: ""
	I0920 18:20:42.125352   75577 logs.go:276] 0 containers: []
	W0920 18:20:42.125362   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:42.125370   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:42.125423   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:42.158641   75577 cri.go:89] found id: ""
	I0920 18:20:42.158680   75577 logs.go:276] 0 containers: []
	W0920 18:20:42.158692   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:42.158701   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:42.158771   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:42.194065   75577 cri.go:89] found id: ""
	I0920 18:20:42.194090   75577 logs.go:276] 0 containers: []
	W0920 18:20:42.194101   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:42.194109   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:42.194174   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:42.227237   75577 cri.go:89] found id: ""
	I0920 18:20:42.227262   75577 logs.go:276] 0 containers: []
	W0920 18:20:42.227270   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:42.227275   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:42.227324   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:42.260205   75577 cri.go:89] found id: ""
	I0920 18:20:42.260240   75577 logs.go:276] 0 containers: []
	W0920 18:20:42.260251   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:42.260258   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:42.260325   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:42.300124   75577 cri.go:89] found id: ""
	I0920 18:20:42.300159   75577 logs.go:276] 0 containers: []
	W0920 18:20:42.300169   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:42.300175   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:42.300273   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:42.335638   75577 cri.go:89] found id: ""
	I0920 18:20:42.335674   75577 logs.go:276] 0 containers: []
	W0920 18:20:42.335685   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:42.335695   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:42.335710   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:42.386176   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:42.386214   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:42.400113   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:42.400147   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:42.479877   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:42.479900   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:42.479912   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:42.554654   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:42.554701   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:40.191251   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:42.192910   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:43.026185   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:45.525362   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:43.801927   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:45.802309   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
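The pod_ready lines above (PIDs 75086, 75264, 74753) poll whether the metrics-server pods in kube-system have reached the Ready condition. For illustration only, the same condition can be read with a kubectl jsonpath query; the pod name below is taken from this run's logs, and kubectl access to the cluster is assumed:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Read the Ready condition of a kube-system pod, as the polls above do.
	// Pod name comes from this run; adjust for another cluster.
	pod := "metrics-server-6867b74b74-dwnt6"
	out, err := exec.Command("kubectl", "get", "pod", "-n", "kube-system", pod,
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	fmt.Printf("pod %q Ready=%s\n", pod, strings.TrimSpace(string(out)))
}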
	I0920 18:20:45.093962   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:45.107747   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:45.107811   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:45.147091   75577 cri.go:89] found id: ""
	I0920 18:20:45.147120   75577 logs.go:276] 0 containers: []
	W0920 18:20:45.147132   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:45.147140   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:45.147205   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:45.185332   75577 cri.go:89] found id: ""
	I0920 18:20:45.185396   75577 logs.go:276] 0 containers: []
	W0920 18:20:45.185431   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:45.185446   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:45.185523   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:45.224496   75577 cri.go:89] found id: ""
	I0920 18:20:45.224525   75577 logs.go:276] 0 containers: []
	W0920 18:20:45.224535   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:45.224542   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:45.224612   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:45.260686   75577 cri.go:89] found id: ""
	I0920 18:20:45.260718   75577 logs.go:276] 0 containers: []
	W0920 18:20:45.260729   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:45.260737   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:45.260801   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:45.301231   75577 cri.go:89] found id: ""
	I0920 18:20:45.301259   75577 logs.go:276] 0 containers: []
	W0920 18:20:45.301269   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:45.301277   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:45.301343   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:45.346441   75577 cri.go:89] found id: ""
	I0920 18:20:45.346468   75577 logs.go:276] 0 containers: []
	W0920 18:20:45.346476   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:45.346482   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:45.346537   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:45.385042   75577 cri.go:89] found id: ""
	I0920 18:20:45.385071   75577 logs.go:276] 0 containers: []
	W0920 18:20:45.385082   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:45.385090   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:45.385149   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:45.422208   75577 cri.go:89] found id: ""
	I0920 18:20:45.422238   75577 logs.go:276] 0 containers: []
	W0920 18:20:45.422249   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:45.422260   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:45.422275   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:45.472692   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:45.472742   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:45.487981   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:45.488011   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:45.563012   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:45.563035   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:45.563051   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:45.640750   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:45.640786   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:44.691791   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:46.692359   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:48.693165   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:48.024959   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:50.025944   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:48.302699   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:50.801740   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:48.181093   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:48.194433   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:48.194516   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:48.234349   75577 cri.go:89] found id: ""
	I0920 18:20:48.234383   75577 logs.go:276] 0 containers: []
	W0920 18:20:48.234394   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:48.234403   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:48.234467   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:48.269424   75577 cri.go:89] found id: ""
	I0920 18:20:48.269450   75577 logs.go:276] 0 containers: []
	W0920 18:20:48.269457   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:48.269462   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:48.269514   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:48.303968   75577 cri.go:89] found id: ""
	I0920 18:20:48.303990   75577 logs.go:276] 0 containers: []
	W0920 18:20:48.303997   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:48.304002   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:48.304061   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:48.339149   75577 cri.go:89] found id: ""
	I0920 18:20:48.339180   75577 logs.go:276] 0 containers: []
	W0920 18:20:48.339191   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:48.339198   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:48.339259   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:48.374529   75577 cri.go:89] found id: ""
	I0920 18:20:48.374559   75577 logs.go:276] 0 containers: []
	W0920 18:20:48.374571   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:48.374578   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:48.374644   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:48.409170   75577 cri.go:89] found id: ""
	I0920 18:20:48.409203   75577 logs.go:276] 0 containers: []
	W0920 18:20:48.409211   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:48.409217   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:48.409292   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:48.443960   75577 cri.go:89] found id: ""
	I0920 18:20:48.443990   75577 logs.go:276] 0 containers: []
	W0920 18:20:48.444009   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:48.444017   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:48.444074   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:48.480934   75577 cri.go:89] found id: ""
	I0920 18:20:48.480965   75577 logs.go:276] 0 containers: []
	W0920 18:20:48.480978   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:48.480990   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:48.481006   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:48.533261   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:48.533295   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:48.546460   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:48.546488   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:48.615183   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:48.615206   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:48.615219   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:48.695299   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:48.695337   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:51.240343   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:51.255327   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:51.255411   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:51.294611   75577 cri.go:89] found id: ""
	I0920 18:20:51.294642   75577 logs.go:276] 0 containers: []
	W0920 18:20:51.294650   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:51.294656   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:51.294710   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:51.328569   75577 cri.go:89] found id: ""
	I0920 18:20:51.328598   75577 logs.go:276] 0 containers: []
	W0920 18:20:51.328613   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:51.328621   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:51.328675   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:51.364255   75577 cri.go:89] found id: ""
	I0920 18:20:51.364283   75577 logs.go:276] 0 containers: []
	W0920 18:20:51.364291   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:51.364297   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:51.364347   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:51.406178   75577 cri.go:89] found id: ""
	I0920 18:20:51.406204   75577 logs.go:276] 0 containers: []
	W0920 18:20:51.406215   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:51.406223   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:51.406284   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:51.449491   75577 cri.go:89] found id: ""
	I0920 18:20:51.449519   75577 logs.go:276] 0 containers: []
	W0920 18:20:51.449529   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:51.449536   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:51.449600   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:51.483243   75577 cri.go:89] found id: ""
	I0920 18:20:51.483269   75577 logs.go:276] 0 containers: []
	W0920 18:20:51.483278   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:51.483284   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:51.483334   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:51.517280   75577 cri.go:89] found id: ""
	I0920 18:20:51.517304   75577 logs.go:276] 0 containers: []
	W0920 18:20:51.517311   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:51.517316   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:51.517378   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:51.553517   75577 cri.go:89] found id: ""
	I0920 18:20:51.553545   75577 logs.go:276] 0 containers: []
	W0920 18:20:51.553556   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:51.553565   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:51.553576   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:51.607330   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:51.607369   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:51.620628   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:51.620662   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:51.687586   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:51.687618   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:51.687638   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:51.764882   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:51.764924   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:51.191732   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:53.193078   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:52.524529   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:54.526138   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:57.025365   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:52.802328   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:54.804955   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:57.301987   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:54.307276   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:54.321777   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:54.321860   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:54.355565   75577 cri.go:89] found id: ""
	I0920 18:20:54.355598   75577 logs.go:276] 0 containers: []
	W0920 18:20:54.355609   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:54.355618   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:54.355680   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:54.391184   75577 cri.go:89] found id: ""
	I0920 18:20:54.391216   75577 logs.go:276] 0 containers: []
	W0920 18:20:54.391227   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:54.391236   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:54.391301   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:54.425680   75577 cri.go:89] found id: ""
	I0920 18:20:54.425709   75577 logs.go:276] 0 containers: []
	W0920 18:20:54.425718   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:54.425725   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:54.425775   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:54.461514   75577 cri.go:89] found id: ""
	I0920 18:20:54.461541   75577 logs.go:276] 0 containers: []
	W0920 18:20:54.461552   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:54.461559   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:54.461625   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:54.495290   75577 cri.go:89] found id: ""
	I0920 18:20:54.495319   75577 logs.go:276] 0 containers: []
	W0920 18:20:54.495327   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:54.495333   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:54.495384   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:54.530014   75577 cri.go:89] found id: ""
	I0920 18:20:54.530038   75577 logs.go:276] 0 containers: []
	W0920 18:20:54.530046   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:54.530052   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:54.530103   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:54.562559   75577 cri.go:89] found id: ""
	I0920 18:20:54.562597   75577 logs.go:276] 0 containers: []
	W0920 18:20:54.562611   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:54.562621   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:54.562694   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:54.599894   75577 cri.go:89] found id: ""
	I0920 18:20:54.599920   75577 logs.go:276] 0 containers: []
	W0920 18:20:54.599928   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:54.599946   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:54.599982   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:54.636853   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:54.636880   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:54.687932   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:54.687978   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:54.701642   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:54.701673   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:54.769649   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:54.769678   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:54.769695   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
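Between gathering passes, the process logged as PID 75577 keeps checking for a kube-apiserver process with sudo pgrep -xnf kube-apiserver.*minikube.* every few seconds; every check in this run comes back empty, and the describe-nodes attempts fail with "connection to the server localhost:8443 was refused" because no API server is serving. A minimal sketch of such a wait loop, with an assumed 30-second budget rather than minikube's actual retry logic:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(30 * time.Second) // assumed budget for this sketch
	for time.Now().Before(deadline) {
		// pgrep exits non-zero when no matching process exists.
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			fmt.Println("kube-apiserver process found")
			return
		}
		time.Sleep(3 * time.Second)
	}
	fmt.Println("timed out waiting for kube-apiserver")
}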
	I0920 18:20:57.356015   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:57.368860   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:57.368923   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:57.409339   75577 cri.go:89] found id: ""
	I0920 18:20:57.409365   75577 logs.go:276] 0 containers: []
	W0920 18:20:57.409375   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:57.409382   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:57.409444   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:57.443055   75577 cri.go:89] found id: ""
	I0920 18:20:57.443085   75577 logs.go:276] 0 containers: []
	W0920 18:20:57.443095   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:57.443102   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:57.443158   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:57.481816   75577 cri.go:89] found id: ""
	I0920 18:20:57.481859   75577 logs.go:276] 0 containers: []
	W0920 18:20:57.481871   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:57.481879   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:57.481942   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:57.517327   75577 cri.go:89] found id: ""
	I0920 18:20:57.517361   75577 logs.go:276] 0 containers: []
	W0920 18:20:57.517372   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:57.517379   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:57.517442   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:57.555121   75577 cri.go:89] found id: ""
	I0920 18:20:57.555151   75577 logs.go:276] 0 containers: []
	W0920 18:20:57.555159   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:57.555164   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:57.555222   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:57.592632   75577 cri.go:89] found id: ""
	I0920 18:20:57.592666   75577 logs.go:276] 0 containers: []
	W0920 18:20:57.592679   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:57.592685   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:57.592734   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:57.627519   75577 cri.go:89] found id: ""
	I0920 18:20:57.627556   75577 logs.go:276] 0 containers: []
	W0920 18:20:57.627567   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:57.627574   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:57.627636   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:57.663817   75577 cri.go:89] found id: ""
	I0920 18:20:57.663844   75577 logs.go:276] 0 containers: []
	W0920 18:20:57.663853   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:57.663862   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:57.663877   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:57.704896   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:57.704924   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:55.692746   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:58.191973   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:59.524732   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:01.525346   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:59.801537   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:01.802088   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:57.757933   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:57.757972   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:57.772646   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:57.772673   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:57.850634   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:57.850665   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:57.850681   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:00.431504   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:00.445175   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:00.445241   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:00.482050   75577 cri.go:89] found id: ""
	I0920 18:21:00.482076   75577 logs.go:276] 0 containers: []
	W0920 18:21:00.482088   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:00.482095   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:00.482150   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:00.515793   75577 cri.go:89] found id: ""
	I0920 18:21:00.515822   75577 logs.go:276] 0 containers: []
	W0920 18:21:00.515833   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:00.515841   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:00.515903   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:00.550250   75577 cri.go:89] found id: ""
	I0920 18:21:00.550278   75577 logs.go:276] 0 containers: []
	W0920 18:21:00.550288   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:00.550296   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:00.550374   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:00.585977   75577 cri.go:89] found id: ""
	I0920 18:21:00.586011   75577 logs.go:276] 0 containers: []
	W0920 18:21:00.586034   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:00.586051   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:00.586118   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:00.621828   75577 cri.go:89] found id: ""
	I0920 18:21:00.621869   75577 logs.go:276] 0 containers: []
	W0920 18:21:00.621879   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:00.621886   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:00.621937   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:00.655492   75577 cri.go:89] found id: ""
	I0920 18:21:00.655528   75577 logs.go:276] 0 containers: []
	W0920 18:21:00.655540   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:00.655600   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:00.655673   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:00.709445   75577 cri.go:89] found id: ""
	I0920 18:21:00.709476   75577 logs.go:276] 0 containers: []
	W0920 18:21:00.709488   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:00.709496   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:00.709566   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:00.744098   75577 cri.go:89] found id: ""
	I0920 18:21:00.744123   75577 logs.go:276] 0 containers: []
	W0920 18:21:00.744134   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:00.744144   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:00.744164   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:00.792576   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:00.792612   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:00.807766   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:00.807794   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:00.883626   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:00.883649   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:00.883663   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:00.966233   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:00.966269   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:00.692181   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:03.192227   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:04.024582   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:06.029376   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:04.301078   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:06.301727   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:03.505502   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:03.520407   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:03.520534   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:03.557891   75577 cri.go:89] found id: ""
	I0920 18:21:03.557926   75577 logs.go:276] 0 containers: []
	W0920 18:21:03.557936   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:03.557944   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:03.558006   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:03.593869   75577 cri.go:89] found id: ""
	I0920 18:21:03.593898   75577 logs.go:276] 0 containers: []
	W0920 18:21:03.593908   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:03.593916   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:03.593982   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:03.629270   75577 cri.go:89] found id: ""
	I0920 18:21:03.629302   75577 logs.go:276] 0 containers: []
	W0920 18:21:03.629311   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:03.629317   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:03.629366   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:03.664658   75577 cri.go:89] found id: ""
	I0920 18:21:03.664688   75577 logs.go:276] 0 containers: []
	W0920 18:21:03.664699   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:03.664706   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:03.664769   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:03.700846   75577 cri.go:89] found id: ""
	I0920 18:21:03.700868   75577 logs.go:276] 0 containers: []
	W0920 18:21:03.700875   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:03.700882   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:03.700941   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:03.736311   75577 cri.go:89] found id: ""
	I0920 18:21:03.736345   75577 logs.go:276] 0 containers: []
	W0920 18:21:03.736355   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:03.736363   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:03.736421   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:03.770760   75577 cri.go:89] found id: ""
	I0920 18:21:03.770788   75577 logs.go:276] 0 containers: []
	W0920 18:21:03.770800   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:03.770808   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:03.770868   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:03.808724   75577 cri.go:89] found id: ""
	I0920 18:21:03.808749   75577 logs.go:276] 0 containers: []
	W0920 18:21:03.808756   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:03.808764   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:03.808775   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:03.851231   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:03.851265   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:03.899607   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:03.899641   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:03.915051   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:03.915079   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:03.984016   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:03.984038   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:03.984053   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:06.564776   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:06.578524   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:06.578604   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:06.614305   75577 cri.go:89] found id: ""
	I0920 18:21:06.614340   75577 logs.go:276] 0 containers: []
	W0920 18:21:06.614351   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:06.614365   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:06.614427   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:06.648929   75577 cri.go:89] found id: ""
	I0920 18:21:06.648958   75577 logs.go:276] 0 containers: []
	W0920 18:21:06.648968   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:06.648976   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:06.649036   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:06.689975   75577 cri.go:89] found id: ""
	I0920 18:21:06.690016   75577 logs.go:276] 0 containers: []
	W0920 18:21:06.690027   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:06.690034   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:06.690092   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:06.729721   75577 cri.go:89] found id: ""
	I0920 18:21:06.729747   75577 logs.go:276] 0 containers: []
	W0920 18:21:06.729755   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:06.729762   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:06.729808   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:06.766310   75577 cri.go:89] found id: ""
	I0920 18:21:06.766343   75577 logs.go:276] 0 containers: []
	W0920 18:21:06.766354   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:06.766361   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:06.766437   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:06.800801   75577 cri.go:89] found id: ""
	I0920 18:21:06.800829   75577 logs.go:276] 0 containers: []
	W0920 18:21:06.800839   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:06.800847   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:06.800909   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:06.836391   75577 cri.go:89] found id: ""
	I0920 18:21:06.836429   75577 logs.go:276] 0 containers: []
	W0920 18:21:06.836447   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:06.836455   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:06.836521   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:06.873014   75577 cri.go:89] found id: ""
	I0920 18:21:06.873041   75577 logs.go:276] 0 containers: []
	W0920 18:21:06.873049   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:06.873057   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:06.873070   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:06.953084   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:06.953110   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:06.953122   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:07.032398   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:07.032434   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:07.082196   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:07.082224   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:07.159604   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:07.159640   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:05.691350   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:07.692127   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:08.526757   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:10.526990   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:08.801736   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:11.301001   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:09.675022   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:09.687924   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:09.687999   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:09.722963   75577 cri.go:89] found id: ""
	I0920 18:21:09.722994   75577 logs.go:276] 0 containers: []
	W0920 18:21:09.723005   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:09.723017   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:09.723084   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:09.756433   75577 cri.go:89] found id: ""
	I0920 18:21:09.756458   75577 logs.go:276] 0 containers: []
	W0920 18:21:09.756472   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:09.756486   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:09.756549   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:09.792200   75577 cri.go:89] found id: ""
	I0920 18:21:09.792232   75577 logs.go:276] 0 containers: []
	W0920 18:21:09.792248   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:09.792256   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:09.792323   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:09.829058   75577 cri.go:89] found id: ""
	I0920 18:21:09.829081   75577 logs.go:276] 0 containers: []
	W0920 18:21:09.829098   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:09.829104   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:09.829150   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:09.863225   75577 cri.go:89] found id: ""
	I0920 18:21:09.863252   75577 logs.go:276] 0 containers: []
	W0920 18:21:09.863259   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:09.863265   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:09.863312   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:09.897682   75577 cri.go:89] found id: ""
	I0920 18:21:09.897708   75577 logs.go:276] 0 containers: []
	W0920 18:21:09.897725   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:09.897731   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:09.897789   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:09.936799   75577 cri.go:89] found id: ""
	I0920 18:21:09.936826   75577 logs.go:276] 0 containers: []
	W0920 18:21:09.936836   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:09.936843   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:09.936904   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:09.970502   75577 cri.go:89] found id: ""
	I0920 18:21:09.970539   75577 logs.go:276] 0 containers: []
	W0920 18:21:09.970550   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:09.970560   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:09.970574   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:09.983676   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:09.983703   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:10.058844   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:10.058865   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:10.058874   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:10.136998   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:10.137040   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:10.178902   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:10.178933   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:12.729619   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:09.692361   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:12.192257   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:13.025403   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:15.025667   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:13.302087   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:15.802019   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:12.743816   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:12.743876   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:12.779407   75577 cri.go:89] found id: ""
	I0920 18:21:12.779431   75577 logs.go:276] 0 containers: []
	W0920 18:21:12.779439   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:12.779446   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:12.779503   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:12.820700   75577 cri.go:89] found id: ""
	I0920 18:21:12.820730   75577 logs.go:276] 0 containers: []
	W0920 18:21:12.820740   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:12.820748   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:12.820812   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:12.862179   75577 cri.go:89] found id: ""
	I0920 18:21:12.862210   75577 logs.go:276] 0 containers: []
	W0920 18:21:12.862221   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:12.862228   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:12.862284   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:12.908976   75577 cri.go:89] found id: ""
	I0920 18:21:12.908999   75577 logs.go:276] 0 containers: []
	W0920 18:21:12.909007   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:12.909013   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:12.909072   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:12.953653   75577 cri.go:89] found id: ""
	I0920 18:21:12.953688   75577 logs.go:276] 0 containers: []
	W0920 18:21:12.953696   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:12.953702   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:12.953757   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:12.998519   75577 cri.go:89] found id: ""
	I0920 18:21:12.998546   75577 logs.go:276] 0 containers: []
	W0920 18:21:12.998557   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:12.998564   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:12.998640   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:13.035518   75577 cri.go:89] found id: ""
	I0920 18:21:13.035541   75577 logs.go:276] 0 containers: []
	W0920 18:21:13.035549   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:13.035555   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:13.035609   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:13.073551   75577 cri.go:89] found id: ""
	I0920 18:21:13.073591   75577 logs.go:276] 0 containers: []
	W0920 18:21:13.073605   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:13.073617   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:13.073638   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:13.125118   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:13.125155   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:13.139384   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:13.139415   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:13.204980   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:13.205006   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:13.205021   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:13.289400   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:13.289446   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:15.829405   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:15.842291   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:15.842354   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:15.875886   75577 cri.go:89] found id: ""
	I0920 18:21:15.875921   75577 logs.go:276] 0 containers: []
	W0920 18:21:15.875930   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:15.875937   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:15.875990   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:15.911706   75577 cri.go:89] found id: ""
	I0920 18:21:15.911746   75577 logs.go:276] 0 containers: []
	W0920 18:21:15.911760   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:15.911768   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:15.911831   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:15.950117   75577 cri.go:89] found id: ""
	I0920 18:21:15.950150   75577 logs.go:276] 0 containers: []
	W0920 18:21:15.950161   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:15.950170   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:15.950243   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:15.986029   75577 cri.go:89] found id: ""
	I0920 18:21:15.986066   75577 logs.go:276] 0 containers: []
	W0920 18:21:15.986083   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:15.986091   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:15.986159   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:16.020307   75577 cri.go:89] found id: ""
	I0920 18:21:16.020335   75577 logs.go:276] 0 containers: []
	W0920 18:21:16.020346   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:16.020354   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:16.020412   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:16.054744   75577 cri.go:89] found id: ""
	I0920 18:21:16.054769   75577 logs.go:276] 0 containers: []
	W0920 18:21:16.054777   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:16.054782   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:16.054831   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:16.091756   75577 cri.go:89] found id: ""
	I0920 18:21:16.091789   75577 logs.go:276] 0 containers: []
	W0920 18:21:16.091800   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:16.091807   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:16.091868   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:16.127818   75577 cri.go:89] found id: ""
	I0920 18:21:16.127843   75577 logs.go:276] 0 containers: []
	W0920 18:21:16.127851   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:16.127861   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:16.127877   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:16.200114   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:16.200138   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:16.200149   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:16.279473   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:16.279508   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:16.319139   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:16.319171   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:16.370721   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:16.370769   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:14.192782   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:16.192866   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:18.691599   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:17.524026   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:19.524385   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:21.525244   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:18.301696   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:20.301993   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:18.884966   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:18.900270   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:18.900355   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:18.938541   75577 cri.go:89] found id: ""
	I0920 18:21:18.938582   75577 logs.go:276] 0 containers: []
	W0920 18:21:18.938594   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:18.938602   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:18.938673   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:18.972343   75577 cri.go:89] found id: ""
	I0920 18:21:18.972380   75577 logs.go:276] 0 containers: []
	W0920 18:21:18.972391   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:18.972400   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:18.972458   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:19.011996   75577 cri.go:89] found id: ""
	I0920 18:21:19.012037   75577 logs.go:276] 0 containers: []
	W0920 18:21:19.012048   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:19.012054   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:19.012105   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:19.053789   75577 cri.go:89] found id: ""
	I0920 18:21:19.053818   75577 logs.go:276] 0 containers: []
	W0920 18:21:19.053839   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:19.053849   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:19.053907   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:19.094830   75577 cri.go:89] found id: ""
	I0920 18:21:19.094862   75577 logs.go:276] 0 containers: []
	W0920 18:21:19.094872   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:19.094881   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:19.094941   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:19.133894   75577 cri.go:89] found id: ""
	I0920 18:21:19.133923   75577 logs.go:276] 0 containers: []
	W0920 18:21:19.133934   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:19.133943   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:19.134001   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:19.171640   75577 cri.go:89] found id: ""
	I0920 18:21:19.171662   75577 logs.go:276] 0 containers: []
	W0920 18:21:19.171670   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:19.171676   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:19.171730   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:19.207821   75577 cri.go:89] found id: ""
	I0920 18:21:19.207852   75577 logs.go:276] 0 containers: []
	W0920 18:21:19.207861   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:19.207869   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:19.207880   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:19.292486   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:19.292530   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:19.332664   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:19.332695   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:19.383793   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:19.383828   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:19.399234   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:19.399269   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:19.470636   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:21.971772   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:21.985558   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:21.985630   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:22.018745   75577 cri.go:89] found id: ""
	I0920 18:21:22.018775   75577 logs.go:276] 0 containers: []
	W0920 18:21:22.018785   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:22.018795   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:22.018854   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:22.053586   75577 cri.go:89] found id: ""
	I0920 18:21:22.053617   75577 logs.go:276] 0 containers: []
	W0920 18:21:22.053627   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:22.053635   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:22.053697   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:22.088290   75577 cri.go:89] found id: ""
	I0920 18:21:22.088320   75577 logs.go:276] 0 containers: []
	W0920 18:21:22.088337   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:22.088344   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:22.088394   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:22.121026   75577 cri.go:89] found id: ""
	I0920 18:21:22.121050   75577 logs.go:276] 0 containers: []
	W0920 18:21:22.121057   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:22.121063   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:22.121117   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:22.153669   75577 cri.go:89] found id: ""
	I0920 18:21:22.153702   75577 logs.go:276] 0 containers: []
	W0920 18:21:22.153715   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:22.153725   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:22.153793   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:22.192179   75577 cri.go:89] found id: ""
	I0920 18:21:22.192208   75577 logs.go:276] 0 containers: []
	W0920 18:21:22.192218   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:22.192226   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:22.192294   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:22.226115   75577 cri.go:89] found id: ""
	I0920 18:21:22.226142   75577 logs.go:276] 0 containers: []
	W0920 18:21:22.226153   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:22.226161   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:22.226231   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:22.259032   75577 cri.go:89] found id: ""
	I0920 18:21:22.259059   75577 logs.go:276] 0 containers: []
	W0920 18:21:22.259070   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:22.259080   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:22.259094   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:22.308989   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:22.309020   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:22.322084   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:22.322113   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:22.397567   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:22.397587   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:22.397598   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:22.476551   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:22.476596   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:20.691837   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:23.191939   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:24.024785   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:26.024824   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:22.801102   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:24.801697   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:26.801789   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:25.017659   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:25.032637   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:25.032698   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:25.067671   75577 cri.go:89] found id: ""
	I0920 18:21:25.067701   75577 logs.go:276] 0 containers: []
	W0920 18:21:25.067711   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:25.067718   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:25.067774   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:25.109354   75577 cri.go:89] found id: ""
	I0920 18:21:25.109385   75577 logs.go:276] 0 containers: []
	W0920 18:21:25.109396   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:25.109403   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:25.109463   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:25.142888   75577 cri.go:89] found id: ""
	I0920 18:21:25.142924   75577 logs.go:276] 0 containers: []
	W0920 18:21:25.142935   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:25.142942   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:25.143004   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:25.179005   75577 cri.go:89] found id: ""
	I0920 18:21:25.179032   75577 logs.go:276] 0 containers: []
	W0920 18:21:25.179043   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:25.179050   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:25.179114   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:25.211639   75577 cri.go:89] found id: ""
	I0920 18:21:25.211662   75577 logs.go:276] 0 containers: []
	W0920 18:21:25.211669   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:25.211676   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:25.211729   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:25.246678   75577 cri.go:89] found id: ""
	I0920 18:21:25.246709   75577 logs.go:276] 0 containers: []
	W0920 18:21:25.246718   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:25.246724   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:25.246780   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:25.284187   75577 cri.go:89] found id: ""
	I0920 18:21:25.284216   75577 logs.go:276] 0 containers: []
	W0920 18:21:25.284240   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:25.284247   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:25.284309   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:25.318882   75577 cri.go:89] found id: ""
	I0920 18:21:25.318917   75577 logs.go:276] 0 containers: []
	W0920 18:21:25.318929   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:25.318940   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:25.318954   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:25.357017   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:25.357046   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:25.407584   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:25.407627   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:25.421405   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:25.421437   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:25.492849   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:25.492870   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:25.492881   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:25.192551   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:27.691730   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:28.025042   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:30.025806   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:29.301637   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:31.302080   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:28.076101   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:28.088741   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:28.088805   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:28.124889   75577 cri.go:89] found id: ""
	I0920 18:21:28.124925   75577 logs.go:276] 0 containers: []
	W0920 18:21:28.124944   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:28.124952   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:28.125013   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:28.158590   75577 cri.go:89] found id: ""
	I0920 18:21:28.158615   75577 logs.go:276] 0 containers: []
	W0920 18:21:28.158623   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:28.158629   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:28.158677   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:28.195636   75577 cri.go:89] found id: ""
	I0920 18:21:28.195670   75577 logs.go:276] 0 containers: []
	W0920 18:21:28.195683   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:28.195692   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:28.195762   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:28.228714   75577 cri.go:89] found id: ""
	I0920 18:21:28.228759   75577 logs.go:276] 0 containers: []
	W0920 18:21:28.228771   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:28.228780   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:28.228840   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:28.261488   75577 cri.go:89] found id: ""
	I0920 18:21:28.261519   75577 logs.go:276] 0 containers: []
	W0920 18:21:28.261531   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:28.261538   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:28.261606   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:28.293534   75577 cri.go:89] found id: ""
	I0920 18:21:28.293560   75577 logs.go:276] 0 containers: []
	W0920 18:21:28.293568   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:28.293573   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:28.293629   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:28.330095   75577 cri.go:89] found id: ""
	I0920 18:21:28.330120   75577 logs.go:276] 0 containers: []
	W0920 18:21:28.330128   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:28.330134   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:28.330191   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:28.365643   75577 cri.go:89] found id: ""
	I0920 18:21:28.365675   75577 logs.go:276] 0 containers: []
	W0920 18:21:28.365684   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:28.365693   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:28.365712   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:28.379982   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:28.380007   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:28.456355   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:28.456386   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:28.456402   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:28.538725   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:28.538759   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:28.576067   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:28.576104   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:31.129705   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:31.145814   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:31.145919   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:31.181469   75577 cri.go:89] found id: ""
	I0920 18:21:31.181497   75577 logs.go:276] 0 containers: []
	W0920 18:21:31.181507   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:31.181514   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:31.181562   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:31.216391   75577 cri.go:89] found id: ""
	I0920 18:21:31.216416   75577 logs.go:276] 0 containers: []
	W0920 18:21:31.216426   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:31.216433   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:31.216492   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:31.250236   75577 cri.go:89] found id: ""
	I0920 18:21:31.250266   75577 logs.go:276] 0 containers: []
	W0920 18:21:31.250277   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:31.250285   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:31.250357   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:31.283406   75577 cri.go:89] found id: ""
	I0920 18:21:31.283430   75577 logs.go:276] 0 containers: []
	W0920 18:21:31.283439   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:31.283446   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:31.283519   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:31.317276   75577 cri.go:89] found id: ""
	I0920 18:21:31.317308   75577 logs.go:276] 0 containers: []
	W0920 18:21:31.317327   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:31.317335   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:31.317400   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:31.351563   75577 cri.go:89] found id: ""
	I0920 18:21:31.351588   75577 logs.go:276] 0 containers: []
	W0920 18:21:31.351595   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:31.351600   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:31.351652   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:31.385932   75577 cri.go:89] found id: ""
	I0920 18:21:31.385972   75577 logs.go:276] 0 containers: []
	W0920 18:21:31.385984   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:31.385992   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:31.386056   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:31.422679   75577 cri.go:89] found id: ""
	I0920 18:21:31.422718   75577 logs.go:276] 0 containers: []
	W0920 18:21:31.422729   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:31.422740   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:31.422755   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:31.475827   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:31.475867   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:31.489324   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:31.489354   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:31.560357   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:31.560377   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:31.560390   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:31.642137   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:31.642171   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:30.191820   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:32.692178   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:32.524953   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:35.025736   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:33.801586   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:36.301145   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:34.179063   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:34.193077   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:34.193157   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:34.227451   75577 cri.go:89] found id: ""
	I0920 18:21:34.227484   75577 logs.go:276] 0 containers: []
	W0920 18:21:34.227495   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:34.227503   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:34.227571   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:34.263740   75577 cri.go:89] found id: ""
	I0920 18:21:34.263766   75577 logs.go:276] 0 containers: []
	W0920 18:21:34.263774   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:34.263779   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:34.263838   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:34.298094   75577 cri.go:89] found id: ""
	I0920 18:21:34.298121   75577 logs.go:276] 0 containers: []
	W0920 18:21:34.298132   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:34.298139   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:34.298202   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:34.334253   75577 cri.go:89] found id: ""
	I0920 18:21:34.334281   75577 logs.go:276] 0 containers: []
	W0920 18:21:34.334290   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:34.334297   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:34.334361   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:34.370656   75577 cri.go:89] found id: ""
	I0920 18:21:34.370688   75577 logs.go:276] 0 containers: []
	W0920 18:21:34.370699   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:34.370706   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:34.370759   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:34.405658   75577 cri.go:89] found id: ""
	I0920 18:21:34.405681   75577 logs.go:276] 0 containers: []
	W0920 18:21:34.405689   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:34.405695   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:34.405748   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:34.442284   75577 cri.go:89] found id: ""
	I0920 18:21:34.442311   75577 logs.go:276] 0 containers: []
	W0920 18:21:34.442319   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:34.442325   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:34.442394   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:34.477895   75577 cri.go:89] found id: ""
	I0920 18:21:34.477927   75577 logs.go:276] 0 containers: []
	W0920 18:21:34.477939   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:34.477950   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:34.477964   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:34.531225   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:34.531256   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:34.544325   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:34.544359   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:34.615914   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:34.615933   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:34.615948   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:34.692119   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:34.692160   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:37.228930   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:37.242070   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:37.242151   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:37.276489   75577 cri.go:89] found id: ""
	I0920 18:21:37.276531   75577 logs.go:276] 0 containers: []
	W0920 18:21:37.276543   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:37.276551   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:37.276617   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:37.311106   75577 cri.go:89] found id: ""
	I0920 18:21:37.311140   75577 logs.go:276] 0 containers: []
	W0920 18:21:37.311148   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:37.311156   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:37.311217   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:37.343147   75577 cri.go:89] found id: ""
	I0920 18:21:37.343173   75577 logs.go:276] 0 containers: []
	W0920 18:21:37.343181   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:37.343188   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:37.343237   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:37.378858   75577 cri.go:89] found id: ""
	I0920 18:21:37.378934   75577 logs.go:276] 0 containers: []
	W0920 18:21:37.378947   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:37.378955   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:37.379005   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:37.412321   75577 cri.go:89] found id: ""
	I0920 18:21:37.412355   75577 logs.go:276] 0 containers: []
	W0920 18:21:37.412366   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:37.412374   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:37.412443   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:37.447478   75577 cri.go:89] found id: ""
	I0920 18:21:37.447510   75577 logs.go:276] 0 containers: []
	W0920 18:21:37.447520   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:37.447526   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:37.447580   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:37.481172   75577 cri.go:89] found id: ""
	I0920 18:21:37.481201   75577 logs.go:276] 0 containers: []
	W0920 18:21:37.481209   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:37.481216   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:37.481269   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:37.518001   75577 cri.go:89] found id: ""
	I0920 18:21:37.518032   75577 logs.go:276] 0 containers: []
	W0920 18:21:37.518041   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:37.518050   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:37.518062   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:37.567675   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:37.567707   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:37.582279   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:37.582308   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:37.654514   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:37.654546   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:37.654563   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:37.735895   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:37.735929   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:35.192406   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:37.692460   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:37.524744   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:39.526068   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:42.024627   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:38.301432   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:40.302489   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:40.276416   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:40.291651   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:40.291713   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:40.328309   75577 cri.go:89] found id: ""
	I0920 18:21:40.328346   75577 logs.go:276] 0 containers: []
	W0920 18:21:40.328359   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:40.328378   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:40.328441   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:40.368126   75577 cri.go:89] found id: ""
	I0920 18:21:40.368154   75577 logs.go:276] 0 containers: []
	W0920 18:21:40.368162   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:40.368167   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:40.368217   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:40.408324   75577 cri.go:89] found id: ""
	I0920 18:21:40.408359   75577 logs.go:276] 0 containers: []
	W0920 18:21:40.408371   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:40.408380   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:40.408448   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:40.453867   75577 cri.go:89] found id: ""
	I0920 18:21:40.453892   75577 logs.go:276] 0 containers: []
	W0920 18:21:40.453900   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:40.453906   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:40.453969   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:40.500625   75577 cri.go:89] found id: ""
	I0920 18:21:40.500660   75577 logs.go:276] 0 containers: []
	W0920 18:21:40.500670   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:40.500678   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:40.500750   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:40.533998   75577 cri.go:89] found id: ""
	I0920 18:21:40.534028   75577 logs.go:276] 0 containers: []
	W0920 18:21:40.534039   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:40.534048   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:40.534111   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:40.566279   75577 cri.go:89] found id: ""
	I0920 18:21:40.566308   75577 logs.go:276] 0 containers: []
	W0920 18:21:40.566319   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:40.566326   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:40.566392   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:40.599154   75577 cri.go:89] found id: ""
	I0920 18:21:40.599179   75577 logs.go:276] 0 containers: []
	W0920 18:21:40.599186   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:40.599194   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:40.599210   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:40.668568   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:40.668596   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:40.668608   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:40.747895   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:40.747969   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:40.789568   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:40.789604   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:40.839816   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:40.839852   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:39.693537   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:42.192617   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:44.025059   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:46.524700   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:42.801135   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:44.802083   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:46.802537   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:43.354847   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:43.369077   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:43.369167   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:43.404667   75577 cri.go:89] found id: ""
	I0920 18:21:43.404698   75577 logs.go:276] 0 containers: []
	W0920 18:21:43.404710   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:43.404718   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:43.404778   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:43.440968   75577 cri.go:89] found id: ""
	I0920 18:21:43.441001   75577 logs.go:276] 0 containers: []
	W0920 18:21:43.441012   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:43.441021   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:43.441088   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:43.474481   75577 cri.go:89] found id: ""
	I0920 18:21:43.474511   75577 logs.go:276] 0 containers: []
	W0920 18:21:43.474520   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:43.474529   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:43.474592   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:43.508196   75577 cri.go:89] found id: ""
	I0920 18:21:43.508228   75577 logs.go:276] 0 containers: []
	W0920 18:21:43.508241   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:43.508248   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:43.508307   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:43.543679   75577 cri.go:89] found id: ""
	I0920 18:21:43.543710   75577 logs.go:276] 0 containers: []
	W0920 18:21:43.543721   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:43.543728   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:43.543788   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:43.577115   75577 cri.go:89] found id: ""
	I0920 18:21:43.577138   75577 logs.go:276] 0 containers: []
	W0920 18:21:43.577145   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:43.577152   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:43.577198   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:43.611552   75577 cri.go:89] found id: ""
	I0920 18:21:43.611588   75577 logs.go:276] 0 containers: []
	W0920 18:21:43.611602   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:43.611616   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:43.611685   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:43.647913   75577 cri.go:89] found id: ""
	I0920 18:21:43.647946   75577 logs.go:276] 0 containers: []
	W0920 18:21:43.647957   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:43.647969   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:43.647983   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:43.661014   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:43.661043   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:43.736584   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:43.736607   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:43.736620   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:43.814340   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:43.814380   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:43.855968   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:43.855996   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:46.410577   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:46.423733   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:46.423800   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:46.456819   75577 cri.go:89] found id: ""
	I0920 18:21:46.456847   75577 logs.go:276] 0 containers: []
	W0920 18:21:46.456855   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:46.456861   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:46.456927   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:46.493259   75577 cri.go:89] found id: ""
	I0920 18:21:46.493291   75577 logs.go:276] 0 containers: []
	W0920 18:21:46.493301   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:46.493307   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:46.493372   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:46.531200   75577 cri.go:89] found id: ""
	I0920 18:21:46.531233   75577 logs.go:276] 0 containers: []
	W0920 18:21:46.531241   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:46.531247   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:46.531294   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:46.567508   75577 cri.go:89] found id: ""
	I0920 18:21:46.567530   75577 logs.go:276] 0 containers: []
	W0920 18:21:46.567538   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:46.567544   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:46.567601   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:46.600257   75577 cri.go:89] found id: ""
	I0920 18:21:46.600290   75577 logs.go:276] 0 containers: []
	W0920 18:21:46.600303   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:46.600311   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:46.600375   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:46.635568   75577 cri.go:89] found id: ""
	I0920 18:21:46.635598   75577 logs.go:276] 0 containers: []
	W0920 18:21:46.635606   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:46.635612   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:46.635668   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:46.668257   75577 cri.go:89] found id: ""
	I0920 18:21:46.668304   75577 logs.go:276] 0 containers: []
	W0920 18:21:46.668316   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:46.668324   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:46.668392   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:46.703631   75577 cri.go:89] found id: ""
	I0920 18:21:46.703654   75577 logs.go:276] 0 containers: []
	W0920 18:21:46.703662   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:46.703671   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:46.703680   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:46.780232   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:46.780295   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:46.816623   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:46.816647   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:46.868911   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:46.868954   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:46.882716   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:46.882743   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:46.965166   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:44.692655   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:47.192969   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:48.524828   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:50.525045   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:49.301263   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:51.802343   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:49.466238   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:49.480167   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:49.480228   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:49.513507   75577 cri.go:89] found id: ""
	I0920 18:21:49.513537   75577 logs.go:276] 0 containers: []
	W0920 18:21:49.513549   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:49.513558   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:49.513620   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:49.548708   75577 cri.go:89] found id: ""
	I0920 18:21:49.548739   75577 logs.go:276] 0 containers: []
	W0920 18:21:49.548750   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:49.548758   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:49.548810   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:49.583586   75577 cri.go:89] found id: ""
	I0920 18:21:49.583614   75577 logs.go:276] 0 containers: []
	W0920 18:21:49.583625   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:49.583633   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:49.583695   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:49.617574   75577 cri.go:89] found id: ""
	I0920 18:21:49.617605   75577 logs.go:276] 0 containers: []
	W0920 18:21:49.617616   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:49.617623   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:49.617684   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:49.656517   75577 cri.go:89] found id: ""
	I0920 18:21:49.656552   75577 logs.go:276] 0 containers: []
	W0920 18:21:49.656563   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:49.656571   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:49.656634   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:49.692848   75577 cri.go:89] found id: ""
	I0920 18:21:49.692870   75577 logs.go:276] 0 containers: []
	W0920 18:21:49.692877   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:49.692883   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:49.692950   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:49.728593   75577 cri.go:89] found id: ""
	I0920 18:21:49.728620   75577 logs.go:276] 0 containers: []
	W0920 18:21:49.728630   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:49.728643   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:49.728705   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:49.764187   75577 cri.go:89] found id: ""
	I0920 18:21:49.764218   75577 logs.go:276] 0 containers: []
	W0920 18:21:49.764229   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:49.764242   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:49.764260   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:49.837741   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:49.837764   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:49.837777   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:49.914941   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:49.914978   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:49.957609   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:49.957639   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:50.012075   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:50.012115   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:52.527722   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:52.542125   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:52.542206   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:52.579884   75577 cri.go:89] found id: ""
	I0920 18:21:52.579910   75577 logs.go:276] 0 containers: []
	W0920 18:21:52.579919   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:52.579924   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:52.579982   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:52.619138   75577 cri.go:89] found id: ""
	I0920 18:21:52.619167   75577 logs.go:276] 0 containers: []
	W0920 18:21:52.619180   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:52.619188   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:52.619246   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:52.654460   75577 cri.go:89] found id: ""
	I0920 18:21:52.654498   75577 logs.go:276] 0 containers: []
	W0920 18:21:52.654508   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:52.654515   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:52.654578   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:52.708015   75577 cri.go:89] found id: ""
	I0920 18:21:52.708042   75577 logs.go:276] 0 containers: []
	W0920 18:21:52.708051   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:52.708057   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:52.708127   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:49.692309   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:52.192466   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:53.025700   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:55.026088   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:53.802620   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:55.802856   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:52.743917   75577 cri.go:89] found id: ""
	I0920 18:21:52.743952   75577 logs.go:276] 0 containers: []
	W0920 18:21:52.743964   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:52.743972   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:52.744025   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:52.783458   75577 cri.go:89] found id: ""
	I0920 18:21:52.783481   75577 logs.go:276] 0 containers: []
	W0920 18:21:52.783488   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:52.783495   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:52.783552   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:52.817716   75577 cri.go:89] found id: ""
	I0920 18:21:52.817749   75577 logs.go:276] 0 containers: []
	W0920 18:21:52.817762   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:52.817771   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:52.817882   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:52.857141   75577 cri.go:89] found id: ""
	I0920 18:21:52.857169   75577 logs.go:276] 0 containers: []
	W0920 18:21:52.857180   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:52.857190   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:52.857204   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:52.910555   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:52.910597   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:52.923843   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:52.923873   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:52.994263   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:52.994296   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:52.994313   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:53.079782   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:53.079829   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:55.619418   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:55.633854   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:55.633922   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:55.669778   75577 cri.go:89] found id: ""
	I0920 18:21:55.669813   75577 logs.go:276] 0 containers: []
	W0920 18:21:55.669824   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:55.669853   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:55.669920   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:55.705687   75577 cri.go:89] found id: ""
	I0920 18:21:55.705724   75577 logs.go:276] 0 containers: []
	W0920 18:21:55.705736   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:55.705746   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:55.705811   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:55.739249   75577 cri.go:89] found id: ""
	I0920 18:21:55.739288   75577 logs.go:276] 0 containers: []
	W0920 18:21:55.739296   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:55.739302   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:55.739438   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:55.775077   75577 cri.go:89] found id: ""
	I0920 18:21:55.775109   75577 logs.go:276] 0 containers: []
	W0920 18:21:55.775121   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:55.775130   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:55.775181   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:55.810291   75577 cri.go:89] found id: ""
	I0920 18:21:55.810329   75577 logs.go:276] 0 containers: []
	W0920 18:21:55.810340   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:55.810349   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:55.810415   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:55.844440   75577 cri.go:89] found id: ""
	I0920 18:21:55.844468   75577 logs.go:276] 0 containers: []
	W0920 18:21:55.844476   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:55.844482   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:55.844551   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:55.878611   75577 cri.go:89] found id: ""
	I0920 18:21:55.878647   75577 logs.go:276] 0 containers: []
	W0920 18:21:55.878659   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:55.878667   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:55.878733   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:55.922064   75577 cri.go:89] found id: ""
	I0920 18:21:55.922101   75577 logs.go:276] 0 containers: []
	W0920 18:21:55.922114   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:55.922127   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:55.922147   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:55.979406   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:55.979445   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:55.993082   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:55.993111   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:56.065650   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:56.065701   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:56.065717   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:56.142640   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:56.142684   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:54.691482   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:56.691912   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:57.525280   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:00.025224   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:02.025722   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:58.300359   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:00.301252   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:02.302209   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:58.683524   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:58.698775   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:58.698833   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:58.737369   75577 cri.go:89] found id: ""
	I0920 18:21:58.737398   75577 logs.go:276] 0 containers: []
	W0920 18:21:58.737409   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:58.737417   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:58.737476   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:58.779504   75577 cri.go:89] found id: ""
	I0920 18:21:58.779533   75577 logs.go:276] 0 containers: []
	W0920 18:21:58.779544   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:58.779552   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:58.779610   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:58.814363   75577 cri.go:89] found id: ""
	I0920 18:21:58.814393   75577 logs.go:276] 0 containers: []
	W0920 18:21:58.814401   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:58.814407   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:58.814454   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:58.846219   75577 cri.go:89] found id: ""
	I0920 18:21:58.846242   75577 logs.go:276] 0 containers: []
	W0920 18:21:58.846251   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:58.846256   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:58.846307   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:58.882387   75577 cri.go:89] found id: ""
	I0920 18:21:58.882423   75577 logs.go:276] 0 containers: []
	W0920 18:21:58.882431   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:58.882437   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:58.882497   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:58.918632   75577 cri.go:89] found id: ""
	I0920 18:21:58.918668   75577 logs.go:276] 0 containers: []
	W0920 18:21:58.918679   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:58.918686   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:58.918751   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:58.953521   75577 cri.go:89] found id: ""
	I0920 18:21:58.953547   75577 logs.go:276] 0 containers: []
	W0920 18:21:58.953557   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:58.953564   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:58.953624   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:58.987412   75577 cri.go:89] found id: ""
	I0920 18:21:58.987439   75577 logs.go:276] 0 containers: []
	W0920 18:21:58.987447   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:58.987457   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:58.987471   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:59.077108   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:59.077169   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:59.141723   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:59.141758   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:59.208035   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:59.208081   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:59.221380   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:59.221404   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:59.297151   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:22:01.797684   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:22:01.812275   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:22:01.812338   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:22:01.846431   75577 cri.go:89] found id: ""
	I0920 18:22:01.846459   75577 logs.go:276] 0 containers: []
	W0920 18:22:01.846477   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:22:01.846483   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:22:01.846535   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:22:01.889016   75577 cri.go:89] found id: ""
	I0920 18:22:01.889049   75577 logs.go:276] 0 containers: []
	W0920 18:22:01.889061   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:22:01.889069   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:22:01.889144   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:22:01.926048   75577 cri.go:89] found id: ""
	I0920 18:22:01.926078   75577 logs.go:276] 0 containers: []
	W0920 18:22:01.926090   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:22:01.926108   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:22:01.926185   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:22:01.964510   75577 cri.go:89] found id: ""
	I0920 18:22:01.964536   75577 logs.go:276] 0 containers: []
	W0920 18:22:01.964547   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:22:01.964555   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:22:01.964624   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:22:02.005549   75577 cri.go:89] found id: ""
	I0920 18:22:02.005575   75577 logs.go:276] 0 containers: []
	W0920 18:22:02.005583   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:22:02.005588   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:22:02.005642   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:22:02.047359   75577 cri.go:89] found id: ""
	I0920 18:22:02.047385   75577 logs.go:276] 0 containers: []
	W0920 18:22:02.047393   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:22:02.047399   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:22:02.047455   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:22:02.082961   75577 cri.go:89] found id: ""
	I0920 18:22:02.082999   75577 logs.go:276] 0 containers: []
	W0920 18:22:02.083008   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:22:02.083015   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:22:02.083062   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:22:02.117696   75577 cri.go:89] found id: ""
	I0920 18:22:02.117732   75577 logs.go:276] 0 containers: []
	W0920 18:22:02.117741   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:22:02.117753   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:22:02.117769   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:22:02.131282   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:22:02.131311   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:22:02.200738   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:22:02.200760   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:22:02.200776   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:22:02.281162   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:22:02.281200   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:22:02.321854   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:22:02.321888   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:59.191434   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:01.192475   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:01.685900   75086 pod_ready.go:82] duration metric: took 4m0.000752648s for pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace to be "Ready" ...
	E0920 18:22:01.685945   75086 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace to be "Ready" (will not retry!)
	I0920 18:22:01.685972   75086 pod_ready.go:39] duration metric: took 4m13.051752581s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:22:01.686033   75086 kubeadm.go:597] duration metric: took 4m20.498687791s to restartPrimaryControlPlane
	W0920 18:22:01.686114   75086 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0920 18:22:01.686150   75086 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0920 18:22:04.026729   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:06.524404   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:04.803249   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:07.301696   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:04.874353   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:22:04.889710   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:22:04.889773   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:22:04.945719   75577 cri.go:89] found id: ""
	I0920 18:22:04.945745   75577 logs.go:276] 0 containers: []
	W0920 18:22:04.945753   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:22:04.945759   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:22:04.945802   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:22:04.995492   75577 cri.go:89] found id: ""
	I0920 18:22:04.995535   75577 logs.go:276] 0 containers: []
	W0920 18:22:04.995547   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:22:04.995555   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:22:04.995615   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:22:05.037827   75577 cri.go:89] found id: ""
	I0920 18:22:05.037871   75577 logs.go:276] 0 containers: []
	W0920 18:22:05.037882   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:22:05.037888   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:22:05.037935   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:22:05.077652   75577 cri.go:89] found id: ""
	I0920 18:22:05.077691   75577 logs.go:276] 0 containers: []
	W0920 18:22:05.077704   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:22:05.077712   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:22:05.077772   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:22:05.114472   75577 cri.go:89] found id: ""
	I0920 18:22:05.114509   75577 logs.go:276] 0 containers: []
	W0920 18:22:05.114520   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:22:05.114527   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:22:05.114590   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:22:05.150809   75577 cri.go:89] found id: ""
	I0920 18:22:05.150841   75577 logs.go:276] 0 containers: []
	W0920 18:22:05.150853   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:22:05.150860   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:22:05.150908   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:22:05.188880   75577 cri.go:89] found id: ""
	I0920 18:22:05.188910   75577 logs.go:276] 0 containers: []
	W0920 18:22:05.188921   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:22:05.188929   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:22:05.188990   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:22:05.224990   75577 cri.go:89] found id: ""
	I0920 18:22:05.225015   75577 logs.go:276] 0 containers: []
	W0920 18:22:05.225023   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:22:05.225032   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:22:05.225043   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:22:05.298000   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:22:05.298023   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:22:05.298037   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:22:05.382969   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:22:05.383008   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:22:05.424911   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:22:05.424940   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:22:05.475988   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:22:05.476024   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:22:08.525098   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:10.525321   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:09.801936   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:12.301073   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:07.990660   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:22:08.005572   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:22:08.005637   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:22:08.044354   75577 cri.go:89] found id: ""
	I0920 18:22:08.044381   75577 logs.go:276] 0 containers: []
	W0920 18:22:08.044390   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:22:08.044401   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:22:08.044449   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:22:08.082899   75577 cri.go:89] found id: ""
	I0920 18:22:08.082928   75577 logs.go:276] 0 containers: []
	W0920 18:22:08.082939   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:22:08.082948   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:22:08.083009   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:22:08.129297   75577 cri.go:89] found id: ""
	I0920 18:22:08.129325   75577 logs.go:276] 0 containers: []
	W0920 18:22:08.129335   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:22:08.129343   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:22:08.129404   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:22:08.168744   75577 cri.go:89] found id: ""
	I0920 18:22:08.168774   75577 logs.go:276] 0 containers: []
	W0920 18:22:08.168787   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:22:08.168792   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:22:08.168849   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:22:08.203830   75577 cri.go:89] found id: ""
	I0920 18:22:08.203860   75577 logs.go:276] 0 containers: []
	W0920 18:22:08.203871   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:22:08.203879   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:22:08.203940   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:22:08.238226   75577 cri.go:89] found id: ""
	I0920 18:22:08.238250   75577 logs.go:276] 0 containers: []
	W0920 18:22:08.238258   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:22:08.238263   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:22:08.238331   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:22:08.271457   75577 cri.go:89] found id: ""
	I0920 18:22:08.271485   75577 logs.go:276] 0 containers: []
	W0920 18:22:08.271495   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:22:08.271502   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:22:08.271563   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:22:08.306661   75577 cri.go:89] found id: ""
	I0920 18:22:08.306692   75577 logs.go:276] 0 containers: []
	W0920 18:22:08.306703   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:22:08.306712   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:22:08.306730   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:22:08.357760   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:22:08.357793   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:22:08.372625   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:22:08.372660   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:22:08.442477   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:22:08.442501   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:22:08.442517   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:22:08.528381   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:22:08.528412   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:22:11.064842   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:22:11.078086   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:22:11.078158   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:22:11.113454   75577 cri.go:89] found id: ""
	I0920 18:22:11.113485   75577 logs.go:276] 0 containers: []
	W0920 18:22:11.113497   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:22:11.113511   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:22:11.113575   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:22:11.148039   75577 cri.go:89] found id: ""
	I0920 18:22:11.148064   75577 logs.go:276] 0 containers: []
	W0920 18:22:11.148072   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:22:11.148078   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:22:11.148127   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:22:11.182667   75577 cri.go:89] found id: ""
	I0920 18:22:11.182697   75577 logs.go:276] 0 containers: []
	W0920 18:22:11.182708   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:22:11.182715   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:22:11.182776   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:22:11.217022   75577 cri.go:89] found id: ""
	I0920 18:22:11.217055   75577 logs.go:276] 0 containers: []
	W0920 18:22:11.217067   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:22:11.217075   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:22:11.217141   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:22:11.253970   75577 cri.go:89] found id: ""
	I0920 18:22:11.254001   75577 logs.go:276] 0 containers: []
	W0920 18:22:11.254012   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:22:11.254019   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:22:11.254085   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:22:11.288070   75577 cri.go:89] found id: ""
	I0920 18:22:11.288103   75577 logs.go:276] 0 containers: []
	W0920 18:22:11.288114   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:22:11.288121   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:22:11.288189   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:22:11.324215   75577 cri.go:89] found id: ""
	I0920 18:22:11.324240   75577 logs.go:276] 0 containers: []
	W0920 18:22:11.324254   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:22:11.324261   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:22:11.324319   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:22:11.359377   75577 cri.go:89] found id: ""
	I0920 18:22:11.359406   75577 logs.go:276] 0 containers: []
	W0920 18:22:11.359414   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:22:11.359423   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:22:11.359433   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:22:11.416479   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:22:11.416520   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:22:11.430240   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:22:11.430269   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:22:11.501531   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:22:11.501553   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:22:11.501565   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:22:11.580748   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:22:11.580787   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:22:13.024330   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:15.025065   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:17.026642   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:14.301652   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:16.802017   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:14.119084   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:22:14.132428   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:22:14.132505   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:22:14.167674   75577 cri.go:89] found id: ""
	I0920 18:22:14.167699   75577 logs.go:276] 0 containers: []
	W0920 18:22:14.167710   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:22:14.167717   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:22:14.167772   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:22:14.204570   75577 cri.go:89] found id: ""
	I0920 18:22:14.204595   75577 logs.go:276] 0 containers: []
	W0920 18:22:14.204603   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:22:14.204608   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:22:14.204655   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:22:14.241687   75577 cri.go:89] found id: ""
	I0920 18:22:14.241716   75577 logs.go:276] 0 containers: []
	W0920 18:22:14.241724   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:22:14.241731   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:22:14.241789   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:22:14.275776   75577 cri.go:89] found id: ""
	I0920 18:22:14.275802   75577 logs.go:276] 0 containers: []
	W0920 18:22:14.275810   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:22:14.275816   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:22:14.275872   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:22:14.309565   75577 cri.go:89] found id: ""
	I0920 18:22:14.309589   75577 logs.go:276] 0 containers: []
	W0920 18:22:14.309596   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:22:14.309602   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:22:14.309662   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:22:14.341858   75577 cri.go:89] found id: ""
	I0920 18:22:14.341884   75577 logs.go:276] 0 containers: []
	W0920 18:22:14.341892   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:22:14.341898   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:22:14.341963   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:22:14.377863   75577 cri.go:89] found id: ""
	I0920 18:22:14.377896   75577 logs.go:276] 0 containers: []
	W0920 18:22:14.377906   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:22:14.377912   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:22:14.377988   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:22:14.413182   75577 cri.go:89] found id: ""
	I0920 18:22:14.413207   75577 logs.go:276] 0 containers: []
	W0920 18:22:14.413214   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:22:14.413236   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:22:14.413253   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:22:14.466782   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:22:14.466820   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:22:14.480627   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:22:14.480668   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:22:14.552071   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:22:14.552104   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:22:14.552121   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:22:14.626481   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:22:14.626518   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:22:17.163264   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:22:17.181609   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:22:17.181684   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:22:17.219871   75577 cri.go:89] found id: ""
	I0920 18:22:17.219899   75577 logs.go:276] 0 containers: []
	W0920 18:22:17.219923   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:22:17.219932   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:22:17.220013   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:22:17.256315   75577 cri.go:89] found id: ""
	I0920 18:22:17.256346   75577 logs.go:276] 0 containers: []
	W0920 18:22:17.256356   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:22:17.256364   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:22:17.256434   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:22:17.294326   75577 cri.go:89] found id: ""
	I0920 18:22:17.294352   75577 logs.go:276] 0 containers: []
	W0920 18:22:17.294360   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:22:17.294366   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:22:17.294421   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:22:17.336461   75577 cri.go:89] found id: ""
	I0920 18:22:17.336491   75577 logs.go:276] 0 containers: []
	W0920 18:22:17.336500   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:22:17.336506   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:22:17.336568   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:22:17.376242   75577 cri.go:89] found id: ""
	I0920 18:22:17.376283   75577 logs.go:276] 0 containers: []
	W0920 18:22:17.376295   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:22:17.376302   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:22:17.376363   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:22:17.409147   75577 cri.go:89] found id: ""
	I0920 18:22:17.409180   75577 logs.go:276] 0 containers: []
	W0920 18:22:17.409190   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:22:17.409198   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:22:17.409269   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:22:17.444698   75577 cri.go:89] found id: ""
	I0920 18:22:17.444725   75577 logs.go:276] 0 containers: []
	W0920 18:22:17.444734   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:22:17.444738   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:22:17.444791   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:22:17.485914   75577 cri.go:89] found id: ""
	I0920 18:22:17.485948   75577 logs.go:276] 0 containers: []
	W0920 18:22:17.485959   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:22:17.485970   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:22:17.485983   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:22:17.523567   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:22:17.523601   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:22:17.588864   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:22:17.588905   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:22:17.605052   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:22:17.605092   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:22:17.677012   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:22:17.677039   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:22:17.677056   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:22:19.525154   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:22.025422   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:18.802052   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:20.804024   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:20.252258   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:22:20.267607   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:22:20.267698   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:22:20.302260   75577 cri.go:89] found id: ""
	I0920 18:22:20.302291   75577 logs.go:276] 0 containers: []
	W0920 18:22:20.302301   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:22:20.302309   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:22:20.302373   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:22:20.335343   75577 cri.go:89] found id: ""
	I0920 18:22:20.335377   75577 logs.go:276] 0 containers: []
	W0920 18:22:20.335389   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:22:20.335397   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:22:20.335460   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:22:20.369612   75577 cri.go:89] found id: ""
	I0920 18:22:20.369641   75577 logs.go:276] 0 containers: []
	W0920 18:22:20.369649   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:22:20.369655   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:22:20.369703   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:22:20.403703   75577 cri.go:89] found id: ""
	I0920 18:22:20.403732   75577 logs.go:276] 0 containers: []
	W0920 18:22:20.403740   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:22:20.403746   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:22:20.403804   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:22:20.436288   75577 cri.go:89] found id: ""
	I0920 18:22:20.436316   75577 logs.go:276] 0 containers: []
	W0920 18:22:20.436328   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:22:20.436336   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:22:20.436399   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:22:20.471536   75577 cri.go:89] found id: ""
	I0920 18:22:20.471572   75577 logs.go:276] 0 containers: []
	W0920 18:22:20.471584   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:22:20.471593   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:22:20.471657   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:22:20.506012   75577 cri.go:89] found id: ""
	I0920 18:22:20.506107   75577 logs.go:276] 0 containers: []
	W0920 18:22:20.506134   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:22:20.506143   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:22:20.506199   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:22:20.542614   75577 cri.go:89] found id: ""
	I0920 18:22:20.542650   75577 logs.go:276] 0 containers: []
	W0920 18:22:20.542660   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:22:20.542671   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:22:20.542687   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:22:20.596316   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:22:20.596357   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:22:20.610438   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:22:20.610465   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:22:20.688061   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:22:20.688079   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:22:20.688091   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:22:20.768249   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:22:20.768296   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:22:24.525259   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:25.524857   75264 pod_ready.go:82] duration metric: took 4m0.006563805s for pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace to be "Ready" ...
	E0920 18:22:25.524883   75264 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0920 18:22:25.524891   75264 pod_ready.go:39] duration metric: took 4m8.542422056s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:22:25.524906   75264 api_server.go:52] waiting for apiserver process to appear ...
	I0920 18:22:25.524977   75264 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:22:25.525029   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:22:25.579894   75264 cri.go:89] found id: "0ea0cfbd9902ae8e3797c77592bfe05a5f8162f75a391f5c2dc992eb5154c4ad"
	I0920 18:22:25.579914   75264 cri.go:89] found id: ""
	I0920 18:22:25.579923   75264 logs.go:276] 1 containers: [0ea0cfbd9902ae8e3797c77592bfe05a5f8162f75a391f5c2dc992eb5154c4ad]
	I0920 18:22:25.579979   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:25.585095   75264 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:22:25.585176   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:22:25.621769   75264 cri.go:89] found id: "65da0bae1c849a395cc241962bb6cbdf9f3edf11cdc95f4495ac2e427f9602d4"
	I0920 18:22:25.621796   75264 cri.go:89] found id: ""
	I0920 18:22:25.621805   75264 logs.go:276] 1 containers: [65da0bae1c849a395cc241962bb6cbdf9f3edf11cdc95f4495ac2e427f9602d4]
	I0920 18:22:25.621881   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:25.626318   75264 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:22:25.626400   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:22:25.662732   75264 cri.go:89] found id: "606f7c8a9a095f499ae422de76f76105607db72e57b9b0a6db3e3c5a81656c5c"
	I0920 18:22:25.662753   75264 cri.go:89] found id: ""
	I0920 18:22:25.662760   75264 logs.go:276] 1 containers: [606f7c8a9a095f499ae422de76f76105607db72e57b9b0a6db3e3c5a81656c5c]
	I0920 18:22:25.662818   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:25.667212   75264 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:22:25.667299   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:22:25.703639   75264 cri.go:89] found id: "6ba313deffc6185e59dac50051e933f6b2ea70d630cf36b2b8c2c07042f793ba"
	I0920 18:22:25.703663   75264 cri.go:89] found id: ""
	I0920 18:22:25.703670   75264 logs.go:276] 1 containers: [6ba313deffc6185e59dac50051e933f6b2ea70d630cf36b2b8c2c07042f793ba]
	I0920 18:22:25.703721   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:25.708034   75264 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:22:25.708115   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:22:25.745358   75264 cri.go:89] found id: "702f7f440eb6016c2c758a6be4e1072274ab6dcd169d7a081e5464869a7c299c"
	I0920 18:22:25.745387   75264 cri.go:89] found id: ""
	I0920 18:22:25.745397   75264 logs.go:276] 1 containers: [702f7f440eb6016c2c758a6be4e1072274ab6dcd169d7a081e5464869a7c299c]
	I0920 18:22:25.745455   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:25.749702   75264 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:22:25.749787   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:22:25.790592   75264 cri.go:89] found id: "4ca303b795795bd920f261c88630f30fd51a91b9694a9a6e67af8164bab8242b"
	I0920 18:22:25.790615   75264 cri.go:89] found id: ""
	I0920 18:22:25.790623   75264 logs.go:276] 1 containers: [4ca303b795795bd920f261c88630f30fd51a91b9694a9a6e67af8164bab8242b]
	I0920 18:22:25.790686   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:25.794993   75264 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:22:25.795062   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:22:25.836621   75264 cri.go:89] found id: ""
	I0920 18:22:25.836646   75264 logs.go:276] 0 containers: []
	W0920 18:22:25.836654   75264 logs.go:278] No container was found matching "kindnet"
	I0920 18:22:25.836661   75264 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0920 18:22:25.836723   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0920 18:22:25.874191   75264 cri.go:89] found id: "001bdc98537f0917f021c14a5d0733bd31cf19be1b4862502458a7a90e4300da"
	I0920 18:22:25.874215   75264 cri.go:89] found id: "c42201b6e3d55b6fd8beaf04d416658b228bf627e519ddb022dad6d17219cc1c"
	I0920 18:22:25.874220   75264 cri.go:89] found id: ""
	I0920 18:22:25.874229   75264 logs.go:276] 2 containers: [001bdc98537f0917f021c14a5d0733bd31cf19be1b4862502458a7a90e4300da c42201b6e3d55b6fd8beaf04d416658b228bf627e519ddb022dad6d17219cc1c]
	I0920 18:22:25.874316   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:25.878788   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:25.882840   75264 logs.go:123] Gathering logs for kube-proxy [702f7f440eb6016c2c758a6be4e1072274ab6dcd169d7a081e5464869a7c299c] ...
	I0920 18:22:25.882869   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 702f7f440eb6016c2c758a6be4e1072274ab6dcd169d7a081e5464869a7c299c"
	I0920 18:22:25.921609   75264 logs.go:123] Gathering logs for kube-controller-manager [4ca303b795795bd920f261c88630f30fd51a91b9694a9a6e67af8164bab8242b] ...
	I0920 18:22:25.921640   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ca303b795795bd920f261c88630f30fd51a91b9694a9a6e67af8164bab8242b"
	I0920 18:22:25.987199   75264 logs.go:123] Gathering logs for kubelet ...
	I0920 18:22:25.987236   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:22:26.061857   75264 logs.go:123] Gathering logs for kube-apiserver [0ea0cfbd9902ae8e3797c77592bfe05a5f8162f75a391f5c2dc992eb5154c4ad] ...
	I0920 18:22:26.061897   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ea0cfbd9902ae8e3797c77592bfe05a5f8162f75a391f5c2dc992eb5154c4ad"
	I0920 18:22:26.104901   75264 logs.go:123] Gathering logs for etcd [65da0bae1c849a395cc241962bb6cbdf9f3edf11cdc95f4495ac2e427f9602d4] ...
	I0920 18:22:26.104935   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 65da0bae1c849a395cc241962bb6cbdf9f3edf11cdc95f4495ac2e427f9602d4"
	I0920 18:22:26.160795   75264 logs.go:123] Gathering logs for kube-scheduler [6ba313deffc6185e59dac50051e933f6b2ea70d630cf36b2b8c2c07042f793ba] ...
	I0920 18:22:26.160833   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6ba313deffc6185e59dac50051e933f6b2ea70d630cf36b2b8c2c07042f793ba"
	I0920 18:22:26.199286   75264 logs.go:123] Gathering logs for storage-provisioner [001bdc98537f0917f021c14a5d0733bd31cf19be1b4862502458a7a90e4300da] ...
	I0920 18:22:26.199316   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 001bdc98537f0917f021c14a5d0733bd31cf19be1b4862502458a7a90e4300da"
	I0920 18:22:26.239560   75264 logs.go:123] Gathering logs for storage-provisioner [c42201b6e3d55b6fd8beaf04d416658b228bf627e519ddb022dad6d17219cc1c] ...
	I0920 18:22:26.239591   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c42201b6e3d55b6fd8beaf04d416658b228bf627e519ddb022dad6d17219cc1c"
	I0920 18:22:26.275424   75264 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:22:26.275450   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:22:26.786849   75264 logs.go:123] Gathering logs for container status ...
	I0920 18:22:26.786895   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:22:26.848751   75264 logs.go:123] Gathering logs for dmesg ...
	I0920 18:22:26.848783   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:22:26.867329   75264 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:22:26.867375   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 18:22:27.017385   75264 logs.go:123] Gathering logs for coredns [606f7c8a9a095f499ae422de76f76105607db72e57b9b0a6db3e3c5a81656c5c] ...
	I0920 18:22:27.017419   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 606f7c8a9a095f499ae422de76f76105607db72e57b9b0a6db3e3c5a81656c5c"
	I0920 18:22:23.300658   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:25.302131   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:27.303929   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:23.307149   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:22:23.319889   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:22:23.319972   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:22:23.356613   75577 cri.go:89] found id: ""
	I0920 18:22:23.356645   75577 logs.go:276] 0 containers: []
	W0920 18:22:23.356656   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:22:23.356663   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:22:23.356728   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:22:23.391632   75577 cri.go:89] found id: ""
	I0920 18:22:23.391663   75577 logs.go:276] 0 containers: []
	W0920 18:22:23.391675   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:22:23.391683   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:22:23.391748   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:22:23.426908   75577 cri.go:89] found id: ""
	I0920 18:22:23.426936   75577 logs.go:276] 0 containers: []
	W0920 18:22:23.426946   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:22:23.426952   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:22:23.427013   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:22:23.461890   75577 cri.go:89] found id: ""
	I0920 18:22:23.461925   75577 logs.go:276] 0 containers: []
	W0920 18:22:23.461938   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:22:23.461947   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:22:23.462014   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:22:23.503517   75577 cri.go:89] found id: ""
	I0920 18:22:23.503549   75577 logs.go:276] 0 containers: []
	W0920 18:22:23.503560   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:22:23.503566   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:22:23.503615   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:22:23.540699   75577 cri.go:89] found id: ""
	I0920 18:22:23.540722   75577 logs.go:276] 0 containers: []
	W0920 18:22:23.540731   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:22:23.540736   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:22:23.540783   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:22:23.575485   75577 cri.go:89] found id: ""
	I0920 18:22:23.575509   75577 logs.go:276] 0 containers: []
	W0920 18:22:23.575517   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:22:23.575523   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:22:23.575576   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:22:23.610998   75577 cri.go:89] found id: ""
	I0920 18:22:23.611028   75577 logs.go:276] 0 containers: []
	W0920 18:22:23.611039   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:22:23.611051   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:22:23.611065   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:22:23.687072   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:22:23.687109   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:22:23.724790   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:22:23.724824   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:22:23.778905   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:22:23.778945   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:22:23.794298   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:22:23.794329   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:22:23.873341   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:22:26.374541   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:22:26.388151   75577 kubeadm.go:597] duration metric: took 4m2.956358154s to restartPrimaryControlPlane
	W0920 18:22:26.388229   75577 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0920 18:22:26.388260   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0920 18:22:27.454521   75577 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.066231187s)
	I0920 18:22:27.454602   75577 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:22:27.469409   75577 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 18:22:27.480314   75577 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 18:22:27.491389   75577 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 18:22:27.491416   75577 kubeadm.go:157] found existing configuration files:
	
	I0920 18:22:27.491468   75577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 18:22:27.501448   75577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 18:22:27.501534   75577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 18:22:27.511370   75577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 18:22:27.521767   75577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 18:22:27.521855   75577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 18:22:27.531717   75577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 18:22:27.540517   75577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 18:22:27.540575   75577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 18:22:27.549644   75577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 18:22:27.558412   75577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 18:22:27.558480   75577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 18:22:27.567702   75577 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 18:22:28.070939   75086 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.384759009s)
	I0920 18:22:28.071025   75086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:22:28.087712   75086 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 18:22:28.097709   75086 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 18:22:28.107217   75086 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 18:22:28.107243   75086 kubeadm.go:157] found existing configuration files:
	
	I0920 18:22:28.107297   75086 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 18:22:28.116947   75086 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 18:22:28.117013   75086 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 18:22:28.126559   75086 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 18:22:28.135261   75086 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 18:22:28.135338   75086 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 18:22:28.144456   75086 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 18:22:28.153412   75086 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 18:22:28.153465   75086 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 18:22:28.165825   75086 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 18:22:28.175003   75086 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 18:22:28.175068   75086 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 18:22:28.184917   75086 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 18:22:28.236825   75086 kubeadm.go:310] W0920 18:22:28.216092    2540 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 18:22:28.237654   75086 kubeadm.go:310] W0920 18:22:28.216986    2540 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 18:22:28.352695   75086 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 18:22:29.557496   75264 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:22:29.578981   75264 api_server.go:72] duration metric: took 4m20.322735753s to wait for apiserver process to appear ...
	I0920 18:22:29.579009   75264 api_server.go:88] waiting for apiserver healthz status ...
	I0920 18:22:29.579044   75264 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:22:29.579093   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:22:29.616252   75264 cri.go:89] found id: "0ea0cfbd9902ae8e3797c77592bfe05a5f8162f75a391f5c2dc992eb5154c4ad"
	I0920 18:22:29.616281   75264 cri.go:89] found id: ""
	I0920 18:22:29.616292   75264 logs.go:276] 1 containers: [0ea0cfbd9902ae8e3797c77592bfe05a5f8162f75a391f5c2dc992eb5154c4ad]
	I0920 18:22:29.616345   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:29.620605   75264 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:22:29.620678   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:22:29.658077   75264 cri.go:89] found id: "65da0bae1c849a395cc241962bb6cbdf9f3edf11cdc95f4495ac2e427f9602d4"
	I0920 18:22:29.658106   75264 cri.go:89] found id: ""
	I0920 18:22:29.658114   75264 logs.go:276] 1 containers: [65da0bae1c849a395cc241962bb6cbdf9f3edf11cdc95f4495ac2e427f9602d4]
	I0920 18:22:29.658170   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:29.662227   75264 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:22:29.662288   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:22:29.695906   75264 cri.go:89] found id: "606f7c8a9a095f499ae422de76f76105607db72e57b9b0a6db3e3c5a81656c5c"
	I0920 18:22:29.695927   75264 cri.go:89] found id: ""
	I0920 18:22:29.695934   75264 logs.go:276] 1 containers: [606f7c8a9a095f499ae422de76f76105607db72e57b9b0a6db3e3c5a81656c5c]
	I0920 18:22:29.695979   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:29.699986   75264 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:22:29.700083   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:22:29.736560   75264 cri.go:89] found id: "6ba313deffc6185e59dac50051e933f6b2ea70d630cf36b2b8c2c07042f793ba"
	I0920 18:22:29.736589   75264 cri.go:89] found id: ""
	I0920 18:22:29.736600   75264 logs.go:276] 1 containers: [6ba313deffc6185e59dac50051e933f6b2ea70d630cf36b2b8c2c07042f793ba]
	I0920 18:22:29.736660   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:29.740751   75264 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:22:29.740803   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:22:29.778930   75264 cri.go:89] found id: "702f7f440eb6016c2c758a6be4e1072274ab6dcd169d7a081e5464869a7c299c"
	I0920 18:22:29.778952   75264 cri.go:89] found id: ""
	I0920 18:22:29.778959   75264 logs.go:276] 1 containers: [702f7f440eb6016c2c758a6be4e1072274ab6dcd169d7a081e5464869a7c299c]
	I0920 18:22:29.779011   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:29.783022   75264 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:22:29.783092   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:22:29.829491   75264 cri.go:89] found id: "4ca303b795795bd920f261c88630f30fd51a91b9694a9a6e67af8164bab8242b"
	I0920 18:22:29.829521   75264 cri.go:89] found id: ""
	I0920 18:22:29.829531   75264 logs.go:276] 1 containers: [4ca303b795795bd920f261c88630f30fd51a91b9694a9a6e67af8164bab8242b]
	I0920 18:22:29.829597   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:29.833853   75264 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:22:29.833924   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:22:29.872376   75264 cri.go:89] found id: ""
	I0920 18:22:29.872404   75264 logs.go:276] 0 containers: []
	W0920 18:22:29.872412   75264 logs.go:278] No container was found matching "kindnet"
	I0920 18:22:29.872419   75264 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0920 18:22:29.872482   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0920 18:22:29.923096   75264 cri.go:89] found id: "001bdc98537f0917f021c14a5d0733bd31cf19be1b4862502458a7a90e4300da"
	I0920 18:22:29.923136   75264 cri.go:89] found id: "c42201b6e3d55b6fd8beaf04d416658b228bf627e519ddb022dad6d17219cc1c"
	I0920 18:22:29.923143   75264 cri.go:89] found id: ""
	I0920 18:22:29.923152   75264 logs.go:276] 2 containers: [001bdc98537f0917f021c14a5d0733bd31cf19be1b4862502458a7a90e4300da c42201b6e3d55b6fd8beaf04d416658b228bf627e519ddb022dad6d17219cc1c]
	I0920 18:22:29.923226   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:29.930023   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:29.935091   75264 logs.go:123] Gathering logs for coredns [606f7c8a9a095f499ae422de76f76105607db72e57b9b0a6db3e3c5a81656c5c] ...
	I0920 18:22:29.935119   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 606f7c8a9a095f499ae422de76f76105607db72e57b9b0a6db3e3c5a81656c5c"
	I0920 18:22:29.974064   75264 logs.go:123] Gathering logs for kube-scheduler [6ba313deffc6185e59dac50051e933f6b2ea70d630cf36b2b8c2c07042f793ba] ...
	I0920 18:22:29.974101   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6ba313deffc6185e59dac50051e933f6b2ea70d630cf36b2b8c2c07042f793ba"
	I0920 18:22:30.018770   75264 logs.go:123] Gathering logs for kube-controller-manager [4ca303b795795bd920f261c88630f30fd51a91b9694a9a6e67af8164bab8242b] ...
	I0920 18:22:30.018805   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ca303b795795bd920f261c88630f30fd51a91b9694a9a6e67af8164bab8242b"
	I0920 18:22:30.077578   75264 logs.go:123] Gathering logs for storage-provisioner [c42201b6e3d55b6fd8beaf04d416658b228bf627e519ddb022dad6d17219cc1c] ...
	I0920 18:22:30.077625   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c42201b6e3d55b6fd8beaf04d416658b228bf627e519ddb022dad6d17219cc1c"
	I0920 18:22:30.123874   75264 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:22:30.123910   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:22:30.553215   75264 logs.go:123] Gathering logs for kube-proxy [702f7f440eb6016c2c758a6be4e1072274ab6dcd169d7a081e5464869a7c299c] ...
	I0920 18:22:30.553261   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 702f7f440eb6016c2c758a6be4e1072274ab6dcd169d7a081e5464869a7c299c"
	I0920 18:22:30.597397   75264 logs.go:123] Gathering logs for storage-provisioner [001bdc98537f0917f021c14a5d0733bd31cf19be1b4862502458a7a90e4300da] ...
	I0920 18:22:30.597428   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 001bdc98537f0917f021c14a5d0733bd31cf19be1b4862502458a7a90e4300da"
	I0920 18:22:30.643777   75264 logs.go:123] Gathering logs for container status ...
	I0920 18:22:30.643807   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:22:30.688174   75264 logs.go:123] Gathering logs for kubelet ...
	I0920 18:22:30.688205   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:22:30.760547   75264 logs.go:123] Gathering logs for dmesg ...
	I0920 18:22:30.760591   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:22:30.776702   75264 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:22:30.776729   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 18:22:30.913200   75264 logs.go:123] Gathering logs for kube-apiserver [0ea0cfbd9902ae8e3797c77592bfe05a5f8162f75a391f5c2dc992eb5154c4ad] ...
	I0920 18:22:30.913240   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ea0cfbd9902ae8e3797c77592bfe05a5f8162f75a391f5c2dc992eb5154c4ad"
	I0920 18:22:30.957548   75264 logs.go:123] Gathering logs for etcd [65da0bae1c849a395cc241962bb6cbdf9f3edf11cdc95f4495ac2e427f9602d4] ...
	I0920 18:22:30.957589   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 65da0bae1c849a395cc241962bb6cbdf9f3edf11cdc95f4495ac2e427f9602d4"
	I0920 18:22:29.803036   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:32.301986   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:27.790386   75577 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 18:22:36.572379   75086 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0920 18:22:36.572452   75086 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 18:22:36.572558   75086 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 18:22:36.572702   75086 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 18:22:36.572826   75086 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0920 18:22:36.572926   75086 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 18:22:36.574903   75086 out.go:235]   - Generating certificates and keys ...
	I0920 18:22:36.575032   75086 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 18:22:36.575117   75086 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 18:22:36.575224   75086 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0920 18:22:36.575342   75086 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0920 18:22:36.575446   75086 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0920 18:22:36.575535   75086 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0920 18:22:36.575606   75086 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0920 18:22:36.575687   75086 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0920 18:22:36.575790   75086 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0920 18:22:36.575867   75086 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0920 18:22:36.575903   75086 kubeadm.go:310] [certs] Using the existing "sa" key
	I0920 18:22:36.575970   75086 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 18:22:36.576054   75086 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 18:22:36.576141   75086 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0920 18:22:36.576223   75086 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 18:22:36.576317   75086 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 18:22:36.576373   75086 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 18:22:36.576442   75086 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 18:22:36.576506   75086 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 18:22:36.577982   75086 out.go:235]   - Booting up control plane ...
	I0920 18:22:36.578071   75086 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 18:22:36.578147   75086 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 18:22:36.578204   75086 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 18:22:36.578312   75086 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 18:22:36.578400   75086 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 18:22:36.578438   75086 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 18:22:36.578568   75086 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0920 18:22:36.578660   75086 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0920 18:22:36.578719   75086 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.174126ms
	I0920 18:22:36.578788   75086 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0920 18:22:36.578839   75086 kubeadm.go:310] [api-check] The API server is healthy after 5.501599925s
	I0920 18:22:36.578933   75086 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0920 18:22:36.579037   75086 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0920 18:22:36.579090   75086 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0920 18:22:36.579268   75086 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-768431 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0920 18:22:36.579335   75086 kubeadm.go:310] [bootstrap-token] Using token: junoe1.zmowu95mn9yb1z1f
	I0920 18:22:36.580810   75086 out.go:235]   - Configuring RBAC rules ...
	I0920 18:22:36.580919   75086 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0920 18:22:36.580989   75086 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0920 18:22:36.581127   75086 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0920 18:22:36.581290   75086 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0920 18:22:36.581456   75086 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0920 18:22:36.581589   75086 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0920 18:22:36.581742   75086 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0920 18:22:36.581827   75086 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0920 18:22:36.581916   75086 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0920 18:22:36.581925   75086 kubeadm.go:310] 
	I0920 18:22:36.581980   75086 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0920 18:22:36.581986   75086 kubeadm.go:310] 
	I0920 18:22:36.582086   75086 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0920 18:22:36.582097   75086 kubeadm.go:310] 
	I0920 18:22:36.582137   75086 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0920 18:22:36.582204   75086 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0920 18:22:36.582287   75086 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0920 18:22:36.582297   75086 kubeadm.go:310] 
	I0920 18:22:36.582389   75086 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0920 18:22:36.582398   75086 kubeadm.go:310] 
	I0920 18:22:36.582479   75086 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0920 18:22:36.582495   75086 kubeadm.go:310] 
	I0920 18:22:36.582540   75086 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0920 18:22:36.582605   75086 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0920 18:22:36.582682   75086 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0920 18:22:36.582697   75086 kubeadm.go:310] 
	I0920 18:22:36.582796   75086 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0920 18:22:36.582892   75086 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0920 18:22:36.582901   75086 kubeadm.go:310] 
	I0920 18:22:36.583026   75086 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token junoe1.zmowu95mn9yb1z1f \
	I0920 18:22:36.583163   75086 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:736f3fb521855813decb4a7214f2b8beff5f81c3971b60decfa20a8807626a2d \
	I0920 18:22:36.583199   75086 kubeadm.go:310] 	--control-plane 
	I0920 18:22:36.583208   75086 kubeadm.go:310] 
	I0920 18:22:36.583316   75086 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0920 18:22:36.583335   75086 kubeadm.go:310] 
	I0920 18:22:36.583489   75086 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token junoe1.zmowu95mn9yb1z1f \
	I0920 18:22:36.583648   75086 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:736f3fb521855813decb4a7214f2b8beff5f81c3971b60decfa20a8807626a2d 
	I0920 18:22:36.583664   75086 cni.go:84] Creating CNI manager for ""
	I0920 18:22:36.583673   75086 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 18:22:36.585378   75086 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 18:22:33.523585   75264 api_server.go:253] Checking apiserver healthz at https://192.168.72.190:8444/healthz ...
	I0920 18:22:33.528658   75264 api_server.go:279] https://192.168.72.190:8444/healthz returned 200:
	ok
	I0920 18:22:33.529826   75264 api_server.go:141] control plane version: v1.31.1
	I0920 18:22:33.529861   75264 api_server.go:131] duration metric: took 3.95084435s to wait for apiserver health ...
	I0920 18:22:33.529870   75264 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 18:22:33.529894   75264 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:22:33.529938   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:22:33.571762   75264 cri.go:89] found id: "0ea0cfbd9902ae8e3797c77592bfe05a5f8162f75a391f5c2dc992eb5154c4ad"
	I0920 18:22:33.571783   75264 cri.go:89] found id: ""
	I0920 18:22:33.571789   75264 logs.go:276] 1 containers: [0ea0cfbd9902ae8e3797c77592bfe05a5f8162f75a391f5c2dc992eb5154c4ad]
	I0920 18:22:33.571840   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:33.576180   75264 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:22:33.576268   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:22:33.621887   75264 cri.go:89] found id: "65da0bae1c849a395cc241962bb6cbdf9f3edf11cdc95f4495ac2e427f9602d4"
	I0920 18:22:33.621915   75264 cri.go:89] found id: ""
	I0920 18:22:33.621926   75264 logs.go:276] 1 containers: [65da0bae1c849a395cc241962bb6cbdf9f3edf11cdc95f4495ac2e427f9602d4]
	I0920 18:22:33.622013   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:33.626552   75264 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:22:33.626609   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:22:33.664879   75264 cri.go:89] found id: "606f7c8a9a095f499ae422de76f76105607db72e57b9b0a6db3e3c5a81656c5c"
	I0920 18:22:33.664902   75264 cri.go:89] found id: ""
	I0920 18:22:33.664912   75264 logs.go:276] 1 containers: [606f7c8a9a095f499ae422de76f76105607db72e57b9b0a6db3e3c5a81656c5c]
	I0920 18:22:33.664980   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:33.669155   75264 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:22:33.669231   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:22:33.710930   75264 cri.go:89] found id: "6ba313deffc6185e59dac50051e933f6b2ea70d630cf36b2b8c2c07042f793ba"
	I0920 18:22:33.710957   75264 cri.go:89] found id: ""
	I0920 18:22:33.710968   75264 logs.go:276] 1 containers: [6ba313deffc6185e59dac50051e933f6b2ea70d630cf36b2b8c2c07042f793ba]
	I0920 18:22:33.711030   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:33.715618   75264 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:22:33.715689   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:22:33.751646   75264 cri.go:89] found id: "702f7f440eb6016c2c758a6be4e1072274ab6dcd169d7a081e5464869a7c299c"
	I0920 18:22:33.751672   75264 cri.go:89] found id: ""
	I0920 18:22:33.751690   75264 logs.go:276] 1 containers: [702f7f440eb6016c2c758a6be4e1072274ab6dcd169d7a081e5464869a7c299c]
	I0920 18:22:33.751758   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:33.756376   75264 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:22:33.756441   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:22:33.803246   75264 cri.go:89] found id: "4ca303b795795bd920f261c88630f30fd51a91b9694a9a6e67af8164bab8242b"
	I0920 18:22:33.803267   75264 cri.go:89] found id: ""
	I0920 18:22:33.803276   75264 logs.go:276] 1 containers: [4ca303b795795bd920f261c88630f30fd51a91b9694a9a6e67af8164bab8242b]
	I0920 18:22:33.803334   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:33.807825   75264 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:22:33.807892   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:22:33.849531   75264 cri.go:89] found id: ""
	I0920 18:22:33.849559   75264 logs.go:276] 0 containers: []
	W0920 18:22:33.849569   75264 logs.go:278] No container was found matching "kindnet"
	I0920 18:22:33.849583   75264 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0920 18:22:33.849645   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0920 18:22:33.891058   75264 cri.go:89] found id: "001bdc98537f0917f021c14a5d0733bd31cf19be1b4862502458a7a90e4300da"
	I0920 18:22:33.891084   75264 cri.go:89] found id: "c42201b6e3d55b6fd8beaf04d416658b228bf627e519ddb022dad6d17219cc1c"
	I0920 18:22:33.891090   75264 cri.go:89] found id: ""
	I0920 18:22:33.891099   75264 logs.go:276] 2 containers: [001bdc98537f0917f021c14a5d0733bd31cf19be1b4862502458a7a90e4300da c42201b6e3d55b6fd8beaf04d416658b228bf627e519ddb022dad6d17219cc1c]
	I0920 18:22:33.891157   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:33.896028   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:33.900174   75264 logs.go:123] Gathering logs for container status ...
	I0920 18:22:33.900209   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:22:33.944777   75264 logs.go:123] Gathering logs for dmesg ...
	I0920 18:22:33.944803   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:22:33.960062   75264 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:22:33.960094   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 18:22:34.087507   75264 logs.go:123] Gathering logs for etcd [65da0bae1c849a395cc241962bb6cbdf9f3edf11cdc95f4495ac2e427f9602d4] ...
	I0920 18:22:34.087556   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 65da0bae1c849a395cc241962bb6cbdf9f3edf11cdc95f4495ac2e427f9602d4"
	I0920 18:22:34.140051   75264 logs.go:123] Gathering logs for kube-proxy [702f7f440eb6016c2c758a6be4e1072274ab6dcd169d7a081e5464869a7c299c] ...
	I0920 18:22:34.140088   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 702f7f440eb6016c2c758a6be4e1072274ab6dcd169d7a081e5464869a7c299c"
	I0920 18:22:34.183540   75264 logs.go:123] Gathering logs for kube-controller-manager [4ca303b795795bd920f261c88630f30fd51a91b9694a9a6e67af8164bab8242b] ...
	I0920 18:22:34.183582   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ca303b795795bd920f261c88630f30fd51a91b9694a9a6e67af8164bab8242b"
	I0920 18:22:34.251978   75264 logs.go:123] Gathering logs for storage-provisioner [001bdc98537f0917f021c14a5d0733bd31cf19be1b4862502458a7a90e4300da] ...
	I0920 18:22:34.252025   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 001bdc98537f0917f021c14a5d0733bd31cf19be1b4862502458a7a90e4300da"
	I0920 18:22:34.296522   75264 logs.go:123] Gathering logs for storage-provisioner [c42201b6e3d55b6fd8beaf04d416658b228bf627e519ddb022dad6d17219cc1c] ...
	I0920 18:22:34.296562   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c42201b6e3d55b6fd8beaf04d416658b228bf627e519ddb022dad6d17219cc1c"
	I0920 18:22:34.338227   75264 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:22:34.338254   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:22:34.703986   75264 logs.go:123] Gathering logs for kubelet ...
	I0920 18:22:34.704037   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:22:34.788091   75264 logs.go:123] Gathering logs for kube-apiserver [0ea0cfbd9902ae8e3797c77592bfe05a5f8162f75a391f5c2dc992eb5154c4ad] ...
	I0920 18:22:34.788139   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ea0cfbd9902ae8e3797c77592bfe05a5f8162f75a391f5c2dc992eb5154c4ad"
	I0920 18:22:34.861403   75264 logs.go:123] Gathering logs for coredns [606f7c8a9a095f499ae422de76f76105607db72e57b9b0a6db3e3c5a81656c5c] ...
	I0920 18:22:34.861435   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 606f7c8a9a095f499ae422de76f76105607db72e57b9b0a6db3e3c5a81656c5c"
	I0920 18:22:34.906740   75264 logs.go:123] Gathering logs for kube-scheduler [6ba313deffc6185e59dac50051e933f6b2ea70d630cf36b2b8c2c07042f793ba] ...
	I0920 18:22:34.906773   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6ba313deffc6185e59dac50051e933f6b2ea70d630cf36b2b8c2c07042f793ba"
	I0920 18:22:37.454205   75264 system_pods.go:59] 8 kube-system pods found
	I0920 18:22:37.454243   75264 system_pods.go:61] "coredns-7c65d6cfc9-dmdfb" [8e4ffdb3-37e0-4498-bdf5-d6e7dbcad020] Running
	I0920 18:22:37.454252   75264 system_pods.go:61] "etcd-default-k8s-diff-port-553719" [e8de0e96-5f3a-4f3f-ae69-c5da7d3c2eb7] Running
	I0920 18:22:37.454257   75264 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-553719" [5164e26c-7fcd-41dc-865f-429046a9ad61] Running
	I0920 18:22:37.454263   75264 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-553719" [b2c00a76-9645-4c63-9812-43e6ed9f1d4f] Running
	I0920 18:22:37.454268   75264 system_pods.go:61] "kube-proxy-p9crq" [83e0f53d-6960-42c4-904d-ea85ba9160f4] Running
	I0920 18:22:37.454273   75264 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-553719" [42155f7b-95bb-467b-acf5-89eb4f80cf76] Running
	I0920 18:22:37.454289   75264 system_pods.go:61] "metrics-server-6867b74b74-vtl79" [29e0b6eb-22a9-4e37-97f9-83b48cc38193] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 18:22:37.454296   75264 system_pods.go:61] "storage-provisioner" [6fad2d07-f99e-45ac-9657-bce6d73d7fce] Running
	I0920 18:22:37.454310   75264 system_pods.go:74] duration metric: took 3.924431145s to wait for pod list to return data ...
	I0920 18:22:37.454324   75264 default_sa.go:34] waiting for default service account to be created ...
	I0920 18:22:37.457599   75264 default_sa.go:45] found service account: "default"
	I0920 18:22:37.457619   75264 default_sa.go:55] duration metric: took 3.289936ms for default service account to be created ...
	I0920 18:22:37.457627   75264 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 18:22:37.461977   75264 system_pods.go:86] 8 kube-system pods found
	I0920 18:22:37.462001   75264 system_pods.go:89] "coredns-7c65d6cfc9-dmdfb" [8e4ffdb3-37e0-4498-bdf5-d6e7dbcad020] Running
	I0920 18:22:37.462006   75264 system_pods.go:89] "etcd-default-k8s-diff-port-553719" [e8de0e96-5f3a-4f3f-ae69-c5da7d3c2eb7] Running
	I0920 18:22:37.462011   75264 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-553719" [5164e26c-7fcd-41dc-865f-429046a9ad61] Running
	I0920 18:22:37.462015   75264 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-553719" [b2c00a76-9645-4c63-9812-43e6ed9f1d4f] Running
	I0920 18:22:37.462019   75264 system_pods.go:89] "kube-proxy-p9crq" [83e0f53d-6960-42c4-904d-ea85ba9160f4] Running
	I0920 18:22:37.462022   75264 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-553719" [42155f7b-95bb-467b-acf5-89eb4f80cf76] Running
	I0920 18:22:37.462028   75264 system_pods.go:89] "metrics-server-6867b74b74-vtl79" [29e0b6eb-22a9-4e37-97f9-83b48cc38193] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 18:22:37.462032   75264 system_pods.go:89] "storage-provisioner" [6fad2d07-f99e-45ac-9657-bce6d73d7fce] Running
	I0920 18:22:37.462040   75264 system_pods.go:126] duration metric: took 4.409042ms to wait for k8s-apps to be running ...
	I0920 18:22:37.462047   75264 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 18:22:37.462087   75264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:22:37.479350   75264 system_svc.go:56] duration metric: took 17.294926ms WaitForService to wait for kubelet
	I0920 18:22:37.479380   75264 kubeadm.go:582] duration metric: took 4m28.223143222s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 18:22:37.479402   75264 node_conditions.go:102] verifying NodePressure condition ...
	I0920 18:22:37.482497   75264 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 18:22:37.482517   75264 node_conditions.go:123] node cpu capacity is 2
	I0920 18:22:37.482532   75264 node_conditions.go:105] duration metric: took 3.125658ms to run NodePressure ...
	I0920 18:22:37.482543   75264 start.go:241] waiting for startup goroutines ...
	I0920 18:22:37.482549   75264 start.go:246] waiting for cluster config update ...
	I0920 18:22:37.482561   75264 start.go:255] writing updated cluster config ...
	I0920 18:22:37.482817   75264 ssh_runner.go:195] Run: rm -f paused
	I0920 18:22:37.532982   75264 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 18:22:37.534936   75264 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-553719" cluster and "default" namespace by default
	I0920 18:22:34.302204   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:36.802991   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:36.586809   75086 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 18:22:36.599904   75086 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
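At this point the bridge CNI conflist has been written to the node. A quick, hedged way to confirm it landed and that containers keep running under it, from the host (profile name and file path are taken from the lines above; the `minikube ssh -p <profile> -- <cmd>` form is assumed from standard minikube usage):

    minikube ssh -p embed-certs-768431 -- sudo cat /etc/cni/net.d/1-k8s.conflist
    minikube ssh -p embed-certs-768431 -- sudo crictl ps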
	I0920 18:22:36.620050   75086 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 18:22:36.620133   75086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:22:36.620149   75086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-768431 minikube.k8s.io/updated_at=2024_09_20T18_22_36_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=0626f22cf0d915d75e291a5bce701f94395056e1 minikube.k8s.io/name=embed-certs-768431 minikube.k8s.io/primary=true
	I0920 18:22:36.636625   75086 ops.go:34] apiserver oom_adj: -16
	I0920 18:22:36.817434   75086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:22:37.318133   75086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:22:37.818515   75086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:22:38.318005   75086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:22:38.817759   75086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:22:39.318481   75086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:22:39.817943   75086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:22:40.317957   75086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:22:40.423342   75086 kubeadm.go:1113] duration metric: took 3.803277177s to wait for elevateKubeSystemPrivileges
	I0920 18:22:40.423389   75086 kubeadm.go:394] duration metric: took 4m59.290098215s to StartCluster
	I0920 18:22:40.423413   75086 settings.go:142] acquiring lock: {Name:mk2a13c58cdc0faf8cddca5d6716175d45db9bfd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:22:40.423516   75086 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19672-8777/kubeconfig
	I0920 18:22:40.426054   75086 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/kubeconfig: {Name:mkf32a4c736808e023459b2f0e40188618a38db1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:22:40.426360   75086 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.202 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 18:22:40.426456   75086 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0920 18:22:40.426546   75086 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-768431"
	I0920 18:22:40.426566   75086 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-768431"
	W0920 18:22:40.426578   75086 addons.go:243] addon storage-provisioner should already be in state true
	I0920 18:22:40.426576   75086 addons.go:69] Setting default-storageclass=true in profile "embed-certs-768431"
	I0920 18:22:40.426608   75086 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-768431"
	I0920 18:22:40.426622   75086 host.go:66] Checking if "embed-certs-768431" exists ...
	I0920 18:22:40.426621   75086 addons.go:69] Setting metrics-server=true in profile "embed-certs-768431"
	I0920 18:22:40.426662   75086 config.go:182] Loaded profile config "embed-certs-768431": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:22:40.426669   75086 addons.go:234] Setting addon metrics-server=true in "embed-certs-768431"
	W0920 18:22:40.426699   75086 addons.go:243] addon metrics-server should already be in state true
	I0920 18:22:40.426739   75086 host.go:66] Checking if "embed-certs-768431" exists ...
	I0920 18:22:40.427075   75086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:22:40.427116   75086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:22:40.427120   75086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:22:40.427155   75086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:22:40.427161   75086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:22:40.427251   75086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:22:40.427977   75086 out.go:177] * Verifying Kubernetes components...
	I0920 18:22:40.429647   75086 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:22:40.443468   75086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40103
	I0920 18:22:40.443663   75086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37871
	I0920 18:22:40.444055   75086 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:22:40.444062   75086 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:22:40.444562   75086 main.go:141] libmachine: Using API Version  1
	I0920 18:22:40.444586   75086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:22:40.444732   75086 main.go:141] libmachine: Using API Version  1
	I0920 18:22:40.444750   75086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:22:40.445016   75086 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:22:40.445110   75086 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:22:40.445584   75086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:22:40.445634   75086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:22:40.445652   75086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:22:40.445654   75086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38771
	I0920 18:22:40.445684   75086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:22:40.446227   75086 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:22:40.446872   75086 main.go:141] libmachine: Using API Version  1
	I0920 18:22:40.446892   75086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:22:40.447280   75086 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:22:40.447692   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetState
	I0920 18:22:40.451332   75086 addons.go:234] Setting addon default-storageclass=true in "embed-certs-768431"
	W0920 18:22:40.451355   75086 addons.go:243] addon default-storageclass should already be in state true
	I0920 18:22:40.451382   75086 host.go:66] Checking if "embed-certs-768431" exists ...
	I0920 18:22:40.451753   75086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:22:40.451803   75086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:22:40.463317   75086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39353
	I0920 18:22:40.463816   75086 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:22:40.464544   75086 main.go:141] libmachine: Using API Version  1
	I0920 18:22:40.464577   75086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:22:40.464961   75086 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:22:40.467445   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetState
	I0920 18:22:40.469290   75086 main.go:141] libmachine: (embed-certs-768431) Calling .DriverName
	I0920 18:22:40.471212   75086 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0920 18:22:40.471862   75086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40689
	I0920 18:22:40.472254   75086 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:22:40.472515   75086 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0920 18:22:40.472535   75086 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0920 18:22:40.472557   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHHostname
	I0920 18:22:40.472718   75086 main.go:141] libmachine: Using API Version  1
	I0920 18:22:40.472741   75086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:22:40.473259   75086 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:22:40.473477   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetState
	I0920 18:22:40.473753   75086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36495
	I0920 18:22:40.474173   75086 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:22:40.474778   75086 main.go:141] libmachine: Using API Version  1
	I0920 18:22:40.474796   75086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:22:40.475166   75086 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:22:40.475726   75086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:22:40.475766   75086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:22:40.475984   75086 main.go:141] libmachine: (embed-certs-768431) Calling .DriverName
	I0920 18:22:40.476244   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:22:40.476583   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:22:40.476605   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:22:40.476773   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHPort
	I0920 18:22:40.476953   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHKeyPath
	I0920 18:22:40.477053   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHUsername
	I0920 18:22:40.477196   75086 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/embed-certs-768431/id_rsa Username:docker}
	I0920 18:22:40.478244   75086 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:22:40.479443   75086 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 18:22:40.479459   75086 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 18:22:40.479476   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHHostname
	I0920 18:22:40.482119   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:22:40.482400   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:22:40.482423   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:22:40.482639   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHPort
	I0920 18:22:40.482840   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHKeyPath
	I0920 18:22:40.482968   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHUsername
	I0920 18:22:40.483145   75086 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/embed-certs-768431/id_rsa Username:docker}
	I0920 18:22:40.494454   75086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35971
	I0920 18:22:40.494948   75086 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:22:40.495425   75086 main.go:141] libmachine: Using API Version  1
	I0920 18:22:40.495444   75086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:22:40.495798   75086 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:22:40.495990   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetState
	I0920 18:22:40.497591   75086 main.go:141] libmachine: (embed-certs-768431) Calling .DriverName
	I0920 18:22:40.497878   75086 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 18:22:40.497894   75086 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 18:22:40.497913   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHHostname
	I0920 18:22:40.500755   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:22:40.501130   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:22:40.501153   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:22:40.501355   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHPort
	I0920 18:22:40.501548   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHKeyPath
	I0920 18:22:40.501725   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHUsername
	I0920 18:22:40.502091   75086 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/embed-certs-768431/id_rsa Username:docker}
	I0920 18:22:40.646738   75086 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:22:40.673285   75086 node_ready.go:35] waiting up to 6m0s for node "embed-certs-768431" to be "Ready" ...
	I0920 18:22:40.684618   75086 node_ready.go:49] node "embed-certs-768431" has status "Ready":"True"
	I0920 18:22:40.684643   75086 node_ready.go:38] duration metric: took 11.298825ms for node "embed-certs-768431" to be "Ready" ...
	I0920 18:22:40.684653   75086 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:22:40.689151   75086 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-768431" in "kube-system" namespace to be "Ready" ...
	I0920 18:22:40.734882   75086 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 18:22:40.744479   75086 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 18:22:40.884534   75086 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0920 18:22:40.884556   75086 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0920 18:22:41.018287   75086 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0920 18:22:41.018313   75086 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0920 18:22:41.091216   75086 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 18:22:41.091247   75086 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0920 18:22:41.132887   75086 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 18:22:41.538445   75086 main.go:141] libmachine: Making call to close driver server
	I0920 18:22:41.538472   75086 main.go:141] libmachine: (embed-certs-768431) Calling .Close
	I0920 18:22:41.538774   75086 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:22:41.538795   75086 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:22:41.538806   75086 main.go:141] libmachine: Making call to close driver server
	I0920 18:22:41.538815   75086 main.go:141] libmachine: (embed-certs-768431) Calling .Close
	I0920 18:22:41.539010   75086 main.go:141] libmachine: Making call to close driver server
	I0920 18:22:41.539027   75086 main.go:141] libmachine: (embed-certs-768431) Calling .Close
	I0920 18:22:41.539118   75086 main.go:141] libmachine: (embed-certs-768431) DBG | Closing plugin on server side
	I0920 18:22:41.539137   75086 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:22:41.539205   75086 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:22:41.539273   75086 main.go:141] libmachine: (embed-certs-768431) DBG | Closing plugin on server side
	I0920 18:22:41.539286   75086 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:22:41.539300   75086 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:22:41.539309   75086 main.go:141] libmachine: Making call to close driver server
	I0920 18:22:41.539348   75086 main.go:141] libmachine: (embed-certs-768431) Calling .Close
	I0920 18:22:41.539629   75086 main.go:141] libmachine: (embed-certs-768431) DBG | Closing plugin on server side
	I0920 18:22:41.539693   75086 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:22:41.539712   75086 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:22:41.559164   75086 main.go:141] libmachine: Making call to close driver server
	I0920 18:22:41.559191   75086 main.go:141] libmachine: (embed-certs-768431) Calling .Close
	I0920 18:22:41.559517   75086 main.go:141] libmachine: (embed-certs-768431) DBG | Closing plugin on server side
	I0920 18:22:41.559580   75086 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:22:41.559596   75086 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:22:41.880601   75086 main.go:141] libmachine: Making call to close driver server
	I0920 18:22:41.880640   75086 main.go:141] libmachine: (embed-certs-768431) Calling .Close
	I0920 18:22:41.880998   75086 main.go:141] libmachine: (embed-certs-768431) DBG | Closing plugin on server side
	I0920 18:22:41.881026   75086 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:22:41.881039   75086 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:22:41.881047   75086 main.go:141] libmachine: Making call to close driver server
	I0920 18:22:41.881052   75086 main.go:141] libmachine: (embed-certs-768431) Calling .Close
	I0920 18:22:41.881304   75086 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:22:41.881322   75086 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:22:41.881336   75086 addons.go:475] Verifying addon metrics-server=true in "embed-certs-768431"
	I0920 18:22:41.883315   75086 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0920 18:22:39.302453   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:41.802428   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:41.885367   75086 addons.go:510] duration metric: took 1.45891561s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0920 18:22:42.696266   75086 pod_ready.go:103] pod "etcd-embed-certs-768431" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:44.300965   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:46.301969   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:45.195160   75086 pod_ready.go:103] pod "etcd-embed-certs-768431" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:46.200142   75086 pod_ready.go:93] pod "etcd-embed-certs-768431" in "kube-system" namespace has status "Ready":"True"
	I0920 18:22:46.200166   75086 pod_ready.go:82] duration metric: took 5.510989222s for pod "etcd-embed-certs-768431" in "kube-system" namespace to be "Ready" ...
	I0920 18:22:46.200175   75086 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-768431" in "kube-system" namespace to be "Ready" ...
	I0920 18:22:46.205619   75086 pod_ready.go:93] pod "kube-apiserver-embed-certs-768431" in "kube-system" namespace has status "Ready":"True"
	I0920 18:22:46.205647   75086 pod_ready.go:82] duration metric: took 5.464252ms for pod "kube-apiserver-embed-certs-768431" in "kube-system" namespace to be "Ready" ...
	I0920 18:22:46.205659   75086 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-768431" in "kube-system" namespace to be "Ready" ...
	I0920 18:22:46.210040   75086 pod_ready.go:93] pod "kube-controller-manager-embed-certs-768431" in "kube-system" namespace has status "Ready":"True"
	I0920 18:22:46.210066   75086 pod_ready.go:82] duration metric: took 4.397776ms for pod "kube-controller-manager-embed-certs-768431" in "kube-system" namespace to be "Ready" ...
	I0920 18:22:46.210079   75086 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-768431" in "kube-system" namespace to be "Ready" ...
	I0920 18:22:48.216587   75086 pod_ready.go:103] pod "kube-scheduler-embed-certs-768431" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:49.217996   75086 pod_ready.go:93] pod "kube-scheduler-embed-certs-768431" in "kube-system" namespace has status "Ready":"True"
	I0920 18:22:49.218019   75086 pod_ready.go:82] duration metric: took 3.007931797s for pod "kube-scheduler-embed-certs-768431" in "kube-system" namespace to be "Ready" ...
	I0920 18:22:49.218025   75086 pod_ready.go:39] duration metric: took 8.533358652s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:22:49.218039   75086 api_server.go:52] waiting for apiserver process to appear ...
	I0920 18:22:49.218100   75086 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:22:49.235456   75086 api_server.go:72] duration metric: took 8.809056301s to wait for apiserver process to appear ...
	I0920 18:22:49.235480   75086 api_server.go:88] waiting for apiserver healthz status ...
	I0920 18:22:49.235499   75086 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0920 18:22:49.239562   75086 api_server.go:279] https://192.168.61.202:8443/healthz returned 200:
	ok
	I0920 18:22:49.240999   75086 api_server.go:141] control plane version: v1.31.1
	I0920 18:22:49.241024   75086 api_server.go:131] duration metric: took 5.537177ms to wait for apiserver health ...
	I0920 18:22:49.241033   75086 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 18:22:49.246297   75086 system_pods.go:59] 9 kube-system pods found
	I0920 18:22:49.246335   75086 system_pods.go:61] "coredns-7c65d6cfc9-g5tkc" [8877c0e8-6c8f-4a62-94bd-508982faee3c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0920 18:22:49.246342   75086 system_pods.go:61] "coredns-7c65d6cfc9-jkkdn" [f64288e4-009c-4ba8-93e7-30ca5296af46] Running
	I0920 18:22:49.246348   75086 system_pods.go:61] "etcd-embed-certs-768431" [f0ba7c9a-cc8b-43d4-aa48-a8f923d3c515] Running
	I0920 18:22:49.246352   75086 system_pods.go:61] "kube-apiserver-embed-certs-768431" [dba3b6cc-95db-47a0-ba41-3aa3ed3e8adb] Running
	I0920 18:22:49.246356   75086 system_pods.go:61] "kube-controller-manager-embed-certs-768431" [ff933e33-4c5c-4a8c-8ee4-6f0cb3ea4c3a] Running
	I0920 18:22:49.246359   75086 system_pods.go:61] "kube-proxy-c4527" [2e2d5102-0c42-4b87-8a27-dd53b8eb41f9] Running
	I0920 18:22:49.246362   75086 system_pods.go:61] "kube-scheduler-embed-certs-768431" [8cca3406-a395-4dcd-97ac-d8ce3e8deac6] Running
	I0920 18:22:49.246367   75086 system_pods.go:61] "metrics-server-6867b74b74-9snmf" [5fb654f5-5e73-436e-bc9d-04ef5077deb4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 18:22:49.246373   75086 system_pods.go:61] "storage-provisioner" [09227b45-ea2a-4fcc-b082-9978e5f00a4b] Running
	I0920 18:22:49.246379   75086 system_pods.go:74] duration metric: took 5.340615ms to wait for pod list to return data ...
	I0920 18:22:49.246388   75086 default_sa.go:34] waiting for default service account to be created ...
	I0920 18:22:49.249593   75086 default_sa.go:45] found service account: "default"
	I0920 18:22:49.249617   75086 default_sa.go:55] duration metric: took 3.222486ms for default service account to be created ...
	I0920 18:22:49.249626   75086 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 18:22:49.255747   75086 system_pods.go:86] 9 kube-system pods found
	I0920 18:22:49.255780   75086 system_pods.go:89] "coredns-7c65d6cfc9-g5tkc" [8877c0e8-6c8f-4a62-94bd-508982faee3c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0920 18:22:49.255790   75086 system_pods.go:89] "coredns-7c65d6cfc9-jkkdn" [f64288e4-009c-4ba8-93e7-30ca5296af46] Running
	I0920 18:22:49.255799   75086 system_pods.go:89] "etcd-embed-certs-768431" [f0ba7c9a-cc8b-43d4-aa48-a8f923d3c515] Running
	I0920 18:22:49.255805   75086 system_pods.go:89] "kube-apiserver-embed-certs-768431" [dba3b6cc-95db-47a0-ba41-3aa3ed3e8adb] Running
	I0920 18:22:49.255811   75086 system_pods.go:89] "kube-controller-manager-embed-certs-768431" [ff933e33-4c5c-4a8c-8ee4-6f0cb3ea4c3a] Running
	I0920 18:22:49.255817   75086 system_pods.go:89] "kube-proxy-c4527" [2e2d5102-0c42-4b87-8a27-dd53b8eb41f9] Running
	I0920 18:22:49.255826   75086 system_pods.go:89] "kube-scheduler-embed-certs-768431" [8cca3406-a395-4dcd-97ac-d8ce3e8deac6] Running
	I0920 18:22:49.255834   75086 system_pods.go:89] "metrics-server-6867b74b74-9snmf" [5fb654f5-5e73-436e-bc9d-04ef5077deb4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 18:22:49.255845   75086 system_pods.go:89] "storage-provisioner" [09227b45-ea2a-4fcc-b082-9978e5f00a4b] Running
	I0920 18:22:49.255852   75086 system_pods.go:126] duration metric: took 6.220419ms to wait for k8s-apps to be running ...
	I0920 18:22:49.255863   75086 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 18:22:49.255918   75086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:22:49.271916   75086 system_svc.go:56] duration metric: took 16.046857ms WaitForService to wait for kubelet
	I0920 18:22:49.271946   75086 kubeadm.go:582] duration metric: took 8.845551531s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 18:22:49.271962   75086 node_conditions.go:102] verifying NodePressure condition ...
	I0920 18:22:49.277150   75086 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 18:22:49.277190   75086 node_conditions.go:123] node cpu capacity is 2
	I0920 18:22:49.277203   75086 node_conditions.go:105] duration metric: took 5.234938ms to run NodePressure ...
	I0920 18:22:49.277216   75086 start.go:241] waiting for startup goroutines ...
	I0920 18:22:49.277226   75086 start.go:246] waiting for cluster config update ...
	I0920 18:22:49.277240   75086 start.go:255] writing updated cluster config ...
	I0920 18:22:49.278062   75086 ssh_runner.go:195] Run: rm -f paused
	I0920 18:22:49.329596   75086 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 18:22:49.331593   75086 out.go:177] * Done! kubectl is now configured to use "embed-certs-768431" cluster and "default" namespace by default
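The apiserver health wait logged above (api_server.go:253/279) amounts to polling the cluster's /healthz endpoint over HTTPS until it returns 200 with body "ok". Below is a minimal, illustrative Go sketch of that probe, not minikube's actual client code: the endpoint value is copied from the log, and TLS verification is skipped only to keep the example self-contained (the real check authenticates with the cluster's certificates).

// healthzprobe: illustrative sketch of the apiserver /healthz wait above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(endpoint string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		// InsecureSkipVerify is for illustration only; the real probe uses cluster certs.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(endpoint)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("%s returned 200: %s\n", endpoint, body)
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("apiserver healthz not ready within %s", timeout)
}

func main() {
	// Endpoint taken from the log above; substitute your control-plane address.
	if err := waitForHealthz("https://192.168.61.202:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}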
	I0920 18:22:48.802304   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:51.301288   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:53.801957   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:56.301310   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:58.302165   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:23:00.801931   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:23:03.301289   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:23:05.801777   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:23:08.301539   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:23:08.801990   74753 pod_ready.go:82] duration metric: took 4m0.006972987s for pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace to be "Ready" ...
	E0920 18:23:08.802010   74753 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0920 18:23:08.802019   74753 pod_ready.go:39] duration metric: took 4m2.40678087s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:23:08.802035   74753 api_server.go:52] waiting for apiserver process to appear ...
	I0920 18:23:08.802062   74753 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:23:08.802110   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:23:08.848687   74753 cri.go:89] found id: "334e4df5baa4f8f7d58af2308fab7f78de22f7e07d6dfbcbeaa0df2a52f6b207"
	I0920 18:23:08.848709   74753 cri.go:89] found id: ""
	I0920 18:23:08.848716   74753 logs.go:276] 1 containers: [334e4df5baa4f8f7d58af2308fab7f78de22f7e07d6dfbcbeaa0df2a52f6b207]
	I0920 18:23:08.848764   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:08.853180   74753 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:23:08.853239   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:23:08.889768   74753 cri.go:89] found id: "98aa96314cf8f863121b9953ad6335628481f536fcab49b7c00505a17eb35ff2"
	I0920 18:23:08.889793   74753 cri.go:89] found id: ""
	I0920 18:23:08.889803   74753 logs.go:276] 1 containers: [98aa96314cf8f863121b9953ad6335628481f536fcab49b7c00505a17eb35ff2]
	I0920 18:23:08.889869   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:08.893865   74753 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:23:08.893937   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:23:08.936592   74753 cri.go:89] found id: "35f0d8dd053d4c376ccf21215492d8509ff701e7d09b345fdbe06d505ba2d6b4"
	I0920 18:23:08.936619   74753 cri.go:89] found id: ""
	I0920 18:23:08.936629   74753 logs.go:276] 1 containers: [35f0d8dd053d4c376ccf21215492d8509ff701e7d09b345fdbe06d505ba2d6b4]
	I0920 18:23:08.936768   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:08.941222   74753 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:23:08.941311   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:23:08.978894   74753 cri.go:89] found id: "8153479cebb05353bec31c41778746c485cb294bf41c3f98e30559ad7da64c64"
	I0920 18:23:08.978918   74753 cri.go:89] found id: ""
	I0920 18:23:08.978929   74753 logs.go:276] 1 containers: [8153479cebb05353bec31c41778746c485cb294bf41c3f98e30559ad7da64c64]
	I0920 18:23:08.978977   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:08.982783   74753 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:23:08.982845   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:23:09.018400   74753 cri.go:89] found id: "6df198ca54e8004a532de9b15cfe7a48a659ae06623c00a78cad528762944497"
	I0920 18:23:09.018422   74753 cri.go:89] found id: ""
	I0920 18:23:09.018430   74753 logs.go:276] 1 containers: [6df198ca54e8004a532de9b15cfe7a48a659ae06623c00a78cad528762944497]
	I0920 18:23:09.018485   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:09.022403   74753 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:23:09.022470   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:23:09.062315   74753 cri.go:89] found id: "3ebf4c520d684127cd0b6146d44e863090b9c2db3ac866e9ce3ef2d10d2d6da1"
	I0920 18:23:09.062385   74753 cri.go:89] found id: ""
	I0920 18:23:09.062397   74753 logs.go:276] 1 containers: [3ebf4c520d684127cd0b6146d44e863090b9c2db3ac866e9ce3ef2d10d2d6da1]
	I0920 18:23:09.062453   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:09.066976   74753 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:23:09.067049   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:23:09.107467   74753 cri.go:89] found id: ""
	I0920 18:23:09.107504   74753 logs.go:276] 0 containers: []
	W0920 18:23:09.107515   74753 logs.go:278] No container was found matching "kindnet"
	I0920 18:23:09.107523   74753 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0920 18:23:09.107583   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0920 18:23:09.150489   74753 cri.go:89] found id: "179e4a02f3459a613d3307806b477d54581555b18babca62d0c5e553b9562442"
	I0920 18:23:09.150519   74753 cri.go:89] found id: "3eb9abdf57de51979118cfa77b613c22e28122fcb1da9e72fbf6f3a946ed3531"
	I0920 18:23:09.150526   74753 cri.go:89] found id: ""
	I0920 18:23:09.150536   74753 logs.go:276] 2 containers: [179e4a02f3459a613d3307806b477d54581555b18babca62d0c5e553b9562442 3eb9abdf57de51979118cfa77b613c22e28122fcb1da9e72fbf6f3a946ed3531]
	I0920 18:23:09.150589   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:09.154618   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:09.158682   74753 logs.go:123] Gathering logs for container status ...
	I0920 18:23:09.158708   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:23:09.198393   74753 logs.go:123] Gathering logs for kubelet ...
	I0920 18:23:09.198420   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:23:09.266161   74753 logs.go:123] Gathering logs for kube-apiserver [334e4df5baa4f8f7d58af2308fab7f78de22f7e07d6dfbcbeaa0df2a52f6b207] ...
	I0920 18:23:09.266197   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 334e4df5baa4f8f7d58af2308fab7f78de22f7e07d6dfbcbeaa0df2a52f6b207"
	I0920 18:23:09.311406   74753 logs.go:123] Gathering logs for etcd [98aa96314cf8f863121b9953ad6335628481f536fcab49b7c00505a17eb35ff2] ...
	I0920 18:23:09.311438   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 98aa96314cf8f863121b9953ad6335628481f536fcab49b7c00505a17eb35ff2"
	I0920 18:23:09.353233   74753 logs.go:123] Gathering logs for kube-controller-manager [3ebf4c520d684127cd0b6146d44e863090b9c2db3ac866e9ce3ef2d10d2d6da1] ...
	I0920 18:23:09.353271   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ebf4c520d684127cd0b6146d44e863090b9c2db3ac866e9ce3ef2d10d2d6da1"
	I0920 18:23:09.415966   74753 logs.go:123] Gathering logs for storage-provisioner [3eb9abdf57de51979118cfa77b613c22e28122fcb1da9e72fbf6f3a946ed3531] ...
	I0920 18:23:09.416010   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3eb9abdf57de51979118cfa77b613c22e28122fcb1da9e72fbf6f3a946ed3531"
	I0920 18:23:09.462018   74753 logs.go:123] Gathering logs for storage-provisioner [179e4a02f3459a613d3307806b477d54581555b18babca62d0c5e553b9562442] ...
	I0920 18:23:09.462054   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 179e4a02f3459a613d3307806b477d54581555b18babca62d0c5e553b9562442"
	I0920 18:23:09.499851   74753 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:23:09.499883   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:23:10.018536   74753 logs.go:123] Gathering logs for dmesg ...
	I0920 18:23:10.018586   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:23:10.033607   74753 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:23:10.033639   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 18:23:10.167598   74753 logs.go:123] Gathering logs for coredns [35f0d8dd053d4c376ccf21215492d8509ff701e7d09b345fdbe06d505ba2d6b4] ...
	I0920 18:23:10.167645   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35f0d8dd053d4c376ccf21215492d8509ff701e7d09b345fdbe06d505ba2d6b4"
	I0920 18:23:10.201819   74753 logs.go:123] Gathering logs for kube-scheduler [8153479cebb05353bec31c41778746c485cb294bf41c3f98e30559ad7da64c64] ...
	I0920 18:23:10.201856   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8153479cebb05353bec31c41778746c485cb294bf41c3f98e30559ad7da64c64"
	I0920 18:23:10.237079   74753 logs.go:123] Gathering logs for kube-proxy [6df198ca54e8004a532de9b15cfe7a48a659ae06623c00a78cad528762944497] ...
	I0920 18:23:10.237108   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6df198ca54e8004a532de9b15cfe7a48a659ae06623c00a78cad528762944497"
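The log-gathering pass above resolves one container ID per control-plane component with `sudo crictl ps -a --quiet --name=<component>` and then tails each hit with `sudo crictl logs --tail 400 <id>`. A condensed Go sketch of that loop follows; it is illustrative only, with the component list and crictl invocations taken verbatim from the log above.

// gatherlogs: illustrative sketch of the crictl-based log collection above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "storage-provisioner",
	}
	for _, name := range components {
		// crictl ps -a --quiet --name=<name> prints matching container IDs, one per line.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("listing %s containers failed: %v\n", name, err)
			continue
		}
		for _, id := range strings.Fields(string(out)) {
			logs, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
			if err != nil {
				fmt.Printf("gathering logs for %s [%s] failed: %v\n", name, id, err)
				continue
			}
			fmt.Printf("=== %s [%s] ===\n%s\n", name, id, logs)
		}
	}
}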
	I0920 18:23:12.778360   74753 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:23:12.796411   74753 api_server.go:72] duration metric: took 4m13.647076581s to wait for apiserver process to appear ...
	I0920 18:23:12.796438   74753 api_server.go:88] waiting for apiserver healthz status ...
	I0920 18:23:12.796475   74753 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:23:12.796525   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:23:12.833259   74753 cri.go:89] found id: "334e4df5baa4f8f7d58af2308fab7f78de22f7e07d6dfbcbeaa0df2a52f6b207"
	I0920 18:23:12.833279   74753 cri.go:89] found id: ""
	I0920 18:23:12.833287   74753 logs.go:276] 1 containers: [334e4df5baa4f8f7d58af2308fab7f78de22f7e07d6dfbcbeaa0df2a52f6b207]
	I0920 18:23:12.833334   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:12.837497   74753 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:23:12.837568   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:23:12.877602   74753 cri.go:89] found id: "98aa96314cf8f863121b9953ad6335628481f536fcab49b7c00505a17eb35ff2"
	I0920 18:23:12.877628   74753 cri.go:89] found id: ""
	I0920 18:23:12.877639   74753 logs.go:276] 1 containers: [98aa96314cf8f863121b9953ad6335628481f536fcab49b7c00505a17eb35ff2]
	I0920 18:23:12.877688   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:12.881869   74753 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:23:12.881951   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:23:12.924617   74753 cri.go:89] found id: "35f0d8dd053d4c376ccf21215492d8509ff701e7d09b345fdbe06d505ba2d6b4"
	I0920 18:23:12.924641   74753 cri.go:89] found id: ""
	I0920 18:23:12.924650   74753 logs.go:276] 1 containers: [35f0d8dd053d4c376ccf21215492d8509ff701e7d09b345fdbe06d505ba2d6b4]
	I0920 18:23:12.924710   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:12.929317   74753 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:23:12.929381   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:23:12.968642   74753 cri.go:89] found id: "8153479cebb05353bec31c41778746c485cb294bf41c3f98e30559ad7da64c64"
	I0920 18:23:12.968672   74753 cri.go:89] found id: ""
	I0920 18:23:12.968682   74753 logs.go:276] 1 containers: [8153479cebb05353bec31c41778746c485cb294bf41c3f98e30559ad7da64c64]
	I0920 18:23:12.968745   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:12.974588   74753 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:23:12.974665   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:23:13.010036   74753 cri.go:89] found id: "6df198ca54e8004a532de9b15cfe7a48a659ae06623c00a78cad528762944497"
	I0920 18:23:13.010067   74753 cri.go:89] found id: ""
	I0920 18:23:13.010078   74753 logs.go:276] 1 containers: [6df198ca54e8004a532de9b15cfe7a48a659ae06623c00a78cad528762944497]
	I0920 18:23:13.010136   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:13.014300   74753 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:23:13.014358   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:23:13.055560   74753 cri.go:89] found id: "3ebf4c520d684127cd0b6146d44e863090b9c2db3ac866e9ce3ef2d10d2d6da1"
	I0920 18:23:13.055591   74753 cri.go:89] found id: ""
	I0920 18:23:13.055601   74753 logs.go:276] 1 containers: [3ebf4c520d684127cd0b6146d44e863090b9c2db3ac866e9ce3ef2d10d2d6da1]
	I0920 18:23:13.055663   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:13.059828   74753 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:23:13.059887   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:23:13.100161   74753 cri.go:89] found id: ""
	I0920 18:23:13.100189   74753 logs.go:276] 0 containers: []
	W0920 18:23:13.100199   74753 logs.go:278] No container was found matching "kindnet"
	I0920 18:23:13.100207   74753 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0920 18:23:13.100269   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0920 18:23:13.136788   74753 cri.go:89] found id: "179e4a02f3459a613d3307806b477d54581555b18babca62d0c5e553b9562442"
	I0920 18:23:13.136818   74753 cri.go:89] found id: "3eb9abdf57de51979118cfa77b613c22e28122fcb1da9e72fbf6f3a946ed3531"
	I0920 18:23:13.136824   74753 cri.go:89] found id: ""
	I0920 18:23:13.136831   74753 logs.go:276] 2 containers: [179e4a02f3459a613d3307806b477d54581555b18babca62d0c5e553b9562442 3eb9abdf57de51979118cfa77b613c22e28122fcb1da9e72fbf6f3a946ed3531]
	I0920 18:23:13.136894   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:13.140956   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:13.145991   74753 logs.go:123] Gathering logs for container status ...
	I0920 18:23:13.146023   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:23:13.187274   74753 logs.go:123] Gathering logs for kubelet ...
	I0920 18:23:13.187303   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:23:13.257111   74753 logs.go:123] Gathering logs for dmesg ...
	I0920 18:23:13.257147   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:23:13.271678   74753 logs.go:123] Gathering logs for kube-controller-manager [3ebf4c520d684127cd0b6146d44e863090b9c2db3ac866e9ce3ef2d10d2d6da1] ...
	I0920 18:23:13.271704   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ebf4c520d684127cd0b6146d44e863090b9c2db3ac866e9ce3ef2d10d2d6da1"
	I0920 18:23:13.327163   74753 logs.go:123] Gathering logs for storage-provisioner [179e4a02f3459a613d3307806b477d54581555b18babca62d0c5e553b9562442] ...
	I0920 18:23:13.327191   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 179e4a02f3459a613d3307806b477d54581555b18babca62d0c5e553b9562442"
	I0920 18:23:13.362002   74753 logs.go:123] Gathering logs for storage-provisioner [3eb9abdf57de51979118cfa77b613c22e28122fcb1da9e72fbf6f3a946ed3531] ...
	I0920 18:23:13.362039   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3eb9abdf57de51979118cfa77b613c22e28122fcb1da9e72fbf6f3a946ed3531"
	I0920 18:23:13.409907   74753 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:23:13.409938   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:23:13.840233   74753 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:23:13.840275   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 18:23:13.956466   74753 logs.go:123] Gathering logs for kube-apiserver [334e4df5baa4f8f7d58af2308fab7f78de22f7e07d6dfbcbeaa0df2a52f6b207] ...
	I0920 18:23:13.956499   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 334e4df5baa4f8f7d58af2308fab7f78de22f7e07d6dfbcbeaa0df2a52f6b207"
	I0920 18:23:14.006028   74753 logs.go:123] Gathering logs for etcd [98aa96314cf8f863121b9953ad6335628481f536fcab49b7c00505a17eb35ff2] ...
	I0920 18:23:14.006063   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 98aa96314cf8f863121b9953ad6335628481f536fcab49b7c00505a17eb35ff2"
	I0920 18:23:14.051354   74753 logs.go:123] Gathering logs for coredns [35f0d8dd053d4c376ccf21215492d8509ff701e7d09b345fdbe06d505ba2d6b4] ...
	I0920 18:23:14.051382   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35f0d8dd053d4c376ccf21215492d8509ff701e7d09b345fdbe06d505ba2d6b4"
	I0920 18:23:14.086486   74753 logs.go:123] Gathering logs for kube-scheduler [8153479cebb05353bec31c41778746c485cb294bf41c3f98e30559ad7da64c64] ...
	I0920 18:23:14.086518   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8153479cebb05353bec31c41778746c485cb294bf41c3f98e30559ad7da64c64"
	I0920 18:23:14.128119   74753 logs.go:123] Gathering logs for kube-proxy [6df198ca54e8004a532de9b15cfe7a48a659ae06623c00a78cad528762944497] ...
	I0920 18:23:14.128149   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6df198ca54e8004a532de9b15cfe7a48a659ae06623c00a78cad528762944497"
	I0920 18:23:16.668901   74753 api_server.go:253] Checking apiserver healthz at https://192.168.50.47:8443/healthz ...
	I0920 18:23:16.674235   74753 api_server.go:279] https://192.168.50.47:8443/healthz returned 200:
	ok
	I0920 18:23:16.675431   74753 api_server.go:141] control plane version: v1.31.1
	I0920 18:23:16.675449   74753 api_server.go:131] duration metric: took 3.879005012s to wait for apiserver health ...
	I0920 18:23:16.675456   74753 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 18:23:16.675481   74753 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:23:16.675527   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:23:16.711645   74753 cri.go:89] found id: "334e4df5baa4f8f7d58af2308fab7f78de22f7e07d6dfbcbeaa0df2a52f6b207"
	I0920 18:23:16.711673   74753 cri.go:89] found id: ""
	I0920 18:23:16.711683   74753 logs.go:276] 1 containers: [334e4df5baa4f8f7d58af2308fab7f78de22f7e07d6dfbcbeaa0df2a52f6b207]
	I0920 18:23:16.711743   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:16.716466   74753 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:23:16.716536   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:23:16.763061   74753 cri.go:89] found id: "98aa96314cf8f863121b9953ad6335628481f536fcab49b7c00505a17eb35ff2"
	I0920 18:23:16.763085   74753 cri.go:89] found id: ""
	I0920 18:23:16.763095   74753 logs.go:276] 1 containers: [98aa96314cf8f863121b9953ad6335628481f536fcab49b7c00505a17eb35ff2]
	I0920 18:23:16.763155   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:16.767018   74753 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:23:16.767075   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:23:16.801111   74753 cri.go:89] found id: "35f0d8dd053d4c376ccf21215492d8509ff701e7d09b345fdbe06d505ba2d6b4"
	I0920 18:23:16.801131   74753 cri.go:89] found id: ""
	I0920 18:23:16.801138   74753 logs.go:276] 1 containers: [35f0d8dd053d4c376ccf21215492d8509ff701e7d09b345fdbe06d505ba2d6b4]
	I0920 18:23:16.801192   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:16.805372   74753 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:23:16.805436   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:23:16.843368   74753 cri.go:89] found id: "8153479cebb05353bec31c41778746c485cb294bf41c3f98e30559ad7da64c64"
	I0920 18:23:16.843399   74753 cri.go:89] found id: ""
	I0920 18:23:16.843410   74753 logs.go:276] 1 containers: [8153479cebb05353bec31c41778746c485cb294bf41c3f98e30559ad7da64c64]
	I0920 18:23:16.843461   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:16.847284   74753 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:23:16.847341   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:23:16.882749   74753 cri.go:89] found id: "6df198ca54e8004a532de9b15cfe7a48a659ae06623c00a78cad528762944497"
	I0920 18:23:16.882770   74753 cri.go:89] found id: ""
	I0920 18:23:16.882777   74753 logs.go:276] 1 containers: [6df198ca54e8004a532de9b15cfe7a48a659ae06623c00a78cad528762944497]
	I0920 18:23:16.882838   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:16.887003   74753 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:23:16.887083   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:23:16.923261   74753 cri.go:89] found id: "3ebf4c520d684127cd0b6146d44e863090b9c2db3ac866e9ce3ef2d10d2d6da1"
	I0920 18:23:16.923289   74753 cri.go:89] found id: ""
	I0920 18:23:16.923303   74753 logs.go:276] 1 containers: [3ebf4c520d684127cd0b6146d44e863090b9c2db3ac866e9ce3ef2d10d2d6da1]
	I0920 18:23:16.923357   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:16.927722   74753 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:23:16.927782   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:23:16.963753   74753 cri.go:89] found id: ""
	I0920 18:23:16.963780   74753 logs.go:276] 0 containers: []
	W0920 18:23:16.963791   74753 logs.go:278] No container was found matching "kindnet"
	I0920 18:23:16.963799   74753 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0920 18:23:16.963866   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0920 18:23:17.000548   74753 cri.go:89] found id: "179e4a02f3459a613d3307806b477d54581555b18babca62d0c5e553b9562442"
	I0920 18:23:17.000566   74753 cri.go:89] found id: "3eb9abdf57de51979118cfa77b613c22e28122fcb1da9e72fbf6f3a946ed3531"
	I0920 18:23:17.000571   74753 cri.go:89] found id: ""
	I0920 18:23:17.000578   74753 logs.go:276] 2 containers: [179e4a02f3459a613d3307806b477d54581555b18babca62d0c5e553b9562442 3eb9abdf57de51979118cfa77b613c22e28122fcb1da9e72fbf6f3a946ed3531]
	I0920 18:23:17.000627   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:17.004708   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:17.008489   74753 logs.go:123] Gathering logs for coredns [35f0d8dd053d4c376ccf21215492d8509ff701e7d09b345fdbe06d505ba2d6b4] ...
	I0920 18:23:17.008518   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35f0d8dd053d4c376ccf21215492d8509ff701e7d09b345fdbe06d505ba2d6b4"
	I0920 18:23:17.045971   74753 logs.go:123] Gathering logs for kube-scheduler [8153479cebb05353bec31c41778746c485cb294bf41c3f98e30559ad7da64c64] ...
	I0920 18:23:17.046005   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8153479cebb05353bec31c41778746c485cb294bf41c3f98e30559ad7da64c64"
	I0920 18:23:17.083976   74753 logs.go:123] Gathering logs for kube-controller-manager [3ebf4c520d684127cd0b6146d44e863090b9c2db3ac866e9ce3ef2d10d2d6da1] ...
	I0920 18:23:17.084005   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ebf4c520d684127cd0b6146d44e863090b9c2db3ac866e9ce3ef2d10d2d6da1"
	I0920 18:23:17.137192   74753 logs.go:123] Gathering logs for storage-provisioner [179e4a02f3459a613d3307806b477d54581555b18babca62d0c5e553b9562442] ...
	I0920 18:23:17.137228   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 179e4a02f3459a613d3307806b477d54581555b18babca62d0c5e553b9562442"
	I0920 18:23:17.177203   74753 logs.go:123] Gathering logs for dmesg ...
	I0920 18:23:17.177234   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:23:17.192612   74753 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:23:17.192639   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 18:23:17.300681   74753 logs.go:123] Gathering logs for kube-apiserver [334e4df5baa4f8f7d58af2308fab7f78de22f7e07d6dfbcbeaa0df2a52f6b207] ...
	I0920 18:23:17.300714   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 334e4df5baa4f8f7d58af2308fab7f78de22f7e07d6dfbcbeaa0df2a52f6b207"
	I0920 18:23:17.346835   74753 logs.go:123] Gathering logs for storage-provisioner [3eb9abdf57de51979118cfa77b613c22e28122fcb1da9e72fbf6f3a946ed3531] ...
	I0920 18:23:17.346866   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3eb9abdf57de51979118cfa77b613c22e28122fcb1da9e72fbf6f3a946ed3531"
	I0920 18:23:17.382625   74753 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:23:17.382650   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:23:17.750803   74753 logs.go:123] Gathering logs for container status ...
	I0920 18:23:17.750853   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:23:17.801114   74753 logs.go:123] Gathering logs for kubelet ...
	I0920 18:23:17.801142   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:23:17.866547   74753 logs.go:123] Gathering logs for etcd [98aa96314cf8f863121b9953ad6335628481f536fcab49b7c00505a17eb35ff2] ...
	I0920 18:23:17.866589   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 98aa96314cf8f863121b9953ad6335628481f536fcab49b7c00505a17eb35ff2"
	I0920 18:23:17.909489   74753 logs.go:123] Gathering logs for kube-proxy [6df198ca54e8004a532de9b15cfe7a48a659ae06623c00a78cad528762944497] ...
	I0920 18:23:17.909519   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6df198ca54e8004a532de9b15cfe7a48a659ae06623c00a78cad528762944497"
	I0920 18:23:20.456014   74753 system_pods.go:59] 8 kube-system pods found
	I0920 18:23:20.456055   74753 system_pods.go:61] "coredns-7c65d6cfc9-j2t5h" [dd2636f1-3200-4f22-957c-046277c9be8c] Running
	I0920 18:23:20.456063   74753 system_pods.go:61] "etcd-no-preload-956403" [daf84be0-e673-4685-a998-47ec64664484] Running
	I0920 18:23:20.456068   74753 system_pods.go:61] "kube-apiserver-no-preload-956403" [416217e6-3c91-49ec-bd69-812a49b4afd9] Running
	I0920 18:23:20.456074   74753 system_pods.go:61] "kube-controller-manager-no-preload-956403" [30fae740-b073-4619-bca5-010ca90b2667] Running
	I0920 18:23:20.456079   74753 system_pods.go:61] "kube-proxy-sz4bm" [269600fb-ef65-4b17-8c07-76c79e35f5a8] Running
	I0920 18:23:20.456085   74753 system_pods.go:61] "kube-scheduler-no-preload-956403" [46432927-e506-4665-8825-c5f6a2ed0458] Running
	I0920 18:23:20.456095   74753 system_pods.go:61] "metrics-server-6867b74b74-tfsff" [599ba06a-6d4d-483b-b390-a3595a814757] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 18:23:20.456101   74753 system_pods.go:61] "storage-provisioner" [df661627-7d32-4467-9805-1ae65d4fa35c] Running
	I0920 18:23:20.456109   74753 system_pods.go:74] duration metric: took 3.780646412s to wait for pod list to return data ...
	I0920 18:23:20.456116   74753 default_sa.go:34] waiting for default service account to be created ...
	I0920 18:23:20.459571   74753 default_sa.go:45] found service account: "default"
	I0920 18:23:20.459598   74753 default_sa.go:55] duration metric: took 3.475906ms for default service account to be created ...
	I0920 18:23:20.459607   74753 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 18:23:20.464453   74753 system_pods.go:86] 8 kube-system pods found
	I0920 18:23:20.464489   74753 system_pods.go:89] "coredns-7c65d6cfc9-j2t5h" [dd2636f1-3200-4f22-957c-046277c9be8c] Running
	I0920 18:23:20.464495   74753 system_pods.go:89] "etcd-no-preload-956403" [daf84be0-e673-4685-a998-47ec64664484] Running
	I0920 18:23:20.464499   74753 system_pods.go:89] "kube-apiserver-no-preload-956403" [416217e6-3c91-49ec-bd69-812a49b4afd9] Running
	I0920 18:23:20.464544   74753 system_pods.go:89] "kube-controller-manager-no-preload-956403" [30fae740-b073-4619-bca5-010ca90b2667] Running
	I0920 18:23:20.464559   74753 system_pods.go:89] "kube-proxy-sz4bm" [269600fb-ef65-4b17-8c07-76c79e35f5a8] Running
	I0920 18:23:20.464563   74753 system_pods.go:89] "kube-scheduler-no-preload-956403" [46432927-e506-4665-8825-c5f6a2ed0458] Running
	I0920 18:23:20.464570   74753 system_pods.go:89] "metrics-server-6867b74b74-tfsff" [599ba06a-6d4d-483b-b390-a3595a814757] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 18:23:20.464577   74753 system_pods.go:89] "storage-provisioner" [df661627-7d32-4467-9805-1ae65d4fa35c] Running
	I0920 18:23:20.464583   74753 system_pods.go:126] duration metric: took 4.971732ms to wait for k8s-apps to be running ...
	I0920 18:23:20.464592   74753 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 18:23:20.464637   74753 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:23:20.483425   74753 system_svc.go:56] duration metric: took 18.826162ms WaitForService to wait for kubelet
	I0920 18:23:20.483452   74753 kubeadm.go:582] duration metric: took 4m21.334124064s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 18:23:20.483469   74753 node_conditions.go:102] verifying NodePressure condition ...
	I0920 18:23:20.486703   74753 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 18:23:20.486725   74753 node_conditions.go:123] node cpu capacity is 2
	I0920 18:23:20.486736   74753 node_conditions.go:105] duration metric: took 3.263035ms to run NodePressure ...
	I0920 18:23:20.486745   74753 start.go:241] waiting for startup goroutines ...
	I0920 18:23:20.486752   74753 start.go:246] waiting for cluster config update ...
	I0920 18:23:20.486763   74753 start.go:255] writing updated cluster config ...
	I0920 18:23:20.487023   74753 ssh_runner.go:195] Run: rm -f paused
	I0920 18:23:20.538569   74753 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 18:23:20.540783   74753 out.go:177] * Done! kubectl is now configured to use "no-preload-956403" cluster and "default" namespace by default
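The 4m0s WaitExtra that expired for metrics-server above is a readiness poll against the pod's Ready condition that gives up once its context deadline passes ("context deadline exceeded"). The client-go sketch below illustrates that pattern under stated assumptions: the kubeconfig path is hypothetical, the pod name is copied from the log, and this is not minikube's actual pod_ready implementation.

// podreadywait: illustrative sketch of waiting for a pod's Ready condition with a deadline.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path, for illustration only.
	config, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()

	for {
		pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "metrics-server-6867b74b74-tfsff", metav1.GetOptions{})
		if err == nil {
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
					fmt.Println("pod is Ready")
					return
				}
			}
		}
		select {
		case <-ctx.Done():
			// This is the "context deadline exceeded" outcome seen in the log above.
			fmt.Println("gave up waiting:", ctx.Err())
			return
		case <-time.After(2 * time.Second):
		}
	}
}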
	I0920 18:24:23.891635   75577 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0920 18:24:23.891735   75577 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0920 18:24:23.893591   75577 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0920 18:24:23.893681   75577 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 18:24:23.893785   75577 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 18:24:23.893913   75577 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 18:24:23.894056   75577 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0920 18:24:23.894134   75577 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 18:24:23.895929   75577 out.go:235]   - Generating certificates and keys ...
	I0920 18:24:23.896024   75577 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 18:24:23.896127   75577 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 18:24:23.896245   75577 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0920 18:24:23.896329   75577 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0920 18:24:23.896426   75577 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0920 18:24:23.896510   75577 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0920 18:24:23.896603   75577 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0920 18:24:23.896686   75577 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0920 18:24:23.896783   75577 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0920 18:24:23.896865   75577 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0920 18:24:23.896916   75577 kubeadm.go:310] [certs] Using the existing "sa" key
	I0920 18:24:23.896984   75577 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 18:24:23.897054   75577 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 18:24:23.897112   75577 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 18:24:23.897174   75577 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 18:24:23.897237   75577 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 18:24:23.897367   75577 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 18:24:23.897488   75577 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 18:24:23.897554   75577 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 18:24:23.897660   75577 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 18:24:23.899056   75577 out.go:235]   - Booting up control plane ...
	I0920 18:24:23.899173   75577 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 18:24:23.899287   75577 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 18:24:23.899371   75577 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 18:24:23.899460   75577 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 18:24:23.899639   75577 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0920 18:24:23.899719   75577 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0920 18:24:23.899823   75577 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:24:23.900047   75577 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:24:23.900124   75577 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:24:23.900322   75577 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:24:23.900392   75577 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:24:23.900556   75577 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:24:23.900634   75577 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:24:23.900826   75577 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:24:23.900884   75577 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:24:23.901044   75577 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:24:23.901052   75577 kubeadm.go:310] 
	I0920 18:24:23.901094   75577 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0920 18:24:23.901129   75577 kubeadm.go:310] 		timed out waiting for the condition
	I0920 18:24:23.901135   75577 kubeadm.go:310] 
	I0920 18:24:23.901179   75577 kubeadm.go:310] 	This error is likely caused by:
	I0920 18:24:23.901209   75577 kubeadm.go:310] 		- The kubelet is not running
	I0920 18:24:23.901314   75577 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0920 18:24:23.901335   75577 kubeadm.go:310] 
	I0920 18:24:23.901458   75577 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0920 18:24:23.901515   75577 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0920 18:24:23.901547   75577 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0920 18:24:23.901556   75577 kubeadm.go:310] 
	I0920 18:24:23.901719   75577 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0920 18:24:23.901859   75577 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0920 18:24:23.901870   75577 kubeadm.go:310] 
	I0920 18:24:23.902030   75577 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0920 18:24:23.902122   75577 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0920 18:24:23.902186   75577 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0920 18:24:23.902264   75577 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0920 18:24:23.902299   75577 kubeadm.go:310] 
	W0920 18:24:23.902388   75577 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0920 18:24:23.902435   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0920 18:24:24.370490   75577 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:24:24.385383   75577 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 18:24:24.395443   75577 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 18:24:24.395463   75577 kubeadm.go:157] found existing configuration files:
	
	I0920 18:24:24.395505   75577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 18:24:24.404947   75577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 18:24:24.405021   75577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 18:24:24.415145   75577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 18:24:24.424199   75577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 18:24:24.424279   75577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 18:24:24.435108   75577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 18:24:24.446684   75577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 18:24:24.446742   75577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 18:24:24.457199   75577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 18:24:24.467240   75577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 18:24:24.467315   75577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
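The stale-config cleanup above checks each /etc/kubernetes/*.conf for the expected control-plane endpoint and removes any file that is missing it (or missing entirely) before `kubeadm init` is retried. The short Go sketch below illustrates that check; the file list and endpoint are copied from the log, and this is not the kubeadm.go source.

// staleconfig: illustrative sketch of the stale-kubeconfig check above.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	configs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, path := range configs {
		data, err := os.ReadFile(path)
		if err != nil || !strings.Contains(string(data), endpoint) {
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, path)
			// Ignore the error: the file may already be absent, as in the log above.
			_ = os.Remove(path)
		}
	}
}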
	I0920 18:24:24.479293   75577 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 18:24:24.704601   75577 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 18:26:21.167677   75577 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0920 18:26:21.167760   75577 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0920 18:26:21.170176   75577 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0920 18:26:21.170249   75577 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 18:26:21.170353   75577 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 18:26:21.170485   75577 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 18:26:21.170653   75577 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0920 18:26:21.170778   75577 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 18:26:21.172479   75577 out.go:235]   - Generating certificates and keys ...
	I0920 18:26:21.172559   75577 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 18:26:21.172623   75577 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 18:26:21.172734   75577 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0920 18:26:21.172820   75577 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0920 18:26:21.172901   75577 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0920 18:26:21.172948   75577 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0920 18:26:21.173054   75577 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0920 18:26:21.173145   75577 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0920 18:26:21.173238   75577 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0920 18:26:21.173325   75577 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0920 18:26:21.173381   75577 kubeadm.go:310] [certs] Using the existing "sa" key
	I0920 18:26:21.173468   75577 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 18:26:21.173517   75577 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 18:26:21.173562   75577 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 18:26:21.173657   75577 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 18:26:21.173753   75577 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 18:26:21.173931   75577 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 18:26:21.174042   75577 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 18:26:21.174120   75577 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 18:26:21.174226   75577 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 18:26:21.175785   75577 out.go:235]   - Booting up control plane ...
	I0920 18:26:21.175897   75577 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 18:26:21.176062   75577 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 18:26:21.176179   75577 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 18:26:21.176283   75577 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 18:26:21.176482   75577 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0920 18:26:21.176567   75577 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0920 18:26:21.176654   75577 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:26:21.176850   75577 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:26:21.176952   75577 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:26:21.177146   75577 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:26:21.177255   75577 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:26:21.177460   75577 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:26:21.177527   75577 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:26:21.177681   75577 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:26:21.177758   75577 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:26:21.177952   75577 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:26:21.177966   75577 kubeadm.go:310] 
	I0920 18:26:21.178001   75577 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0920 18:26:21.178035   75577 kubeadm.go:310] 		timed out waiting for the condition
	I0920 18:26:21.178041   75577 kubeadm.go:310] 
	I0920 18:26:21.178070   75577 kubeadm.go:310] 	This error is likely caused by:
	I0920 18:26:21.178099   75577 kubeadm.go:310] 		- The kubelet is not running
	I0920 18:26:21.178239   75577 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0920 18:26:21.178251   75577 kubeadm.go:310] 
	I0920 18:26:21.178348   75577 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0920 18:26:21.178378   75577 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0920 18:26:21.178415   75577 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0920 18:26:21.178421   75577 kubeadm.go:310] 
	I0920 18:26:21.178505   75577 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0920 18:26:21.178581   75577 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0920 18:26:21.178590   75577 kubeadm.go:310] 
	I0920 18:26:21.178695   75577 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0920 18:26:21.178770   75577 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0920 18:26:21.178832   75577 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0920 18:26:21.178893   75577 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0920 18:26:21.178950   75577 kubeadm.go:394] duration metric: took 7m57.80706374s to StartCluster
	I0920 18:26:21.178961   75577 kubeadm.go:310] 
	I0920 18:26:21.178989   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:26:21.179047   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:26:21.229528   75577 cri.go:89] found id: ""
	I0920 18:26:21.229554   75577 logs.go:276] 0 containers: []
	W0920 18:26:21.229564   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:26:21.229572   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:26:21.229644   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:26:21.265997   75577 cri.go:89] found id: ""
	I0920 18:26:21.266021   75577 logs.go:276] 0 containers: []
	W0920 18:26:21.266029   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:26:21.266034   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:26:21.266081   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:26:21.309358   75577 cri.go:89] found id: ""
	I0920 18:26:21.309391   75577 logs.go:276] 0 containers: []
	W0920 18:26:21.309402   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:26:21.309409   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:26:21.309469   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:26:21.349418   75577 cri.go:89] found id: ""
	I0920 18:26:21.349442   75577 logs.go:276] 0 containers: []
	W0920 18:26:21.349451   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:26:21.349457   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:26:21.349537   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:26:21.388067   75577 cri.go:89] found id: ""
	I0920 18:26:21.388092   75577 logs.go:276] 0 containers: []
	W0920 18:26:21.388105   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:26:21.388111   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:26:21.388165   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:26:21.428474   75577 cri.go:89] found id: ""
	I0920 18:26:21.428501   75577 logs.go:276] 0 containers: []
	W0920 18:26:21.428512   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:26:21.428519   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:26:21.428580   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:26:21.466188   75577 cri.go:89] found id: ""
	I0920 18:26:21.466215   75577 logs.go:276] 0 containers: []
	W0920 18:26:21.466225   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:26:21.466232   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:26:21.466291   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:26:21.504396   75577 cri.go:89] found id: ""
	I0920 18:26:21.504422   75577 logs.go:276] 0 containers: []
	W0920 18:26:21.504432   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:26:21.504443   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:26:21.504457   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:26:21.608057   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:26:21.608098   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:26:21.672536   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:26:21.672564   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:26:21.723219   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:26:21.723251   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:26:21.736436   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:26:21.736463   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:26:21.809055   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0920 18:26:21.809079   75577 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0920 18:26:21.809128   75577 out.go:270] * 
	W0920 18:26:21.809197   75577 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0920 18:26:21.809217   75577 out.go:270] * 
	W0920 18:26:21.810193   75577 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 18:26:21.814385   75577 out.go:201] 
	W0920 18:26:21.816118   75577 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0920 18:26:21.816186   75577 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0920 18:26:21.816225   75577 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0920 18:26:21.818478   75577 out.go:201] 
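	For anyone triaging this K8S_KUBELET_NOT_RUNNING failure, a minimal command sketch drawn only from the suggestions printed in the log above (the <profile> name is a placeholder for the affected minikube profile, and the cgroup-driver override is the workaround the log itself proposes, not a verified fix):

		# Inspect the kubelet on the failing node first
		systemctl status kubelet
		journalctl -xeu kubelet
		# List any control-plane containers CRI-O managed to start
		crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
		# Retry the start with the kubelet cgroup driver pinned to systemd, per the suggestion above
		minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd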
	
	
	==> CRI-O <==
	Sep 20 18:32:22 no-preload-956403 crio[706]: time="2024-09-20 18:32:22.643310396Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8d793c92-16f0-472a-be97-476ca7fc21da name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:32:22 no-preload-956403 crio[706]: time="2024-09-20 18:32:22.643507868Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:179e4a02f3459a613d3307806b477d54581555b18babca62d0c5e553b9562442,PodSandboxId:a4aa1bb68d3e80ac2c63b750b5f32d3cd913ebb3f52a99df9ad043df20b620b5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726856367655123101,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df661627-7d32-4467-9805-1ae65d4fa35c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e852082c0f944a19082360b499b94e415a260b9b2546225438d699561fa8f6ff,PodSandboxId:3e897f51edd9c5fbacee8fa85f2e3331df12d5c732ac33d4df5ec153f63f6d3b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1726856348135985961,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a15af88e-18f6-4284-b32a-0cb1b432b683,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35f0d8dd053d4c376ccf21215492d8509ff701e7d09b345fdbe06d505ba2d6b4,PodSandboxId:80d9fde65826dd3f259c46475b2d7bbd6f14bf221d19965c3e8f98d281dfd37b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726856344484544735,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-j2t5h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd2636f1-3200-4f22-957c-046277c9be8c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6df198ca54e8004a532de9b15cfe7a48a659ae06623c00a78cad528762944497,PodSandboxId:ff41f43ceaf02185b62083b69ed688d4c4802c48a9f8fbd467db6075f2b1e9ae,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726856336830203740,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sz4bm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 269600fb-ef65-4b17-8c
07-76c79e35f5a8,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3eb9abdf57de51979118cfa77b613c22e28122fcb1da9e72fbf6f3a946ed3531,PodSandboxId:a4aa1bb68d3e80ac2c63b750b5f32d3cd913ebb3f52a99df9ad043df20b620b5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726856336782619509,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df661627-7d32-4467-9805-1ae65d4fa3
5c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98aa96314cf8f863121b9953ad6335628481f536fcab49b7c00505a17eb35ff2,PodSandboxId:55694df2cb78079dd185683bf9c759122274ea694e0c70098abdd859cb24d952,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726856333198659894,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-956403,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b0ec02901547ea388858d214d5765f1,},Annotations:map[string]string{io.kuber
netes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8153479cebb05353bec31c41778746c485cb294bf41c3f98e30559ad7da64c64,PodSandboxId:893589c707d79ed25204db3db6c7982806bb309236c6a94bb75d20f43798f2fa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726856333130561988,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-956403,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9af5b077640c6f0cb975dac7a2663e89,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:334e4df5baa4f8f7d58af2308fab7f78de22f7e07d6dfbcbeaa0df2a52f6b207,PodSandboxId:7a3535de57f415c09f824af177beb4f8f14f1cfb8cb3e4d97f9a738bc0c86574,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726856333100497654,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-956403,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e09fb1ce62fc85d5c8feacad02192a8,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2
713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ebf4c520d684127cd0b6146d44e863090b9c2db3ac866e9ce3ef2d10d2d6da1,PodSandboxId:0ea2e7a0745f6fe89a09475a576924d725ebc018614a00232bfd424cff3b86e9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726856333020591560,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-956403,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c0ae9548860afe245bdc265d4d3d790,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8d793c92-16f0-472a-be97-476ca7fc21da name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:32:22 no-preload-956403 crio[706]: time="2024-09-20 18:32:22.680042632Z" level=debug msg="Request: &StatusRequest{Verbose:false,}" file="otel-collector/interceptors.go:62" id=2c60975f-6a43-42fc-98ab-d4f6650864f6 name=/runtime.v1.RuntimeService/Status
	Sep 20 18:32:22 no-preload-956403 crio[706]: time="2024-09-20 18:32:22.680141225Z" level=debug msg="Response: &StatusResponse{Status:&RuntimeStatus{Conditions:[]*RuntimeCondition{&RuntimeCondition{Type:RuntimeReady,Status:true,Reason:,Message:,},&RuntimeCondition{Type:NetworkReady,Status:true,Reason:,Message:,},},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=2c60975f-6a43-42fc-98ab-d4f6650864f6 name=/runtime.v1.RuntimeService/Status
	Sep 20 18:32:22 no-preload-956403 crio[706]: time="2024-09-20 18:32:22.687062354Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a9cfecb5-68b8-4036-af51-dc6516b36dd6 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:32:22 no-preload-956403 crio[706]: time="2024-09-20 18:32:22.687146868Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a9cfecb5-68b8-4036-af51-dc6516b36dd6 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:32:22 no-preload-956403 crio[706]: time="2024-09-20 18:32:22.688428549Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4144211b-c06a-4c30-a780-b63975fefd86 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:32:22 no-preload-956403 crio[706]: time="2024-09-20 18:32:22.688789830Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857142688760569,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4144211b-c06a-4c30-a780-b63975fefd86 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:32:22 no-preload-956403 crio[706]: time="2024-09-20 18:32:22.689260255Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bf0b3f16-afe9-441d-9e9b-ae05e7393f55 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:32:22 no-preload-956403 crio[706]: time="2024-09-20 18:32:22.689311237Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bf0b3f16-afe9-441d-9e9b-ae05e7393f55 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:32:22 no-preload-956403 crio[706]: time="2024-09-20 18:32:22.689514167Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:179e4a02f3459a613d3307806b477d54581555b18babca62d0c5e553b9562442,PodSandboxId:a4aa1bb68d3e80ac2c63b750b5f32d3cd913ebb3f52a99df9ad043df20b620b5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726856367655123101,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df661627-7d32-4467-9805-1ae65d4fa35c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e852082c0f944a19082360b499b94e415a260b9b2546225438d699561fa8f6ff,PodSandboxId:3e897f51edd9c5fbacee8fa85f2e3331df12d5c732ac33d4df5ec153f63f6d3b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1726856348135985961,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a15af88e-18f6-4284-b32a-0cb1b432b683,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35f0d8dd053d4c376ccf21215492d8509ff701e7d09b345fdbe06d505ba2d6b4,PodSandboxId:80d9fde65826dd3f259c46475b2d7bbd6f14bf221d19965c3e8f98d281dfd37b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726856344484544735,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-j2t5h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd2636f1-3200-4f22-957c-046277c9be8c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6df198ca54e8004a532de9b15cfe7a48a659ae06623c00a78cad528762944497,PodSandboxId:ff41f43ceaf02185b62083b69ed688d4c4802c48a9f8fbd467db6075f2b1e9ae,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726856336830203740,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sz4bm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 269600fb-ef65-4b17-8c
07-76c79e35f5a8,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3eb9abdf57de51979118cfa77b613c22e28122fcb1da9e72fbf6f3a946ed3531,PodSandboxId:a4aa1bb68d3e80ac2c63b750b5f32d3cd913ebb3f52a99df9ad043df20b620b5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726856336782619509,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df661627-7d32-4467-9805-1ae65d4fa3
5c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98aa96314cf8f863121b9953ad6335628481f536fcab49b7c00505a17eb35ff2,PodSandboxId:55694df2cb78079dd185683bf9c759122274ea694e0c70098abdd859cb24d952,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726856333198659894,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-956403,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b0ec02901547ea388858d214d5765f1,},Annotations:map[string]string{io.kuber
netes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8153479cebb05353bec31c41778746c485cb294bf41c3f98e30559ad7da64c64,PodSandboxId:893589c707d79ed25204db3db6c7982806bb309236c6a94bb75d20f43798f2fa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726856333130561988,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-956403,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9af5b077640c6f0cb975dac7a2663e89,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:334e4df5baa4f8f7d58af2308fab7f78de22f7e07d6dfbcbeaa0df2a52f6b207,PodSandboxId:7a3535de57f415c09f824af177beb4f8f14f1cfb8cb3e4d97f9a738bc0c86574,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726856333100497654,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-956403,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e09fb1ce62fc85d5c8feacad02192a8,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2
713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ebf4c520d684127cd0b6146d44e863090b9c2db3ac866e9ce3ef2d10d2d6da1,PodSandboxId:0ea2e7a0745f6fe89a09475a576924d725ebc018614a00232bfd424cff3b86e9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726856333020591560,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-956403,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c0ae9548860afe245bdc265d4d3d790,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bf0b3f16-afe9-441d-9e9b-ae05e7393f55 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:32:22 no-preload-956403 crio[706]: time="2024-09-20 18:32:22.731873637Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ce32c572-366a-4f46-9871-36a441d6af87 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:32:22 no-preload-956403 crio[706]: time="2024-09-20 18:32:22.732039085Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ce32c572-366a-4f46-9871-36a441d6af87 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:32:22 no-preload-956403 crio[706]: time="2024-09-20 18:32:22.734061881Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fe6cc943-c029-4622-b0a1-09a0659316c2 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:32:22 no-preload-956403 crio[706]: time="2024-09-20 18:32:22.734539188Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857142734509416,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fe6cc943-c029-4622-b0a1-09a0659316c2 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:32:22 no-preload-956403 crio[706]: time="2024-09-20 18:32:22.735546267Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2ed93dea-0225-43c6-8ba1-8df6c5e49a3b name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:32:22 no-preload-956403 crio[706]: time="2024-09-20 18:32:22.735629809Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2ed93dea-0225-43c6-8ba1-8df6c5e49a3b name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:32:22 no-preload-956403 crio[706]: time="2024-09-20 18:32:22.735895209Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:179e4a02f3459a613d3307806b477d54581555b18babca62d0c5e553b9562442,PodSandboxId:a4aa1bb68d3e80ac2c63b750b5f32d3cd913ebb3f52a99df9ad043df20b620b5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726856367655123101,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df661627-7d32-4467-9805-1ae65d4fa35c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e852082c0f944a19082360b499b94e415a260b9b2546225438d699561fa8f6ff,PodSandboxId:3e897f51edd9c5fbacee8fa85f2e3331df12d5c732ac33d4df5ec153f63f6d3b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1726856348135985961,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a15af88e-18f6-4284-b32a-0cb1b432b683,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35f0d8dd053d4c376ccf21215492d8509ff701e7d09b345fdbe06d505ba2d6b4,PodSandboxId:80d9fde65826dd3f259c46475b2d7bbd6f14bf221d19965c3e8f98d281dfd37b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726856344484544735,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-j2t5h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd2636f1-3200-4f22-957c-046277c9be8c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6df198ca54e8004a532de9b15cfe7a48a659ae06623c00a78cad528762944497,PodSandboxId:ff41f43ceaf02185b62083b69ed688d4c4802c48a9f8fbd467db6075f2b1e9ae,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726856336830203740,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sz4bm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 269600fb-ef65-4b17-8c
07-76c79e35f5a8,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3eb9abdf57de51979118cfa77b613c22e28122fcb1da9e72fbf6f3a946ed3531,PodSandboxId:a4aa1bb68d3e80ac2c63b750b5f32d3cd913ebb3f52a99df9ad043df20b620b5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726856336782619509,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df661627-7d32-4467-9805-1ae65d4fa3
5c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98aa96314cf8f863121b9953ad6335628481f536fcab49b7c00505a17eb35ff2,PodSandboxId:55694df2cb78079dd185683bf9c759122274ea694e0c70098abdd859cb24d952,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726856333198659894,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-956403,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b0ec02901547ea388858d214d5765f1,},Annotations:map[string]string{io.kuber
netes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8153479cebb05353bec31c41778746c485cb294bf41c3f98e30559ad7da64c64,PodSandboxId:893589c707d79ed25204db3db6c7982806bb309236c6a94bb75d20f43798f2fa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726856333130561988,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-956403,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9af5b077640c6f0cb975dac7a2663e89,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:334e4df5baa4f8f7d58af2308fab7f78de22f7e07d6dfbcbeaa0df2a52f6b207,PodSandboxId:7a3535de57f415c09f824af177beb4f8f14f1cfb8cb3e4d97f9a738bc0c86574,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726856333100497654,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-956403,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e09fb1ce62fc85d5c8feacad02192a8,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2
713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ebf4c520d684127cd0b6146d44e863090b9c2db3ac866e9ce3ef2d10d2d6da1,PodSandboxId:0ea2e7a0745f6fe89a09475a576924d725ebc018614a00232bfd424cff3b86e9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726856333020591560,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-956403,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c0ae9548860afe245bdc265d4d3d790,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2ed93dea-0225-43c6-8ba1-8df6c5e49a3b name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:32:22 no-preload-956403 crio[706]: time="2024-09-20 18:32:22.777283726Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fea0d44f-328d-47a8-b6f8-643ccd6d6f3b name=/runtime.v1.RuntimeService/Version
	Sep 20 18:32:22 no-preload-956403 crio[706]: time="2024-09-20 18:32:22.777411615Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fea0d44f-328d-47a8-b6f8-643ccd6d6f3b name=/runtime.v1.RuntimeService/Version
	Sep 20 18:32:22 no-preload-956403 crio[706]: time="2024-09-20 18:32:22.779193158Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=35f28d22-bf26-4ba3-97b9-3d729897a12b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:32:22 no-preload-956403 crio[706]: time="2024-09-20 18:32:22.779603053Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857142779568387,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=35f28d22-bf26-4ba3-97b9-3d729897a12b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:32:22 no-preload-956403 crio[706]: time="2024-09-20 18:32:22.780273524Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3a4b5f5e-103a-4932-8ab4-b91e206ec0ca name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:32:22 no-preload-956403 crio[706]: time="2024-09-20 18:32:22.780351881Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3a4b5f5e-103a-4932-8ab4-b91e206ec0ca name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:32:22 no-preload-956403 crio[706]: time="2024-09-20 18:32:22.780567171Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:179e4a02f3459a613d3307806b477d54581555b18babca62d0c5e553b9562442,PodSandboxId:a4aa1bb68d3e80ac2c63b750b5f32d3cd913ebb3f52a99df9ad043df20b620b5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726856367655123101,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df661627-7d32-4467-9805-1ae65d4fa35c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e852082c0f944a19082360b499b94e415a260b9b2546225438d699561fa8f6ff,PodSandboxId:3e897f51edd9c5fbacee8fa85f2e3331df12d5c732ac33d4df5ec153f63f6d3b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1726856348135985961,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a15af88e-18f6-4284-b32a-0cb1b432b683,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35f0d8dd053d4c376ccf21215492d8509ff701e7d09b345fdbe06d505ba2d6b4,PodSandboxId:80d9fde65826dd3f259c46475b2d7bbd6f14bf221d19965c3e8f98d281dfd37b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726856344484544735,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-j2t5h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd2636f1-3200-4f22-957c-046277c9be8c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6df198ca54e8004a532de9b15cfe7a48a659ae06623c00a78cad528762944497,PodSandboxId:ff41f43ceaf02185b62083b69ed688d4c4802c48a9f8fbd467db6075f2b1e9ae,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726856336830203740,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sz4bm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 269600fb-ef65-4b17-8c
07-76c79e35f5a8,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3eb9abdf57de51979118cfa77b613c22e28122fcb1da9e72fbf6f3a946ed3531,PodSandboxId:a4aa1bb68d3e80ac2c63b750b5f32d3cd913ebb3f52a99df9ad043df20b620b5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726856336782619509,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df661627-7d32-4467-9805-1ae65d4fa3
5c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98aa96314cf8f863121b9953ad6335628481f536fcab49b7c00505a17eb35ff2,PodSandboxId:55694df2cb78079dd185683bf9c759122274ea694e0c70098abdd859cb24d952,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726856333198659894,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-956403,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b0ec02901547ea388858d214d5765f1,},Annotations:map[string]string{io.kuber
netes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8153479cebb05353bec31c41778746c485cb294bf41c3f98e30559ad7da64c64,PodSandboxId:893589c707d79ed25204db3db6c7982806bb309236c6a94bb75d20f43798f2fa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726856333130561988,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-956403,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9af5b077640c6f0cb975dac7a2663e89,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:334e4df5baa4f8f7d58af2308fab7f78de22f7e07d6dfbcbeaa0df2a52f6b207,PodSandboxId:7a3535de57f415c09f824af177beb4f8f14f1cfb8cb3e4d97f9a738bc0c86574,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726856333100497654,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-956403,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e09fb1ce62fc85d5c8feacad02192a8,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2
713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ebf4c520d684127cd0b6146d44e863090b9c2db3ac866e9ce3ef2d10d2d6da1,PodSandboxId:0ea2e7a0745f6fe89a09475a576924d725ebc018614a00232bfd424cff3b86e9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726856333020591560,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-956403,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c0ae9548860afe245bdc265d4d3d790,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3a4b5f5e-103a-4932-8ab4-b91e206ec0ca name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	179e4a02f3459       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   a4aa1bb68d3e8       storage-provisioner
	e852082c0f944       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   3e897f51edd9c       busybox
	35f0d8dd053d4       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      13 minutes ago      Running             coredns                   1                   80d9fde65826d       coredns-7c65d6cfc9-j2t5h
	6df198ca54e80       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      13 minutes ago      Running             kube-proxy                1                   ff41f43ceaf02       kube-proxy-sz4bm
	3eb9abdf57de5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   a4aa1bb68d3e8       storage-provisioner
	98aa96314cf8f       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      13 minutes ago      Running             etcd                      1                   55694df2cb780       etcd-no-preload-956403
	8153479cebb05       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      13 minutes ago      Running             kube-scheduler            1                   893589c707d79       kube-scheduler-no-preload-956403
	334e4df5baa4f       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      13 minutes ago      Running             kube-apiserver            1                   7a3535de57f41       kube-apiserver-no-preload-956403
	3ebf4c520d684       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      13 minutes ago      Running             kube-controller-manager   1                   0ea2e7a0745f6       kube-controller-manager-no-preload-956403
	
	
	==> coredns [35f0d8dd053d4c376ccf21215492d8509ff701e7d09b345fdbe06d505ba2d6b4] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:38757 - 54136 "HINFO IN 5523722262679873145.418932425828733990. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.013530786s
	
	
	==> describe nodes <==
	Name:               no-preload-956403
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-956403
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0626f22cf0d915d75e291a5bce701f94395056e1
	                    minikube.k8s.io/name=no-preload-956403
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_20T18_09_38_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 18:09:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-956403
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 18:32:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 18:29:39 +0000   Fri, 20 Sep 2024 18:09:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 18:29:39 +0000   Fri, 20 Sep 2024 18:09:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 18:29:39 +0000   Fri, 20 Sep 2024 18:09:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 18:29:39 +0000   Fri, 20 Sep 2024 18:19:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.47
	  Hostname:    no-preload-956403
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c66b620f72724471b218dcc813962e67
	  System UUID:                c66b620f-7272-4471-b218-dcc813962e67
	  Boot ID:                    9eeb8437-d501-4de6-aecf-7cdd4dc11582
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 coredns-7c65d6cfc9-j2t5h                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     22m
	  kube-system                 etcd-no-preload-956403                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         22m
	  kube-system                 kube-apiserver-no-preload-956403             250m (12%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-controller-manager-no-preload-956403    200m (10%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-proxy-sz4bm                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-scheduler-no-preload-956403             100m (5%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 metrics-server-6867b74b74-tfsff              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         22m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 22m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22m (x8 over 22m)  kubelet          Node no-preload-956403 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m (x7 over 22m)  kubelet          Node no-preload-956403 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m (x7 over 22m)  kubelet          Node no-preload-956403 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    22m                kubelet          Node no-preload-956403 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  22m                kubelet          Node no-preload-956403 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     22m                kubelet          Node no-preload-956403 status is now: NodeHasSufficientPID
	  Normal  NodeReady                22m                kubelet          Node no-preload-956403 status is now: NodeReady
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           22m                node-controller  Node no-preload-956403 event: Registered Node no-preload-956403 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node no-preload-956403 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node no-preload-956403 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node no-preload-956403 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node no-preload-956403 event: Registered Node no-preload-956403 in Controller
	
	
	==> dmesg <==
	[Sep20 18:18] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.057298] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038706] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.380657] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.005074] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.628523] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.252715] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +0.067443] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.068029] systemd-fstab-generator[642]: Ignoring "noauto" option for root device
	[  +0.192455] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[  +0.150068] systemd-fstab-generator[668]: Ignoring "noauto" option for root device
	[  +0.301008] systemd-fstab-generator[698]: Ignoring "noauto" option for root device
	[ +15.196476] systemd-fstab-generator[1230]: Ignoring "noauto" option for root device
	[  +0.071531] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.701834] systemd-fstab-generator[1348]: Ignoring "noauto" option for root device
	[  +4.591288] kauditd_printk_skb: 97 callbacks suppressed
	[  +2.465848] systemd-fstab-generator[1979]: Ignoring "noauto" option for root device
	[Sep20 18:19] kauditd_printk_skb: 61 callbacks suppressed
	[  +5.691900] kauditd_printk_skb: 41 callbacks suppressed
	
	
	==> etcd [98aa96314cf8f863121b9953ad6335628481f536fcab49b7c00505a17eb35ff2] <==
	{"level":"info","ts":"2024-09-20T18:18:53.690155Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-20T18:18:53.676354Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"63d12f7d015473f3 switched to configuration voters=(7192582293827122163)"}
	{"level":"info","ts":"2024-09-20T18:18:53.690382Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"a66a701203d69b1d","local-member-id":"63d12f7d015473f3","added-peer-id":"63d12f7d015473f3","added-peer-peer-urls":["https://192.168.50.47:2380"]}
	{"level":"info","ts":"2024-09-20T18:18:53.690654Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"a66a701203d69b1d","local-member-id":"63d12f7d015473f3","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T18:18:53.690697Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T18:18:53.675585Z","caller":"etcdserver/server.go:767","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-09-20T18:18:54.616138Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"63d12f7d015473f3 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-20T18:18:54.616244Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"63d12f7d015473f3 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-20T18:18:54.616288Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"63d12f7d015473f3 received MsgPreVoteResp from 63d12f7d015473f3 at term 2"}
	{"level":"info","ts":"2024-09-20T18:18:54.616318Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"63d12f7d015473f3 became candidate at term 3"}
	{"level":"info","ts":"2024-09-20T18:18:54.616343Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"63d12f7d015473f3 received MsgVoteResp from 63d12f7d015473f3 at term 3"}
	{"level":"info","ts":"2024-09-20T18:18:54.616371Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"63d12f7d015473f3 became leader at term 3"}
	{"level":"info","ts":"2024-09-20T18:18:54.616396Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 63d12f7d015473f3 elected leader 63d12f7d015473f3 at term 3"}
	{"level":"info","ts":"2024-09-20T18:18:54.658570Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T18:18:54.659769Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T18:18:54.661119Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.47:2379"}
	{"level":"info","ts":"2024-09-20T18:18:54.661643Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T18:18:54.662761Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T18:18:54.664223Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-20T18:18:54.658527Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"63d12f7d015473f3","local-member-attributes":"{Name:no-preload-956403 ClientURLs:[https://192.168.50.47:2379]}","request-path":"/0/members/63d12f7d015473f3/attributes","cluster-id":"a66a701203d69b1d","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-20T18:18:54.671015Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-20T18:18:54.671097Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-20T18:28:54.701082Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":859}
	{"level":"info","ts":"2024-09-20T18:28:54.712841Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":859,"took":"11.366825ms","hash":770300991,"current-db-size-bytes":2764800,"current-db-size":"2.8 MB","current-db-size-in-use-bytes":2764800,"current-db-size-in-use":"2.8 MB"}
	{"level":"info","ts":"2024-09-20T18:28:54.712954Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":770300991,"revision":859,"compact-revision":-1}
	
	
	==> kernel <==
	 18:32:23 up 14 min,  0 users,  load average: 0.09, 0.11, 0.08
	Linux no-preload-956403 5.10.207 #1 SMP Fri Sep 20 03:13:51 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [334e4df5baa4f8f7d58af2308fab7f78de22f7e07d6dfbcbeaa0df2a52f6b207] <==
	W0920 18:28:57.065891       1 handler_proxy.go:99] no RequestInfo found in the context
	E0920 18:28:57.065996       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0920 18:28:57.067147       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0920 18:28:57.067200       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0920 18:29:57.067405       1 handler_proxy.go:99] no RequestInfo found in the context
	W0920 18:29:57.067486       1 handler_proxy.go:99] no RequestInfo found in the context
	E0920 18:29:57.067626       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0920 18:29:57.067770       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0920 18:29:57.068995       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0920 18:29:57.069108       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0920 18:31:57.070182       1 handler_proxy.go:99] no RequestInfo found in the context
	E0920 18:31:57.070362       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0920 18:31:57.070440       1 handler_proxy.go:99] no RequestInfo found in the context
	E0920 18:31:57.070457       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0920 18:31:57.071552       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0920 18:31:57.071691       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [3ebf4c520d684127cd0b6146d44e863090b9c2db3ac866e9ce3ef2d10d2d6da1] <==
	E0920 18:26:59.582429       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 18:27:00.139480       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 18:27:29.590254       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 18:27:30.147480       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 18:27:59.597888       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 18:28:00.156242       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 18:28:29.603803       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 18:28:30.165642       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 18:28:59.610584       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 18:29:00.174087       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 18:29:29.618195       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 18:29:30.183008       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0920 18:29:39.294060       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-956403"
	E0920 18:29:59.625972       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 18:30:00.190851       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0920 18:30:00.464307       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="385.44µs"
	I0920 18:30:13.459977       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="215.528µs"
	E0920 18:30:29.633077       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 18:30:30.202724       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 18:30:59.639890       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 18:31:00.210428       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 18:31:29.647775       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 18:31:30.218241       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 18:31:59.654330       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 18:32:00.226190       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [6df198ca54e8004a532de9b15cfe7a48a659ae06623c00a78cad528762944497] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0920 18:18:57.114870       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0920 18:18:57.125507       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.47"]
	E0920 18:18:57.125590       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 18:18:57.167486       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0920 18:18:57.167533       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0920 18:18:57.167558       1 server_linux.go:169] "Using iptables Proxier"
	I0920 18:18:57.170114       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 18:18:57.170539       1 server.go:483] "Version info" version="v1.31.1"
	I0920 18:18:57.170588       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 18:18:57.172124       1 config.go:199] "Starting service config controller"
	I0920 18:18:57.172192       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 18:18:57.172235       1 config.go:105] "Starting endpoint slice config controller"
	I0920 18:18:57.172252       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 18:18:57.172782       1 config.go:328] "Starting node config controller"
	I0920 18:18:57.174613       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 18:18:57.272691       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0920 18:18:57.272774       1 shared_informer.go:320] Caches are synced for service config
	I0920 18:18:57.274746       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [8153479cebb05353bec31c41778746c485cb294bf41c3f98e30559ad7da64c64] <==
	I0920 18:18:54.351799       1 serving.go:386] Generated self-signed cert in-memory
	W0920 18:18:56.011382       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0920 18:18:56.011435       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0920 18:18:56.011449       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0920 18:18:56.011460       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0920 18:18:56.060855       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0920 18:18:56.063016       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 18:18:56.065953       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0920 18:18:56.066031       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0920 18:18:56.069155       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0920 18:18:56.066050       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0920 18:18:56.169291       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 20 18:31:12 no-preload-956403 kubelet[1355]: E0920 18:31:12.588405    1355 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857072587562664,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:31:16 no-preload-956403 kubelet[1355]: E0920 18:31:16.444705    1355 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-tfsff" podUID="599ba06a-6d4d-483b-b390-a3595a814757"
	Sep 20 18:31:22 no-preload-956403 kubelet[1355]: E0920 18:31:22.589967    1355 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857082589627230,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:31:22 no-preload-956403 kubelet[1355]: E0920 18:31:22.590009    1355 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857082589627230,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:31:30 no-preload-956403 kubelet[1355]: E0920 18:31:30.445228    1355 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-tfsff" podUID="599ba06a-6d4d-483b-b390-a3595a814757"
	Sep 20 18:31:32 no-preload-956403 kubelet[1355]: E0920 18:31:32.592006    1355 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857092591601484,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:31:32 no-preload-956403 kubelet[1355]: E0920 18:31:32.592037    1355 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857092591601484,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:31:42 no-preload-956403 kubelet[1355]: E0920 18:31:42.444596    1355 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-tfsff" podUID="599ba06a-6d4d-483b-b390-a3595a814757"
	Sep 20 18:31:42 no-preload-956403 kubelet[1355]: E0920 18:31:42.595195    1355 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857102593753549,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:31:42 no-preload-956403 kubelet[1355]: E0920 18:31:42.595306    1355 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857102593753549,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:31:52 no-preload-956403 kubelet[1355]: E0920 18:31:52.462164    1355 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 20 18:31:52 no-preload-956403 kubelet[1355]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 20 18:31:52 no-preload-956403 kubelet[1355]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 20 18:31:52 no-preload-956403 kubelet[1355]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 20 18:31:52 no-preload-956403 kubelet[1355]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 20 18:31:52 no-preload-956403 kubelet[1355]: E0920 18:31:52.597118    1355 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857112596514143,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:31:52 no-preload-956403 kubelet[1355]: E0920 18:31:52.597148    1355 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857112596514143,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:31:56 no-preload-956403 kubelet[1355]: E0920 18:31:56.444150    1355 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-tfsff" podUID="599ba06a-6d4d-483b-b390-a3595a814757"
	Sep 20 18:32:02 no-preload-956403 kubelet[1355]: E0920 18:32:02.599085    1355 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857122598596890,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:32:02 no-preload-956403 kubelet[1355]: E0920 18:32:02.599138    1355 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857122598596890,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:32:11 no-preload-956403 kubelet[1355]: E0920 18:32:11.443975    1355 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-tfsff" podUID="599ba06a-6d4d-483b-b390-a3595a814757"
	Sep 20 18:32:12 no-preload-956403 kubelet[1355]: E0920 18:32:12.601809    1355 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857132601196720,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:32:12 no-preload-956403 kubelet[1355]: E0920 18:32:12.601866    1355 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857132601196720,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:32:22 no-preload-956403 kubelet[1355]: E0920 18:32:22.604525    1355 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857142603986164,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:32:22 no-preload-956403 kubelet[1355]: E0920 18:32:22.604550    1355 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857142603986164,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [179e4a02f3459a613d3307806b477d54581555b18babca62d0c5e553b9562442] <==
	I0920 18:19:27.747786       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0920 18:19:27.766313       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0920 18:19:27.766389       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0920 18:19:45.172080       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0920 18:19:45.172463       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-956403_810e057e-8663-44ff-92d3-048d99113c97!
	I0920 18:19:45.173408       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3ef84c84-9c37-400b-af47-aa338eebb9db", APIVersion:"v1", ResourceVersion:"642", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-956403_810e057e-8663-44ff-92d3-048d99113c97 became leader
	I0920 18:19:45.289079       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-956403_810e057e-8663-44ff-92d3-048d99113c97!
	
	
	==> storage-provisioner [3eb9abdf57de51979118cfa77b613c22e28122fcb1da9e72fbf6f3a946ed3531] <==
	I0920 18:18:56.878791       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0920 18:19:26.881581       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
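The fatal "error getting server version" in the second storage-provisioner instance above is a startup reachability check against the kubernetes service VIP (10.96.0.1:443) that timed out while the API server was still unavailable, which is why that container exited and was restarted. A minimal sketch of that kind of check, assuming an in-cluster config and not the provisioner's exact code, is:

// Minimal sketch of the startup check that failed above: fetch the API
// server's version over the in-cluster service address.
package main

import (
	"log"
	"time"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatalf("in-cluster config: %v", err)
	}
	cfg.Timeout = 32 * time.Second // mirrors the ?timeout=32s seen in the log

	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	v, err := client.Discovery().ServerVersion()
	if err != nil {
		// With the API server unreachable this is where the provisioner
		// logged its fatal "error getting server version" and exited.
		log.Fatalf("error getting server version: %v", err)
	}
	log.Printf("server version: %s", v.GitVersion)
}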
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-956403 -n no-preload-956403
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-956403 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-tfsff
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-956403 describe pod metrics-server-6867b74b74-tfsff
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-956403 describe pod metrics-server-6867b74b74-tfsff: exit status 1 (68.132287ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-tfsff" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-956403 describe pod metrics-server-6867b74b74-tfsff: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.30s)
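The post-mortem above first lists non-Running pods with a field selector and then tries to describe them; by the time describe ran, metrics-server-6867b74b74-tfsff had already been deleted, hence the NotFound. A small client-go sketch of the same two steps (the helper itself shells out to kubectl, so this is an approximation using the current kubeconfig) might look like:

// Sketch of the post-mortem steps above: list non-Running pods across all
// namespaces, then fetch each one individually.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()

	// Equivalent of: kubectl get po -A --field-selector=status.phase!=Running
	pods, err := client.CoreV1().Pods(metav1.NamespaceAll).List(ctx, metav1.ListOptions{
		FieldSelector: "status.phase!=Running",
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		// Rough stand-in for: kubectl describe pod <name>. The pod may already
		// be gone, which is the NotFound error seen in the stderr block above.
		pod, err := client.CoreV1().Pods(p.Namespace).Get(ctx, p.Name, metav1.GetOptions{})
		if err != nil {
			fmt.Printf("%s/%s: %v\n", p.Namespace, p.Name, err)
			continue
		}
		fmt.Printf("%s/%s phase=%s\n", pod.Namespace, pod.Name, pod.Status.Phase)
	}
}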

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.74s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
E0920 18:26:25.844112   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/kindnet-833505/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
E0920 18:26:28.006290   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/auto-833505/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
E0920 18:26:39.932270   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/functional-945494/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
E0920 18:26:44.155198   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/calico-833505/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
E0920 18:27:23.518233   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/custom-flannel-833505/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
E0920 18:27:43.197055   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
E0920 18:28:04.124338   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/enable-default-cni-833505/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
E0920 18:28:07.219797   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/calico-833505/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
E0920 18:28:30.234654   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/bridge-833505/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
E0920 18:28:39.706896   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/flannel-833505/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
E0920 18:28:46.585773   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/custom-flannel-833505/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
E0920 18:29:27.189732   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/enable-default-cni-833505/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
E0920 18:29:53.298845   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/bridge-833505/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
E0920 18:30:02.770917   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/flannel-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:30:02.779447   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/kindnet-833505/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
E0920 18:30:04.942902   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/auto-833505/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
E0920 18:30:46.275617   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
E0920 18:31:39.931657   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/functional-945494/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
(identical warning repeated 4 times)
E0920 18:31:44.155058   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/calico-833505/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
(identical warning repeated 40 times)
E0920 18:32:23.517520   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/custom-flannel-833505/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
(identical warning repeated 19 times)
E0920 18:32:43.197148   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
(identical warning repeated 21 times)
E0920 18:33:04.125338   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/enable-default-cni-833505/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
(identical warning repeated 26 times)
E0920 18:33:30.233817   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/bridge-833505/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
(identical warning repeated 10 times)
E0920 18:33:39.707408   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/flannel-833505/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
(identical warning repeated 33 times)
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
E0920 18:34:43.007056   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/functional-945494/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
(the warning above was logged 19 more times while the connection kept being refused)
E0920 18:35:02.779700   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/kindnet-833505/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
E0920 18:35:04.942802   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/auto-833505/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
(the warning above was logged 19 more times before the wait hit the client rate limiter below)
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-744025 -n old-k8s-version-744025
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-744025 -n old-k8s-version-744025: exit status 2 (234.834627ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-744025" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
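The warnings above come from the pod waiter repeatedly listing the kubernetes-dashboard namespace through the API server at 192.168.39.207:8443, which stayed unreachable for the full 9m0s window. A roughly equivalent manual check against the same label selector (a sketch only; it assumes kubectl is installed on the host and that minikube created a kubeconfig context named old-k8s-version-744025 for this profile) is:

	kubectl --context old-k8s-version-744025 -n kubernetes-dashboard \
	  get pods -l k8s-app=kubernetes-dashboard
	# fails with "connection refused" while the apiserver is down,
	# and lists the dashboard pod once the control plane comes back
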
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-744025 -n old-k8s-version-744025
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-744025 -n old-k8s-version-744025: exit status 2 (235.712273ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
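Both status probes use minikube's --format flag, which renders a Go template over the status output, so {{.Host}} and {{.APIServer}} can report different states (here the VM is Running while the apiserver is Stopped). A combined probe of the same profile (a sketch; the Kubelet field is an assumption about the status fields beyond the two used by the test) could look like:

	out/minikube-linux-amd64 status -p old-k8s-version-744025 \
	  --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}}'
	# prints the component states on one line; as above, the command exits
	# non-zero (exit status 2) when the components are not all running
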
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-744025 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-744025 logs -n 25: (1.781514053s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p flannel-833505 sudo cat                             | flannel-833505               | jenkins | v1.34.0 | 20 Sep 24 18:09 UTC | 20 Sep 24 18:09 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p flannel-833505 sudo                                 | flannel-833505               | jenkins | v1.34.0 | 20 Sep 24 18:09 UTC | 20 Sep 24 18:09 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p flannel-833505 sudo                                 | flannel-833505               | jenkins | v1.34.0 | 20 Sep 24 18:09 UTC | 20 Sep 24 18:09 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p flannel-833505 sudo                                 | flannel-833505               | jenkins | v1.34.0 | 20 Sep 24 18:09 UTC | 20 Sep 24 18:09 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p flannel-833505 sudo find                            | flannel-833505               | jenkins | v1.34.0 | 20 Sep 24 18:09 UTC | 20 Sep 24 18:09 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p flannel-833505 sudo crio                            | flannel-833505               | jenkins | v1.34.0 | 20 Sep 24 18:09 UTC | 20 Sep 24 18:09 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p flannel-833505                                      | flannel-833505               | jenkins | v1.34.0 | 20 Sep 24 18:09 UTC | 20 Sep 24 18:09 UTC |
	| delete  | -p                                                     | disable-driver-mounts-739804 | jenkins | v1.34.0 | 20 Sep 24 18:09 UTC | 20 Sep 24 18:09 UTC |
	|         | disable-driver-mounts-739804                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-553719 | jenkins | v1.34.0 | 20 Sep 24 18:09 UTC | 20 Sep 24 18:10 UTC |
	|         | default-k8s-diff-port-553719                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-956403             | no-preload-956403            | jenkins | v1.34.0 | 20 Sep 24 18:10 UTC | 20 Sep 24 18:10 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-956403                                   | no-preload-956403            | jenkins | v1.34.0 | 20 Sep 24 18:10 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-768431            | embed-certs-768431           | jenkins | v1.34.0 | 20 Sep 24 18:10 UTC | 20 Sep 24 18:10 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-768431                                  | embed-certs-768431           | jenkins | v1.34.0 | 20 Sep 24 18:10 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-553719  | default-k8s-diff-port-553719 | jenkins | v1.34.0 | 20 Sep 24 18:11 UTC | 20 Sep 24 18:11 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-553719 | jenkins | v1.34.0 | 20 Sep 24 18:11 UTC |                     |
	|         | default-k8s-diff-port-553719                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-956403                  | no-preload-956403            | jenkins | v1.34.0 | 20 Sep 24 18:12 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-744025        | old-k8s-version-744025       | jenkins | v1.34.0 | 20 Sep 24 18:12 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p no-preload-956403                                   | no-preload-956403            | jenkins | v1.34.0 | 20 Sep 24 18:12 UTC | 20 Sep 24 18:23 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-768431                 | embed-certs-768431           | jenkins | v1.34.0 | 20 Sep 24 18:13 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-768431                                  | embed-certs-768431           | jenkins | v1.34.0 | 20 Sep 24 18:13 UTC | 20 Sep 24 18:22 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-553719       | default-k8s-diff-port-553719 | jenkins | v1.34.0 | 20 Sep 24 18:13 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-553719 | jenkins | v1.34.0 | 20 Sep 24 18:13 UTC | 20 Sep 24 18:22 UTC |
	|         | default-k8s-diff-port-553719                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-744025                              | old-k8s-version-744025       | jenkins | v1.34.0 | 20 Sep 24 18:14 UTC | 20 Sep 24 18:14 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-744025             | old-k8s-version-744025       | jenkins | v1.34.0 | 20 Sep 24 18:14 UTC | 20 Sep 24 18:14 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-744025                              | old-k8s-version-744025       | jenkins | v1.34.0 | 20 Sep 24 18:14 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 18:14:12
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 18:14:12.735020   75577 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:14:12.735385   75577 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:14:12.735400   75577 out.go:358] Setting ErrFile to fd 2...
	I0920 18:14:12.735407   75577 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:14:12.735610   75577 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-8777/.minikube/bin
	I0920 18:14:12.736161   75577 out.go:352] Setting JSON to false
	I0920 18:14:12.737033   75577 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6996,"bootTime":1726849057,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 18:14:12.737131   75577 start.go:139] virtualization: kvm guest
	I0920 18:14:12.739127   75577 out.go:177] * [old-k8s-version-744025] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 18:14:12.740288   75577 notify.go:220] Checking for updates...
	I0920 18:14:12.740310   75577 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 18:14:12.741542   75577 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 18:14:12.742727   75577 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19672-8777/kubeconfig
	I0920 18:14:12.743806   75577 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-8777/.minikube
	I0920 18:14:12.744863   75577 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 18:14:12.745877   75577 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 18:14:12.747201   75577 config.go:182] Loaded profile config "old-k8s-version-744025": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0920 18:14:12.747636   75577 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:14:12.747678   75577 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:14:12.762352   75577 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42889
	I0920 18:14:12.762734   75577 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:14:12.763321   75577 main.go:141] libmachine: Using API Version  1
	I0920 18:14:12.763348   75577 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:14:12.763742   75577 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:14:12.763968   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .DriverName
	I0920 18:14:12.765767   75577 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0920 18:14:12.767036   75577 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 18:14:12.767377   75577 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:14:12.767449   75577 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:14:12.782473   75577 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40777
	I0920 18:14:12.782971   75577 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:14:12.783455   75577 main.go:141] libmachine: Using API Version  1
	I0920 18:14:12.783477   75577 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:14:12.783807   75577 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:14:12.784035   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .DriverName
	I0920 18:14:12.820265   75577 out.go:177] * Using the kvm2 driver based on existing profile
	I0920 18:14:12.821397   75577 start.go:297] selected driver: kvm2
	I0920 18:14:12.821409   75577 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-744025 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-744025 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.207 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:14:12.821519   75577 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 18:14:12.822217   75577 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 18:14:12.822309   75577 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19672-8777/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 18:14:12.837641   75577 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 18:14:12.838107   75577 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 18:14:12.838143   75577 cni.go:84] Creating CNI manager for ""
	I0920 18:14:12.838193   75577 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 18:14:12.838231   75577 start.go:340] cluster config:
	{Name:old-k8s-version-744025 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-744025 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.207 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:14:12.838329   75577 iso.go:125] acquiring lock: {Name:mkba95ef0488e46f622333e9f317f43def93040b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 18:14:12.840059   75577 out.go:177] * Starting "old-k8s-version-744025" primary control-plane node in "old-k8s-version-744025" cluster
	I0920 18:14:15.230099   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:14:12.841339   75577 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0920 18:14:12.841384   75577 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0920 18:14:12.841394   75577 cache.go:56] Caching tarball of preloaded images
	I0920 18:14:12.841473   75577 preload.go:172] Found /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 18:14:12.841482   75577 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0920 18:14:12.841594   75577 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/old-k8s-version-744025/config.json ...
	I0920 18:14:12.841781   75577 start.go:360] acquireMachinesLock for old-k8s-version-744025: {Name:mkfeedb385cf08b5d2aa00913e85815d02a180c2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 18:14:18.302232   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:14:24.382087   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:14:27.454110   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:14:33.534076   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:14:36.606161   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:14:42.686057   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:14:45.758152   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:14:51.838087   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:14:54.910159   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:15:00.990183   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:15:04.062105   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:15:10.142153   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:15:13.218090   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:15:19.294132   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:15:22.366139   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:15:28.446103   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:15:31.518062   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:15:37.598126   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:15:40.670145   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:15:46.750116   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:15:49.822142   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:15:55.902120   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:15:58.974169   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:16:05.054139   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:16:08.126122   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:16:14.206089   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:16:17.278137   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:16:23.358156   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:16:26.430180   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:16:32.510115   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:16:35.582114   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:16:41.662126   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:16:44.734140   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:16:50.814123   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:16:53.886188   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:16:59.966139   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:17:03.038067   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:17:09.118169   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:17:12.190091   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:17:15.194165   75086 start.go:364] duration metric: took 4m1.091611985s to acquireMachinesLock for "embed-certs-768431"
	I0920 18:17:15.194223   75086 start.go:96] Skipping create...Using existing machine configuration
	I0920 18:17:15.194231   75086 fix.go:54] fixHost starting: 
	I0920 18:17:15.194737   75086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:17:15.194794   75086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:17:15.210558   75086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43191
	I0920 18:17:15.211084   75086 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:17:15.211527   75086 main.go:141] libmachine: Using API Version  1
	I0920 18:17:15.211550   75086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:17:15.211882   75086 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:17:15.212099   75086 main.go:141] libmachine: (embed-certs-768431) Calling .DriverName
	I0920 18:17:15.212217   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetState
	I0920 18:17:15.213955   75086 fix.go:112] recreateIfNeeded on embed-certs-768431: state=Stopped err=<nil>
	I0920 18:17:15.213995   75086 main.go:141] libmachine: (embed-certs-768431) Calling .DriverName
	W0920 18:17:15.214129   75086 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 18:17:15.215650   75086 out.go:177] * Restarting existing kvm2 VM for "embed-certs-768431" ...
	I0920 18:17:15.191760   74753 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 18:17:15.191794   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetMachineName
	I0920 18:17:15.192105   74753 buildroot.go:166] provisioning hostname "no-preload-956403"
	I0920 18:17:15.192133   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetMachineName
	I0920 18:17:15.192325   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHHostname
	I0920 18:17:15.194039   74753 machine.go:96] duration metric: took 4m37.421116123s to provisionDockerMachine
	I0920 18:17:15.194077   74753 fix.go:56] duration metric: took 4m37.444193238s for fixHost
	I0920 18:17:15.194087   74753 start.go:83] releasing machines lock for "no-preload-956403", held for 4m37.444234787s
	W0920 18:17:15.194109   74753 start.go:714] error starting host: provision: host is not running
	W0920 18:17:15.194214   74753 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0920 18:17:15.194222   74753 start.go:729] Will try again in 5 seconds ...
	I0920 18:17:15.216590   75086 main.go:141] libmachine: (embed-certs-768431) Calling .Start
	I0920 18:17:15.216773   75086 main.go:141] libmachine: (embed-certs-768431) Ensuring networks are active...
	I0920 18:17:15.217439   75086 main.go:141] libmachine: (embed-certs-768431) Ensuring network default is active
	I0920 18:17:15.217769   75086 main.go:141] libmachine: (embed-certs-768431) Ensuring network mk-embed-certs-768431 is active
	I0920 18:17:15.218058   75086 main.go:141] libmachine: (embed-certs-768431) Getting domain xml...
	I0920 18:17:15.218674   75086 main.go:141] libmachine: (embed-certs-768431) Creating domain...
	I0920 18:17:16.454922   75086 main.go:141] libmachine: (embed-certs-768431) Waiting to get IP...
	I0920 18:17:16.456011   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:16.456437   75086 main.go:141] libmachine: (embed-certs-768431) DBG | unable to find current IP address of domain embed-certs-768431 in network mk-embed-certs-768431
	I0920 18:17:16.456518   75086 main.go:141] libmachine: (embed-certs-768431) DBG | I0920 18:17:16.456416   76214 retry.go:31] will retry after 282.081004ms: waiting for machine to come up
	I0920 18:17:16.740062   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:16.740455   75086 main.go:141] libmachine: (embed-certs-768431) DBG | unable to find current IP address of domain embed-certs-768431 in network mk-embed-certs-768431
	I0920 18:17:16.740505   75086 main.go:141] libmachine: (embed-certs-768431) DBG | I0920 18:17:16.740420   76214 retry.go:31] will retry after 335.306847ms: waiting for machine to come up
	I0920 18:17:17.077169   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:17.077564   75086 main.go:141] libmachine: (embed-certs-768431) DBG | unable to find current IP address of domain embed-certs-768431 in network mk-embed-certs-768431
	I0920 18:17:17.077594   75086 main.go:141] libmachine: (embed-certs-768431) DBG | I0920 18:17:17.077513   76214 retry.go:31] will retry after 455.255315ms: waiting for machine to come up
	I0920 18:17:17.534166   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:17.534545   75086 main.go:141] libmachine: (embed-certs-768431) DBG | unable to find current IP address of domain embed-certs-768431 in network mk-embed-certs-768431
	I0920 18:17:17.534570   75086 main.go:141] libmachine: (embed-certs-768431) DBG | I0920 18:17:17.534511   76214 retry.go:31] will retry after 476.184378ms: waiting for machine to come up
	I0920 18:17:18.011940   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:18.012405   75086 main.go:141] libmachine: (embed-certs-768431) DBG | unable to find current IP address of domain embed-certs-768431 in network mk-embed-certs-768431
	I0920 18:17:18.012436   75086 main.go:141] libmachine: (embed-certs-768431) DBG | I0920 18:17:18.012359   76214 retry.go:31] will retry after 597.678215ms: waiting for machine to come up
	I0920 18:17:18.611179   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:18.611560   75086 main.go:141] libmachine: (embed-certs-768431) DBG | unable to find current IP address of domain embed-certs-768431 in network mk-embed-certs-768431
	I0920 18:17:18.611588   75086 main.go:141] libmachine: (embed-certs-768431) DBG | I0920 18:17:18.611521   76214 retry.go:31] will retry after 696.074491ms: waiting for machine to come up
	I0920 18:17:20.194460   74753 start.go:360] acquireMachinesLock for no-preload-956403: {Name:mkfeedb385cf08b5d2aa00913e85815d02a180c2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 18:17:19.309493   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:19.309940   75086 main.go:141] libmachine: (embed-certs-768431) DBG | unable to find current IP address of domain embed-certs-768431 in network mk-embed-certs-768431
	I0920 18:17:19.309958   75086 main.go:141] libmachine: (embed-certs-768431) DBG | I0920 18:17:19.309909   76214 retry.go:31] will retry after 825.638908ms: waiting for machine to come up
	I0920 18:17:20.137018   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:20.137380   75086 main.go:141] libmachine: (embed-certs-768431) DBG | unable to find current IP address of domain embed-certs-768431 in network mk-embed-certs-768431
	I0920 18:17:20.137407   75086 main.go:141] libmachine: (embed-certs-768431) DBG | I0920 18:17:20.137338   76214 retry.go:31] will retry after 997.909719ms: waiting for machine to come up
	I0920 18:17:21.137154   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:21.137608   75086 main.go:141] libmachine: (embed-certs-768431) DBG | unable to find current IP address of domain embed-certs-768431 in network mk-embed-certs-768431
	I0920 18:17:21.137630   75086 main.go:141] libmachine: (embed-certs-768431) DBG | I0920 18:17:21.137552   76214 retry.go:31] will retry after 1.368594293s: waiting for machine to come up
	I0920 18:17:22.507834   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:22.508202   75086 main.go:141] libmachine: (embed-certs-768431) DBG | unable to find current IP address of domain embed-certs-768431 in network mk-embed-certs-768431
	I0920 18:17:22.508228   75086 main.go:141] libmachine: (embed-certs-768431) DBG | I0920 18:17:22.508152   76214 retry.go:31] will retry after 1.922265011s: waiting for machine to come up
	I0920 18:17:24.431977   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:24.432422   75086 main.go:141] libmachine: (embed-certs-768431) DBG | unable to find current IP address of domain embed-certs-768431 in network mk-embed-certs-768431
	I0920 18:17:24.432452   75086 main.go:141] libmachine: (embed-certs-768431) DBG | I0920 18:17:24.432370   76214 retry.go:31] will retry after 2.875158038s: waiting for machine to come up
	I0920 18:17:27.309993   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:27.310512   75086 main.go:141] libmachine: (embed-certs-768431) DBG | unable to find current IP address of domain embed-certs-768431 in network mk-embed-certs-768431
	I0920 18:17:27.310539   75086 main.go:141] libmachine: (embed-certs-768431) DBG | I0920 18:17:27.310459   76214 retry.go:31] will retry after 3.089759463s: waiting for machine to come up
	I0920 18:17:30.402254   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:30.402718   75086 main.go:141] libmachine: (embed-certs-768431) DBG | unable to find current IP address of domain embed-certs-768431 in network mk-embed-certs-768431
	I0920 18:17:30.402757   75086 main.go:141] libmachine: (embed-certs-768431) DBG | I0920 18:17:30.402671   76214 retry.go:31] will retry after 3.42897838s: waiting for machine to come up
	I0920 18:17:33.835196   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:33.835576   75086 main.go:141] libmachine: (embed-certs-768431) Found IP for machine: 192.168.61.202
	I0920 18:17:33.835606   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has current primary IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:33.835615   75086 main.go:141] libmachine: (embed-certs-768431) Reserving static IP address...
	I0920 18:17:33.835974   75086 main.go:141] libmachine: (embed-certs-768431) Reserved static IP address: 192.168.61.202
	I0920 18:17:33.836010   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "embed-certs-768431", mac: "52:54:00:d2:f2:e2", ip: "192.168.61.202"} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:17:33.836024   75086 main.go:141] libmachine: (embed-certs-768431) Waiting for SSH to be available...
	I0920 18:17:33.836053   75086 main.go:141] libmachine: (embed-certs-768431) DBG | skip adding static IP to network mk-embed-certs-768431 - found existing host DHCP lease matching {name: "embed-certs-768431", mac: "52:54:00:d2:f2:e2", ip: "192.168.61.202"}
	I0920 18:17:33.836065   75086 main.go:141] libmachine: (embed-certs-768431) DBG | Getting to WaitForSSH function...
	I0920 18:17:33.838215   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:33.838561   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:17:33.838593   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:33.838735   75086 main.go:141] libmachine: (embed-certs-768431) DBG | Using SSH client type: external
	I0920 18:17:33.838768   75086 main.go:141] libmachine: (embed-certs-768431) DBG | Using SSH private key: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/embed-certs-768431/id_rsa (-rw-------)
	I0920 18:17:33.838809   75086 main.go:141] libmachine: (embed-certs-768431) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.202 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19672-8777/.minikube/machines/embed-certs-768431/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 18:17:33.838826   75086 main.go:141] libmachine: (embed-certs-768431) DBG | About to run SSH command:
	I0920 18:17:33.838838   75086 main.go:141] libmachine: (embed-certs-768431) DBG | exit 0
	I0920 18:17:33.962092   75086 main.go:141] libmachine: (embed-certs-768431) DBG | SSH cmd err, output: <nil>: 
	I0920 18:17:33.962513   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetConfigRaw
	I0920 18:17:33.963115   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetIP
	I0920 18:17:33.965714   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:33.966036   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:17:33.966056   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:33.966391   75086 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/embed-certs-768431/config.json ...
	I0920 18:17:33.966621   75086 machine.go:93] provisionDockerMachine start ...
	I0920 18:17:33.966641   75086 main.go:141] libmachine: (embed-certs-768431) Calling .DriverName
	I0920 18:17:33.966848   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHHostname
	I0920 18:17:33.968954   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:33.969235   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:17:33.969266   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:33.969358   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHPort
	I0920 18:17:33.969542   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHKeyPath
	I0920 18:17:33.969694   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHKeyPath
	I0920 18:17:33.969854   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHUsername
	I0920 18:17:33.970001   75086 main.go:141] libmachine: Using SSH client type: native
	I0920 18:17:33.970194   75086 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.202 22 <nil> <nil>}
	I0920 18:17:33.970204   75086 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 18:17:35.178887   75264 start.go:364] duration metric: took 3m58.041934266s to acquireMachinesLock for "default-k8s-diff-port-553719"
	I0920 18:17:35.178955   75264 start.go:96] Skipping create...Using existing machine configuration
	I0920 18:17:35.178968   75264 fix.go:54] fixHost starting: 
	I0920 18:17:35.179541   75264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:17:35.179604   75264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:17:35.199776   75264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39643
	I0920 18:17:35.200255   75264 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:17:35.200832   75264 main.go:141] libmachine: Using API Version  1
	I0920 18:17:35.200858   75264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:17:35.201213   75264 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:17:35.201427   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .DriverName
	I0920 18:17:35.201575   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetState
	I0920 18:17:35.203239   75264 fix.go:112] recreateIfNeeded on default-k8s-diff-port-553719: state=Stopped err=<nil>
	I0920 18:17:35.203279   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .DriverName
	W0920 18:17:35.203415   75264 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 18:17:35.205529   75264 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-553719" ...
	I0920 18:17:34.078412   75086 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0920 18:17:34.078447   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetMachineName
	I0920 18:17:34.078648   75086 buildroot.go:166] provisioning hostname "embed-certs-768431"
	I0920 18:17:34.078676   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetMachineName
	I0920 18:17:34.078854   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHHostname
	I0920 18:17:34.081703   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:34.082042   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:17:34.082079   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:34.082175   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHPort
	I0920 18:17:34.082371   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHKeyPath
	I0920 18:17:34.082525   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHKeyPath
	I0920 18:17:34.082656   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHUsername
	I0920 18:17:34.082778   75086 main.go:141] libmachine: Using SSH client type: native
	I0920 18:17:34.082937   75086 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.202 22 <nil> <nil>}
	I0920 18:17:34.082949   75086 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-768431 && echo "embed-certs-768431" | sudo tee /etc/hostname
	I0920 18:17:34.199661   75086 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-768431
	
	I0920 18:17:34.199726   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHHostname
	I0920 18:17:34.202468   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:34.202875   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:17:34.202903   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:34.203084   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHPort
	I0920 18:17:34.203328   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHKeyPath
	I0920 18:17:34.203477   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHKeyPath
	I0920 18:17:34.203630   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHUsername
	I0920 18:17:34.203769   75086 main.go:141] libmachine: Using SSH client type: native
	I0920 18:17:34.203936   75086 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.202 22 <nil> <nil>}
	I0920 18:17:34.203952   75086 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-768431' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-768431/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-768431' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 18:17:34.314604   75086 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 18:17:34.314632   75086 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19672-8777/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-8777/.minikube}
	I0920 18:17:34.314656   75086 buildroot.go:174] setting up certificates
	I0920 18:17:34.314666   75086 provision.go:84] configureAuth start
	I0920 18:17:34.314674   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetMachineName
	I0920 18:17:34.314940   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetIP
	I0920 18:17:34.317999   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:34.318306   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:17:34.318326   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:34.318538   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHHostname
	I0920 18:17:34.320715   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:34.321046   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:17:34.321073   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:34.321187   75086 provision.go:143] copyHostCerts
	I0920 18:17:34.321256   75086 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem, removing ...
	I0920 18:17:34.321276   75086 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem
	I0920 18:17:34.321359   75086 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem (1082 bytes)
	I0920 18:17:34.321452   75086 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem, removing ...
	I0920 18:17:34.321459   75086 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem
	I0920 18:17:34.321493   75086 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem (1123 bytes)
	I0920 18:17:34.321549   75086 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem, removing ...
	I0920 18:17:34.321558   75086 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem
	I0920 18:17:34.321580   75086 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem (1675 bytes)
	I0920 18:17:34.321692   75086 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem org=jenkins.embed-certs-768431 san=[127.0.0.1 192.168.61.202 embed-certs-768431 localhost minikube]
	I0920 18:17:34.547266   75086 provision.go:177] copyRemoteCerts
	I0920 18:17:34.547343   75086 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 18:17:34.547373   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHHostname
	I0920 18:17:34.550265   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:34.550648   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:17:34.550680   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:34.550895   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHPort
	I0920 18:17:34.551084   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHKeyPath
	I0920 18:17:34.551227   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHUsername
	I0920 18:17:34.551359   75086 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/embed-certs-768431/id_rsa Username:docker}
	I0920 18:17:34.632404   75086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0920 18:17:34.656468   75086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 18:17:34.681704   75086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0920 18:17:34.706486   75086 provision.go:87] duration metric: took 391.807931ms to configureAuth
	I0920 18:17:34.706514   75086 buildroot.go:189] setting minikube options for container-runtime
	I0920 18:17:34.706681   75086 config.go:182] Loaded profile config "embed-certs-768431": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:17:34.706750   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHHostname
	I0920 18:17:34.709500   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:34.709851   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:17:34.709915   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:34.709972   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHPort
	I0920 18:17:34.710199   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHKeyPath
	I0920 18:17:34.710337   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHKeyPath
	I0920 18:17:34.710511   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHUsername
	I0920 18:17:34.710669   75086 main.go:141] libmachine: Using SSH client type: native
	I0920 18:17:34.710854   75086 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.202 22 <nil> <nil>}
	I0920 18:17:34.710875   75086 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 18:17:34.941246   75086 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 18:17:34.941277   75086 machine.go:96] duration metric: took 974.641764ms to provisionDockerMachine
	I0920 18:17:34.941293   75086 start.go:293] postStartSetup for "embed-certs-768431" (driver="kvm2")
	I0920 18:17:34.941341   75086 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 18:17:34.941370   75086 main.go:141] libmachine: (embed-certs-768431) Calling .DriverName
	I0920 18:17:34.941757   75086 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 18:17:34.941792   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHHostname
	I0920 18:17:34.944366   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:34.944712   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:17:34.944749   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:34.944861   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHPort
	I0920 18:17:34.945051   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHKeyPath
	I0920 18:17:34.945198   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHUsername
	I0920 18:17:34.945320   75086 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/embed-certs-768431/id_rsa Username:docker}
	I0920 18:17:35.028570   75086 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 18:17:35.032711   75086 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 18:17:35.032751   75086 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8777/.minikube/addons for local assets ...
	I0920 18:17:35.032859   75086 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8777/.minikube/files for local assets ...
	I0920 18:17:35.032973   75086 filesync.go:149] local asset: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem -> 159732.pem in /etc/ssl/certs
	I0920 18:17:35.033127   75086 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 18:17:35.042212   75086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem --> /etc/ssl/certs/159732.pem (1708 bytes)
	I0920 18:17:35.066379   75086 start.go:296] duration metric: took 125.0653ms for postStartSetup
	I0920 18:17:35.066429   75086 fix.go:56] duration metric: took 19.872197784s for fixHost
	I0920 18:17:35.066456   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHHostname
	I0920 18:17:35.069888   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:35.070261   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:17:35.070286   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:35.070472   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHPort
	I0920 18:17:35.070693   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHKeyPath
	I0920 18:17:35.070836   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHKeyPath
	I0920 18:17:35.070989   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHUsername
	I0920 18:17:35.071204   75086 main.go:141] libmachine: Using SSH client type: native
	I0920 18:17:35.071396   75086 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.202 22 <nil> <nil>}
	I0920 18:17:35.071407   75086 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 18:17:35.178674   75086 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726856255.154624802
	
	I0920 18:17:35.178704   75086 fix.go:216] guest clock: 1726856255.154624802
	I0920 18:17:35.178712   75086 fix.go:229] Guest: 2024-09-20 18:17:35.154624802 +0000 UTC Remote: 2024-09-20 18:17:35.066435161 +0000 UTC m=+261.108982198 (delta=88.189641ms)
	I0920 18:17:35.178734   75086 fix.go:200] guest clock delta is within tolerance: 88.189641ms
	I0920 18:17:35.178760   75086 start.go:83] releasing machines lock for "embed-certs-768431", held for 19.984535459s
	I0920 18:17:35.178801   75086 main.go:141] libmachine: (embed-certs-768431) Calling .DriverName
	I0920 18:17:35.179101   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetIP
	I0920 18:17:35.181807   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:35.182179   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:17:35.182208   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:35.182353   75086 main.go:141] libmachine: (embed-certs-768431) Calling .DriverName
	I0920 18:17:35.182850   75086 main.go:141] libmachine: (embed-certs-768431) Calling .DriverName
	I0920 18:17:35.182993   75086 main.go:141] libmachine: (embed-certs-768431) Calling .DriverName
	I0920 18:17:35.183097   75086 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 18:17:35.183171   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHHostname
	I0920 18:17:35.183171   75086 ssh_runner.go:195] Run: cat /version.json
	I0920 18:17:35.183230   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHHostname
	I0920 18:17:35.185905   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:35.185942   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:35.186249   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:17:35.186279   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:35.186317   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:17:35.186331   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:35.186476   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHPort
	I0920 18:17:35.186597   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHPort
	I0920 18:17:35.186687   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHKeyPath
	I0920 18:17:35.186791   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHKeyPath
	I0920 18:17:35.186816   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHUsername
	I0920 18:17:35.186974   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHUsername
	I0920 18:17:35.186992   75086 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/embed-certs-768431/id_rsa Username:docker}
	I0920 18:17:35.187127   75086 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/embed-certs-768431/id_rsa Username:docker}
	I0920 18:17:35.311021   75086 ssh_runner.go:195] Run: systemctl --version
	I0920 18:17:35.317587   75086 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 18:17:35.462634   75086 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 18:17:35.469331   75086 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 18:17:35.469444   75086 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 18:17:35.485752   75086 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 18:17:35.485780   75086 start.go:495] detecting cgroup driver to use...
	I0920 18:17:35.485903   75086 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 18:17:35.507004   75086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 18:17:35.525868   75086 docker.go:217] disabling cri-docker service (if available) ...
	I0920 18:17:35.525934   75086 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 18:17:35.542533   75086 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 18:17:35.557622   75086 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 18:17:35.669845   75086 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 18:17:35.827784   75086 docker.go:233] disabling docker service ...
	I0920 18:17:35.827855   75086 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 18:17:35.842877   75086 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 18:17:35.859468   75086 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 18:17:35.994094   75086 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 18:17:36.123353   75086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 18:17:36.137941   75086 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 18:17:36.157781   75086 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 18:17:36.157871   75086 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:17:36.168857   75086 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 18:17:36.168929   75086 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:17:36.179807   75086 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:17:36.192260   75086 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:17:36.203220   75086 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 18:17:36.214388   75086 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:17:36.224686   75086 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:17:36.242143   75086 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:17:36.257047   75086 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 18:17:36.272014   75086 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 18:17:36.272130   75086 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 18:17:36.286083   75086 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 18:17:36.296624   75086 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:17:36.414196   75086 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 18:17:36.511654   75086 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 18:17:36.511720   75086 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 18:17:36.516593   75086 start.go:563] Will wait 60s for crictl version
	I0920 18:17:36.516640   75086 ssh_runner.go:195] Run: which crictl
	I0920 18:17:36.520360   75086 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 18:17:36.556846   75086 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 18:17:36.556949   75086 ssh_runner.go:195] Run: crio --version
	I0920 18:17:36.584699   75086 ssh_runner.go:195] Run: crio --version
	I0920 18:17:36.615179   75086 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 18:17:35.206839   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .Start
	I0920 18:17:35.207055   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Ensuring networks are active...
	I0920 18:17:35.207928   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Ensuring network default is active
	I0920 18:17:35.208260   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Ensuring network mk-default-k8s-diff-port-553719 is active
	I0920 18:17:35.208679   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Getting domain xml...
	I0920 18:17:35.209488   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Creating domain...
	I0920 18:17:36.500751   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting to get IP...
	I0920 18:17:36.501630   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:36.502033   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | unable to find current IP address of domain default-k8s-diff-port-553719 in network mk-default-k8s-diff-port-553719
	I0920 18:17:36.502137   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | I0920 18:17:36.502044   76346 retry.go:31] will retry after 216.050114ms: waiting for machine to come up
	I0920 18:17:36.719559   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:36.720002   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | unable to find current IP address of domain default-k8s-diff-port-553719 in network mk-default-k8s-diff-port-553719
	I0920 18:17:36.720026   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | I0920 18:17:36.719977   76346 retry.go:31] will retry after 303.832728ms: waiting for machine to come up
	I0920 18:17:37.025606   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:37.026096   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | unable to find current IP address of domain default-k8s-diff-port-553719 in network mk-default-k8s-diff-port-553719
	I0920 18:17:37.026137   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | I0920 18:17:37.026050   76346 retry.go:31] will retry after 316.827461ms: waiting for machine to come up
	I0920 18:17:36.616640   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetIP
	I0920 18:17:36.619927   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:36.620377   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:17:36.620407   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:36.620642   75086 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0920 18:17:36.627189   75086 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
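
The bash one-liner above drops any stale host.minikube.internal line from /etc/hosts and appends the current gateway IP. Roughly the same ensure-hosts-entry logic, sketched in Go against a local file, is below; the path and entry values are the ones from the log, and this is an illustration rather than the ssh_runner implementation.

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // ensureHostsEntry removes any existing line ending in "\thostname"
    // and appends "ip\thostname", mirroring the grep -v / echo pipeline above.
    func ensureHostsEntry(path, ip, hostname string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if strings.HasSuffix(line, "\t"+hostname) {
    			continue // drop the stale entry
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, ip+"\t"+hostname)
    	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
    	if err := ensureHostsEntry("/etc/hosts", "192.168.61.1", "host.minikube.internal"); err != nil {
    		fmt.Println("error:", err) // needs root to succeed on a real system
    	}
    }
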
	I0920 18:17:36.639842   75086 kubeadm.go:883] updating cluster {Name:embed-certs-768431 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-768431 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.202 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 18:17:36.639953   75086 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 18:17:36.640019   75086 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:17:36.676519   75086 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0920 18:17:36.676599   75086 ssh_runner.go:195] Run: which lz4
	I0920 18:17:36.680472   75086 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 18:17:36.684650   75086 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 18:17:36.684683   75086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0920 18:17:38.090996   75086 crio.go:462] duration metric: took 1.410558742s to copy over tarball
	I0920 18:17:38.091068   75086 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0920 18:17:37.344880   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:37.345393   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | unable to find current IP address of domain default-k8s-diff-port-553719 in network mk-default-k8s-diff-port-553719
	I0920 18:17:37.345419   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | I0920 18:17:37.345323   76346 retry.go:31] will retry after 456.966571ms: waiting for machine to come up
	I0920 18:17:37.804436   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:37.804919   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | unable to find current IP address of domain default-k8s-diff-port-553719 in network mk-default-k8s-diff-port-553719
	I0920 18:17:37.804954   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | I0920 18:17:37.804891   76346 retry.go:31] will retry after 582.103589ms: waiting for machine to come up
	I0920 18:17:38.388738   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:38.389266   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | unable to find current IP address of domain default-k8s-diff-port-553719 in network mk-default-k8s-diff-port-553719
	I0920 18:17:38.389297   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | I0920 18:17:38.389217   76346 retry.go:31] will retry after 884.882678ms: waiting for machine to come up
	I0920 18:17:39.276048   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:39.276478   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | unable to find current IP address of domain default-k8s-diff-port-553719 in network mk-default-k8s-diff-port-553719
	I0920 18:17:39.276504   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | I0920 18:17:39.276453   76346 retry.go:31] will retry after 807.001285ms: waiting for machine to come up
	I0920 18:17:40.085749   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:40.086228   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | unable to find current IP address of domain default-k8s-diff-port-553719 in network mk-default-k8s-diff-port-553719
	I0920 18:17:40.086256   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | I0920 18:17:40.086190   76346 retry.go:31] will retry after 1.283354255s: waiting for machine to come up
	I0920 18:17:41.370861   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:41.371314   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | unable to find current IP address of domain default-k8s-diff-port-553719 in network mk-default-k8s-diff-port-553719
	I0920 18:17:41.371343   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | I0920 18:17:41.371265   76346 retry.go:31] will retry after 1.756084886s: waiting for machine to come up
	I0920 18:17:40.301535   75086 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.210437306s)
	I0920 18:17:40.301567   75086 crio.go:469] duration metric: took 2.210539553s to extract the tarball
	I0920 18:17:40.301578   75086 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0920 18:17:40.338638   75086 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:17:40.381753   75086 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 18:17:40.381780   75086 cache_images.go:84] Images are preloaded, skipping loading
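
The `crictl images --output json` call above is how the tooling decides whether the preload tarball needs to be copied and extracted at all: if the expected tags are already present, loading is skipped. A hedged sketch of that check follows; the JSON field names follow crictl's documented output, and the raw JSON here is a canned example standing in for output captured over SSH.

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    // crictlImages mirrors the shape of `crictl images --output json`:
    // a top-level "images" array whose entries carry "repoTags".
    type crictlImages struct {
    	Images []struct {
    		RepoTags []string `json:"repoTags"`
    	} `json:"images"`
    }

    // hasImage reports whether the wanted tag shows up in the crictl output,
    // which is the condition behind "all images are preloaded" above.
    func hasImage(raw []byte, want string) (bool, error) {
    	var out crictlImages
    	if err := json.Unmarshal(raw, &out); err != nil {
    		return false, err
    	}
    	for _, img := range out.Images {
    		for _, tag := range img.RepoTags {
    			if tag == want {
    				return true, nil
    			}
    		}
    	}
    	return false, nil
    }

    func main() {
    	raw := []byte(`{"images":[{"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"]}]}`)
    	ok, _ := hasImage(raw, "registry.k8s.io/kube-apiserver:v1.31.1")
    	fmt.Println("preloaded:", ok)
    }
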
	I0920 18:17:40.381788   75086 kubeadm.go:934] updating node { 192.168.61.202 8443 v1.31.1 crio true true} ...
	I0920 18:17:40.381914   75086 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-768431 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.202
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-768431 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 18:17:40.381998   75086 ssh_runner.go:195] Run: crio config
	I0920 18:17:40.428457   75086 cni.go:84] Creating CNI manager for ""
	I0920 18:17:40.428482   75086 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 18:17:40.428491   75086 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 18:17:40.428512   75086 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.202 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-768431 NodeName:embed-certs-768431 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.202"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.202 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 18:17:40.428681   75086 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.202
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-768431"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.202
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.202"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
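
The block above is a single multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that is later copied to /var/tmp/minikube/kubeadm.yaml.new. One quick way to sanity-check such a stream before shipping it is to decode every document and print its apiVersion and kind; the sketch below uses gopkg.in/yaml.v3, which is an assumed dependency for illustration and not part of the test code.

    package main

    import (
    	"errors"
    	"fmt"
    	"io"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	f, err := os.Open("kubeadm.yaml") // the rendered multi-document config
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()

    	dec := yaml.NewDecoder(f)
    	for {
    		var doc map[string]interface{}
    		err := dec.Decode(&doc)
    		if errors.Is(err, io.EOF) {
    			break
    		}
    		if err != nil {
    			panic(err) // malformed document in the stream
    		}
    		// Each document should declare apiVersion and kind,
    		// e.g. kubeadm.k8s.io/v1beta3 / ClusterConfiguration.
    		fmt.Printf("%v / %v\n", doc["apiVersion"], doc["kind"])
    	}
    }
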
	
	I0920 18:17:40.428800   75086 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 18:17:40.439049   75086 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 18:17:40.439123   75086 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 18:17:40.449220   75086 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0920 18:17:40.466012   75086 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 18:17:40.482158   75086 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0920 18:17:40.499225   75086 ssh_runner.go:195] Run: grep 192.168.61.202	control-plane.minikube.internal$ /etc/hosts
	I0920 18:17:40.502817   75086 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.202	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 18:17:40.515241   75086 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:17:40.638615   75086 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:17:40.655790   75086 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/embed-certs-768431 for IP: 192.168.61.202
	I0920 18:17:40.655814   75086 certs.go:194] generating shared ca certs ...
	I0920 18:17:40.655834   75086 certs.go:226] acquiring lock for ca certs: {Name:mkc7ef6c737c6bdc3fdd9dcff8f57029c020d8f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:17:40.656006   75086 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key
	I0920 18:17:40.656070   75086 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key
	I0920 18:17:40.656084   75086 certs.go:256] generating profile certs ...
	I0920 18:17:40.656193   75086 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/embed-certs-768431/client.key
	I0920 18:17:40.656281   75086 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/embed-certs-768431/apiserver.key.a3dbf377
	I0920 18:17:40.656354   75086 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/embed-certs-768431/proxy-client.key
	I0920 18:17:40.656508   75086 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973.pem (1338 bytes)
	W0920 18:17:40.656560   75086 certs.go:480] ignoring /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973_empty.pem, impossibly tiny 0 bytes
	I0920 18:17:40.656573   75086 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 18:17:40.656620   75086 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem (1082 bytes)
	I0920 18:17:40.656654   75086 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem (1123 bytes)
	I0920 18:17:40.656679   75086 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem (1675 bytes)
	I0920 18:17:40.656733   75086 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem (1708 bytes)
	I0920 18:17:40.657567   75086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 18:17:40.693111   75086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 18:17:40.733664   75086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 18:17:40.774367   75086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0920 18:17:40.800930   75086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/embed-certs-768431/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0920 18:17:40.827793   75086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/embed-certs-768431/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 18:17:40.851189   75086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/embed-certs-768431/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 18:17:40.875092   75086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/embed-certs-768431/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0920 18:17:40.898639   75086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973.pem --> /usr/share/ca-certificates/15973.pem (1338 bytes)
	I0920 18:17:40.921569   75086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem --> /usr/share/ca-certificates/159732.pem (1708 bytes)
	I0920 18:17:40.945739   75086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 18:17:40.969942   75086 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 18:17:40.987563   75086 ssh_runner.go:195] Run: openssl version
	I0920 18:17:40.993363   75086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15973.pem && ln -fs /usr/share/ca-certificates/15973.pem /etc/ssl/certs/15973.pem"
	I0920 18:17:41.004673   75086 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15973.pem
	I0920 18:17:41.008985   75086 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 17:04 /usr/share/ca-certificates/15973.pem
	I0920 18:17:41.009032   75086 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15973.pem
	I0920 18:17:41.014950   75086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15973.pem /etc/ssl/certs/51391683.0"
	I0920 18:17:41.027944   75086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/159732.pem && ln -fs /usr/share/ca-certificates/159732.pem /etc/ssl/certs/159732.pem"
	I0920 18:17:41.040711   75086 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/159732.pem
	I0920 18:17:41.045304   75086 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 17:04 /usr/share/ca-certificates/159732.pem
	I0920 18:17:41.045361   75086 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/159732.pem
	I0920 18:17:41.050891   75086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/159732.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 18:17:41.061542   75086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 18:17:41.072622   75086 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:17:41.076916   75086 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:17:41.076965   75086 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:17:41.082644   75086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 18:17:41.093136   75086 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 18:17:41.097496   75086 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 18:17:41.103393   75086 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 18:17:41.109444   75086 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 18:17:41.115844   75086 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 18:17:41.121578   75086 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 18:17:41.127292   75086 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
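
Each `openssl x509 -noout -checkend 86400` call above asks whether the certificate is still valid for at least another 24 hours. The same question can be answered natively in Go with crypto/x509; this is a small sketch under the assumption that the certificate files are readable locally, with the path taken from the log.

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the PEM-encoded certificate at path
    // expires within the given window - the same check as
    // `openssl x509 -noout -checkend 86400`.
    func expiresWithin(path string, window time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Println("error:", err)
    		return
    	}
    	fmt.Println("expires within 24h:", soon)
    }
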
	I0920 18:17:41.133310   75086 kubeadm.go:392] StartCluster: {Name:embed-certs-768431 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-768431 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.202 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:17:41.133393   75086 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 18:17:41.133439   75086 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 18:17:41.177284   75086 cri.go:89] found id: ""
	I0920 18:17:41.177380   75086 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 18:17:41.187289   75086 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0920 18:17:41.187311   75086 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0920 18:17:41.187362   75086 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0920 18:17:41.196445   75086 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0920 18:17:41.197939   75086 kubeconfig.go:125] found "embed-certs-768431" server: "https://192.168.61.202:8443"
	I0920 18:17:41.201030   75086 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0920 18:17:41.210356   75086 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.202
	I0920 18:17:41.210394   75086 kubeadm.go:1160] stopping kube-system containers ...
	I0920 18:17:41.210422   75086 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0920 18:17:41.210499   75086 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 18:17:41.244659   75086 cri.go:89] found id: ""
	I0920 18:17:41.244721   75086 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0920 18:17:41.264341   75086 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 18:17:41.275229   75086 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 18:17:41.275252   75086 kubeadm.go:157] found existing configuration files:
	
	I0920 18:17:41.275314   75086 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 18:17:41.285420   75086 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 18:17:41.285502   75086 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 18:17:41.295902   75086 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 18:17:41.305951   75086 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 18:17:41.306015   75086 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 18:17:41.316567   75086 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 18:17:41.325623   75086 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 18:17:41.325691   75086 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 18:17:41.336405   75086 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 18:17:41.346501   75086 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 18:17:41.346574   75086 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 18:17:41.357340   75086 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 18:17:41.367956   75086 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:17:41.479022   75086 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:17:42.796122   75086 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.317061067s)
	I0920 18:17:42.796155   75086 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:17:43.053135   75086 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:17:43.139770   75086 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:17:43.244066   75086 api_server.go:52] waiting for apiserver process to appear ...
	I0920 18:17:43.244166   75086 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:17:43.744622   75086 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:17:43.129383   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:43.129827   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | unable to find current IP address of domain default-k8s-diff-port-553719 in network mk-default-k8s-diff-port-553719
	I0920 18:17:43.129864   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | I0920 18:17:43.129798   76346 retry.go:31] will retry after 2.259775905s: waiting for machine to come up
	I0920 18:17:45.391508   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:45.391915   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | unable to find current IP address of domain default-k8s-diff-port-553719 in network mk-default-k8s-diff-port-553719
	I0920 18:17:45.391941   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | I0920 18:17:45.391885   76346 retry.go:31] will retry after 1.767770692s: waiting for machine to come up
	I0920 18:17:44.244723   75086 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:17:44.265463   75086 api_server.go:72] duration metric: took 1.02139562s to wait for apiserver process to appear ...
	I0920 18:17:44.265500   75086 api_server.go:88] waiting for apiserver healthz status ...
	I0920 18:17:44.265528   75086 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0920 18:17:46.935510   75086 api_server.go:279] https://192.168.61.202:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 18:17:46.935571   75086 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 18:17:46.935589   75086 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0920 18:17:46.947553   75086 api_server.go:279] https://192.168.61.202:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 18:17:46.947586   75086 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 18:17:47.265919   75086 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0920 18:17:47.272986   75086 api_server.go:279] https://192.168.61.202:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 18:17:47.273020   75086 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 18:17:47.766662   75086 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0920 18:17:47.783700   75086 api_server.go:279] https://192.168.61.202:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 18:17:47.783749   75086 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 18:17:48.266012   75086 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0920 18:17:48.270573   75086 api_server.go:279] https://192.168.61.202:8443/healthz returned 200:
	ok
	I0920 18:17:48.277290   75086 api_server.go:141] control plane version: v1.31.1
	I0920 18:17:48.277331   75086 api_server.go:131] duration metric: took 4.011813655s to wait for apiserver health ...
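
The 403 and 500 responses above are the apiserver coming up: anonymous access to /healthz is rejected first, then individual post-start hooks (rbac/bootstrap-roles, bootstrap-system-priority-classes) flip to ok and the endpoint finally returns 200. The polling itself is just an HTTPS GET in a loop; a stripped-down sketch is below, where InsecureSkipVerify stands in for loading the cluster CA and the URL is the one from the log.

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitForHealthz polls the apiserver /healthz endpoint until it returns
    // HTTP 200 or the timeout elapses, mirroring the api_server.go loop above.
    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// The real code trusts the cluster CA; skipping verification
    		// keeps this sketch self-contained.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil // healthz returned 200: ok
    			}
    			fmt.Println("healthz returned", resp.StatusCode, "- retrying")
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver never became healthy at %s", url)
    }

    func main() {
    	if err := waitForHealthz("https://192.168.61.202:8443/healthz", 4*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }
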
	I0920 18:17:48.277342   75086 cni.go:84] Creating CNI manager for ""
	I0920 18:17:48.277351   75086 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 18:17:48.278863   75086 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 18:17:48.279934   75086 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 18:17:48.304037   75086 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0920 18:17:48.337082   75086 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 18:17:48.349485   75086 system_pods.go:59] 8 kube-system pods found
	I0920 18:17:48.349541   75086 system_pods.go:61] "coredns-7c65d6cfc9-cskt4" [2ce6fa43-bdf9-4625-a198-8f59ee9fcf0b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0920 18:17:48.349554   75086 system_pods.go:61] "etcd-embed-certs-768431" [656af9fd-b380-4934-a0b5-5d39a755de44] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0920 18:17:48.349565   75086 system_pods.go:61] "kube-apiserver-embed-certs-768431" [e128f4ff-3432-4d27-83de-ed76b686403d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0920 18:17:48.349573   75086 system_pods.go:61] "kube-controller-manager-embed-certs-768431" [a2c7e6d7-e351-4e6a-ab95-638aecdaab28] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0920 18:17:48.349579   75086 system_pods.go:61] "kube-proxy-cshjm" [86a82f1b-d8d5-4ab7-89fb-84b836dc0470] Running
	I0920 18:17:48.349586   75086 system_pods.go:61] "kube-scheduler-embed-certs-768431" [64bc50a6-2e99-4992-b249-9ffdf463539a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0920 18:17:48.349601   75086 system_pods.go:61] "metrics-server-6867b74b74-dwnt6" [bb0a120f-ca9e-4b54-826b-55aa20b2f6a4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 18:17:48.349606   75086 system_pods.go:61] "storage-provisioner" [083c6964-a774-4b85-8e6d-acd04ededb3c] Running
	I0920 18:17:48.349617   75086 system_pods.go:74] duration metric: took 12.507925ms to wait for pod list to return data ...
	I0920 18:17:48.349630   75086 node_conditions.go:102] verifying NodePressure condition ...
	I0920 18:17:48.353604   75086 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 18:17:48.353635   75086 node_conditions.go:123] node cpu capacity is 2
	I0920 18:17:48.353648   75086 node_conditions.go:105] duration metric: took 4.012436ms to run NodePressure ...
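
The system_pods and node_conditions lines above are ordinary client-go queries: list the kube-system pods, read their readiness, and read the node's capacity. A sketch of the equivalent pod query is below; the kubeconfig path is an assumption for illustration, while the client-go calls are the standard ones.

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Assumed path to the profile's kubeconfig; adjust for your environment.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}

    	// Same information as the system_pods.go lines above: every pod in
    	// kube-system plus the phase it is currently reporting.
    	pods, err := client.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, p := range pods.Items {
    		fmt.Printf("%s\t%s\n", p.Name, p.Status.Phase)
    	}
    }
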
	I0920 18:17:48.353668   75086 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:17:48.623666   75086 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0920 18:17:48.634113   75086 kubeadm.go:739] kubelet initialised
	I0920 18:17:48.634184   75086 kubeadm.go:740] duration metric: took 10.458173ms waiting for restarted kubelet to initialise ...
	I0920 18:17:48.634205   75086 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:17:48.642334   75086 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-cskt4" in "kube-system" namespace to be "Ready" ...
	I0920 18:17:47.161670   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:47.162064   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | unable to find current IP address of domain default-k8s-diff-port-553719 in network mk-default-k8s-diff-port-553719
	I0920 18:17:47.162094   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | I0920 18:17:47.162020   76346 retry.go:31] will retry after 2.299554771s: waiting for machine to come up
	I0920 18:17:49.463022   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:49.463483   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | unable to find current IP address of domain default-k8s-diff-port-553719 in network mk-default-k8s-diff-port-553719
	I0920 18:17:49.463515   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | I0920 18:17:49.463442   76346 retry.go:31] will retry after 4.372569793s: waiting for machine to come up
	I0920 18:17:50.650781   75086 pod_ready.go:103] pod "coredns-7c65d6cfc9-cskt4" in "kube-system" namespace has status "Ready":"False"
	I0920 18:17:53.149156   75086 pod_ready.go:103] pod "coredns-7c65d6cfc9-cskt4" in "kube-system" namespace has status "Ready":"False"
	I0920 18:17:55.087078   75577 start.go:364] duration metric: took 3m42.245259516s to acquireMachinesLock for "old-k8s-version-744025"
	I0920 18:17:55.087155   75577 start.go:96] Skipping create...Using existing machine configuration
	I0920 18:17:55.087166   75577 fix.go:54] fixHost starting: 
	I0920 18:17:55.087618   75577 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:17:55.087671   75577 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:17:55.107336   75577 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34989
	I0920 18:17:55.107839   75577 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:17:55.108462   75577 main.go:141] libmachine: Using API Version  1
	I0920 18:17:55.108496   75577 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:17:55.108855   75577 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:17:55.109072   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .DriverName
	I0920 18:17:55.109222   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetState
	I0920 18:17:55.110831   75577 fix.go:112] recreateIfNeeded on old-k8s-version-744025: state=Stopped err=<nil>
	I0920 18:17:55.110857   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .DriverName
	W0920 18:17:55.110973   75577 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 18:17:55.113408   75577 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-744025" ...
	I0920 18:17:53.840215   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:53.840817   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has current primary IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:53.840844   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Found IP for machine: 192.168.72.190
	I0920 18:17:53.840859   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Reserving static IP address...
	I0920 18:17:53.841438   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Reserved static IP address: 192.168.72.190
	I0920 18:17:53.841460   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for SSH to be available...
	I0920 18:17:53.841475   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-553719", mac: "52:54:00:dd:93:60", ip: "192.168.72.190"} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:17:53.841503   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | skip adding static IP to network mk-default-k8s-diff-port-553719 - found existing host DHCP lease matching {name: "default-k8s-diff-port-553719", mac: "52:54:00:dd:93:60", ip: "192.168.72.190"}
	I0920 18:17:53.841583   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | Getting to WaitForSSH function...
	I0920 18:17:53.843772   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:53.844082   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:17:53.844115   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:53.844201   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | Using SSH client type: external
	I0920 18:17:53.844238   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | Using SSH private key: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/default-k8s-diff-port-553719/id_rsa (-rw-------)
	I0920 18:17:53.844282   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.190 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19672-8777/.minikube/machines/default-k8s-diff-port-553719/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 18:17:53.844296   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | About to run SSH command:
	I0920 18:17:53.844309   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | exit 0
	I0920 18:17:53.965938   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | SSH cmd err, output: <nil>: 
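
The "exit 0" probe above is how the driver decides SSH is ready before provisioning starts: it shells out to the system ssh with host-key checking disabled and the machine's private key, and treats a clean exit as success. A condensed sketch of that external-SSH probe follows; the key path and address are the ones from the log, and the option list is trimmed for brevity rather than a copy of the driver's full command line.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // sshReady runs `ssh ... exit 0` against the machine and reports whether
    // it exited cleanly, i.e. the guest's sshd is accepting connections.
    func sshReady(user, addr, keyPath string) bool {
    	cmd := exec.Command("ssh",
    		"-o", "StrictHostKeyChecking=no",
    		"-o", "UserKnownHostsFile=/dev/null",
    		"-o", "ConnectTimeout=10",
    		"-i", keyPath,
    		fmt.Sprintf("%s@%s", user, addr),
    		"exit", "0")
    	return cmd.Run() == nil
    }

    func main() {
    	key := "/home/jenkins/minikube-integration/19672-8777/.minikube/machines/default-k8s-diff-port-553719/id_rsa"
    	for !sshReady("docker", "192.168.72.190", key) {
    		fmt.Println("waiting for SSH to be available...")
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Println("SSH is up")
    }
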
	I0920 18:17:53.966354   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetConfigRaw
	I0920 18:17:53.967105   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetIP
	I0920 18:17:53.969715   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:53.970112   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:17:53.970140   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:53.970429   75264 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/default-k8s-diff-port-553719/config.json ...
	I0920 18:17:53.970686   75264 machine.go:93] provisionDockerMachine start ...
	I0920 18:17:53.970707   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .DriverName
	I0920 18:17:53.970924   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHHostname
	I0920 18:17:53.973207   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:53.973541   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:17:53.973571   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:53.973716   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHPort
	I0920 18:17:53.973901   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHKeyPath
	I0920 18:17:53.974041   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHKeyPath
	I0920 18:17:53.974155   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHUsername
	I0920 18:17:53.974298   75264 main.go:141] libmachine: Using SSH client type: native
	I0920 18:17:53.974518   75264 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.190 22 <nil> <nil>}
	I0920 18:17:53.974533   75264 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 18:17:54.078095   75264 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0920 18:17:54.078121   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetMachineName
	I0920 18:17:54.078377   75264 buildroot.go:166] provisioning hostname "default-k8s-diff-port-553719"
	I0920 18:17:54.078397   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetMachineName
	I0920 18:17:54.078540   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHHostname
	I0920 18:17:54.081589   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:54.081998   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:17:54.082032   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:54.082179   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHPort
	I0920 18:17:54.082375   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHKeyPath
	I0920 18:17:54.082539   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHKeyPath
	I0920 18:17:54.082743   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHUsername
	I0920 18:17:54.082949   75264 main.go:141] libmachine: Using SSH client type: native
	I0920 18:17:54.083153   75264 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.190 22 <nil> <nil>}
	I0920 18:17:54.083173   75264 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-553719 && echo "default-k8s-diff-port-553719" | sudo tee /etc/hostname
	I0920 18:17:54.199155   75264 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-553719
	
	I0920 18:17:54.199192   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHHostname
	I0920 18:17:54.202231   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:54.202650   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:17:54.202685   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:54.202835   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHPort
	I0920 18:17:54.203112   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHKeyPath
	I0920 18:17:54.203289   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHKeyPath
	I0920 18:17:54.203458   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHUsername
	I0920 18:17:54.203696   75264 main.go:141] libmachine: Using SSH client type: native
	I0920 18:17:54.203944   75264 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.190 22 <nil> <nil>}
	I0920 18:17:54.203969   75264 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-553719' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-553719/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-553719' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 18:17:54.312356   75264 main.go:141] libmachine: SSH cmd err, output: <nil>: 
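
The SSH command above is the provisioner's idempotent hosts-file edit: it only touches /etc/hosts when the hostname is not yet mapped, rewriting an existing 127.0.1.1 entry if there is one and appending a new line otherwise. A minimal sketch of that decision logic as a pure Go function (ensureHostname is a hypothetical helper, not minikube code):

// ensure_hostname.go: a sketch of the hosts-file edit above, expressed as a
// pure function over the file contents.
package main

import (
    "fmt"
    "strings"
)

// ensureHostname returns hosts with a "127.0.1.1 <name>" mapping present:
// leave the content alone if the name is already mapped, rewrite an existing
// 127.0.1.1 line if there is one, otherwise append a new line.
func ensureHostname(hosts, name string) string {
    lines := strings.Split(hosts, "\n")
    for _, l := range lines {
        fields := strings.Fields(l)
        if len(fields) >= 2 && fields[len(fields)-1] == name {
            return hosts // hostname already mapped
        }
    }
    for i, l := range lines {
        if strings.HasPrefix(strings.TrimSpace(l), "127.0.1.1") {
            lines[i] = "127.0.1.1 " + name
            return strings.Join(lines, "\n")
        }
    }
    return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
}

func main() {
    fmt.Print(ensureHostname("127.0.0.1 localhost\n", "default-k8s-diff-port-553719"))
}

Keeping the update idempotent is what lets the provisioner re-run safely against a guest whose hostname is already configured.
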
	I0920 18:17:54.312386   75264 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19672-8777/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-8777/.minikube}
	I0920 18:17:54.312432   75264 buildroot.go:174] setting up certificates
	I0920 18:17:54.312450   75264 provision.go:84] configureAuth start
	I0920 18:17:54.312464   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetMachineName
	I0920 18:17:54.312758   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetIP
	I0920 18:17:54.315327   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:54.315697   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:17:54.315725   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:54.315870   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHHostname
	I0920 18:17:54.318203   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:54.318653   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:17:54.318679   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:54.318824   75264 provision.go:143] copyHostCerts
	I0920 18:17:54.318885   75264 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem, removing ...
	I0920 18:17:54.318906   75264 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem
	I0920 18:17:54.318978   75264 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem (1675 bytes)
	I0920 18:17:54.319096   75264 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem, removing ...
	I0920 18:17:54.319107   75264 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem
	I0920 18:17:54.319141   75264 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem (1082 bytes)
	I0920 18:17:54.319384   75264 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem, removing ...
	I0920 18:17:54.319401   75264 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem
	I0920 18:17:54.319465   75264 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem (1123 bytes)
	I0920 18:17:54.319551   75264 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-553719 san=[127.0.0.1 192.168.72.190 default-k8s-diff-port-553719 localhost minikube]
	I0920 18:17:54.453062   75264 provision.go:177] copyRemoteCerts
	I0920 18:17:54.453131   75264 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 18:17:54.453160   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHHostname
	I0920 18:17:54.456047   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:54.456402   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:17:54.456431   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:54.456598   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHPort
	I0920 18:17:54.456796   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHKeyPath
	I0920 18:17:54.456970   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHUsername
	I0920 18:17:54.457092   75264 sshutil.go:53] new ssh client: &{IP:192.168.72.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/default-k8s-diff-port-553719/id_rsa Username:docker}
	I0920 18:17:54.544567   75264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0920 18:17:54.570009   75264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0920 18:17:54.594249   75264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0920 18:17:54.617652   75264 provision.go:87] duration metric: took 305.186554ms to configureAuth
	I0920 18:17:54.617686   75264 buildroot.go:189] setting minikube options for container-runtime
	I0920 18:17:54.617971   75264 config.go:182] Loaded profile config "default-k8s-diff-port-553719": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:17:54.618064   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHHostname
	I0920 18:17:54.621408   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:54.621882   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:17:54.621915   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:54.622140   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHPort
	I0920 18:17:54.622368   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHKeyPath
	I0920 18:17:54.622592   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHKeyPath
	I0920 18:17:54.622833   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHUsername
	I0920 18:17:54.623047   75264 main.go:141] libmachine: Using SSH client type: native
	I0920 18:17:54.623287   75264 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.190 22 <nil> <nil>}
	I0920 18:17:54.623305   75264 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 18:17:54.859786   75264 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 18:17:54.859810   75264 machine.go:96] duration metric: took 889.110491ms to provisionDockerMachine
	I0920 18:17:54.859820   75264 start.go:293] postStartSetup for "default-k8s-diff-port-553719" (driver="kvm2")
	I0920 18:17:54.859831   75264 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 18:17:54.859850   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .DriverName
	I0920 18:17:54.860209   75264 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 18:17:54.860258   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHHostname
	I0920 18:17:54.863576   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:54.863933   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:17:54.863966   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:54.864168   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHPort
	I0920 18:17:54.864345   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHKeyPath
	I0920 18:17:54.864494   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHUsername
	I0920 18:17:54.864640   75264 sshutil.go:53] new ssh client: &{IP:192.168.72.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/default-k8s-diff-port-553719/id_rsa Username:docker}
	I0920 18:17:54.944717   75264 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 18:17:54.948836   75264 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 18:17:54.948870   75264 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8777/.minikube/addons for local assets ...
	I0920 18:17:54.948937   75264 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8777/.minikube/files for local assets ...
	I0920 18:17:54.949014   75264 filesync.go:149] local asset: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem -> 159732.pem in /etc/ssl/certs
	I0920 18:17:54.949116   75264 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 18:17:54.958726   75264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem --> /etc/ssl/certs/159732.pem (1708 bytes)
	I0920 18:17:54.982792   75264 start.go:296] duration metric: took 122.958621ms for postStartSetup
	I0920 18:17:54.982831   75264 fix.go:56] duration metric: took 19.803863913s for fixHost
	I0920 18:17:54.982856   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHHostname
	I0920 18:17:54.985588   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:54.985924   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:17:54.985966   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:54.986145   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHPort
	I0920 18:17:54.986407   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHKeyPath
	I0920 18:17:54.986586   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHKeyPath
	I0920 18:17:54.986784   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHUsername
	I0920 18:17:54.986982   75264 main.go:141] libmachine: Using SSH client type: native
	I0920 18:17:54.987233   75264 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.190 22 <nil> <nil>}
	I0920 18:17:54.987248   75264 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 18:17:55.086859   75264 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726856275.038902431
	
	I0920 18:17:55.086888   75264 fix.go:216] guest clock: 1726856275.038902431
	I0920 18:17:55.086900   75264 fix.go:229] Guest: 2024-09-20 18:17:55.038902431 +0000 UTC Remote: 2024-09-20 18:17:54.98283641 +0000 UTC m=+257.985357778 (delta=56.066021ms)
	I0920 18:17:55.086959   75264 fix.go:200] guest clock delta is within tolerance: 56.066021ms
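
The fix.go lines above compare the guest's `date +%s.%N` output against the host clock and accept the skew if it falls inside a tolerance. A standalone sketch of that comparison (the 1s tolerance here is purely illustrative, not minikube's configured value):

// clockdelta.go: a sketch (not minikube's fix.go) of the guest-clock check:
// parse `date +%s.%N` output and compare it against the local clock.
package main

import (
    "fmt"
    "strconv"
    "strings"
    "time"
)

// parseUnixNano turns "1726856275.038902431" into a time.Time.
func parseUnixNano(s string) (time.Time, error) {
    parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
    sec, err := strconv.ParseInt(parts[0], 10, 64)
    if err != nil {
        return time.Time{}, err
    }
    var nsec int64
    if len(parts) == 2 {
        // right-pad the fraction to 9 digits so ".0389" means 38.9ms, not 389ns
        frac := (parts[1] + "000000000")[:9]
        nsec, err = strconv.ParseInt(frac, 10, 64)
        if err != nil {
            return time.Time{}, err
        }
    }
    return time.Unix(sec, nsec), nil
}

func main() {
    const tolerance = time.Second // illustrative tolerance only
    guest, err := parseUnixNano("1726856275.038902431")
    if err != nil {
        panic(err)
    }
    delta := time.Since(guest)
    if delta < 0 {
        delta = -delta
    }
    fmt.Printf("guest clock delta %v (tolerance %v, within=%v)\n", delta, tolerance, delta <= tolerance)
}
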
	I0920 18:17:55.086968   75264 start.go:83] releasing machines lock for "default-k8s-diff-port-553719", held for 19.908037967s
	I0920 18:17:55.087009   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .DriverName
	I0920 18:17:55.087303   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetIP
	I0920 18:17:55.090396   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:55.090777   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:17:55.090805   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:55.090973   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .DriverName
	I0920 18:17:55.091481   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .DriverName
	I0920 18:17:55.091684   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .DriverName
	I0920 18:17:55.091772   75264 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 18:17:55.091827   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHHostname
	I0920 18:17:55.091951   75264 ssh_runner.go:195] Run: cat /version.json
	I0920 18:17:55.091976   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHHostname
	I0920 18:17:55.094742   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:55.094878   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:55.095102   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:17:55.095133   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:55.095265   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHPort
	I0920 18:17:55.095362   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:17:55.095400   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:55.095443   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHKeyPath
	I0920 18:17:55.095550   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHPort
	I0920 18:17:55.095619   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHUsername
	I0920 18:17:55.095689   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHKeyPath
	I0920 18:17:55.095763   75264 sshutil.go:53] new ssh client: &{IP:192.168.72.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/default-k8s-diff-port-553719/id_rsa Username:docker}
	I0920 18:17:55.095795   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHUsername
	I0920 18:17:55.095950   75264 sshutil.go:53] new ssh client: &{IP:192.168.72.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/default-k8s-diff-port-553719/id_rsa Username:docker}
	I0920 18:17:55.213223   75264 ssh_runner.go:195] Run: systemctl --version
	I0920 18:17:55.220952   75264 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 18:17:55.370747   75264 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 18:17:55.377509   75264 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 18:17:55.377595   75264 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 18:17:55.395830   75264 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 18:17:55.395854   75264 start.go:495] detecting cgroup driver to use...
	I0920 18:17:55.395920   75264 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 18:17:55.412885   75264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 18:17:55.428380   75264 docker.go:217] disabling cri-docker service (if available) ...
	I0920 18:17:55.428433   75264 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 18:17:55.444371   75264 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 18:17:55.459485   75264 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 18:17:55.583649   75264 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 18:17:55.768003   75264 docker.go:233] disabling docker service ...
	I0920 18:17:55.768065   75264 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 18:17:55.787062   75264 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 18:17:55.802662   75264 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 18:17:55.967892   75264 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 18:17:56.105744   75264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 18:17:56.120499   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 18:17:56.140527   75264 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 18:17:56.140613   75264 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:17:56.151282   75264 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 18:17:56.151355   75264 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:17:56.164680   75264 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:17:56.176142   75264 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:17:56.187120   75264 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 18:17:56.198384   75264 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:17:56.209298   75264 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:17:56.226714   75264 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:17:56.237886   75264 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 18:17:56.253664   75264 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 18:17:56.253778   75264 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 18:17:56.267429   75264 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 18:17:56.279118   75264 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:17:56.412692   75264 ssh_runner.go:195] Run: sudo systemctl restart crio
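
The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place, pinning the pause image and forcing the cgroupfs cgroup manager before cri-o is restarted. A rough Go equivalent of the two line-level substitutions, operating on the config as a string (rewriteCrioConf is an illustrative helper, not minikube code):

// criocfg.go: an illustrative equivalent of the two sed substitutions above,
// applied to the drop-in config held as a string.
package main

import (
    "fmt"
    "regexp"
)

var (
    pauseRe  = regexp.MustCompile(`(?m)^.*pause_image = .*$`)
    cgroupRe = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
)

// rewriteCrioConf replaces the pause_image and cgroup_manager lines the way
// `sed -i 's|^.*pause_image = .*$|...|'` does on the guest.
func rewriteCrioConf(conf, pauseImage, cgroupManager string) string {
    conf = pauseRe.ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", pauseImage))
    conf = cgroupRe.ReplaceAllString(conf, fmt.Sprintf("cgroup_manager = %q", cgroupManager))
    return conf
}

func main() {
    in := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.9\"\n[crio.runtime]\ncgroup_manager = \"systemd\"\n"
    fmt.Print(rewriteCrioConf(in, "registry.k8s.io/pause:3.10", "cgroupfs"))
}
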
	I0920 18:17:56.513349   75264 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 18:17:56.513438   75264 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 18:17:56.517974   75264 start.go:563] Will wait 60s for crictl version
	I0920 18:17:56.518042   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:17:56.521966   75264 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 18:17:56.561446   75264 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 18:17:56.561520   75264 ssh_runner.go:195] Run: crio --version
	I0920 18:17:56.592009   75264 ssh_runner.go:195] Run: crio --version
	I0920 18:17:56.626555   75264 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 18:17:56.627668   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetIP
	I0920 18:17:56.630995   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:56.631450   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:17:56.631480   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:56.631751   75264 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0920 18:17:56.636473   75264 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 18:17:56.648824   75264 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-553719 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-553719 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.190 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 18:17:56.648937   75264 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 18:17:56.648980   75264 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:17:56.691029   75264 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0920 18:17:56.691104   75264 ssh_runner.go:195] Run: which lz4
	I0920 18:17:56.695538   75264 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 18:17:56.699703   75264 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 18:17:56.699735   75264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0920 18:17:55.114625   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .Start
	I0920 18:17:55.114814   75577 main.go:141] libmachine: (old-k8s-version-744025) Ensuring networks are active...
	I0920 18:17:55.115808   75577 main.go:141] libmachine: (old-k8s-version-744025) Ensuring network default is active
	I0920 18:17:55.116209   75577 main.go:141] libmachine: (old-k8s-version-744025) Ensuring network mk-old-k8s-version-744025 is active
	I0920 18:17:55.116615   75577 main.go:141] libmachine: (old-k8s-version-744025) Getting domain xml...
	I0920 18:17:55.117403   75577 main.go:141] libmachine: (old-k8s-version-744025) Creating domain...
	I0920 18:17:56.411359   75577 main.go:141] libmachine: (old-k8s-version-744025) Waiting to get IP...
	I0920 18:17:56.412507   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:17:56.413010   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:17:56.413094   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:17:56.412996   76990 retry.go:31] will retry after 300.110477ms: waiting for machine to come up
	I0920 18:17:56.714648   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:17:56.715163   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:17:56.715183   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:17:56.715114   76990 retry.go:31] will retry after 352.948637ms: waiting for machine to come up
	I0920 18:17:57.069760   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:17:57.070449   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:17:57.070474   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:17:57.070404   76990 retry.go:31] will retry after 335.58281ms: waiting for machine to come up
	I0920 18:17:57.408023   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:17:57.408521   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:17:57.408546   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:17:57.408469   76990 retry.go:31] will retry after 498.040148ms: waiting for machine to come up
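
The retry.go lines above poll the libvirt domain for an IP address, sleeping a slightly randomized, growing interval between attempts. A self-contained sketch of that retry shape (retryUntil is a made-up helper and the delays are illustrative):

// backoff.go: a sketch of the loop behind the "will retry after ..." lines.
package main

import (
    "errors"
    "fmt"
    "math/rand"
    "time"
)

// retryUntil keeps calling fn until it succeeds or the overall deadline passes,
// sleeping a jittered, growing interval between attempts.
func retryUntil(deadline time.Duration, fn func() error) error {
    base := 300 * time.Millisecond
    start := time.Now()
    for attempt := 1; ; attempt++ {
        err := fn()
        if err == nil {
            return nil
        }
        if time.Since(start) > deadline {
            return fmt.Errorf("gave up after %d attempts: %w", attempt, err)
        }
        wait := base + time.Duration(rand.Int63n(int64(base)))
        fmt.Printf("will retry after %v: %v\n", wait, err)
        time.Sleep(wait)
        base = base * 3 / 2 // grow the base delay each round
    }
}

func main() {
    calls := 0
    err := retryUntil(10*time.Second, func() error {
        calls++
        if calls < 4 {
            return errors.New("unable to find current IP address")
        }
        return nil
    })
    fmt.Println("result:", err)
}
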
	I0920 18:17:54.653346   75086 pod_ready.go:93] pod "coredns-7c65d6cfc9-cskt4" in "kube-system" namespace has status "Ready":"True"
	I0920 18:17:54.653373   75086 pod_ready.go:82] duration metric: took 6.011015799s for pod "coredns-7c65d6cfc9-cskt4" in "kube-system" namespace to be "Ready" ...
	I0920 18:17:54.653383   75086 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-768431" in "kube-system" namespace to be "Ready" ...
	I0920 18:17:56.660606   75086 pod_ready.go:103] pod "etcd-embed-certs-768431" in "kube-system" namespace has status "Ready":"False"
	I0920 18:17:58.162253   75086 pod_ready.go:93] pod "etcd-embed-certs-768431" in "kube-system" namespace has status "Ready":"True"
	I0920 18:17:58.162282   75086 pod_ready.go:82] duration metric: took 3.508893207s for pod "etcd-embed-certs-768431" in "kube-system" namespace to be "Ready" ...
	I0920 18:17:58.162292   75086 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-768431" in "kube-system" namespace to be "Ready" ...
	I0920 18:17:58.109419   75264 crio.go:462] duration metric: took 1.413913626s to copy over tarball
	I0920 18:17:58.109504   75264 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0920 18:18:00.360623   75264 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.25108327s)
	I0920 18:18:00.360655   75264 crio.go:469] duration metric: took 2.251201466s to extract the tarball
	I0920 18:18:00.360703   75264 ssh_runner.go:146] rm: /preloaded.tar.lz4
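
The preload path above avoids pulling images individually: a pre-built lz4-compressed tarball is copied to the guest, unpacked into /var, and the log records the operation as a duration metric. A local sketch of that step with os/exec (the tarball path and the presence of an lz4 binary on PATH are assumptions for illustration):

// extract_preload.go: run tar with an lz4 decompressor and time the step.
package main

import (
    "fmt"
    "log"
    "os/exec"
    "time"
)

func main() {
    start := time.Now()
    cmd := exec.Command("tar",
        "--xattrs", "--xattrs-include", "security.capability",
        "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
    if out, err := cmd.CombinedOutput(); err != nil {
        log.Fatalf("extract failed: %v\n%s", err, out)
    }
    fmt.Printf("duration metric: took %s to extract the tarball\n", time.Since(start))
}
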
	I0920 18:18:00.400822   75264 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:18:00.451366   75264 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 18:18:00.451402   75264 cache_images.go:84] Images are preloaded, skipping loading
	I0920 18:18:00.451413   75264 kubeadm.go:934] updating node { 192.168.72.190 8444 v1.31.1 crio true true} ...
	I0920 18:18:00.451590   75264 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-553719 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.190
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-553719 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 18:18:00.451688   75264 ssh_runner.go:195] Run: crio config
	I0920 18:18:00.502703   75264 cni.go:84] Creating CNI manager for ""
	I0920 18:18:00.502729   75264 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 18:18:00.502740   75264 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 18:18:00.502778   75264 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.190 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-553719 NodeName:default-k8s-diff-port-553719 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.190"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.190 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 18:18:00.502927   75264 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.190
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-553719"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.190
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.190"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 18:18:00.502996   75264 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 18:18:00.513768   75264 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 18:18:00.513870   75264 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 18:18:00.523631   75264 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0920 18:18:00.540428   75264 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 18:18:00.556858   75264 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
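
The kubeadm.yaml.new copied above is produced from the options dump printed earlier (advertise address, bind port, node name, CRI socket). A minimal sketch of rendering such an InitConfiguration from a template; the struct and template below are illustrative, not minikube's actual generator:

// kubeadmcfg.go: render an InitConfiguration fragment from a template.
package main

import (
    "os"
    "text/template"
)

const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
  taints: []
`

type params struct {
    NodeIP        string
    APIServerPort int
    NodeName      string
}

func main() {
    t := template.Must(template.New("init").Parse(initCfg))
    if err := t.Execute(os.Stdout, params{
        NodeIP:        "192.168.72.190",
        APIServerPort: 8444,
        NodeName:      "default-k8s-diff-port-553719",
    }); err != nil {
        panic(err)
    }
}
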
	I0920 18:18:00.574954   75264 ssh_runner.go:195] Run: grep 192.168.72.190	control-plane.minikube.internal$ /etc/hosts
	I0920 18:18:00.578951   75264 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.190	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 18:18:00.592592   75264 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:18:00.712035   75264 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:18:00.728657   75264 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/default-k8s-diff-port-553719 for IP: 192.168.72.190
	I0920 18:18:00.728684   75264 certs.go:194] generating shared ca certs ...
	I0920 18:18:00.728706   75264 certs.go:226] acquiring lock for ca certs: {Name:mkc7ef6c737c6bdc3fdd9dcff8f57029c020d8f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:18:00.728877   75264 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key
	I0920 18:18:00.728937   75264 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key
	I0920 18:18:00.728948   75264 certs.go:256] generating profile certs ...
	I0920 18:18:00.729055   75264 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/default-k8s-diff-port-553719/client.key
	I0920 18:18:00.729151   75264 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/default-k8s-diff-port-553719/apiserver.key.cc1f66e7
	I0920 18:18:00.729205   75264 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/default-k8s-diff-port-553719/proxy-client.key
	I0920 18:18:00.729368   75264 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973.pem (1338 bytes)
	W0920 18:18:00.729415   75264 certs.go:480] ignoring /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973_empty.pem, impossibly tiny 0 bytes
	I0920 18:18:00.729425   75264 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 18:18:00.729453   75264 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem (1082 bytes)
	I0920 18:18:00.729474   75264 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem (1123 bytes)
	I0920 18:18:00.729501   75264 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem (1675 bytes)
	I0920 18:18:00.729538   75264 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem (1708 bytes)
	I0920 18:18:00.730192   75264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 18:18:00.775634   75264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 18:18:00.812006   75264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 18:18:00.846904   75264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0920 18:18:00.877908   75264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/default-k8s-diff-port-553719/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0920 18:18:00.910057   75264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/default-k8s-diff-port-553719/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0920 18:18:00.935377   75264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/default-k8s-diff-port-553719/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 18:18:00.960143   75264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/default-k8s-diff-port-553719/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0920 18:18:00.983906   75264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 18:18:01.007105   75264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973.pem --> /usr/share/ca-certificates/15973.pem (1338 bytes)
	I0920 18:18:01.030331   75264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem --> /usr/share/ca-certificates/159732.pem (1708 bytes)
	I0920 18:18:01.055976   75264 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 18:18:01.073892   75264 ssh_runner.go:195] Run: openssl version
	I0920 18:18:01.081246   75264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 18:18:01.092174   75264 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:18:01.096541   75264 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:18:01.096595   75264 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:18:01.103564   75264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 18:18:01.117697   75264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15973.pem && ln -fs /usr/share/ca-certificates/15973.pem /etc/ssl/certs/15973.pem"
	I0920 18:18:01.130349   75264 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15973.pem
	I0920 18:18:01.135397   75264 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 17:04 /usr/share/ca-certificates/15973.pem
	I0920 18:18:01.135471   75264 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15973.pem
	I0920 18:18:01.141399   75264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15973.pem /etc/ssl/certs/51391683.0"
	I0920 18:18:01.153123   75264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/159732.pem && ln -fs /usr/share/ca-certificates/159732.pem /etc/ssl/certs/159732.pem"
	I0920 18:18:01.165157   75264 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/159732.pem
	I0920 18:18:01.170596   75264 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 17:04 /usr/share/ca-certificates/159732.pem
	I0920 18:18:01.170678   75264 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/159732.pem
	I0920 18:18:01.176534   75264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/159732.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 18:18:01.188576   75264 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 18:18:01.193401   75264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 18:18:01.199557   75264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 18:18:01.205410   75264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 18:18:01.211296   75264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 18:18:01.216953   75264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 18:18:01.223417   75264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
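
Each `openssl x509 -checkend 86400` call above asks whether a certificate expires within the next 24 hours, which the caller can use to decide whether regeneration is needed before starting the cluster. A small Go equivalent using crypto/x509 (the certificate path in main is a placeholder):

// checkend.go: a Go counterpart to `openssl x509 -checkend 86400`.
package main

import (
    "crypto/x509"
    "encoding/pem"
    "fmt"
    "log"
    "os"
    "time"
)

// expiresWithin reports whether the PEM-encoded certificate at path expires
// within d of now, mirroring the -checkend semantics.
func expiresWithin(path string, d time.Duration) (bool, error) {
    data, err := os.ReadFile(path)
    if err != nil {
        return false, err
    }
    block, _ := pem.Decode(data)
    if block == nil {
        return false, fmt.Errorf("%s: no PEM block found", path)
    }
    cert, err := x509.ParseCertificate(block.Bytes)
    if err != nil {
        return false, err
    }
    return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
    soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 86400*time.Second)
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println("expires within 24h:", soon)
}
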
	I0920 18:18:01.229172   75264 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-553719 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-553719 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.190 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:18:01.229428   75264 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 18:18:01.229488   75264 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 18:18:01.270537   75264 cri.go:89] found id: ""
	I0920 18:18:01.270610   75264 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 18:18:01.284566   75264 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0920 18:18:01.284588   75264 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0920 18:18:01.284638   75264 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0920 18:18:01.297884   75264 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0920 18:18:01.298879   75264 kubeconfig.go:125] found "default-k8s-diff-port-553719" server: "https://192.168.72.190:8444"
	I0920 18:18:01.300996   75264 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0920 18:18:01.312183   75264 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.190
	I0920 18:18:01.312251   75264 kubeadm.go:1160] stopping kube-system containers ...
	I0920 18:18:01.312266   75264 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0920 18:18:01.312324   75264 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 18:18:01.358873   75264 cri.go:89] found id: ""
	I0920 18:18:01.358959   75264 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0920 18:18:01.377027   75264 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 18:18:01.386872   75264 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 18:18:01.386890   75264 kubeadm.go:157] found existing configuration files:
	
	I0920 18:18:01.386931   75264 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0920 18:18:01.396044   75264 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 18:18:01.396118   75264 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 18:18:01.405783   75264 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0920 18:18:01.415296   75264 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 18:18:01.415367   75264 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 18:18:01.427782   75264 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0920 18:18:01.438838   75264 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 18:18:01.438897   75264 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 18:18:01.450255   75264 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0920 18:18:01.461237   75264 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 18:18:01.461313   75264 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 18:18:01.472543   75264 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 18:18:01.483456   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:18:01.607155   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:17:57.908137   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:17:57.908561   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:17:57.908589   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:17:57.908522   76990 retry.go:31] will retry after 749.044696ms: waiting for machine to come up
	I0920 18:17:58.658869   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:17:58.659393   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:17:58.659424   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:17:58.659342   76990 retry.go:31] will retry after 949.936088ms: waiting for machine to come up
	I0920 18:17:59.610936   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:17:59.611467   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:17:59.611513   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:17:59.611430   76990 retry.go:31] will retry after 762.437104ms: waiting for machine to come up
	I0920 18:18:00.375768   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:00.376207   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:18:00.376235   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:18:00.376152   76990 retry.go:31] will retry after 1.228102027s: waiting for machine to come up
	I0920 18:18:01.606490   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:01.606958   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:18:01.606982   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:18:01.606907   76990 retry.go:31] will retry after 1.383186524s: waiting for machine to come up
	I0920 18:18:00.169862   75086 pod_ready.go:103] pod "kube-apiserver-embed-certs-768431" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:01.669351   75086 pod_ready.go:93] pod "kube-apiserver-embed-certs-768431" in "kube-system" namespace has status "Ready":"True"
	I0920 18:18:01.669378   75086 pod_ready.go:82] duration metric: took 3.507078668s for pod "kube-apiserver-embed-certs-768431" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:01.669392   75086 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-768431" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:01.674981   75086 pod_ready.go:93] pod "kube-controller-manager-embed-certs-768431" in "kube-system" namespace has status "Ready":"True"
	I0920 18:18:01.675006   75086 pod_ready.go:82] duration metric: took 5.604682ms for pod "kube-controller-manager-embed-certs-768431" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:01.675018   75086 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-cshjm" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:01.680522   75086 pod_ready.go:93] pod "kube-proxy-cshjm" in "kube-system" namespace has status "Ready":"True"
	I0920 18:18:01.680547   75086 pod_ready.go:82] duration metric: took 5.521295ms for pod "kube-proxy-cshjm" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:01.680559   75086 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-768431" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:01.685097   75086 pod_ready.go:93] pod "kube-scheduler-embed-certs-768431" in "kube-system" namespace has status "Ready":"True"
	I0920 18:18:01.685120   75086 pod_ready.go:82] duration metric: took 4.551724ms for pod "kube-scheduler-embed-certs-768431" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:01.685131   75086 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:03.693234   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:02.268120   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:18:02.471363   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:18:02.530979   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
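The five "kubeadm init phase" invocations above (certs, kubeconfig, kubelet-start, control-plane, etcd) are how the existing control plane is regenerated during a restart without running a full "kubeadm init". As a rough illustration only, assuming the same binary path and config file shown in the logged commands (this is not minikube's actual implementation), the sequence could be reproduced like this:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Phase order taken from the log above; paths are assumptions copied from it.
    	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
    	for _, phase := range phases {
    		cmd := fmt.Sprintf(
    			`sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`,
    			phase,
    		)
    		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    		fmt.Printf("kubeadm init phase %s:\n%s\n", phase, out)
    		if err != nil {
    			panic(err) // a failed phase leaves the control plane partially restored
    		}
    	}
    }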
	I0920 18:18:02.580339   75264 api_server.go:52] waiting for apiserver process to appear ...
	I0920 18:18:02.580455   75264 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:03.081156   75264 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:03.580564   75264 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:04.080991   75264 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:04.095108   75264 api_server.go:72] duration metric: took 1.514770034s to wait for apiserver process to appear ...
	I0920 18:18:04.095139   75264 api_server.go:88] waiting for apiserver healthz status ...
	I0920 18:18:04.095164   75264 api_server.go:253] Checking apiserver healthz at https://192.168.72.190:8444/healthz ...
	I0920 18:18:04.095704   75264 api_server.go:269] stopped: https://192.168.72.190:8444/healthz: Get "https://192.168.72.190:8444/healthz": dial tcp 192.168.72.190:8444: connect: connection refused
	I0920 18:18:04.595270   75264 api_server.go:253] Checking apiserver healthz at https://192.168.72.190:8444/healthz ...
	I0920 18:18:06.378496   75264 api_server.go:279] https://192.168.72.190:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 18:18:06.378532   75264 api_server.go:103] status: https://192.168.72.190:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 18:18:06.378550   75264 api_server.go:253] Checking apiserver healthz at https://192.168.72.190:8444/healthz ...
	I0920 18:18:06.425353   75264 api_server.go:279] https://192.168.72.190:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 18:18:06.425386   75264 api_server.go:103] status: https://192.168.72.190:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 18:18:06.595786   75264 api_server.go:253] Checking apiserver healthz at https://192.168.72.190:8444/healthz ...
	I0920 18:18:06.602114   75264 api_server.go:279] https://192.168.72.190:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 18:18:06.602148   75264 api_server.go:103] status: https://192.168.72.190:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 18:18:07.096095   75264 api_server.go:253] Checking apiserver healthz at https://192.168.72.190:8444/healthz ...
	I0920 18:18:07.102320   75264 api_server.go:279] https://192.168.72.190:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 18:18:07.102348   75264 api_server.go:103] status: https://192.168.72.190:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 18:18:07.595804   75264 api_server.go:253] Checking apiserver healthz at https://192.168.72.190:8444/healthz ...
	I0920 18:18:07.600025   75264 api_server.go:279] https://192.168.72.190:8444/healthz returned 200:
	ok
	I0920 18:18:07.606598   75264 api_server.go:141] control plane version: v1.31.1
	I0920 18:18:07.606626   75264 api_server.go:131] duration metric: took 3.511479158s to wait for apiserver health ...
	I0920 18:18:07.606637   75264 cni.go:84] Creating CNI manager for ""
	I0920 18:18:07.606645   75264 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 18:18:07.608423   75264 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
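The healthz polling above goes through the typical phases of an apiserver restart: first "connection refused" while the process starts, then 403 because the unauthenticated probe is rejected until RBAC bootstrap roles exist, then 500 while post-start hooks finish, and finally 200. A minimal sketch of such a poller, assuming an unauthenticated HTTPS probe with certificate verification disabled and a roughly 500ms retry cadence (both assumptions, not minikube's exact code):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // waitForHealthz polls url until it returns HTTP 200, tolerating connection
    // errors and 403/500 responses while the control plane finishes bootstrapping.
    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// The probe is anonymous, so server certificate verification is skipped here.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil // apiserver answered "ok"
    			}
    			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
    		} else {
    			fmt.Printf("healthz not reachable yet: %v\n", err)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
    }

    func main() {
    	if err := waitForHealthz("https://192.168.72.190:8444/healthz", 4*time.Minute); err != nil {
    		panic(err)
    	}
    }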
	I0920 18:18:02.992412   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:02.992887   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:18:02.992919   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:18:02.992822   76990 retry.go:31] will retry after 2.100326569s: waiting for machine to come up
	I0920 18:18:05.095088   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:05.095546   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:18:05.095569   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:18:05.095507   76990 retry.go:31] will retry after 1.758181729s: waiting for machine to come up
	I0920 18:18:06.855172   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:06.855654   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:18:06.855678   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:18:06.855606   76990 retry.go:31] will retry after 2.478116743s: waiting for machine to come up
	I0920 18:18:05.694350   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:08.191644   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:07.609444   75264 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 18:18:07.620420   75264 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0920 18:18:07.637796   75264 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 18:18:07.652676   75264 system_pods.go:59] 8 kube-system pods found
	I0920 18:18:07.652710   75264 system_pods.go:61] "coredns-7c65d6cfc9-dmdfb" [8e4ffdb3-37e0-4498-bdf5-d6e7dbcad020] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0920 18:18:07.652721   75264 system_pods.go:61] "etcd-default-k8s-diff-port-553719" [e8de0e96-5f3a-4f3f-ae69-c5da7d3c2eb7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0920 18:18:07.652727   75264 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-553719" [5164e26c-7fcd-41dc-865f-429046a9ad61] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0920 18:18:07.652735   75264 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-553719" [b2c00a76-9645-4c63-9812-43e6ed9f1d4f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0920 18:18:07.652741   75264 system_pods.go:61] "kube-proxy-p9crq" [83e0f53d-6960-42c4-904d-ea85ba9160f4] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0920 18:18:07.652746   75264 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-553719" [42155f7b-95bb-467b-acf5-89eb4f80cf76] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0920 18:18:07.652751   75264 system_pods.go:61] "metrics-server-6867b74b74-vtl79" [29e0b6eb-22a9-4e37-97f9-83b48cc38193] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 18:18:07.652760   75264 system_pods.go:61] "storage-provisioner" [6fad2d07-f99e-45ac-9657-bce6d73d7fce] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0920 18:18:07.652769   75264 system_pods.go:74] duration metric: took 14.950839ms to wait for pod list to return data ...
	I0920 18:18:07.652780   75264 node_conditions.go:102] verifying NodePressure condition ...
	I0920 18:18:07.658890   75264 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 18:18:07.658927   75264 node_conditions.go:123] node cpu capacity is 2
	I0920 18:18:07.658941   75264 node_conditions.go:105] duration metric: took 6.15496ms to run NodePressure ...
	I0920 18:18:07.658964   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:18:07.932231   75264 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0920 18:18:07.936474   75264 kubeadm.go:739] kubelet initialised
	I0920 18:18:07.936500   75264 kubeadm.go:740] duration metric: took 4.236071ms waiting for restarted kubelet to initialise ...
	I0920 18:18:07.936507   75264 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:18:07.941918   75264 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-dmdfb" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:07.947609   75264 pod_ready.go:98] node "default-k8s-diff-port-553719" hosting pod "coredns-7c65d6cfc9-dmdfb" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-553719" has status "Ready":"False"
	I0920 18:18:07.947642   75264 pod_ready.go:82] duration metric: took 5.693968ms for pod "coredns-7c65d6cfc9-dmdfb" in "kube-system" namespace to be "Ready" ...
	E0920 18:18:07.947653   75264 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-553719" hosting pod "coredns-7c65d6cfc9-dmdfb" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-553719" has status "Ready":"False"
	I0920 18:18:07.947660   75264 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-553719" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:07.954317   75264 pod_ready.go:98] node "default-k8s-diff-port-553719" hosting pod "etcd-default-k8s-diff-port-553719" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-553719" has status "Ready":"False"
	I0920 18:18:07.954358   75264 pod_ready.go:82] duration metric: took 6.689603ms for pod "etcd-default-k8s-diff-port-553719" in "kube-system" namespace to be "Ready" ...
	E0920 18:18:07.954373   75264 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-553719" hosting pod "etcd-default-k8s-diff-port-553719" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-553719" has status "Ready":"False"
	I0920 18:18:07.954382   75264 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-553719" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:07.966099   75264 pod_ready.go:98] node "default-k8s-diff-port-553719" hosting pod "kube-apiserver-default-k8s-diff-port-553719" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-553719" has status "Ready":"False"
	I0920 18:18:07.966126   75264 pod_ready.go:82] duration metric: took 11.735727ms for pod "kube-apiserver-default-k8s-diff-port-553719" in "kube-system" namespace to be "Ready" ...
	E0920 18:18:07.966141   75264 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-553719" hosting pod "kube-apiserver-default-k8s-diff-port-553719" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-553719" has status "Ready":"False"
	I0920 18:18:07.966154   75264 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-553719" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:08.041768   75264 pod_ready.go:98] node "default-k8s-diff-port-553719" hosting pod "kube-controller-manager-default-k8s-diff-port-553719" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-553719" has status "Ready":"False"
	I0920 18:18:08.041794   75264 pod_ready.go:82] duration metric: took 75.630916ms for pod "kube-controller-manager-default-k8s-diff-port-553719" in "kube-system" namespace to be "Ready" ...
	E0920 18:18:08.041805   75264 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-553719" hosting pod "kube-controller-manager-default-k8s-diff-port-553719" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-553719" has status "Ready":"False"
	I0920 18:18:08.041816   75264 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-p9crq" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:08.440423   75264 pod_ready.go:98] node "default-k8s-diff-port-553719" hosting pod "kube-proxy-p9crq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-553719" has status "Ready":"False"
	I0920 18:18:08.440459   75264 pod_ready.go:82] duration metric: took 398.635469ms for pod "kube-proxy-p9crq" in "kube-system" namespace to be "Ready" ...
	E0920 18:18:08.440471   75264 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-553719" hosting pod "kube-proxy-p9crq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-553719" has status "Ready":"False"
	I0920 18:18:08.440480   75264 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-553719" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:08.841917   75264 pod_ready.go:98] node "default-k8s-diff-port-553719" hosting pod "kube-scheduler-default-k8s-diff-port-553719" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-553719" has status "Ready":"False"
	I0920 18:18:08.841953   75264 pod_ready.go:82] duration metric: took 401.459059ms for pod "kube-scheduler-default-k8s-diff-port-553719" in "kube-system" namespace to be "Ready" ...
	E0920 18:18:08.841968   75264 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-553719" hosting pod "kube-scheduler-default-k8s-diff-port-553719" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-553719" has status "Ready":"False"
	I0920 18:18:08.841977   75264 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:09.241198   75264 pod_ready.go:98] node "default-k8s-diff-port-553719" hosting pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-553719" has status "Ready":"False"
	I0920 18:18:09.241225   75264 pod_ready.go:82] duration metric: took 399.238784ms for pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace to be "Ready" ...
	E0920 18:18:09.241237   75264 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-553719" hosting pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-553719" has status "Ready":"False"
	I0920 18:18:09.241245   75264 pod_ready.go:39] duration metric: took 1.304729447s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:18:09.241259   75264 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 18:18:09.253519   75264 ops.go:34] apiserver oom_adj: -16
	I0920 18:18:09.253546   75264 kubeadm.go:597] duration metric: took 7.96895197s to restartPrimaryControlPlane
	I0920 18:18:09.253558   75264 kubeadm.go:394] duration metric: took 8.024395552s to StartCluster
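Each "waiting up to 4m0s for pod ... to be Ready" step above boils down to polling the pod's Ready condition through the API server. A minimal client-go sketch of that check follows; the kubeconfig path, namespace, pod name, and poll interval are placeholders taken from or inspired by the log, not minikube's own wait loop:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's Ready condition is True.
    func isPodReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19672-8777/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	deadline := time.Now().Add(4 * time.Minute) // same 4m0s budget as the log
    	for time.Now().Before(deadline) {
    		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7c65d6cfc9-dmdfb", metav1.GetOptions{})
    		if err == nil && isPodReady(pod) {
    			fmt.Println("pod is Ready")
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Println("timed out waiting for pod to become Ready")
    }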
	I0920 18:18:09.253586   75264 settings.go:142] acquiring lock: {Name:mk2a13c58cdc0faf8cddca5d6716175d45db9bfd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:18:09.253682   75264 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19672-8777/kubeconfig
	I0920 18:18:09.255907   75264 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/kubeconfig: {Name:mkf32a4c736808e023459b2f0e40188618a38db1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:18:09.256208   75264 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.190 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 18:18:09.256322   75264 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0920 18:18:09.256394   75264 config.go:182] Loaded profile config "default-k8s-diff-port-553719": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:18:09.256420   75264 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-553719"
	I0920 18:18:09.256428   75264 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-553719"
	I0920 18:18:09.256440   75264 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-553719"
	I0920 18:18:09.256448   75264 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-553719"
	I0920 18:18:09.256457   75264 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-553719"
	W0920 18:18:09.256468   75264 addons.go:243] addon metrics-server should already be in state true
	I0920 18:18:09.256496   75264 host.go:66] Checking if "default-k8s-diff-port-553719" exists ...
	I0920 18:18:09.256441   75264 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-553719"
	W0920 18:18:09.256569   75264 addons.go:243] addon storage-provisioner should already be in state true
	I0920 18:18:09.256602   75264 host.go:66] Checking if "default-k8s-diff-port-553719" exists ...
	I0920 18:18:09.256814   75264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:18:09.256844   75264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:18:09.256882   75264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:18:09.256926   75264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:18:09.256930   75264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:18:09.256957   75264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:18:09.258738   75264 out.go:177] * Verifying Kubernetes components...
	I0920 18:18:09.259893   75264 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:18:09.272294   75264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45635
	I0920 18:18:09.272357   75264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39993
	I0920 18:18:09.272304   75264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32923
	I0920 18:18:09.272766   75264 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:18:09.272889   75264 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:18:09.272918   75264 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:18:09.273279   75264 main.go:141] libmachine: Using API Version  1
	I0920 18:18:09.273294   75264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:18:09.273416   75264 main.go:141] libmachine: Using API Version  1
	I0920 18:18:09.273438   75264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:18:09.273457   75264 main.go:141] libmachine: Using API Version  1
	I0920 18:18:09.273478   75264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:18:09.273657   75264 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:18:09.273850   75264 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:18:09.273922   75264 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:18:09.274017   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetState
	I0920 18:18:09.274244   75264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:18:09.274287   75264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:18:09.274399   75264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:18:09.274430   75264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:18:09.277535   75264 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-553719"
	W0920 18:18:09.277556   75264 addons.go:243] addon default-storageclass should already be in state true
	I0920 18:18:09.277586   75264 host.go:66] Checking if "default-k8s-diff-port-553719" exists ...
	I0920 18:18:09.277990   75264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:18:09.278041   75264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:18:09.290955   75264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39845
	I0920 18:18:09.291487   75264 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:18:09.292058   75264 main.go:141] libmachine: Using API Version  1
	I0920 18:18:09.292087   75264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:18:09.292409   75264 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:18:09.292607   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetState
	I0920 18:18:09.293351   75264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44575
	I0920 18:18:09.293790   75264 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:18:09.294056   75264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44517
	I0920 18:18:09.294412   75264 main.go:141] libmachine: Using API Version  1
	I0920 18:18:09.294438   75264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:18:09.294540   75264 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:18:09.294854   75264 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:18:09.294902   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .DriverName
	I0920 18:18:09.295362   75264 main.go:141] libmachine: Using API Version  1
	I0920 18:18:09.295387   75264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:18:09.295521   75264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:18:09.295569   75264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:18:09.295783   75264 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:18:09.295993   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetState
	I0920 18:18:09.297231   75264 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0920 18:18:09.298214   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .DriverName
	I0920 18:18:09.298825   75264 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0920 18:18:09.298849   75264 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0920 18:18:09.298870   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHHostname
	I0920 18:18:09.299849   75264 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:18:09.301018   75264 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 18:18:09.301034   75264 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 18:18:09.301048   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHHostname
	I0920 18:18:09.302841   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:18:09.303141   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:18:09.303165   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:18:09.303335   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHPort
	I0920 18:18:09.303491   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHKeyPath
	I0920 18:18:09.303627   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHUsername
	I0920 18:18:09.303772   75264 sshutil.go:53] new ssh client: &{IP:192.168.72.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/default-k8s-diff-port-553719/id_rsa Username:docker}
	I0920 18:18:09.304104   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:18:09.304507   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:18:09.304552   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:18:09.304623   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHPort
	I0920 18:18:09.304786   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHKeyPath
	I0920 18:18:09.304946   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHUsername
	I0920 18:18:09.305085   75264 sshutil.go:53] new ssh client: &{IP:192.168.72.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/default-k8s-diff-port-553719/id_rsa Username:docker}
	I0920 18:18:09.312912   75264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35085
	I0920 18:18:09.313277   75264 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:18:09.313780   75264 main.go:141] libmachine: Using API Version  1
	I0920 18:18:09.313801   75264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:18:09.314355   75264 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:18:09.314525   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetState
	I0920 18:18:09.316088   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .DriverName
	I0920 18:18:09.316415   75264 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 18:18:09.316429   75264 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 18:18:09.316448   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHHostname
	I0920 18:18:09.319116   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:18:09.319484   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:18:09.319512   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:18:09.319664   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHPort
	I0920 18:18:09.319832   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHKeyPath
	I0920 18:18:09.319984   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHUsername
	I0920 18:18:09.320098   75264 sshutil.go:53] new ssh client: &{IP:192.168.72.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/default-k8s-diff-port-553719/id_rsa Username:docker}
	I0920 18:18:09.457570   75264 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:18:09.476315   75264 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-553719" to be "Ready" ...
	I0920 18:18:09.599420   75264 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0920 18:18:09.599442   75264 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0920 18:18:09.600777   75264 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 18:18:09.621891   75264 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 18:18:09.626217   75264 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0920 18:18:09.626251   75264 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0920 18:18:09.675537   75264 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 18:18:09.675569   75264 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0920 18:18:09.723167   75264 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 18:18:10.813006   75264 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.191083617s)
	I0920 18:18:10.813080   75264 main.go:141] libmachine: Making call to close driver server
	I0920 18:18:10.813095   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .Close
	I0920 18:18:10.813403   75264 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:18:10.813452   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | Closing plugin on server side
	I0920 18:18:10.813455   75264 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:18:10.813497   75264 main.go:141] libmachine: Making call to close driver server
	I0920 18:18:10.813518   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .Close
	I0920 18:18:10.813748   75264 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:18:10.813764   75264 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:18:10.813783   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | Closing plugin on server side
	I0920 18:18:10.813975   75264 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.213167075s)
	I0920 18:18:10.814018   75264 main.go:141] libmachine: Making call to close driver server
	I0920 18:18:10.814034   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .Close
	I0920 18:18:10.814323   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | Closing plugin on server side
	I0920 18:18:10.814325   75264 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:18:10.814345   75264 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:18:10.814358   75264 main.go:141] libmachine: Making call to close driver server
	I0920 18:18:10.814368   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .Close
	I0920 18:18:10.814653   75264 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:18:10.814673   75264 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:18:10.821768   75264 main.go:141] libmachine: Making call to close driver server
	I0920 18:18:10.821785   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .Close
	I0920 18:18:10.822016   75264 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:18:10.822034   75264 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:18:10.846062   75264 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.122849108s)
	I0920 18:18:10.846122   75264 main.go:141] libmachine: Making call to close driver server
	I0920 18:18:10.846139   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .Close
	I0920 18:18:10.846441   75264 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:18:10.846469   75264 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:18:10.846469   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | Closing plugin on server side
	I0920 18:18:10.846479   75264 main.go:141] libmachine: Making call to close driver server
	I0920 18:18:10.846488   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .Close
	I0920 18:18:10.846715   75264 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:18:10.846737   75264 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:18:10.846748   75264 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-553719"
	I0920 18:18:10.846749   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | Closing plugin on server side
	I0920 18:18:10.848702   75264 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0920 18:18:10.849990   75264 addons.go:510] duration metric: took 1.593667614s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0920 18:18:11.480490   75264 node_ready.go:53] node "default-k8s-diff-port-553719" has status "Ready":"False"
	I0920 18:18:09.334928   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:09.335367   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:18:09.335402   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:18:09.335314   76990 retry.go:31] will retry after 4.194120768s: waiting for machine to come up
	I0920 18:18:10.192078   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:12.192186   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:14.778926   74753 start.go:364] duration metric: took 54.584395971s to acquireMachinesLock for "no-preload-956403"
	I0920 18:18:14.778979   74753 start.go:96] Skipping create...Using existing machine configuration
	I0920 18:18:14.778987   74753 fix.go:54] fixHost starting: 
	I0920 18:18:14.779392   74753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:18:14.779430   74753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:18:14.796004   74753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45761
	I0920 18:18:14.796509   74753 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:18:14.797006   74753 main.go:141] libmachine: Using API Version  1
	I0920 18:18:14.797028   74753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:18:14.797330   74753 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:18:14.797499   74753 main.go:141] libmachine: (no-preload-956403) Calling .DriverName
	I0920 18:18:14.797650   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetState
	I0920 18:18:14.799376   74753 fix.go:112] recreateIfNeeded on no-preload-956403: state=Stopped err=<nil>
	I0920 18:18:14.799400   74753 main.go:141] libmachine: (no-preload-956403) Calling .DriverName
	W0920 18:18:14.799564   74753 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 18:18:14.802451   74753 out.go:177] * Restarting existing kvm2 VM for "no-preload-956403" ...
	I0920 18:18:13.533702   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:13.534281   75577 main.go:141] libmachine: (old-k8s-version-744025) Found IP for machine: 192.168.39.207
	I0920 18:18:13.534309   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has current primary IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:13.534314   75577 main.go:141] libmachine: (old-k8s-version-744025) Reserving static IP address...
	I0920 18:18:13.534725   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "old-k8s-version-744025", mac: "52:54:00:e5:57:41", ip: "192.168.39.207"} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:13.534761   75577 main.go:141] libmachine: (old-k8s-version-744025) Reserved static IP address: 192.168.39.207
	I0920 18:18:13.534783   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | skip adding static IP to network mk-old-k8s-version-744025 - found existing host DHCP lease matching {name: "old-k8s-version-744025", mac: "52:54:00:e5:57:41", ip: "192.168.39.207"}
	I0920 18:18:13.534800   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | Getting to WaitForSSH function...
	I0920 18:18:13.534816   75577 main.go:141] libmachine: (old-k8s-version-744025) Waiting for SSH to be available...
	I0920 18:18:13.536879   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:13.537271   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:13.537301   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:13.537391   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | Using SSH client type: external
	I0920 18:18:13.537439   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | Using SSH private key: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/old-k8s-version-744025/id_rsa (-rw-------)
	I0920 18:18:13.537476   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.207 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19672-8777/.minikube/machines/old-k8s-version-744025/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 18:18:13.537486   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | About to run SSH command:
	I0920 18:18:13.537498   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | exit 0
	I0920 18:18:13.662214   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | SSH cmd err, output: <nil>: 
	I0920 18:18:13.662565   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetConfigRaw
	I0920 18:18:13.663269   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetIP
	I0920 18:18:13.666068   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:13.666530   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:13.666561   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:13.666899   75577 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/old-k8s-version-744025/config.json ...
	I0920 18:18:13.667111   75577 machine.go:93] provisionDockerMachine start ...
	I0920 18:18:13.667129   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .DriverName
	I0920 18:18:13.667331   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHHostname
	I0920 18:18:13.669347   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:13.669717   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:13.669743   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:13.669908   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHPort
	I0920 18:18:13.670167   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:18:13.670334   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:18:13.670453   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHUsername
	I0920 18:18:13.670583   75577 main.go:141] libmachine: Using SSH client type: native
	I0920 18:18:13.670759   75577 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.207 22 <nil> <nil>}
	I0920 18:18:13.670770   75577 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 18:18:13.774059   75577 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0920 18:18:13.774093   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetMachineName
	I0920 18:18:13.774406   75577 buildroot.go:166] provisioning hostname "old-k8s-version-744025"
	I0920 18:18:13.774434   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetMachineName
	I0920 18:18:13.774618   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHHostname
	I0920 18:18:13.777175   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:13.777482   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:13.777513   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:13.777633   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHPort
	I0920 18:18:13.777803   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:18:13.777966   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:18:13.778082   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHUsername
	I0920 18:18:13.778235   75577 main.go:141] libmachine: Using SSH client type: native
	I0920 18:18:13.778404   75577 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.207 22 <nil> <nil>}
	I0920 18:18:13.778417   75577 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-744025 && echo "old-k8s-version-744025" | sudo tee /etc/hostname
	I0920 18:18:13.900180   75577 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-744025
	
	I0920 18:18:13.900224   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHHostname
	I0920 18:18:13.902958   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:13.903309   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:13.903340   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:13.903543   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHPort
	I0920 18:18:13.903762   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:18:13.903931   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:18:13.904051   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHUsername
	I0920 18:18:13.904214   75577 main.go:141] libmachine: Using SSH client type: native
	I0920 18:18:13.904393   75577 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.207 22 <nil> <nil>}
	I0920 18:18:13.904409   75577 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-744025' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-744025/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-744025' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 18:18:14.023748   75577 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 18:18:14.023781   75577 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19672-8777/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-8777/.minikube}
	I0920 18:18:14.023834   75577 buildroot.go:174] setting up certificates
	I0920 18:18:14.023851   75577 provision.go:84] configureAuth start
	I0920 18:18:14.023866   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetMachineName
	I0920 18:18:14.024154   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetIP
	I0920 18:18:14.027240   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.027640   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:14.027778   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.027867   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHHostname
	I0920 18:18:14.030383   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.030741   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:14.030765   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.030915   75577 provision.go:143] copyHostCerts
	I0920 18:18:14.030979   75577 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem, removing ...
	I0920 18:18:14.030999   75577 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem
	I0920 18:18:14.031072   75577 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem (1082 bytes)
	I0920 18:18:14.031188   75577 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem, removing ...
	I0920 18:18:14.031205   75577 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem
	I0920 18:18:14.031240   75577 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem (1123 bytes)
	I0920 18:18:14.031340   75577 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem, removing ...
	I0920 18:18:14.031351   75577 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem
	I0920 18:18:14.031378   75577 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem (1675 bytes)
	I0920 18:18:14.031455   75577 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-744025 san=[127.0.0.1 192.168.39.207 localhost minikube old-k8s-version-744025]
	I0920 18:18:14.140775   75577 provision.go:177] copyRemoteCerts
	I0920 18:18:14.140847   75577 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 18:18:14.140883   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHHostname
	I0920 18:18:14.143599   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.144016   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:14.144062   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.144199   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHPort
	I0920 18:18:14.144377   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:18:14.144526   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHUsername
	I0920 18:18:14.144656   75577 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/old-k8s-version-744025/id_rsa Username:docker}
	I0920 18:18:14.228286   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 18:18:14.257293   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0920 18:18:14.281335   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0920 18:18:14.305736   75577 provision.go:87] duration metric: took 281.853458ms to configureAuth
	I0920 18:18:14.305762   75577 buildroot.go:189] setting minikube options for container-runtime
	I0920 18:18:14.305983   75577 config.go:182] Loaded profile config "old-k8s-version-744025": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0920 18:18:14.306076   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHHostname
	I0920 18:18:14.308833   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.309220   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:14.309244   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.309535   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHPort
	I0920 18:18:14.309779   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:18:14.309974   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:18:14.310124   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHUsername
	I0920 18:18:14.310316   75577 main.go:141] libmachine: Using SSH client type: native
	I0920 18:18:14.310535   75577 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.207 22 <nil> <nil>}
	I0920 18:18:14.310551   75577 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 18:18:14.536416   75577 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 18:18:14.536442   75577 machine.go:96] duration metric: took 869.318772ms to provisionDockerMachine
	I0920 18:18:14.536456   75577 start.go:293] postStartSetup for "old-k8s-version-744025" (driver="kvm2")
	I0920 18:18:14.536468   75577 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 18:18:14.536510   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .DriverName
	I0920 18:18:14.536803   75577 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 18:18:14.536831   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHHostname
	I0920 18:18:14.539503   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.539923   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:14.539951   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.540126   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHPort
	I0920 18:18:14.540303   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:18:14.540463   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHUsername
	I0920 18:18:14.540649   75577 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/old-k8s-version-744025/id_rsa Username:docker}
	I0920 18:18:14.625866   75577 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 18:18:14.630527   75577 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 18:18:14.630551   75577 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8777/.minikube/addons for local assets ...
	I0920 18:18:14.630637   75577 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8777/.minikube/files for local assets ...
	I0920 18:18:14.630727   75577 filesync.go:149] local asset: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem -> 159732.pem in /etc/ssl/certs
	I0920 18:18:14.630840   75577 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 18:18:14.640965   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem --> /etc/ssl/certs/159732.pem (1708 bytes)
	I0920 18:18:14.668227   75577 start.go:296] duration metric: took 131.756559ms for postStartSetup
	I0920 18:18:14.668268   75577 fix.go:56] duration metric: took 19.581104117s for fixHost
	I0920 18:18:14.668295   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHHostname
	I0920 18:18:14.671138   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.671520   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:14.671549   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.671777   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHPort
	I0920 18:18:14.671981   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:18:14.672141   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:18:14.672280   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHUsername
	I0920 18:18:14.672436   75577 main.go:141] libmachine: Using SSH client type: native
	I0920 18:18:14.672606   75577 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.207 22 <nil> <nil>}
	I0920 18:18:14.672616   75577 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 18:18:14.778752   75577 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726856294.753230182
	
	I0920 18:18:14.778785   75577 fix.go:216] guest clock: 1726856294.753230182
	I0920 18:18:14.778796   75577 fix.go:229] Guest: 2024-09-20 18:18:14.753230182 +0000 UTC Remote: 2024-09-20 18:18:14.668273351 +0000 UTC m=+241.967312055 (delta=84.956831ms)
	I0920 18:18:14.778827   75577 fix.go:200] guest clock delta is within tolerance: 84.956831ms
	I0920 18:18:14.778836   75577 start.go:83] releasing machines lock for "old-k8s-version-744025", held for 19.691716285s
	I0920 18:18:14.778874   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .DriverName
	I0920 18:18:14.779135   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetIP
	I0920 18:18:14.781932   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.782357   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:14.782386   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.782572   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .DriverName
	I0920 18:18:14.783124   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .DriverName
	I0920 18:18:14.783328   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .DriverName
	I0920 18:18:14.783401   75577 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 18:18:14.783465   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHHostname
	I0920 18:18:14.783519   75577 ssh_runner.go:195] Run: cat /version.json
	I0920 18:18:14.783552   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHHostname
	I0920 18:18:14.786645   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.786817   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.786994   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:14.787023   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.787207   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:14.787259   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.787264   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHPort
	I0920 18:18:14.787404   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHPort
	I0920 18:18:14.787469   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:18:14.787561   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:18:14.787612   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHUsername
	I0920 18:18:14.787711   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHUsername
	I0920 18:18:14.787845   75577 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/old-k8s-version-744025/id_rsa Username:docker}
	I0920 18:18:14.787845   75577 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/old-k8s-version-744025/id_rsa Username:docker}
	I0920 18:18:14.872183   75577 ssh_runner.go:195] Run: systemctl --version
	I0920 18:18:14.913275   75577 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 18:18:15.071299   75577 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 18:18:15.078725   75577 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 18:18:15.078806   75577 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 18:18:15.101497   75577 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 18:18:15.101522   75577 start.go:495] detecting cgroup driver to use...
	I0920 18:18:15.101579   75577 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 18:18:15.120626   75577 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 18:18:15.137297   75577 docker.go:217] disabling cri-docker service (if available) ...
	I0920 18:18:15.137401   75577 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 18:18:15.152152   75577 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 18:18:15.167359   75577 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 18:18:15.288763   75577 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 18:18:15.475429   75577 docker.go:233] disabling docker service ...
	I0920 18:18:15.475512   75577 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 18:18:15.493331   75577 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 18:18:15.510326   75577 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 18:18:15.629119   75577 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 18:18:15.758923   75577 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 18:18:15.776778   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 18:18:15.798980   75577 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0920 18:18:15.799042   75577 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:18:15.810264   75577 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 18:18:15.810322   75577 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:18:15.821060   75577 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:18:15.832685   75577 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:18:15.843026   75577 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 18:18:15.853996   75577 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 18:18:15.864126   75577 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 18:18:15.864183   75577 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 18:18:15.878216   75577 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 18:18:15.889820   75577 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:18:16.015555   75577 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 18:18:16.118797   75577 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 18:18:16.118880   75577 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 18:18:16.124430   75577 start.go:563] Will wait 60s for crictl version
	I0920 18:18:16.124485   75577 ssh_runner.go:195] Run: which crictl
	I0920 18:18:16.128632   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 18:18:16.165596   75577 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 18:18:16.165707   75577 ssh_runner.go:195] Run: crio --version
	I0920 18:18:16.198772   75577 ssh_runner.go:195] Run: crio --version
	I0920 18:18:16.238511   75577 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0920 18:18:13.980331   75264 node_ready.go:53] node "default-k8s-diff-port-553719" has status "Ready":"False"
	I0920 18:18:15.981435   75264 node_ready.go:53] node "default-k8s-diff-port-553719" has status "Ready":"False"
	I0920 18:18:16.982412   75264 node_ready.go:49] node "default-k8s-diff-port-553719" has status "Ready":"True"
	I0920 18:18:16.982441   75264 node_ready.go:38] duration metric: took 7.506086526s for node "default-k8s-diff-port-553719" to be "Ready" ...
	I0920 18:18:16.982457   75264 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:18:16.989870   75264 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-dmdfb" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:14.803646   74753 main.go:141] libmachine: (no-preload-956403) Calling .Start
	I0920 18:18:14.803856   74753 main.go:141] libmachine: (no-preload-956403) Ensuring networks are active...
	I0920 18:18:14.804594   74753 main.go:141] libmachine: (no-preload-956403) Ensuring network default is active
	I0920 18:18:14.804920   74753 main.go:141] libmachine: (no-preload-956403) Ensuring network mk-no-preload-956403 is active
	I0920 18:18:14.805341   74753 main.go:141] libmachine: (no-preload-956403) Getting domain xml...
	I0920 18:18:14.806068   74753 main.go:141] libmachine: (no-preload-956403) Creating domain...
	I0920 18:18:16.196663   74753 main.go:141] libmachine: (no-preload-956403) Waiting to get IP...
	I0920 18:18:16.197381   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:16.197762   74753 main.go:141] libmachine: (no-preload-956403) DBG | unable to find current IP address of domain no-preload-956403 in network mk-no-preload-956403
	I0920 18:18:16.197885   74753 main.go:141] libmachine: (no-preload-956403) DBG | I0920 18:18:16.197764   77223 retry.go:31] will retry after 214.087819ms: waiting for machine to come up
	I0920 18:18:16.413365   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:16.413951   74753 main.go:141] libmachine: (no-preload-956403) DBG | unable to find current IP address of domain no-preload-956403 in network mk-no-preload-956403
	I0920 18:18:16.413977   74753 main.go:141] libmachine: (no-preload-956403) DBG | I0920 18:18:16.413916   77223 retry.go:31] will retry after 249.35647ms: waiting for machine to come up
	I0920 18:18:16.665587   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:16.666168   74753 main.go:141] libmachine: (no-preload-956403) DBG | unable to find current IP address of domain no-preload-956403 in network mk-no-preload-956403
	I0920 18:18:16.666203   74753 main.go:141] libmachine: (no-preload-956403) DBG | I0920 18:18:16.666132   77223 retry.go:31] will retry after 374.598012ms: waiting for machine to come up
	I0920 18:18:17.042911   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:17.043594   74753 main.go:141] libmachine: (no-preload-956403) DBG | unable to find current IP address of domain no-preload-956403 in network mk-no-preload-956403
	I0920 18:18:17.043618   74753 main.go:141] libmachine: (no-preload-956403) DBG | I0920 18:18:17.043550   77223 retry.go:31] will retry after 536.252353ms: waiting for machine to come up
	I0920 18:18:17.581141   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:17.581582   74753 main.go:141] libmachine: (no-preload-956403) DBG | unable to find current IP address of domain no-preload-956403 in network mk-no-preload-956403
	I0920 18:18:17.581616   74753 main.go:141] libmachine: (no-preload-956403) DBG | I0920 18:18:17.581528   77223 retry.go:31] will retry after 459.241867ms: waiting for machine to come up
	I0920 18:18:16.239946   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetIP
	I0920 18:18:16.242727   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:16.243147   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:16.243184   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:16.243561   75577 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0920 18:18:16.247928   75577 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 18:18:16.262165   75577 kubeadm.go:883] updating cluster {Name:old-k8s-version-744025 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-744025 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.207 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 18:18:16.262310   75577 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0920 18:18:16.262358   75577 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:18:16.313771   75577 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0920 18:18:16.313854   75577 ssh_runner.go:195] Run: which lz4
	I0920 18:18:16.318361   75577 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 18:18:16.322529   75577 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 18:18:16.322570   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0920 18:18:14.192319   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:16.194161   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:18.699498   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:18.999985   75264 pod_ready.go:103] pod "coredns-7c65d6cfc9-dmdfb" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:21.497825   75264 pod_ready.go:103] pod "coredns-7c65d6cfc9-dmdfb" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:18.042075   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:18.042566   74753 main.go:141] libmachine: (no-preload-956403) DBG | unable to find current IP address of domain no-preload-956403 in network mk-no-preload-956403
	I0920 18:18:18.042603   74753 main.go:141] libmachine: (no-preload-956403) DBG | I0920 18:18:18.042533   77223 retry.go:31] will retry after 833.585895ms: waiting for machine to come up
	I0920 18:18:18.877534   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:18.878023   74753 main.go:141] libmachine: (no-preload-956403) DBG | unable to find current IP address of domain no-preload-956403 in network mk-no-preload-956403
	I0920 18:18:18.878047   74753 main.go:141] libmachine: (no-preload-956403) DBG | I0920 18:18:18.877988   77223 retry.go:31] will retry after 1.035805905s: waiting for machine to come up
	I0920 18:18:19.915316   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:19.915735   74753 main.go:141] libmachine: (no-preload-956403) DBG | unable to find current IP address of domain no-preload-956403 in network mk-no-preload-956403
	I0920 18:18:19.915859   74753 main.go:141] libmachine: (no-preload-956403) DBG | I0920 18:18:19.915787   77223 retry.go:31] will retry after 978.827371ms: waiting for machine to come up
	I0920 18:18:20.896532   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:20.897185   74753 main.go:141] libmachine: (no-preload-956403) DBG | unable to find current IP address of domain no-preload-956403 in network mk-no-preload-956403
	I0920 18:18:20.897216   74753 main.go:141] libmachine: (no-preload-956403) DBG | I0920 18:18:20.897122   77223 retry.go:31] will retry after 1.808853939s: waiting for machine to come up
	I0920 18:18:17.953626   75577 crio.go:462] duration metric: took 1.635321078s to copy over tarball
	I0920 18:18:17.953717   75577 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0920 18:18:21.049561   75577 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.095809102s)
	I0920 18:18:21.049588   75577 crio.go:469] duration metric: took 3.095926273s to extract the tarball
	I0920 18:18:21.049596   75577 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0920 18:18:21.093521   75577 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:18:21.132318   75577 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0920 18:18:21.132361   75577 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0920 18:18:21.132426   75577 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:18:21.132449   75577 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0920 18:18:21.132432   75577 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0920 18:18:21.132551   75577 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 18:18:21.132587   75577 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0920 18:18:21.132708   75577 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0920 18:18:21.132819   75577 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0920 18:18:21.132557   75577 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0920 18:18:21.134217   75577 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0920 18:18:21.134256   75577 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0920 18:18:21.134277   75577 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0920 18:18:21.134313   75577 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0920 18:18:21.134327   75577 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 18:18:21.134341   75577 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0920 18:18:21.134266   75577 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0920 18:18:21.134356   75577 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:18:21.421546   75577 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0920 18:18:21.467311   75577 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0920 18:18:21.467349   75577 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0920 18:18:21.467400   75577 ssh_runner.go:195] Run: which crictl
	I0920 18:18:21.471158   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0920 18:18:21.474543   75577 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0920 18:18:21.480501   75577 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 18:18:21.499826   75577 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0920 18:18:21.499835   75577 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0920 18:18:21.505712   75577 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0920 18:18:21.514377   75577 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0920 18:18:21.514897   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0920 18:18:21.601531   75577 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0920 18:18:21.601582   75577 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0920 18:18:21.601635   75577 ssh_runner.go:195] Run: which crictl
	I0920 18:18:21.660417   75577 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0920 18:18:21.660457   75577 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 18:18:21.660504   75577 ssh_runner.go:195] Run: which crictl
	I0920 18:18:21.660528   75577 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0920 18:18:21.660571   75577 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0920 18:18:21.660625   75577 ssh_runner.go:195] Run: which crictl
	I0920 18:18:21.667538   75577 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0920 18:18:21.667558   75577 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0920 18:18:21.667583   75577 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0920 18:18:21.667595   75577 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0920 18:18:21.667633   75577 ssh_runner.go:195] Run: which crictl
	I0920 18:18:21.667652   75577 ssh_runner.go:195] Run: which crictl
	I0920 18:18:21.676529   75577 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0920 18:18:21.676580   75577 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0920 18:18:21.676602   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0920 18:18:21.676633   75577 ssh_runner.go:195] Run: which crictl
	I0920 18:18:21.676643   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0920 18:18:21.680041   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 18:18:21.681078   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0920 18:18:21.681180   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0920 18:18:21.683409   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0920 18:18:21.804439   75577 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0920 18:18:21.804492   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0920 18:18:21.804530   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 18:18:21.804585   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0920 18:18:21.823434   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0920 18:18:21.823497   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0920 18:18:21.823539   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0920 18:18:21.932568   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 18:18:21.932619   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0920 18:18:21.932639   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0920 18:18:21.958001   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0920 18:18:21.971089   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0920 18:18:21.975643   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0920 18:18:22.100118   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0920 18:18:22.100173   75577 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0920 18:18:22.100197   75577 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0920 18:18:22.100276   75577 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0920 18:18:22.100333   75577 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0920 18:18:22.109895   75577 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0920 18:18:22.137798   75577 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0920 18:18:22.331884   75577 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:18:22.476470   75577 cache_images.go:92] duration metric: took 1.344086626s to LoadCachedImages
	W0920 18:18:22.476575   75577 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0920 18:18:22.476595   75577 kubeadm.go:934] updating node { 192.168.39.207 8443 v1.20.0 crio true true} ...
	I0920 18:18:22.476742   75577 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-744025 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.207
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-744025 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 18:18:22.476833   75577 ssh_runner.go:195] Run: crio config
	I0920 18:18:22.528964   75577 cni.go:84] Creating CNI manager for ""
	I0920 18:18:22.528990   75577 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 18:18:22.528999   75577 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 18:18:22.529016   75577 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.207 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-744025 NodeName:old-k8s-version-744025 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.207"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.207 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0920 18:18:22.529173   75577 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.207
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-744025"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.207
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.207"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 18:18:22.529233   75577 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0920 18:18:22.540849   75577 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 18:18:22.540935   75577 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 18:18:22.551148   75577 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0920 18:18:22.569199   75577 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 18:18:22.585909   75577 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0920 18:18:22.604987   75577 ssh_runner.go:195] Run: grep 192.168.39.207	control-plane.minikube.internal$ /etc/hosts
	I0920 18:18:22.609152   75577 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.207	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 18:18:22.628042   75577 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:18:21.191366   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:23.193152   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:23.914443   75264 pod_ready.go:103] pod "coredns-7c65d6cfc9-dmdfb" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:25.002152   75264 pod_ready.go:93] pod "coredns-7c65d6cfc9-dmdfb" in "kube-system" namespace has status "Ready":"True"
	I0920 18:18:25.002185   75264 pod_ready.go:82] duration metric: took 8.012280973s for pod "coredns-7c65d6cfc9-dmdfb" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:25.002198   75264 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-553719" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:25.010541   75264 pod_ready.go:93] pod "etcd-default-k8s-diff-port-553719" in "kube-system" namespace has status "Ready":"True"
	I0920 18:18:25.010570   75264 pod_ready.go:82] duration metric: took 8.362504ms for pod "etcd-default-k8s-diff-port-553719" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:25.010591   75264 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-553719" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:25.023701   75264 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-553719" in "kube-system" namespace has status "Ready":"True"
	I0920 18:18:25.023741   75264 pod_ready.go:82] duration metric: took 13.139423ms for pod "kube-apiserver-default-k8s-diff-port-553719" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:25.023758   75264 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-553719" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:25.031247   75264 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-553719" in "kube-system" namespace has status "Ready":"True"
	I0920 18:18:25.031282   75264 pod_ready.go:82] duration metric: took 7.515474ms for pod "kube-controller-manager-default-k8s-diff-port-553719" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:25.031307   75264 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-p9crq" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:25.119242   75264 pod_ready.go:93] pod "kube-proxy-p9crq" in "kube-system" namespace has status "Ready":"True"
	I0920 18:18:25.119267   75264 pod_ready.go:82] duration metric: took 87.951791ms for pod "kube-proxy-p9crq" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:25.119280   75264 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-553719" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:25.518243   75264 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-553719" in "kube-system" namespace has status "Ready":"True"
	I0920 18:18:25.518267   75264 pod_ready.go:82] duration metric: took 398.979314ms for pod "kube-scheduler-default-k8s-diff-port-553719" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:25.518281   75264 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:22.707778   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:22.708271   74753 main.go:141] libmachine: (no-preload-956403) DBG | unable to find current IP address of domain no-preload-956403 in network mk-no-preload-956403
	I0920 18:18:22.708333   74753 main.go:141] libmachine: (no-preload-956403) DBG | I0920 18:18:22.708161   77223 retry.go:31] will retry after 1.516042611s: waiting for machine to come up
	I0920 18:18:24.225560   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:24.225987   74753 main.go:141] libmachine: (no-preload-956403) DBG | unable to find current IP address of domain no-preload-956403 in network mk-no-preload-956403
	I0920 18:18:24.226041   74753 main.go:141] libmachine: (no-preload-956403) DBG | I0920 18:18:24.225968   77223 retry.go:31] will retry after 2.135874415s: waiting for machine to come up
	I0920 18:18:26.363371   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:26.363861   74753 main.go:141] libmachine: (no-preload-956403) DBG | unable to find current IP address of domain no-preload-956403 in network mk-no-preload-956403
	I0920 18:18:26.363895   74753 main.go:141] libmachine: (no-preload-956403) DBG | I0920 18:18:26.363777   77223 retry.go:31] will retry after 3.29383193s: waiting for machine to come up
	I0920 18:18:22.796651   75577 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:18:22.813789   75577 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/old-k8s-version-744025 for IP: 192.168.39.207
	I0920 18:18:22.813817   75577 certs.go:194] generating shared ca certs ...
	I0920 18:18:22.813848   75577 certs.go:226] acquiring lock for ca certs: {Name:mkc7ef6c737c6bdc3fdd9dcff8f57029c020d8f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:18:22.814045   75577 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key
	I0920 18:18:22.814101   75577 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key
	I0920 18:18:22.814118   75577 certs.go:256] generating profile certs ...
	I0920 18:18:22.814290   75577 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/old-k8s-version-744025/client.key
	I0920 18:18:22.814383   75577 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/old-k8s-version-744025/apiserver.key.3105b99d
	I0920 18:18:22.814445   75577 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/old-k8s-version-744025/proxy-client.key
	I0920 18:18:22.814626   75577 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973.pem (1338 bytes)
	W0920 18:18:22.814660   75577 certs.go:480] ignoring /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973_empty.pem, impossibly tiny 0 bytes
	I0920 18:18:22.814666   75577 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 18:18:22.814691   75577 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem (1082 bytes)
	I0920 18:18:22.814717   75577 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem (1123 bytes)
	I0920 18:18:22.814749   75577 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem (1675 bytes)
	I0920 18:18:22.814813   75577 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem (1708 bytes)
	I0920 18:18:22.815736   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 18:18:22.863832   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 18:18:22.921747   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 18:18:22.959556   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0920 18:18:22.992097   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/old-k8s-version-744025/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0920 18:18:23.027565   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/old-k8s-version-744025/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0920 18:18:23.057374   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/old-k8s-version-744025/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 18:18:23.094290   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/old-k8s-version-744025/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 18:18:23.120095   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973.pem --> /usr/share/ca-certificates/15973.pem (1338 bytes)
	I0920 18:18:23.144374   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem --> /usr/share/ca-certificates/159732.pem (1708 bytes)
	I0920 18:18:23.169431   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 18:18:23.195779   75577 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 18:18:23.212820   75577 ssh_runner.go:195] Run: openssl version
	I0920 18:18:23.218876   75577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 18:18:23.229684   75577 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:18:23.234533   75577 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:18:23.234603   75577 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:18:23.240460   75577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 18:18:23.251940   75577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15973.pem && ln -fs /usr/share/ca-certificates/15973.pem /etc/ssl/certs/15973.pem"
	I0920 18:18:23.263308   75577 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15973.pem
	I0920 18:18:23.268059   75577 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 17:04 /usr/share/ca-certificates/15973.pem
	I0920 18:18:23.268128   75577 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15973.pem
	I0920 18:18:23.274199   75577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15973.pem /etc/ssl/certs/51391683.0"
	I0920 18:18:23.286362   75577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/159732.pem && ln -fs /usr/share/ca-certificates/159732.pem /etc/ssl/certs/159732.pem"
	I0920 18:18:23.303962   75577 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/159732.pem
	I0920 18:18:23.310907   75577 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 17:04 /usr/share/ca-certificates/159732.pem
	I0920 18:18:23.310981   75577 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/159732.pem
	I0920 18:18:23.317881   75577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/159732.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 18:18:23.329247   75577 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 18:18:23.334223   75577 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 18:18:23.340565   75577 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 18:18:23.346929   75577 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 18:18:23.353681   75577 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 18:18:23.359699   75577 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 18:18:23.365749   75577 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0920 18:18:23.371899   75577 kubeadm.go:392] StartCluster: {Name:old-k8s-version-744025 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-744025 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.207 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:18:23.371981   75577 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 18:18:23.372027   75577 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 18:18:23.419904   75577 cri.go:89] found id: ""
	I0920 18:18:23.419982   75577 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 18:18:23.431761   75577 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0920 18:18:23.431782   75577 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0920 18:18:23.431833   75577 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0920 18:18:23.444358   75577 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0920 18:18:23.445545   75577 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-744025" does not appear in /home/jenkins/minikube-integration/19672-8777/kubeconfig
	I0920 18:18:23.446531   75577 kubeconfig.go:62] /home/jenkins/minikube-integration/19672-8777/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-744025" cluster setting kubeconfig missing "old-k8s-version-744025" context setting]
	I0920 18:18:23.447808   75577 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/kubeconfig: {Name:mkf32a4c736808e023459b2f0e40188618a38db1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:18:23.568927   75577 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0920 18:18:23.579991   75577 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.207
	I0920 18:18:23.580025   75577 kubeadm.go:1160] stopping kube-system containers ...
	I0920 18:18:23.580038   75577 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0920 18:18:23.580097   75577 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 18:18:23.625568   75577 cri.go:89] found id: ""
	I0920 18:18:23.625648   75577 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0920 18:18:23.643938   75577 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 18:18:23.654375   75577 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 18:18:23.654398   75577 kubeadm.go:157] found existing configuration files:
	
	I0920 18:18:23.654453   75577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 18:18:23.664335   75577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 18:18:23.664409   75577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 18:18:23.674996   75577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 18:18:23.685310   75577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 18:18:23.685401   75577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 18:18:23.696241   75577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 18:18:23.706386   75577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 18:18:23.706465   75577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 18:18:23.716491   75577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 18:18:23.726566   75577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 18:18:23.726626   75577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 18:18:23.738576   75577 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 18:18:23.749510   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:18:23.877503   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:18:24.789322   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:18:25.054969   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:18:25.152117   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:18:25.249140   75577 api_server.go:52] waiting for apiserver process to appear ...
	I0920 18:18:25.249245   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:25.749427   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:26.249360   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:26.749895   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:27.249636   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:25.194678   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:27.693680   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:27.524378   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:29.524998   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:32.025310   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:29.661278   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:29.661743   74753 main.go:141] libmachine: (no-preload-956403) DBG | unable to find current IP address of domain no-preload-956403 in network mk-no-preload-956403
	I0920 18:18:29.661762   74753 main.go:141] libmachine: (no-preload-956403) DBG | I0920 18:18:29.661714   77223 retry.go:31] will retry after 3.154777794s: waiting for machine to come up
	I0920 18:18:27.749629   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:28.250139   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:28.749691   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:29.249709   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:29.749550   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:30.250186   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:30.749562   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:31.249622   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:31.749925   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:32.250324   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:30.191419   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:32.191873   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:32.820331   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:32.820870   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has current primary IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:32.820895   74753 main.go:141] libmachine: (no-preload-956403) Found IP for machine: 192.168.50.47
	I0920 18:18:32.820908   74753 main.go:141] libmachine: (no-preload-956403) Reserving static IP address...
	I0920 18:18:32.821313   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "no-preload-956403", mac: "52:54:00:b6:13:30", ip: "192.168.50.47"} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:32.821339   74753 main.go:141] libmachine: (no-preload-956403) Reserved static IP address: 192.168.50.47
	I0920 18:18:32.821355   74753 main.go:141] libmachine: (no-preload-956403) DBG | skip adding static IP to network mk-no-preload-956403 - found existing host DHCP lease matching {name: "no-preload-956403", mac: "52:54:00:b6:13:30", ip: "192.168.50.47"}
	I0920 18:18:32.821372   74753 main.go:141] libmachine: (no-preload-956403) DBG | Getting to WaitForSSH function...
	I0920 18:18:32.821389   74753 main.go:141] libmachine: (no-preload-956403) Waiting for SSH to be available...
	I0920 18:18:32.823590   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:32.823894   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:32.823928   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:32.824019   74753 main.go:141] libmachine: (no-preload-956403) DBG | Using SSH client type: external
	I0920 18:18:32.824046   74753 main.go:141] libmachine: (no-preload-956403) DBG | Using SSH private key: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/no-preload-956403/id_rsa (-rw-------)
	I0920 18:18:32.824077   74753 main.go:141] libmachine: (no-preload-956403) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.47 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19672-8777/.minikube/machines/no-preload-956403/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 18:18:32.824090   74753 main.go:141] libmachine: (no-preload-956403) DBG | About to run SSH command:
	I0920 18:18:32.824102   74753 main.go:141] libmachine: (no-preload-956403) DBG | exit 0
	I0920 18:18:32.950122   74753 main.go:141] libmachine: (no-preload-956403) DBG | SSH cmd err, output: <nil>: 
	I0920 18:18:32.950537   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetConfigRaw
	I0920 18:18:32.951187   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetIP
	I0920 18:18:32.953731   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:32.954073   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:32.954102   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:32.954365   74753 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/no-preload-956403/config.json ...
	I0920 18:18:32.954603   74753 machine.go:93] provisionDockerMachine start ...
	I0920 18:18:32.954621   74753 main.go:141] libmachine: (no-preload-956403) Calling .DriverName
	I0920 18:18:32.954814   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHHostname
	I0920 18:18:32.957049   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:32.957379   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:32.957425   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:32.957538   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHPort
	I0920 18:18:32.957717   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHKeyPath
	I0920 18:18:32.957920   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHKeyPath
	I0920 18:18:32.958094   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHUsername
	I0920 18:18:32.958282   74753 main.go:141] libmachine: Using SSH client type: native
	I0920 18:18:32.958494   74753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.47 22 <nil> <nil>}
	I0920 18:18:32.958506   74753 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 18:18:33.058187   74753 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0920 18:18:33.058222   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetMachineName
	I0920 18:18:33.058477   74753 buildroot.go:166] provisioning hostname "no-preload-956403"
	I0920 18:18:33.058508   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetMachineName
	I0920 18:18:33.058668   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHHostname
	I0920 18:18:33.061310   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.061616   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:33.061649   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.061783   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHPort
	I0920 18:18:33.061961   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHKeyPath
	I0920 18:18:33.062089   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHKeyPath
	I0920 18:18:33.062197   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHUsername
	I0920 18:18:33.062340   74753 main.go:141] libmachine: Using SSH client type: native
	I0920 18:18:33.062536   74753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.47 22 <nil> <nil>}
	I0920 18:18:33.062553   74753 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-956403 && echo "no-preload-956403" | sudo tee /etc/hostname
	I0920 18:18:33.175825   74753 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-956403
	
	I0920 18:18:33.175860   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHHostname
	I0920 18:18:33.178483   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.178749   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:33.178769   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.178904   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHPort
	I0920 18:18:33.179077   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHKeyPath
	I0920 18:18:33.179226   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHKeyPath
	I0920 18:18:33.179366   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHUsername
	I0920 18:18:33.179545   74753 main.go:141] libmachine: Using SSH client type: native
	I0920 18:18:33.179710   74753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.47 22 <nil> <nil>}
	I0920 18:18:33.179726   74753 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-956403' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-956403/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-956403' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 18:18:33.290675   74753 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 18:18:33.290701   74753 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19672-8777/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-8777/.minikube}
	I0920 18:18:33.290722   74753 buildroot.go:174] setting up certificates
	I0920 18:18:33.290735   74753 provision.go:84] configureAuth start
	I0920 18:18:33.290747   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetMachineName
	I0920 18:18:33.291015   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetIP
	I0920 18:18:33.293810   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.294244   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:33.294282   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.294337   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHHostname
	I0920 18:18:33.296376   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.296749   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:33.296776   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.296930   74753 provision.go:143] copyHostCerts
	I0920 18:18:33.296985   74753 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem, removing ...
	I0920 18:18:33.296995   74753 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem
	I0920 18:18:33.297048   74753 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem (1082 bytes)
	I0920 18:18:33.297166   74753 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem, removing ...
	I0920 18:18:33.297177   74753 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem
	I0920 18:18:33.297211   74753 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem (1123 bytes)
	I0920 18:18:33.297287   74753 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem, removing ...
	I0920 18:18:33.297296   74753 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem
	I0920 18:18:33.297320   74753 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem (1675 bytes)
	I0920 18:18:33.297387   74753 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem org=jenkins.no-preload-956403 san=[127.0.0.1 192.168.50.47 localhost minikube no-preload-956403]
	I0920 18:18:33.366768   74753 provision.go:177] copyRemoteCerts
	I0920 18:18:33.366830   74753 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 18:18:33.366852   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHHostname
	I0920 18:18:33.369441   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.369755   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:33.369787   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.369958   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHPort
	I0920 18:18:33.370127   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHKeyPath
	I0920 18:18:33.370293   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHUsername
	I0920 18:18:33.370490   74753 sshutil.go:53] new ssh client: &{IP:192.168.50.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/no-preload-956403/id_rsa Username:docker}
	I0920 18:18:33.452070   74753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0920 18:18:33.477564   74753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0920 18:18:33.501002   74753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 18:18:33.526725   74753 provision.go:87] duration metric: took 235.975552ms to configureAuth
	I0920 18:18:33.526755   74753 buildroot.go:189] setting minikube options for container-runtime
	I0920 18:18:33.526943   74753 config.go:182] Loaded profile config "no-preload-956403": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:18:33.527011   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHHostname
	I0920 18:18:33.529870   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.530338   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:33.530371   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.530485   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHPort
	I0920 18:18:33.530708   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHKeyPath
	I0920 18:18:33.530897   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHKeyPath
	I0920 18:18:33.531057   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHUsername
	I0920 18:18:33.531276   74753 main.go:141] libmachine: Using SSH client type: native
	I0920 18:18:33.531497   74753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.47 22 <nil> <nil>}
	I0920 18:18:33.531519   74753 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 18:18:33.758711   74753 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 18:18:33.758739   74753 machine.go:96] duration metric: took 804.122904ms to provisionDockerMachine
	I0920 18:18:33.758753   74753 start.go:293] postStartSetup for "no-preload-956403" (driver="kvm2")
	I0920 18:18:33.758771   74753 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 18:18:33.758796   74753 main.go:141] libmachine: (no-preload-956403) Calling .DriverName
	I0920 18:18:33.759207   74753 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 18:18:33.759262   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHHostname
	I0920 18:18:33.762524   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.762991   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:33.763027   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.763207   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHPort
	I0920 18:18:33.763471   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHKeyPath
	I0920 18:18:33.763643   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHUsername
	I0920 18:18:33.763861   74753 sshutil.go:53] new ssh client: &{IP:192.168.50.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/no-preload-956403/id_rsa Username:docker}
	I0920 18:18:33.844910   74753 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 18:18:33.849111   74753 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 18:18:33.849137   74753 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8777/.minikube/addons for local assets ...
	I0920 18:18:33.849205   74753 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8777/.minikube/files for local assets ...
	I0920 18:18:33.849280   74753 filesync.go:149] local asset: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem -> 159732.pem in /etc/ssl/certs
	I0920 18:18:33.849367   74753 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 18:18:33.858757   74753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem --> /etc/ssl/certs/159732.pem (1708 bytes)
	I0920 18:18:33.883430   74753 start.go:296] duration metric: took 124.663757ms for postStartSetup
	I0920 18:18:33.883468   74753 fix.go:56] duration metric: took 19.104481875s for fixHost
	I0920 18:18:33.883488   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHHostname
	I0920 18:18:33.886703   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.887047   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:33.887076   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.887276   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHPort
	I0920 18:18:33.887502   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHKeyPath
	I0920 18:18:33.887685   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHKeyPath
	I0920 18:18:33.887816   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHUsername
	I0920 18:18:33.887962   74753 main.go:141] libmachine: Using SSH client type: native
	I0920 18:18:33.888137   74753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.47 22 <nil> <nil>}
	I0920 18:18:33.888148   74753 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 18:18:33.994635   74753 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726856313.964367484
	
	I0920 18:18:33.994660   74753 fix.go:216] guest clock: 1726856313.964367484
	I0920 18:18:33.994670   74753 fix.go:229] Guest: 2024-09-20 18:18:33.964367484 +0000 UTC Remote: 2024-09-20 18:18:33.883472234 +0000 UTC m=+356.278282007 (delta=80.89525ms)
	I0920 18:18:33.994695   74753 fix.go:200] guest clock delta is within tolerance: 80.89525ms
	I0920 18:18:33.994701   74753 start.go:83] releasing machines lock for "no-preload-956403", held for 19.215743841s
	I0920 18:18:33.994726   74753 main.go:141] libmachine: (no-preload-956403) Calling .DriverName
	I0920 18:18:33.994976   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetIP
	I0920 18:18:33.997685   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.998089   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:33.998114   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.998291   74753 main.go:141] libmachine: (no-preload-956403) Calling .DriverName
	I0920 18:18:33.998765   74753 main.go:141] libmachine: (no-preload-956403) Calling .DriverName
	I0920 18:18:33.998923   74753 main.go:141] libmachine: (no-preload-956403) Calling .DriverName
	I0920 18:18:33.999007   74753 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 18:18:33.999054   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHHostname
	I0920 18:18:33.999142   74753 ssh_runner.go:195] Run: cat /version.json
	I0920 18:18:33.999168   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHHostname
	I0920 18:18:34.001725   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:34.001891   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:34.002197   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:34.002225   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:34.002369   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHPort
	I0920 18:18:34.002424   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:34.002452   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:34.002533   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHKeyPath
	I0920 18:18:34.002625   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHPort
	I0920 18:18:34.002695   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHUsername
	I0920 18:18:34.002744   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHKeyPath
	I0920 18:18:34.002813   74753 sshutil.go:53] new ssh client: &{IP:192.168.50.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/no-preload-956403/id_rsa Username:docker}
	I0920 18:18:34.002888   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHUsername
	I0920 18:18:34.003032   74753 sshutil.go:53] new ssh client: &{IP:192.168.50.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/no-preload-956403/id_rsa Username:docker}
	I0920 18:18:34.079092   74753 ssh_runner.go:195] Run: systemctl --version
	I0920 18:18:34.112066   74753 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 18:18:34.257205   74753 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 18:18:34.265774   74753 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 18:18:34.265871   74753 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 18:18:34.285222   74753 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 18:18:34.285247   74753 start.go:495] detecting cgroup driver to use...
	I0920 18:18:34.285320   74753 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 18:18:34.302192   74753 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 18:18:34.316624   74753 docker.go:217] disabling cri-docker service (if available) ...
	I0920 18:18:34.316695   74753 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 18:18:34.331098   74753 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 18:18:34.345433   74753 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 18:18:34.481886   74753 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 18:18:34.644442   74753 docker.go:233] disabling docker service ...
	I0920 18:18:34.644530   74753 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 18:18:34.658714   74753 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 18:18:34.671506   74753 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 18:18:34.809548   74753 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 18:18:34.957438   74753 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 18:18:34.972102   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 18:18:34.993129   74753 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 18:18:34.993199   74753 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:18:35.004196   74753 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 18:18:35.004273   74753 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:18:35.015258   74753 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:18:35.026240   74753 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:18:35.037658   74753 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 18:18:35.048637   74753 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:18:35.059494   74753 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:18:35.077264   74753 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:18:35.087777   74753 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 18:18:35.097812   74753 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 18:18:35.097947   74753 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 18:18:35.112200   74753 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 18:18:35.123381   74753 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:18:35.262386   74753 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 18:18:35.361152   74753 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 18:18:35.361257   74753 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 18:18:35.366321   74753 start.go:563] Will wait 60s for crictl version
	I0920 18:18:35.366390   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:18:35.370379   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 18:18:35.415080   74753 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 18:18:35.415182   74753 ssh_runner.go:195] Run: crio --version
	I0920 18:18:35.446934   74753 ssh_runner.go:195] Run: crio --version
	I0920 18:18:35.478823   74753 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 18:18:34.026138   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:36.525133   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:35.479956   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetIP
	I0920 18:18:35.483428   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:35.483820   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:35.483848   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:35.484079   74753 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0920 18:18:35.488686   74753 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 18:18:35.501184   74753 kubeadm.go:883] updating cluster {Name:no-preload-956403 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-956403 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.47 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 18:18:35.501348   74753 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 18:18:35.501391   74753 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:18:35.535377   74753 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0920 18:18:35.535400   74753 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0920 18:18:35.535466   74753 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 18:18:35.535499   74753 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0920 18:18:35.535510   74753 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0920 18:18:35.535534   74753 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0920 18:18:35.535538   74753 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0920 18:18:35.535513   74753 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0920 18:18:35.535625   74753 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0920 18:18:35.535483   74753 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:18:35.537100   74753 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 18:18:35.537104   74753 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0920 18:18:35.537126   74753 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0920 18:18:35.537091   74753 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0920 18:18:35.537164   74753 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0920 18:18:35.537180   74753 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0920 18:18:35.537191   74753 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0920 18:18:35.537105   74753 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:18:35.770046   74753 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0920 18:18:35.796364   74753 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I0920 18:18:35.800776   74753 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I0920 18:18:35.802291   74753 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I0920 18:18:35.845969   74753 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0920 18:18:35.860263   74753 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I0920 18:18:35.892349   74753 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 18:18:35.919269   74753 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I0920 18:18:35.919323   74753 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I0920 18:18:35.919375   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:18:35.929045   74753 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I0920 18:18:35.929096   74753 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I0920 18:18:35.929143   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:18:35.929181   74753 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I0920 18:18:35.929235   74753 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I0920 18:18:35.929299   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:18:35.968513   74753 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0920 18:18:35.968557   74753 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0920 18:18:35.968553   74753 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I0920 18:18:35.968589   74753 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I0920 18:18:35.968612   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:18:35.968636   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:18:35.984335   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0920 18:18:35.984366   74753 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I0920 18:18:35.984379   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0920 18:18:35.984403   74753 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 18:18:35.984433   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0920 18:18:35.984450   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:18:35.984474   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0920 18:18:35.984505   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0920 18:18:36.106947   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0920 18:18:36.106964   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 18:18:36.106954   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0920 18:18:36.107025   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0920 18:18:36.107112   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0920 18:18:36.107142   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0920 18:18:36.231892   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0920 18:18:36.231989   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0920 18:18:36.232026   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0920 18:18:36.232143   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0920 18:18:36.232193   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0920 18:18:36.232281   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 18:18:36.355534   74753 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I0920 18:18:36.355632   74753 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0920 18:18:36.355650   74753 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I0920 18:18:36.355730   74753 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I0920 18:18:36.358584   74753 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I0920 18:18:36.358653   74753 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I0920 18:18:36.358678   74753 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0920 18:18:36.358677   74753 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0920 18:18:36.358723   74753 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0920 18:18:36.358751   74753 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0920 18:18:36.367463   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 18:18:36.372389   74753 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I0920 18:18:36.372412   74753 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I0920 18:18:36.372457   74753 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I0920 18:18:36.372580   74753 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I0920 18:18:36.374496   74753 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0920 18:18:36.374538   74753 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I0920 18:18:36.374638   74753 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I0920 18:18:36.410860   74753 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0920 18:18:36.410961   74753 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0920 18:18:36.738003   74753 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:18:32.749877   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:33.249391   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:33.749510   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:34.250003   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:34.749967   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:35.249953   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:35.749992   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:36.250161   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:36.750339   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:37.249600   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:34.192782   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:36.692485   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:38.692626   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:39.024337   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:41.025364   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:38.480788   74753 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.108302901s)
	I0920 18:18:38.480819   74753 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I0920 18:18:38.480839   74753 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I0920 18:18:38.480869   74753 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1: (2.069867717s)
	I0920 18:18:38.480893   74753 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I0920 18:18:38.480896   74753 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I0920 18:18:38.480922   74753 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.742864605s)
	I0920 18:18:38.480970   74753 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0920 18:18:38.480993   74753 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:18:38.481031   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:18:40.340748   74753 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (1.85982838s)
	I0920 18:18:40.340782   74753 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I0920 18:18:40.340790   74753 ssh_runner.go:235] Completed: which crictl: (1.859743083s)
	I0920 18:18:40.340812   74753 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0920 18:18:40.340850   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:18:40.340869   74753 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0920 18:18:37.750269   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:38.249855   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:38.749845   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:39.249342   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:39.750306   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:40.250103   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:40.750346   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:41.250134   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:41.750136   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:42.249945   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:41.191300   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:43.193536   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:43.025860   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:45.527119   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:44.427309   74753 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (4.086436233s)
	I0920 18:18:44.427365   74753 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (4.086473186s)
	I0920 18:18:44.427395   74753 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0920 18:18:44.427400   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:18:44.427430   74753 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0920 18:18:44.427510   74753 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0920 18:18:46.393094   74753 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (1.965557326s)
	I0920 18:18:46.393133   74753 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I0920 18:18:46.393163   74753 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0920 18:18:46.393170   74753 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.965750119s)
	I0920 18:18:46.393216   74753 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0920 18:18:46.393241   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:18:42.749980   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:43.249416   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:43.750087   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:44.249972   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:44.749649   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:45.250128   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:45.750346   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:46.249611   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:46.749566   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:47.249814   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:45.193713   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:47.691882   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:48.027047   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:50.527076   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:47.771559   74753 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (1.37829872s)
	I0920 18:18:47.771595   74753 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I0920 18:18:47.771607   74753 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.378351302s)
	I0920 18:18:47.771618   74753 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0920 18:18:47.771645   74753 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0920 18:18:47.771668   74753 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0920 18:18:47.771726   74753 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0920 18:18:49.544117   74753 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.772423068s)
	I0920 18:18:49.544147   74753 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I0920 18:18:49.544159   74753 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.772401848s)
	I0920 18:18:49.544187   74753 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0920 18:18:49.544199   74753 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0920 18:18:49.544275   74753 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0920 18:18:50.198691   74753 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0920 18:18:50.198744   74753 cache_images.go:123] Successfully loaded all cached images
	I0920 18:18:50.198752   74753 cache_images.go:92] duration metric: took 14.66333409s to LoadCachedImages
	I0920 18:18:50.198766   74753 kubeadm.go:934] updating node { 192.168.50.47 8443 v1.31.1 crio true true} ...
	I0920 18:18:50.198900   74753 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-956403 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.47
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-956403 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 18:18:50.198988   74753 ssh_runner.go:195] Run: crio config
	I0920 18:18:50.249876   74753 cni.go:84] Creating CNI manager for ""
	I0920 18:18:50.249901   74753 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 18:18:50.249915   74753 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 18:18:50.249942   74753 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.47 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-956403 NodeName:no-preload-956403 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.47"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.47 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 18:18:50.250150   74753 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.47
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-956403"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.47
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.47"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 18:18:50.250264   74753 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 18:18:50.262805   74753 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 18:18:50.262886   74753 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 18:18:50.272958   74753 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0920 18:18:50.290269   74753 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 18:18:50.306981   74753 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0920 18:18:50.324360   74753 ssh_runner.go:195] Run: grep 192.168.50.47	control-plane.minikube.internal$ /etc/hosts
	I0920 18:18:50.328382   74753 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.47	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 18:18:50.341021   74753 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:18:50.462105   74753 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:18:50.478780   74753 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/no-preload-956403 for IP: 192.168.50.47
	I0920 18:18:50.478804   74753 certs.go:194] generating shared ca certs ...
	I0920 18:18:50.478824   74753 certs.go:226] acquiring lock for ca certs: {Name:mkc7ef6c737c6bdc3fdd9dcff8f57029c020d8f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:18:50.479010   74753 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key
	I0920 18:18:50.479069   74753 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key
	I0920 18:18:50.479082   74753 certs.go:256] generating profile certs ...
	I0920 18:18:50.479188   74753 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/no-preload-956403/client.key
	I0920 18:18:50.479270   74753 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/no-preload-956403/apiserver.key.859cf278
	I0920 18:18:50.479335   74753 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/no-preload-956403/proxy-client.key
	I0920 18:18:50.479491   74753 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973.pem (1338 bytes)
	W0920 18:18:50.479534   74753 certs.go:480] ignoring /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973_empty.pem, impossibly tiny 0 bytes
	I0920 18:18:50.479549   74753 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 18:18:50.479596   74753 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem (1082 bytes)
	I0920 18:18:50.479633   74753 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem (1123 bytes)
	I0920 18:18:50.479668   74753 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem (1675 bytes)
	I0920 18:18:50.479771   74753 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem (1708 bytes)
	I0920 18:18:50.480696   74753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 18:18:50.514559   74753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 18:18:50.548790   74753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 18:18:50.581384   74753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0920 18:18:50.609138   74753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/no-preload-956403/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0920 18:18:50.641098   74753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/no-preload-956403/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 18:18:50.680479   74753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/no-preload-956403/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 18:18:50.705168   74753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/no-preload-956403/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0920 18:18:50.727603   74753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem --> /usr/share/ca-certificates/159732.pem (1708 bytes)
	I0920 18:18:50.750272   74753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 18:18:50.776117   74753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973.pem --> /usr/share/ca-certificates/15973.pem (1338 bytes)
	I0920 18:18:50.799799   74753 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 18:18:50.816250   74753 ssh_runner.go:195] Run: openssl version
	I0920 18:18:50.821680   74753 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/159732.pem && ln -fs /usr/share/ca-certificates/159732.pem /etc/ssl/certs/159732.pem"
	I0920 18:18:50.832295   74753 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/159732.pem
	I0920 18:18:50.836833   74753 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 17:04 /usr/share/ca-certificates/159732.pem
	I0920 18:18:50.836896   74753 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/159732.pem
	I0920 18:18:50.842626   74753 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/159732.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 18:18:50.853297   74753 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 18:18:50.864400   74753 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:18:50.868951   74753 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:18:50.869011   74753 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:18:50.874547   74753 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 18:18:50.885615   74753 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15973.pem && ln -fs /usr/share/ca-certificates/15973.pem /etc/ssl/certs/15973.pem"
	I0920 18:18:50.896960   74753 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15973.pem
	I0920 18:18:50.901798   74753 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 17:04 /usr/share/ca-certificates/15973.pem
	I0920 18:18:50.901879   74753 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15973.pem
	I0920 18:18:50.907920   74753 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15973.pem /etc/ssl/certs/51391683.0"
	I0920 18:18:50.919345   74753 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 18:18:50.923671   74753 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 18:18:50.929701   74753 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 18:18:50.935649   74753 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 18:18:50.942162   74753 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 18:18:50.948012   74753 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 18:18:50.954097   74753 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0920 18:18:50.960442   74753 kubeadm.go:392] StartCluster: {Name:no-preload-956403 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-956403 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.47 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:18:50.960535   74753 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 18:18:50.960600   74753 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 18:18:50.998528   74753 cri.go:89] found id: ""
	I0920 18:18:50.998608   74753 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 18:18:51.008964   74753 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0920 18:18:51.008985   74753 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0920 18:18:51.009043   74753 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0920 18:18:51.018457   74753 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0920 18:18:51.019553   74753 kubeconfig.go:125] found "no-preload-956403" server: "https://192.168.50.47:8443"
	I0920 18:18:51.021712   74753 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0920 18:18:51.033439   74753 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.47
	I0920 18:18:51.033470   74753 kubeadm.go:1160] stopping kube-system containers ...
	I0920 18:18:51.033481   74753 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0920 18:18:51.033538   74753 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 18:18:51.070049   74753 cri.go:89] found id: ""
	I0920 18:18:51.070137   74753 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0920 18:18:51.087472   74753 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 18:18:51.098582   74753 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 18:18:51.098606   74753 kubeadm.go:157] found existing configuration files:
	
	I0920 18:18:51.098654   74753 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 18:18:51.107201   74753 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 18:18:51.107276   74753 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 18:18:51.116058   74753 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 18:18:51.124563   74753 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 18:18:51.124630   74753 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 18:18:51.134174   74753 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 18:18:51.142880   74753 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 18:18:51.142944   74753 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 18:18:51.152181   74753 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 18:18:51.161942   74753 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 18:18:51.162012   74753 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 18:18:51.171615   74753 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 18:18:51.180728   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:18:51.292140   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:18:52.019018   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:18:52.239327   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:18:52.317900   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:18:52.433910   74753 api_server.go:52] waiting for apiserver process to appear ...
	I0920 18:18:52.433991   74753 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:47.749487   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:48.249612   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:48.750324   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:49.250006   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:49.749667   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:50.249802   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:50.749597   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:51.250236   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:51.750203   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:52.250132   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:49.693075   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:52.192547   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:53.024979   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:55.025225   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:52.934956   74753 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:53.434681   74753 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:53.451797   74753 api_server.go:72] duration metric: took 1.017867426s to wait for apiserver process to appear ...
	I0920 18:18:53.451828   74753 api_server.go:88] waiting for apiserver healthz status ...
	I0920 18:18:53.451851   74753 api_server.go:253] Checking apiserver healthz at https://192.168.50.47:8443/healthz ...
	I0920 18:18:53.452366   74753 api_server.go:269] stopped: https://192.168.50.47:8443/healthz: Get "https://192.168.50.47:8443/healthz": dial tcp 192.168.50.47:8443: connect: connection refused
	I0920 18:18:53.952175   74753 api_server.go:253] Checking apiserver healthz at https://192.168.50.47:8443/healthz ...
	I0920 18:18:55.972801   74753 api_server.go:279] https://192.168.50.47:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 18:18:55.972835   74753 api_server.go:103] status: https://192.168.50.47:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 18:18:55.972854   74753 api_server.go:253] Checking apiserver healthz at https://192.168.50.47:8443/healthz ...
	I0920 18:18:56.018532   74753 api_server.go:279] https://192.168.50.47:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 18:18:56.018563   74753 api_server.go:103] status: https://192.168.50.47:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 18:18:56.452127   74753 api_server.go:253] Checking apiserver healthz at https://192.168.50.47:8443/healthz ...
	I0920 18:18:56.459752   74753 api_server.go:279] https://192.168.50.47:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 18:18:56.459796   74753 api_server.go:103] status: https://192.168.50.47:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 18:18:56.952284   74753 api_server.go:253] Checking apiserver healthz at https://192.168.50.47:8443/healthz ...
	I0920 18:18:56.959496   74753 api_server.go:279] https://192.168.50.47:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 18:18:56.959553   74753 api_server.go:103] status: https://192.168.50.47:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 18:18:57.452049   74753 api_server.go:253] Checking apiserver healthz at https://192.168.50.47:8443/healthz ...
	I0920 18:18:57.456586   74753 api_server.go:279] https://192.168.50.47:8443/healthz returned 200:
	ok
	I0920 18:18:57.463754   74753 api_server.go:141] control plane version: v1.31.1
	I0920 18:18:57.463785   74753 api_server.go:131] duration metric: took 4.011948835s to wait for apiserver health ...
	I0920 18:18:57.463795   74753 cni.go:84] Creating CNI manager for ""
	I0920 18:18:57.463804   74753 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 18:18:57.465916   74753 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 18:18:57.467416   74753 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 18:18:57.485487   74753 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0920 18:18:57.529426   74753 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 18:18:57.540046   74753 system_pods.go:59] 8 kube-system pods found
	I0920 18:18:57.540100   74753 system_pods.go:61] "coredns-7c65d6cfc9-j2t5h" [dd2636f1-3200-4f22-957c-046277c9be8c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0920 18:18:57.540113   74753 system_pods.go:61] "etcd-no-preload-956403" [daf84be0-e673-4685-a998-47ec64664484] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0920 18:18:57.540127   74753 system_pods.go:61] "kube-apiserver-no-preload-956403" [416217e6-3c91-49ec-bd69-812a49b4afd9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0920 18:18:57.540139   74753 system_pods.go:61] "kube-controller-manager-no-preload-956403" [30fae740-b073-4619-bca5-010ca90b2667] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0920 18:18:57.540148   74753 system_pods.go:61] "kube-proxy-sz4bm" [269600fb-ef65-4b17-8c07-76c79e35f5a8] Running
	I0920 18:18:57.540160   74753 system_pods.go:61] "kube-scheduler-no-preload-956403" [46432927-e506-4665-8825-c5f6a2ed0458] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0920 18:18:57.540174   74753 system_pods.go:61] "metrics-server-6867b74b74-tfsff" [599ba06a-6d4d-483b-b390-a3595a814757] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 18:18:57.540184   74753 system_pods.go:61] "storage-provisioner" [df661627-7d32-4467-9805-1ae65d4fa35c] Running
	I0920 18:18:57.540196   74753 system_pods.go:74] duration metric: took 10.749595ms to wait for pod list to return data ...
	I0920 18:18:57.540208   74753 node_conditions.go:102] verifying NodePressure condition ...
	I0920 18:18:57.544401   74753 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 18:18:57.544438   74753 node_conditions.go:123] node cpu capacity is 2
	I0920 18:18:57.544451   74753 node_conditions.go:105] duration metric: took 4.237618ms to run NodePressure ...
	I0920 18:18:57.544471   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:18:52.749468   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:53.249519   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:53.750056   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:54.250286   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:54.750240   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:55.249980   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:55.750192   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:56.249940   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:56.749289   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:57.249615   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:54.193519   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:56.693152   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:57.818736   74753 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0920 18:18:57.823058   74753 kubeadm.go:739] kubelet initialised
	I0920 18:18:57.823082   74753 kubeadm.go:740] duration metric: took 4.316691ms waiting for restarted kubelet to initialise ...
	I0920 18:18:57.823091   74753 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:18:57.827594   74753 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-j2t5h" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:57.833207   74753 pod_ready.go:98] node "no-preload-956403" hosting pod "coredns-7c65d6cfc9-j2t5h" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956403" has status "Ready":"False"
	I0920 18:18:57.833240   74753 pod_ready.go:82] duration metric: took 5.620335ms for pod "coredns-7c65d6cfc9-j2t5h" in "kube-system" namespace to be "Ready" ...
	E0920 18:18:57.833252   74753 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-956403" hosting pod "coredns-7c65d6cfc9-j2t5h" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956403" has status "Ready":"False"
	I0920 18:18:57.833269   74753 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-956403" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:57.838687   74753 pod_ready.go:98] node "no-preload-956403" hosting pod "etcd-no-preload-956403" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956403" has status "Ready":"False"
	I0920 18:18:57.838723   74753 pod_ready.go:82] duration metric: took 5.441795ms for pod "etcd-no-preload-956403" in "kube-system" namespace to be "Ready" ...
	E0920 18:18:57.838737   74753 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-956403" hosting pod "etcd-no-preload-956403" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956403" has status "Ready":"False"
	I0920 18:18:57.838747   74753 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-956403" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:57.848481   74753 pod_ready.go:98] node "no-preload-956403" hosting pod "kube-apiserver-no-preload-956403" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956403" has status "Ready":"False"
	I0920 18:18:57.848510   74753 pod_ready.go:82] duration metric: took 9.754053ms for pod "kube-apiserver-no-preload-956403" in "kube-system" namespace to be "Ready" ...
	E0920 18:18:57.848520   74753 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-956403" hosting pod "kube-apiserver-no-preload-956403" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956403" has status "Ready":"False"
	I0920 18:18:57.848530   74753 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-956403" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:57.934356   74753 pod_ready.go:98] node "no-preload-956403" hosting pod "kube-controller-manager-no-preload-956403" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956403" has status "Ready":"False"
	I0920 18:18:57.934383   74753 pod_ready.go:82] duration metric: took 85.842265ms for pod "kube-controller-manager-no-preload-956403" in "kube-system" namespace to be "Ready" ...
	E0920 18:18:57.934392   74753 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-956403" hosting pod "kube-controller-manager-no-preload-956403" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956403" has status "Ready":"False"
	I0920 18:18:57.934397   74753 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-sz4bm" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:58.332467   74753 pod_ready.go:98] node "no-preload-956403" hosting pod "kube-proxy-sz4bm" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956403" has status "Ready":"False"
	I0920 18:18:58.332495   74753 pod_ready.go:82] duration metric: took 398.088821ms for pod "kube-proxy-sz4bm" in "kube-system" namespace to be "Ready" ...
	E0920 18:18:58.332504   74753 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-956403" hosting pod "kube-proxy-sz4bm" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956403" has status "Ready":"False"
	I0920 18:18:58.332510   74753 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-956403" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:58.732836   74753 pod_ready.go:98] node "no-preload-956403" hosting pod "kube-scheduler-no-preload-956403" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956403" has status "Ready":"False"
	I0920 18:18:58.732859   74753 pod_ready.go:82] duration metric: took 400.343274ms for pod "kube-scheduler-no-preload-956403" in "kube-system" namespace to be "Ready" ...
	E0920 18:18:58.732868   74753 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-956403" hosting pod "kube-scheduler-no-preload-956403" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956403" has status "Ready":"False"
	I0920 18:18:58.732874   74753 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:59.132465   74753 pod_ready.go:98] node "no-preload-956403" hosting pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956403" has status "Ready":"False"
	I0920 18:18:59.132489   74753 pod_ready.go:82] duration metric: took 399.606625ms for pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace to be "Ready" ...
	E0920 18:18:59.132503   74753 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-956403" hosting pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956403" has status "Ready":"False"
	I0920 18:18:59.132510   74753 pod_ready.go:39] duration metric: took 1.309409155s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:18:59.132526   74753 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 18:18:59.147163   74753 ops.go:34] apiserver oom_adj: -16
	I0920 18:18:59.147190   74753 kubeadm.go:597] duration metric: took 8.138198351s to restartPrimaryControlPlane
	I0920 18:18:59.147200   74753 kubeadm.go:394] duration metric: took 8.186768244s to StartCluster
	I0920 18:18:59.147214   74753 settings.go:142] acquiring lock: {Name:mk2a13c58cdc0faf8cddca5d6716175d45db9bfd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:18:59.147287   74753 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19672-8777/kubeconfig
	I0920 18:18:59.149036   74753 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/kubeconfig: {Name:mkf32a4c736808e023459b2f0e40188618a38db1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:18:59.149303   74753 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.47 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 18:18:59.149381   74753 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0920 18:18:59.149479   74753 addons.go:69] Setting storage-provisioner=true in profile "no-preload-956403"
	I0920 18:18:59.149503   74753 addons.go:234] Setting addon storage-provisioner=true in "no-preload-956403"
	I0920 18:18:59.149501   74753 addons.go:69] Setting default-storageclass=true in profile "no-preload-956403"
	W0920 18:18:59.149515   74753 addons.go:243] addon storage-provisioner should already be in state true
	I0920 18:18:59.149526   74753 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-956403"
	I0920 18:18:59.149520   74753 addons.go:69] Setting metrics-server=true in profile "no-preload-956403"
	I0920 18:18:59.149546   74753 host.go:66] Checking if "no-preload-956403" exists ...
	I0920 18:18:59.149558   74753 addons.go:234] Setting addon metrics-server=true in "no-preload-956403"
	W0920 18:18:59.149575   74753 addons.go:243] addon metrics-server should already be in state true
	I0920 18:18:59.149588   74753 config.go:182] Loaded profile config "no-preload-956403": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:18:59.149610   74753 host.go:66] Checking if "no-preload-956403" exists ...
	I0920 18:18:59.149847   74753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:18:59.149889   74753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:18:59.149944   74753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:18:59.149954   74753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:18:59.150003   74753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:18:59.150075   74753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:18:59.151276   74753 out.go:177] * Verifying Kubernetes components...
	I0920 18:18:59.152577   74753 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:18:59.178294   74753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42565
	I0920 18:18:59.178917   74753 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:18:59.179414   74753 main.go:141] libmachine: Using API Version  1
	I0920 18:18:59.179435   74753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:18:59.179869   74753 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:18:59.180059   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetState
	I0920 18:18:59.183556   74753 addons.go:234] Setting addon default-storageclass=true in "no-preload-956403"
	W0920 18:18:59.183575   74753 addons.go:243] addon default-storageclass should already be in state true
	I0920 18:18:59.183606   74753 host.go:66] Checking if "no-preload-956403" exists ...
	I0920 18:18:59.183970   74753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:18:59.184011   74753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:18:59.195338   74753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43759
	I0920 18:18:59.195739   74753 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:18:59.196290   74753 main.go:141] libmachine: Using API Version  1
	I0920 18:18:59.196316   74753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:18:59.196742   74753 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:18:59.197211   74753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33531
	I0920 18:18:59.197327   74753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:18:59.197375   74753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:18:59.197552   74753 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:18:59.198055   74753 main.go:141] libmachine: Using API Version  1
	I0920 18:18:59.198080   74753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:18:59.198420   74753 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:18:59.198997   74753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:18:59.199037   74753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:18:59.203164   74753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40055
	I0920 18:18:59.203700   74753 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:18:59.204320   74753 main.go:141] libmachine: Using API Version  1
	I0920 18:18:59.204341   74753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:18:59.204745   74753 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:18:59.205399   74753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:18:59.205440   74753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:18:59.215457   74753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37991
	I0920 18:18:59.215953   74753 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:18:59.216312   74753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35461
	I0920 18:18:59.216521   74753 main.go:141] libmachine: Using API Version  1
	I0920 18:18:59.216723   74753 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:18:59.216826   74753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:18:59.217221   74753 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:18:59.217403   74753 main.go:141] libmachine: Using API Version  1
	I0920 18:18:59.217414   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetState
	I0920 18:18:59.217452   74753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:18:59.217942   74753 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:18:59.218403   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetState
	I0920 18:18:59.219340   74753 main.go:141] libmachine: (no-preload-956403) Calling .DriverName
	I0920 18:18:59.220536   74753 main.go:141] libmachine: (no-preload-956403) Calling .DriverName
	I0920 18:18:59.221934   74753 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0920 18:18:59.222788   74753 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:18:59.223744   74753 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0920 18:18:59.223766   74753 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0920 18:18:59.223788   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHHostname
	I0920 18:18:59.224567   74753 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 18:18:59.224588   74753 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 18:18:59.224607   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHHostname
	I0920 18:18:59.225333   74753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45207
	I0920 18:18:59.225992   74753 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:18:59.226975   74753 main.go:141] libmachine: Using API Version  1
	I0920 18:18:59.226991   74753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:18:59.227472   74753 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:18:59.227788   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetState
	I0920 18:18:59.227818   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:59.228234   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:59.228282   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:59.228441   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHPort
	I0920 18:18:59.228636   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHKeyPath
	I0920 18:18:59.228671   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:59.228821   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHUsername
	I0920 18:18:59.228960   74753 sshutil.go:53] new ssh client: &{IP:192.168.50.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/no-preload-956403/id_rsa Username:docker}
	I0920 18:18:59.229280   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:59.229297   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:59.229478   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHPort
	I0920 18:18:59.229651   74753 main.go:141] libmachine: (no-preload-956403) Calling .DriverName
	I0920 18:18:59.229807   74753 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 18:18:59.229817   74753 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 18:18:59.229843   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHHostname
	I0920 18:18:59.230195   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHKeyPath
	I0920 18:18:59.230729   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHUsername
	I0920 18:18:59.230899   74753 sshutil.go:53] new ssh client: &{IP:192.168.50.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/no-preload-956403/id_rsa Username:docker}
	I0920 18:18:59.232419   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:59.232844   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:59.232877   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:59.233020   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHPort
	I0920 18:18:59.233201   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHKeyPath
	I0920 18:18:59.233311   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHUsername
	I0920 18:18:59.233424   74753 sshutil.go:53] new ssh client: &{IP:192.168.50.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/no-preload-956403/id_rsa Username:docker}
	I0920 18:18:59.373284   74753 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:18:59.391114   74753 node_ready.go:35] waiting up to 6m0s for node "no-preload-956403" to be "Ready" ...
	I0920 18:18:59.475607   74753 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 18:18:59.530301   74753 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0920 18:18:59.530320   74753 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0920 18:18:59.530804   74753 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 18:18:59.607246   74753 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0920 18:18:59.607272   74753 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0920 18:18:59.685234   74753 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 18:18:59.685273   74753 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0920 18:18:59.753902   74753 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 18:19:00.029605   74753 main.go:141] libmachine: Making call to close driver server
	I0920 18:19:00.029630   74753 main.go:141] libmachine: (no-preload-956403) Calling .Close
	I0920 18:19:00.029968   74753 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:19:00.030016   74753 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:19:00.030032   74753 main.go:141] libmachine: Making call to close driver server
	I0920 18:19:00.030039   74753 main.go:141] libmachine: (no-preload-956403) Calling .Close
	I0920 18:19:00.030285   74753 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:19:00.030299   74753 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:19:00.049472   74753 main.go:141] libmachine: Making call to close driver server
	I0920 18:19:00.049500   74753 main.go:141] libmachine: (no-preload-956403) Calling .Close
	I0920 18:19:00.049765   74753 main.go:141] libmachine: (no-preload-956403) DBG | Closing plugin on server side
	I0920 18:19:00.049781   74753 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:19:00.049797   74753 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:19:00.790635   74753 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.259791892s)
	I0920 18:19:00.790694   74753 main.go:141] libmachine: Making call to close driver server
	I0920 18:19:00.790703   74753 main.go:141] libmachine: (no-preload-956403) Calling .Close
	I0920 18:19:00.791004   74753 main.go:141] libmachine: (no-preload-956403) DBG | Closing plugin on server side
	I0920 18:19:00.791007   74753 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:19:00.791075   74753 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:19:00.791100   74753 main.go:141] libmachine: Making call to close driver server
	I0920 18:19:00.791110   74753 main.go:141] libmachine: (no-preload-956403) Calling .Close
	I0920 18:19:00.791339   74753 main.go:141] libmachine: (no-preload-956403) DBG | Closing plugin on server side
	I0920 18:19:00.791347   74753 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:19:00.791358   74753 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:19:00.829250   74753 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.075301587s)
	I0920 18:19:00.829297   74753 main.go:141] libmachine: Making call to close driver server
	I0920 18:19:00.829311   74753 main.go:141] libmachine: (no-preload-956403) Calling .Close
	I0920 18:19:00.829607   74753 main.go:141] libmachine: (no-preload-956403) DBG | Closing plugin on server side
	I0920 18:19:00.829612   74753 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:19:00.829632   74753 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:19:00.829641   74753 main.go:141] libmachine: Making call to close driver server
	I0920 18:19:00.829647   74753 main.go:141] libmachine: (no-preload-956403) Calling .Close
	I0920 18:19:00.829905   74753 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:19:00.829927   74753 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:19:00.829926   74753 main.go:141] libmachine: (no-preload-956403) DBG | Closing plugin on server side
	I0920 18:19:00.829938   74753 addons.go:475] Verifying addon metrics-server=true in "no-preload-956403"
	I0920 18:19:00.831897   74753 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0920 18:18:57.524979   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:59.526500   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:02.024579   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:00.833093   74753 addons.go:510] duration metric: took 1.683722999s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0920 18:19:01.394507   74753 node_ready.go:53] node "no-preload-956403" has status "Ready":"False"
	I0920 18:18:57.750113   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:58.250100   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:58.750133   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:59.249427   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:59.749350   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:00.250267   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:00.749723   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:01.249549   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:01.749698   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:02.250043   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:59.195097   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:01.692391   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:03.692510   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:04.024713   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:06.525591   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:03.395035   74753 node_ready.go:53] node "no-preload-956403" has status "Ready":"False"
	I0920 18:19:05.895477   74753 node_ready.go:53] node "no-preload-956403" has status "Ready":"False"
	I0920 18:19:06.395177   74753 node_ready.go:49] node "no-preload-956403" has status "Ready":"True"
	I0920 18:19:06.395211   74753 node_ready.go:38] duration metric: took 7.0040677s for node "no-preload-956403" to be "Ready" ...
	I0920 18:19:06.395224   74753 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:19:06.400929   74753 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-j2t5h" in "kube-system" namespace to be "Ready" ...
	I0920 18:19:06.406052   74753 pod_ready.go:93] pod "coredns-7c65d6cfc9-j2t5h" in "kube-system" namespace has status "Ready":"True"
	I0920 18:19:06.406076   74753 pod_ready.go:82] duration metric: took 5.118178ms for pod "coredns-7c65d6cfc9-j2t5h" in "kube-system" namespace to be "Ready" ...
	I0920 18:19:06.406088   74753 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-956403" in "kube-system" namespace to be "Ready" ...
	I0920 18:19:07.412930   74753 pod_ready.go:93] pod "etcd-no-preload-956403" in "kube-system" namespace has status "Ready":"True"
	I0920 18:19:07.412946   74753 pod_ready.go:82] duration metric: took 1.006851075s for pod "etcd-no-preload-956403" in "kube-system" namespace to be "Ready" ...
	I0920 18:19:07.412955   74753 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-956403" in "kube-system" namespace to be "Ready" ...
	I0920 18:19:07.420312   74753 pod_ready.go:93] pod "kube-apiserver-no-preload-956403" in "kube-system" namespace has status "Ready":"True"
	I0920 18:19:07.420342   74753 pod_ready.go:82] duration metric: took 7.380815ms for pod "kube-apiserver-no-preload-956403" in "kube-system" namespace to be "Ready" ...
	I0920 18:19:07.420354   74753 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-956403" in "kube-system" namespace to be "Ready" ...
	I0920 18:19:02.750082   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:03.249861   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:03.749400   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:04.250213   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:04.749806   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:05.249569   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:05.750115   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:06.250182   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:06.750188   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:07.250041   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:06.191328   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:08.192222   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:09.024510   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:11.025023   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:08.426149   74753 pod_ready.go:93] pod "kube-controller-manager-no-preload-956403" in "kube-system" namespace has status "Ready":"True"
	I0920 18:19:08.426173   74753 pod_ready.go:82] duration metric: took 1.005811855s for pod "kube-controller-manager-no-preload-956403" in "kube-system" namespace to be "Ready" ...
	I0920 18:19:08.426182   74753 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-sz4bm" in "kube-system" namespace to be "Ready" ...
	I0920 18:19:08.431446   74753 pod_ready.go:93] pod "kube-proxy-sz4bm" in "kube-system" namespace has status "Ready":"True"
	I0920 18:19:08.431474   74753 pod_ready.go:82] duration metric: took 5.284488ms for pod "kube-proxy-sz4bm" in "kube-system" namespace to be "Ready" ...
	I0920 18:19:08.431486   74753 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-956403" in "kube-system" namespace to be "Ready" ...
	I0920 18:19:08.794973   74753 pod_ready.go:93] pod "kube-scheduler-no-preload-956403" in "kube-system" namespace has status "Ready":"True"
	I0920 18:19:08.794997   74753 pod_ready.go:82] duration metric: took 363.504269ms for pod "kube-scheduler-no-preload-956403" in "kube-system" namespace to be "Ready" ...
	I0920 18:19:08.795005   74753 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace to be "Ready" ...
	I0920 18:19:10.802181   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:07.749479   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:08.249641   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:08.749637   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:09.249803   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:09.749534   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:10.249365   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:10.749366   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:11.250045   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:11.750298   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:12.249766   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:10.192631   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:12.692470   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:13.525217   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:16.025516   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:13.300755   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:15.301345   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:12.749447   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:13.249356   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:13.749514   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:14.249942   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:14.749988   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:15.249405   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:15.749620   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:16.249962   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:16.750325   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:17.250198   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:15.191379   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:17.692235   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:18.525176   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:21.023910   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:17.801315   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:20.302369   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:22.302578   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:17.749509   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:18.250076   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:18.749518   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:19.249440   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:19.750156   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:20.249686   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:20.750315   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:21.249755   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:21.749370   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:22.249767   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:20.192027   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:22.192925   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:23.024677   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:25.025216   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:24.801558   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:26.802796   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:22.749289   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:23.249386   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:23.749870   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:24.249424   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:24.750349   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:25.249582   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:19:25.249657   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:19:25.287604   75577 cri.go:89] found id: ""
	I0920 18:19:25.287633   75577 logs.go:276] 0 containers: []
	W0920 18:19:25.287641   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:19:25.287647   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:19:25.287692   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:19:25.322093   75577 cri.go:89] found id: ""
	I0920 18:19:25.322127   75577 logs.go:276] 0 containers: []
	W0920 18:19:25.322137   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:19:25.322144   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:19:25.322194   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:19:25.354812   75577 cri.go:89] found id: ""
	I0920 18:19:25.354840   75577 logs.go:276] 0 containers: []
	W0920 18:19:25.354851   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:19:25.354863   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:19:25.354928   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:19:25.387777   75577 cri.go:89] found id: ""
	I0920 18:19:25.387804   75577 logs.go:276] 0 containers: []
	W0920 18:19:25.387813   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:19:25.387819   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:19:25.387867   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:19:25.420325   75577 cri.go:89] found id: ""
	I0920 18:19:25.420354   75577 logs.go:276] 0 containers: []
	W0920 18:19:25.420364   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:19:25.420372   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:19:25.420437   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:19:25.454824   75577 cri.go:89] found id: ""
	I0920 18:19:25.454853   75577 logs.go:276] 0 containers: []
	W0920 18:19:25.454864   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:19:25.454871   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:19:25.454933   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:19:25.493520   75577 cri.go:89] found id: ""
	I0920 18:19:25.493548   75577 logs.go:276] 0 containers: []
	W0920 18:19:25.493559   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:19:25.493566   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:19:25.493639   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:19:25.528795   75577 cri.go:89] found id: ""
	I0920 18:19:25.528820   75577 logs.go:276] 0 containers: []
	W0920 18:19:25.528828   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:19:25.528836   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:19:25.528847   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:19:25.571411   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:19:25.571442   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:19:25.625648   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:19:25.625693   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:19:25.639891   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:19:25.639920   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:19:25.769263   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:19:25.769289   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:19:25.769303   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:19:24.691757   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:26.695197   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:27.524327   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:29.525192   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:32.024450   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:29.301400   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:31.301988   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:28.346894   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:28.359891   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:19:28.359967   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:19:28.396117   75577 cri.go:89] found id: ""
	I0920 18:19:28.396144   75577 logs.go:276] 0 containers: []
	W0920 18:19:28.396152   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:19:28.396158   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:19:28.396206   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:19:28.447888   75577 cri.go:89] found id: ""
	I0920 18:19:28.447923   75577 logs.go:276] 0 containers: []
	W0920 18:19:28.447933   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:19:28.447941   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:19:28.447998   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:19:28.485018   75577 cri.go:89] found id: ""
	I0920 18:19:28.485046   75577 logs.go:276] 0 containers: []
	W0920 18:19:28.485054   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:19:28.485060   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:19:28.485108   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:19:28.520904   75577 cri.go:89] found id: ""
	I0920 18:19:28.520934   75577 logs.go:276] 0 containers: []
	W0920 18:19:28.520942   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:19:28.520948   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:19:28.520996   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:19:28.561375   75577 cri.go:89] found id: ""
	I0920 18:19:28.561407   75577 logs.go:276] 0 containers: []
	W0920 18:19:28.561415   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:19:28.561421   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:19:28.561467   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:19:28.597643   75577 cri.go:89] found id: ""
	I0920 18:19:28.597671   75577 logs.go:276] 0 containers: []
	W0920 18:19:28.597679   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:19:28.597685   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:19:28.597747   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:19:28.632885   75577 cri.go:89] found id: ""
	I0920 18:19:28.632912   75577 logs.go:276] 0 containers: []
	W0920 18:19:28.632921   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:19:28.632926   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:19:28.632973   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:19:28.670639   75577 cri.go:89] found id: ""
	I0920 18:19:28.670665   75577 logs.go:276] 0 containers: []
	W0920 18:19:28.670675   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:19:28.670699   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:19:28.670714   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:19:28.749623   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:19:28.749659   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:19:28.790808   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:19:28.790831   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:19:28.842027   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:19:28.842070   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:19:28.855927   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:19:28.855954   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:19:28.935807   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:19:31.436321   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:31.450159   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:19:31.450221   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:19:31.485444   75577 cri.go:89] found id: ""
	I0920 18:19:31.485483   75577 logs.go:276] 0 containers: []
	W0920 18:19:31.485494   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:19:31.485502   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:19:31.485562   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:19:31.520888   75577 cri.go:89] found id: ""
	I0920 18:19:31.520918   75577 logs.go:276] 0 containers: []
	W0920 18:19:31.520927   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:19:31.520941   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:19:31.521007   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:19:31.555001   75577 cri.go:89] found id: ""
	I0920 18:19:31.555029   75577 logs.go:276] 0 containers: []
	W0920 18:19:31.555040   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:19:31.555047   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:19:31.555111   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:19:31.588766   75577 cri.go:89] found id: ""
	I0920 18:19:31.588794   75577 logs.go:276] 0 containers: []
	W0920 18:19:31.588802   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:19:31.588808   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:19:31.588872   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:19:31.622006   75577 cri.go:89] found id: ""
	I0920 18:19:31.622037   75577 logs.go:276] 0 containers: []
	W0920 18:19:31.622048   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:19:31.622056   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:19:31.622110   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:19:31.659475   75577 cri.go:89] found id: ""
	I0920 18:19:31.659508   75577 logs.go:276] 0 containers: []
	W0920 18:19:31.659519   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:19:31.659527   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:19:31.659589   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:19:31.695380   75577 cri.go:89] found id: ""
	I0920 18:19:31.695415   75577 logs.go:276] 0 containers: []
	W0920 18:19:31.695426   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:19:31.695436   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:19:31.695521   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:19:31.729507   75577 cri.go:89] found id: ""
	I0920 18:19:31.729540   75577 logs.go:276] 0 containers: []
	W0920 18:19:31.729550   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:19:31.729561   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:19:31.729574   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:19:31.781857   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:19:31.781908   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:19:31.795325   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:19:31.795356   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:19:31.868684   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:19:31.868708   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:19:31.868722   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:19:31.945334   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:19:31.945371   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:19:29.191780   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:31.192712   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:33.693996   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:34.024591   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:36.024999   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:33.801443   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:36.300715   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:34.485723   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:34.499039   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:19:34.499106   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:19:34.534664   75577 cri.go:89] found id: ""
	I0920 18:19:34.534697   75577 logs.go:276] 0 containers: []
	W0920 18:19:34.534709   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:19:34.534717   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:19:34.534777   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:19:34.567515   75577 cri.go:89] found id: ""
	I0920 18:19:34.567545   75577 logs.go:276] 0 containers: []
	W0920 18:19:34.567556   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:19:34.567564   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:19:34.567624   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:19:34.601653   75577 cri.go:89] found id: ""
	I0920 18:19:34.601693   75577 logs.go:276] 0 containers: []
	W0920 18:19:34.601704   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:19:34.601712   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:19:34.601775   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:19:34.635238   75577 cri.go:89] found id: ""
	I0920 18:19:34.635271   75577 logs.go:276] 0 containers: []
	W0920 18:19:34.635282   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:19:34.635291   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:19:34.635361   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:19:34.669665   75577 cri.go:89] found id: ""
	I0920 18:19:34.669689   75577 logs.go:276] 0 containers: []
	W0920 18:19:34.669697   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:19:34.669703   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:19:34.669751   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:19:34.705984   75577 cri.go:89] found id: ""
	I0920 18:19:34.706012   75577 logs.go:276] 0 containers: []
	W0920 18:19:34.706022   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:19:34.706032   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:19:34.706110   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:19:34.740052   75577 cri.go:89] found id: ""
	I0920 18:19:34.740079   75577 logs.go:276] 0 containers: []
	W0920 18:19:34.740087   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:19:34.740092   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:19:34.740139   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:19:34.774930   75577 cri.go:89] found id: ""
	I0920 18:19:34.774962   75577 logs.go:276] 0 containers: []
	W0920 18:19:34.774973   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:19:34.774984   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:19:34.775081   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:19:34.791674   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:19:34.791714   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:19:34.894988   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:19:34.895019   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:19:34.895035   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:19:34.973785   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:19:34.973823   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:19:35.010928   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:19:35.010964   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:19:37.561543   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:37.574865   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:19:37.574925   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:19:37.609145   75577 cri.go:89] found id: ""
	I0920 18:19:37.609170   75577 logs.go:276] 0 containers: []
	W0920 18:19:37.609178   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:19:37.609183   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:19:37.609247   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:19:37.645080   75577 cri.go:89] found id: ""
	I0920 18:19:37.645107   75577 logs.go:276] 0 containers: []
	W0920 18:19:37.645115   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:19:37.645121   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:19:37.645168   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:19:37.680060   75577 cri.go:89] found id: ""
	I0920 18:19:37.680094   75577 logs.go:276] 0 containers: []
	W0920 18:19:37.680105   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:19:37.680113   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:19:37.680173   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:19:37.713586   75577 cri.go:89] found id: ""
	I0920 18:19:37.713618   75577 logs.go:276] 0 containers: []
	W0920 18:19:37.713629   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:19:37.713636   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:19:37.713694   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:19:36.192483   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:38.692017   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:38.025974   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:40.524671   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:38.302403   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:40.802748   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:37.750249   75577 cri.go:89] found id: ""
	I0920 18:19:37.750274   75577 logs.go:276] 0 containers: []
	W0920 18:19:37.750282   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:19:37.750289   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:19:37.750351   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:19:37.785613   75577 cri.go:89] found id: ""
	I0920 18:19:37.785642   75577 logs.go:276] 0 containers: []
	W0920 18:19:37.785650   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:19:37.785656   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:19:37.785705   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:19:37.824240   75577 cri.go:89] found id: ""
	I0920 18:19:37.824267   75577 logs.go:276] 0 containers: []
	W0920 18:19:37.824278   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:19:37.824286   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:19:37.824348   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:19:37.858522   75577 cri.go:89] found id: ""
	I0920 18:19:37.858547   75577 logs.go:276] 0 containers: []
	W0920 18:19:37.858556   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:19:37.858564   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:19:37.858575   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:19:37.939852   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:19:37.939891   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:19:37.981029   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:19:37.981062   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:19:38.030606   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:19:38.030633   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:19:38.043914   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:19:38.043953   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:19:38.124846   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:19:40.625196   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:40.640638   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:19:40.640708   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:19:40.687107   75577 cri.go:89] found id: ""
	I0920 18:19:40.687131   75577 logs.go:276] 0 containers: []
	W0920 18:19:40.687140   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:19:40.687148   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:19:40.687206   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:19:40.725727   75577 cri.go:89] found id: ""
	I0920 18:19:40.725858   75577 logs.go:276] 0 containers: []
	W0920 18:19:40.725875   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:19:40.725889   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:19:40.725967   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:19:40.762965   75577 cri.go:89] found id: ""
	I0920 18:19:40.762988   75577 logs.go:276] 0 containers: []
	W0920 18:19:40.762996   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:19:40.763002   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:19:40.763049   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:19:40.797387   75577 cri.go:89] found id: ""
	I0920 18:19:40.797415   75577 logs.go:276] 0 containers: []
	W0920 18:19:40.797427   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:19:40.797439   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:19:40.797498   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:19:40.834482   75577 cri.go:89] found id: ""
	I0920 18:19:40.834513   75577 logs.go:276] 0 containers: []
	W0920 18:19:40.834521   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:19:40.834528   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:19:40.834578   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:19:40.870062   75577 cri.go:89] found id: ""
	I0920 18:19:40.870090   75577 logs.go:276] 0 containers: []
	W0920 18:19:40.870099   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:19:40.870106   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:19:40.870164   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:19:40.906613   75577 cri.go:89] found id: ""
	I0920 18:19:40.906642   75577 logs.go:276] 0 containers: []
	W0920 18:19:40.906653   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:19:40.906662   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:19:40.906721   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:19:40.953033   75577 cri.go:89] found id: ""
	I0920 18:19:40.953056   75577 logs.go:276] 0 containers: []
	W0920 18:19:40.953065   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:19:40.953073   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:19:40.953083   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:19:40.998490   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:19:40.998523   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:19:41.051488   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:19:41.051525   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:19:41.067908   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:19:41.067937   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:19:41.144854   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:19:41.144878   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:19:41.144890   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:19:40.692456   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:43.192532   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:43.024238   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:45.024998   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:47.025427   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:43.302334   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:45.801415   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:43.723613   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:43.736857   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:19:43.736924   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:19:43.772588   75577 cri.go:89] found id: ""
	I0920 18:19:43.772624   75577 logs.go:276] 0 containers: []
	W0920 18:19:43.772635   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:19:43.772643   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:19:43.772712   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:19:43.808580   75577 cri.go:89] found id: ""
	I0920 18:19:43.808611   75577 logs.go:276] 0 containers: []
	W0920 18:19:43.808622   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:19:43.808629   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:19:43.808695   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:19:43.847167   75577 cri.go:89] found id: ""
	I0920 18:19:43.847207   75577 logs.go:276] 0 containers: []
	W0920 18:19:43.847218   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:19:43.847243   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:19:43.847302   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:19:43.883614   75577 cri.go:89] found id: ""
	I0920 18:19:43.883646   75577 logs.go:276] 0 containers: []
	W0920 18:19:43.883658   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:19:43.883667   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:19:43.883738   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:19:43.917602   75577 cri.go:89] found id: ""
	I0920 18:19:43.917627   75577 logs.go:276] 0 containers: []
	W0920 18:19:43.917635   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:19:43.917641   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:19:43.917694   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:19:43.953232   75577 cri.go:89] found id: ""
	I0920 18:19:43.953259   75577 logs.go:276] 0 containers: []
	W0920 18:19:43.953268   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:19:43.953273   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:19:43.953325   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:19:43.991204   75577 cri.go:89] found id: ""
	I0920 18:19:43.991234   75577 logs.go:276] 0 containers: []
	W0920 18:19:43.991246   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:19:43.991253   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:19:43.991334   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:19:44.028117   75577 cri.go:89] found id: ""
	I0920 18:19:44.028140   75577 logs.go:276] 0 containers: []
	W0920 18:19:44.028147   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:19:44.028154   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:19:44.028164   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:19:44.068175   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:19:44.068203   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:19:44.119953   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:19:44.119993   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:19:44.134127   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:19:44.134154   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:19:44.211563   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:19:44.211592   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:19:44.211604   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:19:46.787328   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:46.803429   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:19:46.803516   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:19:46.844813   75577 cri.go:89] found id: ""
	I0920 18:19:46.844839   75577 logs.go:276] 0 containers: []
	W0920 18:19:46.844850   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:19:46.844856   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:19:46.844914   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:19:46.878441   75577 cri.go:89] found id: ""
	I0920 18:19:46.878483   75577 logs.go:276] 0 containers: []
	W0920 18:19:46.878497   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:19:46.878506   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:19:46.878580   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:19:46.913933   75577 cri.go:89] found id: ""
	I0920 18:19:46.913976   75577 logs.go:276] 0 containers: []
	W0920 18:19:46.913986   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:19:46.913993   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:19:46.914066   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:19:46.948577   75577 cri.go:89] found id: ""
	I0920 18:19:46.948609   75577 logs.go:276] 0 containers: []
	W0920 18:19:46.948618   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:19:46.948625   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:19:46.948689   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:19:46.982742   75577 cri.go:89] found id: ""
	I0920 18:19:46.982770   75577 logs.go:276] 0 containers: []
	W0920 18:19:46.982778   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:19:46.982785   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:19:46.982836   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:19:47.017075   75577 cri.go:89] found id: ""
	I0920 18:19:47.017107   75577 logs.go:276] 0 containers: []
	W0920 18:19:47.017120   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:19:47.017128   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:19:47.017190   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:19:47.055494   75577 cri.go:89] found id: ""
	I0920 18:19:47.055520   75577 logs.go:276] 0 containers: []
	W0920 18:19:47.055528   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:19:47.055534   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:19:47.055586   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:19:47.091966   75577 cri.go:89] found id: ""
	I0920 18:19:47.091998   75577 logs.go:276] 0 containers: []
	W0920 18:19:47.092006   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:19:47.092019   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:19:47.092035   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:19:47.142916   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:19:47.142955   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:19:47.158471   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:19:47.158502   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:19:47.241530   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:19:47.241553   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:19:47.241567   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:19:47.316958   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:19:47.316992   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:19:45.192724   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:47.692046   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:49.524915   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:52.026403   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:48.302289   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:50.303276   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:49.854403   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:49.867317   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:19:49.867431   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:19:49.903627   75577 cri.go:89] found id: ""
	I0920 18:19:49.903661   75577 logs.go:276] 0 containers: []
	W0920 18:19:49.903674   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:19:49.903682   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:19:49.903745   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:19:49.936857   75577 cri.go:89] found id: ""
	I0920 18:19:49.936892   75577 logs.go:276] 0 containers: []
	W0920 18:19:49.936902   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:19:49.936909   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:19:49.936975   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:19:49.974403   75577 cri.go:89] found id: ""
	I0920 18:19:49.974433   75577 logs.go:276] 0 containers: []
	W0920 18:19:49.974441   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:19:49.974447   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:19:49.974505   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:19:50.011810   75577 cri.go:89] found id: ""
	I0920 18:19:50.011839   75577 logs.go:276] 0 containers: []
	W0920 18:19:50.011850   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:19:50.011857   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:19:50.011921   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:19:50.050491   75577 cri.go:89] found id: ""
	I0920 18:19:50.050527   75577 logs.go:276] 0 containers: []
	W0920 18:19:50.050538   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:19:50.050546   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:19:50.050610   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:19:50.088270   75577 cri.go:89] found id: ""
	I0920 18:19:50.088299   75577 logs.go:276] 0 containers: []
	W0920 18:19:50.088308   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:19:50.088314   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:19:50.088375   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:19:50.126355   75577 cri.go:89] found id: ""
	I0920 18:19:50.126382   75577 logs.go:276] 0 containers: []
	W0920 18:19:50.126392   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:19:50.126399   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:19:50.126460   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:19:50.160770   75577 cri.go:89] found id: ""
	I0920 18:19:50.160798   75577 logs.go:276] 0 containers: []
	W0920 18:19:50.160808   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:19:50.160819   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:19:50.160834   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:19:50.212866   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:19:50.212914   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:19:50.227269   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:19:50.227295   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:19:50.301363   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:19:50.301392   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:19:50.301406   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:19:50.376293   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:19:50.376330   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:19:50.192002   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:52.192488   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:54.524436   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:56.526032   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:52.801396   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:54.802393   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:57.301066   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:52.916567   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:52.930445   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:19:52.930525   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:19:52.965360   75577 cri.go:89] found id: ""
	I0920 18:19:52.965392   75577 logs.go:276] 0 containers: []
	W0920 18:19:52.965403   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:19:52.965411   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:19:52.965468   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:19:53.000439   75577 cri.go:89] found id: ""
	I0920 18:19:53.000480   75577 logs.go:276] 0 containers: []
	W0920 18:19:53.000495   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:19:53.000503   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:19:53.000577   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:19:53.034598   75577 cri.go:89] found id: ""
	I0920 18:19:53.034630   75577 logs.go:276] 0 containers: []
	W0920 18:19:53.034640   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:19:53.034647   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:19:53.034744   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:19:53.068636   75577 cri.go:89] found id: ""
	I0920 18:19:53.068664   75577 logs.go:276] 0 containers: []
	W0920 18:19:53.068673   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:19:53.068678   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:19:53.068750   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:19:53.104381   75577 cri.go:89] found id: ""
	I0920 18:19:53.104408   75577 logs.go:276] 0 containers: []
	W0920 18:19:53.104416   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:19:53.104421   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:19:53.104474   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:19:53.137865   75577 cri.go:89] found id: ""
	I0920 18:19:53.137897   75577 logs.go:276] 0 containers: []
	W0920 18:19:53.137909   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:19:53.137922   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:19:53.137983   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:19:53.171825   75577 cri.go:89] found id: ""
	I0920 18:19:53.171861   75577 logs.go:276] 0 containers: []
	W0920 18:19:53.171874   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:19:53.171883   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:19:53.171952   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:19:53.210742   75577 cri.go:89] found id: ""
	I0920 18:19:53.210774   75577 logs.go:276] 0 containers: []
	W0920 18:19:53.210784   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:19:53.210796   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:19:53.210811   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:19:53.285597   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:19:53.285637   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:19:53.329821   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:19:53.329871   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:19:53.381362   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:19:53.381415   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:19:53.396044   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:19:53.396074   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:19:53.471582   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:19:55.972369   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:55.987205   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:19:55.987281   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:19:56.031322   75577 cri.go:89] found id: ""
	I0920 18:19:56.031355   75577 logs.go:276] 0 containers: []
	W0920 18:19:56.031363   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:19:56.031368   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:19:56.031420   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:19:56.066442   75577 cri.go:89] found id: ""
	I0920 18:19:56.066487   75577 logs.go:276] 0 containers: []
	W0920 18:19:56.066576   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:19:56.066617   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:19:56.066695   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:19:56.101818   75577 cri.go:89] found id: ""
	I0920 18:19:56.101864   75577 logs.go:276] 0 containers: []
	W0920 18:19:56.101876   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:19:56.101883   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:19:56.101947   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:19:56.139513   75577 cri.go:89] found id: ""
	I0920 18:19:56.139545   75577 logs.go:276] 0 containers: []
	W0920 18:19:56.139557   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:19:56.139565   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:19:56.139618   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:19:56.173638   75577 cri.go:89] found id: ""
	I0920 18:19:56.173669   75577 logs.go:276] 0 containers: []
	W0920 18:19:56.173680   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:19:56.173688   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:19:56.173752   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:19:56.210657   75577 cri.go:89] found id: ""
	I0920 18:19:56.210689   75577 logs.go:276] 0 containers: []
	W0920 18:19:56.210700   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:19:56.210709   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:19:56.210768   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:19:56.246035   75577 cri.go:89] found id: ""
	I0920 18:19:56.246063   75577 logs.go:276] 0 containers: []
	W0920 18:19:56.246071   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:19:56.246077   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:19:56.246123   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:19:56.280766   75577 cri.go:89] found id: ""
	I0920 18:19:56.280796   75577 logs.go:276] 0 containers: []
	W0920 18:19:56.280807   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:19:56.280818   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:19:56.280834   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:19:56.320511   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:19:56.320540   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:19:56.373746   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:19:56.373785   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:19:56.389294   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:19:56.389322   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:19:56.460079   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:19:56.460100   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:19:56.460112   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:19:54.692781   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:57.192706   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:59.025015   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:01.025196   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:59.302190   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:01.801923   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:59.044541   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:59.058395   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:19:59.058464   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:19:59.097442   75577 cri.go:89] found id: ""
	I0920 18:19:59.097482   75577 logs.go:276] 0 containers: []
	W0920 18:19:59.097495   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:19:59.097512   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:19:59.097593   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:19:59.133091   75577 cri.go:89] found id: ""
	I0920 18:19:59.133116   75577 logs.go:276] 0 containers: []
	W0920 18:19:59.133128   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:19:59.133135   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:19:59.133264   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:19:59.166902   75577 cri.go:89] found id: ""
	I0920 18:19:59.166927   75577 logs.go:276] 0 containers: []
	W0920 18:19:59.166938   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:19:59.166945   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:19:59.167008   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:19:59.202551   75577 cri.go:89] found id: ""
	I0920 18:19:59.202573   75577 logs.go:276] 0 containers: []
	W0920 18:19:59.202581   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:19:59.202586   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:19:59.202633   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:19:59.235197   75577 cri.go:89] found id: ""
	I0920 18:19:59.235222   75577 logs.go:276] 0 containers: []
	W0920 18:19:59.235230   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:19:59.235236   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:19:59.235286   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:19:59.269148   75577 cri.go:89] found id: ""
	I0920 18:19:59.269176   75577 logs.go:276] 0 containers: []
	W0920 18:19:59.269187   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:19:59.269196   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:19:59.269262   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:19:59.307670   75577 cri.go:89] found id: ""
	I0920 18:19:59.307692   75577 logs.go:276] 0 containers: []
	W0920 18:19:59.307699   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:19:59.307705   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:19:59.307753   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:19:59.340554   75577 cri.go:89] found id: ""
	I0920 18:19:59.340589   75577 logs.go:276] 0 containers: []
	W0920 18:19:59.340599   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:19:59.340610   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:19:59.340623   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:19:59.382339   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:19:59.382470   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:19:59.432176   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:19:59.432211   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:19:59.445889   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:19:59.445922   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:19:59.516564   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:19:59.516590   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:19:59.516609   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:02.101918   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:02.115064   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:02.115128   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:02.148718   75577 cri.go:89] found id: ""
	I0920 18:20:02.148749   75577 logs.go:276] 0 containers: []
	W0920 18:20:02.148757   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:02.148764   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:02.148822   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:02.182673   75577 cri.go:89] found id: ""
	I0920 18:20:02.182711   75577 logs.go:276] 0 containers: []
	W0920 18:20:02.182722   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:02.182729   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:02.182797   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:02.218734   75577 cri.go:89] found id: ""
	I0920 18:20:02.218771   75577 logs.go:276] 0 containers: []
	W0920 18:20:02.218782   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:02.218791   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:02.218856   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:02.258956   75577 cri.go:89] found id: ""
	I0920 18:20:02.258981   75577 logs.go:276] 0 containers: []
	W0920 18:20:02.258989   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:02.258995   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:02.259066   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:02.294879   75577 cri.go:89] found id: ""
	I0920 18:20:02.294910   75577 logs.go:276] 0 containers: []
	W0920 18:20:02.294919   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:02.294925   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:02.294971   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:02.331010   75577 cri.go:89] found id: ""
	I0920 18:20:02.331039   75577 logs.go:276] 0 containers: []
	W0920 18:20:02.331049   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:02.331057   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:02.331122   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:02.365023   75577 cri.go:89] found id: ""
	I0920 18:20:02.365056   75577 logs.go:276] 0 containers: []
	W0920 18:20:02.365066   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:02.365072   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:02.365122   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:02.400501   75577 cri.go:89] found id: ""
	I0920 18:20:02.400528   75577 logs.go:276] 0 containers: []
	W0920 18:20:02.400537   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:02.400545   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:02.400556   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:02.450597   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:02.450636   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:02.467637   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:02.467671   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:02.540668   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:02.540690   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:02.540706   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:02.629706   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:02.629752   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:19:59.192799   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:01.691537   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:03.692613   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:03.524517   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:05.525418   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:04.300845   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:06.301250   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:05.168119   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:05.182954   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:05.183043   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:05.221938   75577 cri.go:89] found id: ""
	I0920 18:20:05.221971   75577 logs.go:276] 0 containers: []
	W0920 18:20:05.221980   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:05.221990   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:05.222051   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:05.258454   75577 cri.go:89] found id: ""
	I0920 18:20:05.258479   75577 logs.go:276] 0 containers: []
	W0920 18:20:05.258487   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:05.258492   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:05.258540   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:05.293091   75577 cri.go:89] found id: ""
	I0920 18:20:05.293125   75577 logs.go:276] 0 containers: []
	W0920 18:20:05.293138   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:05.293146   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:05.293196   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:05.332992   75577 cri.go:89] found id: ""
	I0920 18:20:05.333025   75577 logs.go:276] 0 containers: []
	W0920 18:20:05.333034   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:05.333040   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:05.333088   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:05.368743   75577 cri.go:89] found id: ""
	I0920 18:20:05.368778   75577 logs.go:276] 0 containers: []
	W0920 18:20:05.368790   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:05.368798   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:05.368859   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:05.404913   75577 cri.go:89] found id: ""
	I0920 18:20:05.404941   75577 logs.go:276] 0 containers: []
	W0920 18:20:05.404948   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:05.404954   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:05.405003   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:05.442111   75577 cri.go:89] found id: ""
	I0920 18:20:05.442143   75577 logs.go:276] 0 containers: []
	W0920 18:20:05.442154   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:05.442163   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:05.442228   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:05.478808   75577 cri.go:89] found id: ""
	I0920 18:20:05.478842   75577 logs.go:276] 0 containers: []
	W0920 18:20:05.478853   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:05.478865   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:05.478879   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:05.531653   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:05.531691   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:05.545181   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:05.545210   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:05.615009   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:05.615041   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:05.615059   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:05.690842   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:05.690871   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:06.193177   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:08.691596   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:08.034009   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:10.524419   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:08.301457   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:10.801788   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:08.230851   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:08.244539   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:08.244609   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:08.281123   75577 cri.go:89] found id: ""
	I0920 18:20:08.281155   75577 logs.go:276] 0 containers: []
	W0920 18:20:08.281167   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:08.281174   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:08.281226   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:08.319704   75577 cri.go:89] found id: ""
	I0920 18:20:08.319740   75577 logs.go:276] 0 containers: []
	W0920 18:20:08.319754   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:08.319763   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:08.319828   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:08.354589   75577 cri.go:89] found id: ""
	I0920 18:20:08.354619   75577 logs.go:276] 0 containers: []
	W0920 18:20:08.354631   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:08.354638   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:08.354703   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:08.391580   75577 cri.go:89] found id: ""
	I0920 18:20:08.391603   75577 logs.go:276] 0 containers: []
	W0920 18:20:08.391612   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:08.391617   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:08.391666   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:08.425596   75577 cri.go:89] found id: ""
	I0920 18:20:08.425622   75577 logs.go:276] 0 containers: []
	W0920 18:20:08.425630   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:08.425638   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:08.425704   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:08.458720   75577 cri.go:89] found id: ""
	I0920 18:20:08.458747   75577 logs.go:276] 0 containers: []
	W0920 18:20:08.458758   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:08.458764   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:08.458812   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:08.496104   75577 cri.go:89] found id: ""
	I0920 18:20:08.496137   75577 logs.go:276] 0 containers: []
	W0920 18:20:08.496148   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:08.496155   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:08.496210   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:08.530961   75577 cri.go:89] found id: ""
	I0920 18:20:08.530989   75577 logs.go:276] 0 containers: []
	W0920 18:20:08.531000   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:08.531010   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:08.531023   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:08.568512   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:08.568541   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:08.619716   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:08.619754   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:08.634358   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:08.634390   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:08.721465   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:08.721488   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:08.721501   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:11.303942   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:11.316686   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:11.316759   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:11.353137   75577 cri.go:89] found id: ""
	I0920 18:20:11.353161   75577 logs.go:276] 0 containers: []
	W0920 18:20:11.353169   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:11.353176   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:11.353229   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:11.388271   75577 cri.go:89] found id: ""
	I0920 18:20:11.388298   75577 logs.go:276] 0 containers: []
	W0920 18:20:11.388315   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:11.388322   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:11.388388   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:11.422673   75577 cri.go:89] found id: ""
	I0920 18:20:11.422700   75577 logs.go:276] 0 containers: []
	W0920 18:20:11.422708   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:11.422714   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:11.422768   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:11.458869   75577 cri.go:89] found id: ""
	I0920 18:20:11.458900   75577 logs.go:276] 0 containers: []
	W0920 18:20:11.458910   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:11.458917   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:11.459068   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:11.494128   75577 cri.go:89] found id: ""
	I0920 18:20:11.494161   75577 logs.go:276] 0 containers: []
	W0920 18:20:11.494172   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:11.494180   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:11.494246   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:11.529111   75577 cri.go:89] found id: ""
	I0920 18:20:11.529135   75577 logs.go:276] 0 containers: []
	W0920 18:20:11.529150   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:11.529157   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:11.529223   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:11.562287   75577 cri.go:89] found id: ""
	I0920 18:20:11.562314   75577 logs.go:276] 0 containers: []
	W0920 18:20:11.562323   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:11.562329   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:11.562381   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:11.600066   75577 cri.go:89] found id: ""
	I0920 18:20:11.600106   75577 logs.go:276] 0 containers: []
	W0920 18:20:11.600117   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:11.600128   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:11.600143   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:11.681628   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:11.681665   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:11.722173   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:11.722205   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:11.773132   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:11.773171   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:11.787183   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:11.787215   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:11.856304   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:10.695174   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:13.191743   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:13.025069   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:15.525020   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:12.803677   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:15.301431   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:14.356881   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:14.371658   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:14.371729   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:14.406983   75577 cri.go:89] found id: ""
	I0920 18:20:14.407009   75577 logs.go:276] 0 containers: []
	W0920 18:20:14.407017   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:14.407022   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:14.407075   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:14.481048   75577 cri.go:89] found id: ""
	I0920 18:20:14.481075   75577 logs.go:276] 0 containers: []
	W0920 18:20:14.481086   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:14.481094   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:14.481156   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:14.513683   75577 cri.go:89] found id: ""
	I0920 18:20:14.513711   75577 logs.go:276] 0 containers: []
	W0920 18:20:14.513719   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:14.513725   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:14.513797   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:14.553330   75577 cri.go:89] found id: ""
	I0920 18:20:14.553363   75577 logs.go:276] 0 containers: []
	W0920 18:20:14.553375   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:14.553381   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:14.553446   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:14.590803   75577 cri.go:89] found id: ""
	I0920 18:20:14.590837   75577 logs.go:276] 0 containers: []
	W0920 18:20:14.590848   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:14.590856   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:14.590927   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:14.625100   75577 cri.go:89] found id: ""
	I0920 18:20:14.625130   75577 logs.go:276] 0 containers: []
	W0920 18:20:14.625141   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:14.625151   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:14.625219   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:14.659313   75577 cri.go:89] found id: ""
	I0920 18:20:14.659342   75577 logs.go:276] 0 containers: []
	W0920 18:20:14.659351   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:14.659357   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:14.659418   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:14.694901   75577 cri.go:89] found id: ""
	I0920 18:20:14.694931   75577 logs.go:276] 0 containers: []
	W0920 18:20:14.694939   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:14.694951   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:14.694966   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:14.708406   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:14.708437   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:14.785174   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:14.785200   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:14.785214   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:14.873622   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:14.873666   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:14.919130   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:14.919166   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:17.468595   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:17.483397   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:17.483473   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:17.522415   75577 cri.go:89] found id: ""
	I0920 18:20:17.522445   75577 logs.go:276] 0 containers: []
	W0920 18:20:17.522455   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:17.522463   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:17.522523   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:17.558957   75577 cri.go:89] found id: ""
	I0920 18:20:17.558991   75577 logs.go:276] 0 containers: []
	W0920 18:20:17.559002   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:17.559010   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:17.559174   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:17.595183   75577 cri.go:89] found id: ""
	I0920 18:20:17.595217   75577 logs.go:276] 0 containers: []
	W0920 18:20:17.595229   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:17.595237   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:17.595302   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:17.631727   75577 cri.go:89] found id: ""
	I0920 18:20:17.631757   75577 logs.go:276] 0 containers: []
	W0920 18:20:17.631768   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:17.631775   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:17.631894   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:17.666374   75577 cri.go:89] found id: ""
	I0920 18:20:17.666409   75577 logs.go:276] 0 containers: []
	W0920 18:20:17.666420   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:17.666427   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:17.666488   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:17.702256   75577 cri.go:89] found id: ""
	I0920 18:20:17.702281   75577 logs.go:276] 0 containers: []
	W0920 18:20:17.702291   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:17.702299   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:17.702370   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:15.191971   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:17.192990   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:17.525552   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:20.025229   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:17.802045   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:20.302538   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:17.742133   75577 cri.go:89] found id: ""
	I0920 18:20:17.742161   75577 logs.go:276] 0 containers: []
	W0920 18:20:17.742172   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:17.742179   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:17.742249   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:17.781269   75577 cri.go:89] found id: ""
	I0920 18:20:17.781300   75577 logs.go:276] 0 containers: []
	W0920 18:20:17.781311   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:17.781321   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:17.781336   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:17.835886   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:17.835923   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:17.851365   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:17.851396   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:17.932682   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:17.932711   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:17.932726   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:18.013680   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:18.013731   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:20.555074   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:20.568007   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:20.568071   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:20.602250   75577 cri.go:89] found id: ""
	I0920 18:20:20.602278   75577 logs.go:276] 0 containers: []
	W0920 18:20:20.602287   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:20.602293   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:20.602353   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:20.636711   75577 cri.go:89] found id: ""
	I0920 18:20:20.636739   75577 logs.go:276] 0 containers: []
	W0920 18:20:20.636748   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:20.636753   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:20.636807   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:20.669529   75577 cri.go:89] found id: ""
	I0920 18:20:20.669570   75577 logs.go:276] 0 containers: []
	W0920 18:20:20.669583   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:20.669593   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:20.669651   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:20.710330   75577 cri.go:89] found id: ""
	I0920 18:20:20.710364   75577 logs.go:276] 0 containers: []
	W0920 18:20:20.710372   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:20.710378   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:20.710427   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:20.747664   75577 cri.go:89] found id: ""
	I0920 18:20:20.747690   75577 logs.go:276] 0 containers: []
	W0920 18:20:20.747699   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:20.747705   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:20.747760   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:20.785680   75577 cri.go:89] found id: ""
	I0920 18:20:20.785717   75577 logs.go:276] 0 containers: []
	W0920 18:20:20.785726   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:20.785732   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:20.785794   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:20.822570   75577 cri.go:89] found id: ""
	I0920 18:20:20.822599   75577 logs.go:276] 0 containers: []
	W0920 18:20:20.822608   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:20.822613   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:20.822659   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:20.856739   75577 cri.go:89] found id: ""
	I0920 18:20:20.856772   75577 logs.go:276] 0 containers: []
	W0920 18:20:20.856786   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:20.856806   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:20.856822   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:20.896606   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:20.896643   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:20.948275   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:20.948313   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:20.962576   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:20.962606   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:21.033511   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:21.033534   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:21.033547   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:19.691723   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:21.694099   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:22.524982   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:24.527065   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:27.026374   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:22.803300   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:25.302416   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:23.615731   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:23.628619   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:23.628698   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:23.664344   75577 cri.go:89] found id: ""
	I0920 18:20:23.664367   75577 logs.go:276] 0 containers: []
	W0920 18:20:23.664375   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:23.664381   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:23.664431   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:23.698791   75577 cri.go:89] found id: ""
	I0920 18:20:23.698823   75577 logs.go:276] 0 containers: []
	W0920 18:20:23.698832   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:23.698839   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:23.698889   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:23.732552   75577 cri.go:89] found id: ""
	I0920 18:20:23.732590   75577 logs.go:276] 0 containers: []
	W0920 18:20:23.732602   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:23.732610   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:23.732676   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:23.769463   75577 cri.go:89] found id: ""
	I0920 18:20:23.769490   75577 logs.go:276] 0 containers: []
	W0920 18:20:23.769501   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:23.769508   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:23.769567   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:23.807329   75577 cri.go:89] found id: ""
	I0920 18:20:23.807361   75577 logs.go:276] 0 containers: []
	W0920 18:20:23.807374   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:23.807382   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:23.807442   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:23.847889   75577 cri.go:89] found id: ""
	I0920 18:20:23.847913   75577 logs.go:276] 0 containers: []
	W0920 18:20:23.847920   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:23.847927   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:23.847985   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:23.883287   75577 cri.go:89] found id: ""
	I0920 18:20:23.883314   75577 logs.go:276] 0 containers: []
	W0920 18:20:23.883322   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:23.883335   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:23.883389   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:23.920006   75577 cri.go:89] found id: ""
	I0920 18:20:23.920034   75577 logs.go:276] 0 containers: []
	W0920 18:20:23.920045   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:23.920057   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:23.920070   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:23.995572   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:23.995608   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:24.035953   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:24.035983   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:24.085803   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:24.085860   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:24.100226   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:24.100255   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:24.173555   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
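Note on the probe pattern above: the repeated cri.go / logs.go entries all come from the same check. The retry loop asks CRI-O for any container, running or exited, whose name matches a control-plane component ("sudo crictl ps -a --quiet --name=<component>"), and an empty ID list is reported as 'No container was found matching ...'. The following standalone Go program is a minimal, hypothetical sketch of that probe, not minikube's own code; it assumes crictl is installed on the node and reachable via sudo, and the component names simply mirror the ones queried in the log.

// Illustrative sketch only: reproduce the container-existence probe seen above.
// Assumes crictl is installed and runnable via sudo on the node.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func containerIDs(name string) ([]string, error) {
	// --quiet prints only container IDs; an empty result means no container
	// (running or exited) matches the given name filter.
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Printf("probe %q failed: %v\n", c, err)
			continue
		}
		if len(ids) == 0 {
			fmt.Printf("no container found matching %q\n", c)
		} else {
			fmt.Printf("%q: %d container(s)\n", c, len(ids))
		}
	}
}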
	I0920 18:20:26.673726   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:26.687372   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:26.687449   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:26.726544   75577 cri.go:89] found id: ""
	I0920 18:20:26.726574   75577 logs.go:276] 0 containers: []
	W0920 18:20:26.726583   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:26.726590   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:26.726651   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:26.761542   75577 cri.go:89] found id: ""
	I0920 18:20:26.761571   75577 logs.go:276] 0 containers: []
	W0920 18:20:26.761580   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:26.761587   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:26.761639   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:26.803869   75577 cri.go:89] found id: ""
	I0920 18:20:26.803896   75577 logs.go:276] 0 containers: []
	W0920 18:20:26.803904   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:26.803910   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:26.803970   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:26.854835   75577 cri.go:89] found id: ""
	I0920 18:20:26.854876   75577 logs.go:276] 0 containers: []
	W0920 18:20:26.854888   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:26.854896   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:26.854958   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:26.896113   75577 cri.go:89] found id: ""
	I0920 18:20:26.896151   75577 logs.go:276] 0 containers: []
	W0920 18:20:26.896162   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:26.896170   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:26.896255   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:26.942828   75577 cri.go:89] found id: ""
	I0920 18:20:26.942853   75577 logs.go:276] 0 containers: []
	W0920 18:20:26.942863   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:26.942930   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:26.943007   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:26.977246   75577 cri.go:89] found id: ""
	I0920 18:20:26.977278   75577 logs.go:276] 0 containers: []
	W0920 18:20:26.977289   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:26.977296   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:26.977367   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:27.012408   75577 cri.go:89] found id: ""
	I0920 18:20:27.012440   75577 logs.go:276] 0 containers: []
	W0920 18:20:27.012451   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:27.012462   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:27.012477   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:27.063970   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:27.064017   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:27.078082   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:27.078119   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:27.148050   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:27.148079   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:27.148094   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:27.230836   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:27.230880   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:24.192842   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:26.196350   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:28.692682   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:29.525508   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:32.025275   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:27.802268   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:30.301519   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
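The interleaved pod_ready.go lines belong to other test processes (75086, 75264, 74753) that are polling whether their metrics-server pod has reached the Ready condition. Below is a minimal client-go sketch of such a check; it is illustrative only and is not minikube's pod_ready implementation. The kubeconfig path and pod name are taken from the log for concreteness.

// Hypothetical sketch: report whether a pod has the Ready condition, similar in
// spirit to the pod_ready.go polling above. Not minikube's implementation.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(kubeconfig, namespace, name string) (bool, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return false, err
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return false, err
	}
	pod, err := client.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	ready, err := podReady("/var/lib/minikube/kubeconfig", "kube-system", "metrics-server-6867b74b74-dwnt6")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("Ready:", ready)
}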
	I0920 18:20:29.771845   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:29.785479   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:29.785553   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:29.823101   75577 cri.go:89] found id: ""
	I0920 18:20:29.823132   75577 logs.go:276] 0 containers: []
	W0920 18:20:29.823143   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:29.823150   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:29.823228   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:29.858679   75577 cri.go:89] found id: ""
	I0920 18:20:29.858713   75577 logs.go:276] 0 containers: []
	W0920 18:20:29.858724   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:29.858732   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:29.858796   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:29.901044   75577 cri.go:89] found id: ""
	I0920 18:20:29.901073   75577 logs.go:276] 0 containers: []
	W0920 18:20:29.901083   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:29.901091   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:29.901160   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:29.937743   75577 cri.go:89] found id: ""
	I0920 18:20:29.937775   75577 logs.go:276] 0 containers: []
	W0920 18:20:29.937792   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:29.937800   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:29.937884   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:29.972813   75577 cri.go:89] found id: ""
	I0920 18:20:29.972838   75577 logs.go:276] 0 containers: []
	W0920 18:20:29.972846   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:29.972852   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:29.972901   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:30.008024   75577 cri.go:89] found id: ""
	I0920 18:20:30.008053   75577 logs.go:276] 0 containers: []
	W0920 18:20:30.008062   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:30.008068   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:30.008117   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:30.048544   75577 cri.go:89] found id: ""
	I0920 18:20:30.048577   75577 logs.go:276] 0 containers: []
	W0920 18:20:30.048585   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:30.048591   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:30.048643   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:30.084501   75577 cri.go:89] found id: ""
	I0920 18:20:30.084525   75577 logs.go:276] 0 containers: []
	W0920 18:20:30.084534   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:30.084545   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:30.084559   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:30.136234   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:30.136279   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:30.149557   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:30.149591   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:30.229765   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:30.229792   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:30.229806   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:30.307786   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:30.307825   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
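Every 'describe nodes' attempt in these cycles fails the same way: the bundled kubectl on the node cannot reach an API server on localhost:8443, which is consistent with the crictl probes never finding a kube-apiserver container. A quick, illustrative way to confirm that nothing is listening on that port is a plain TCP dial; the sketch below is a standalone check, not part of the test suite.

// Illustrative sketch: confirm whether anything accepts connections on the
// apiserver port that kubectl is trying to reach (localhost:8443 in the log).
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		// A "connection refused" here matches the kubectl error in the log:
		// no process is accepting connections on the apiserver port.
		fmt.Println("apiserver port not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8443")
}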
	I0920 18:20:31.191475   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:33.192864   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:34.025515   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:36.027276   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:32.801952   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:34.802033   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:37.301059   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:32.845220   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:32.859734   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:32.859810   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:32.896390   75577 cri.go:89] found id: ""
	I0920 18:20:32.896434   75577 logs.go:276] 0 containers: []
	W0920 18:20:32.896446   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:32.896464   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:32.896538   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:32.934548   75577 cri.go:89] found id: ""
	I0920 18:20:32.934572   75577 logs.go:276] 0 containers: []
	W0920 18:20:32.934580   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:32.934587   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:32.934640   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:32.982987   75577 cri.go:89] found id: ""
	I0920 18:20:32.983013   75577 logs.go:276] 0 containers: []
	W0920 18:20:32.983020   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:32.983026   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:32.983079   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:33.019059   75577 cri.go:89] found id: ""
	I0920 18:20:33.019085   75577 logs.go:276] 0 containers: []
	W0920 18:20:33.019093   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:33.019099   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:33.019148   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:33.057704   75577 cri.go:89] found id: ""
	I0920 18:20:33.057738   75577 logs.go:276] 0 containers: []
	W0920 18:20:33.057750   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:33.057759   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:33.057821   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:33.093702   75577 cri.go:89] found id: ""
	I0920 18:20:33.093732   75577 logs.go:276] 0 containers: []
	W0920 18:20:33.093743   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:33.093751   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:33.093809   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:33.128476   75577 cri.go:89] found id: ""
	I0920 18:20:33.128504   75577 logs.go:276] 0 containers: []
	W0920 18:20:33.128516   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:33.128523   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:33.128591   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:33.165513   75577 cri.go:89] found id: ""
	I0920 18:20:33.165542   75577 logs.go:276] 0 containers: []
	W0920 18:20:33.165550   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:33.165559   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:33.165569   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:33.219613   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:33.219650   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:33.232291   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:33.232317   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:33.304172   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:33.304197   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:33.304212   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:33.380057   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:33.380094   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:35.920842   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:35.935500   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:35.935593   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:35.974445   75577 cri.go:89] found id: ""
	I0920 18:20:35.974471   75577 logs.go:276] 0 containers: []
	W0920 18:20:35.974479   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:35.974485   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:35.974548   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:36.009487   75577 cri.go:89] found id: ""
	I0920 18:20:36.009518   75577 logs.go:276] 0 containers: []
	W0920 18:20:36.009530   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:36.009538   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:36.009706   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:36.049428   75577 cri.go:89] found id: ""
	I0920 18:20:36.049451   75577 logs.go:276] 0 containers: []
	W0920 18:20:36.049463   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:36.049469   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:36.049518   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:36.083973   75577 cri.go:89] found id: ""
	I0920 18:20:36.084011   75577 logs.go:276] 0 containers: []
	W0920 18:20:36.084022   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:36.084030   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:36.084119   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:36.117952   75577 cri.go:89] found id: ""
	I0920 18:20:36.117985   75577 logs.go:276] 0 containers: []
	W0920 18:20:36.117993   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:36.117998   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:36.118052   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:36.151222   75577 cri.go:89] found id: ""
	I0920 18:20:36.151256   75577 logs.go:276] 0 containers: []
	W0920 18:20:36.151265   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:36.151271   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:36.151319   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:36.184665   75577 cri.go:89] found id: ""
	I0920 18:20:36.184697   75577 logs.go:276] 0 containers: []
	W0920 18:20:36.184708   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:36.184716   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:36.184771   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:36.222974   75577 cri.go:89] found id: ""
	I0920 18:20:36.223003   75577 logs.go:276] 0 containers: []
	W0920 18:20:36.223012   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:36.223021   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:36.223033   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:36.300403   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:36.300424   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:36.300437   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:36.381220   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:36.381260   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:36.419010   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:36.419042   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:36.471758   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:36.471799   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
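Besides the container probes, each cycle collects the same log sources: kubelet and CRI-O via journalctl, kernel warnings via dmesg, 'describe nodes' via the bundled kubectl, and a container listing via crictl (falling back to docker ps). The sketch below gathers a few of those sources from Go by shelling out through bash -c; the commands are copied from the log, and passwordless sudo on the node is assumed. It is a hedged illustration, not the test framework's collector.

// Hypothetical sketch: gather the kubelet/CRI-O/dmesg snippets that the
// log-collection cycle above requests, using the same shell commands.
package main

import (
	"fmt"
	"os/exec"
)

func gather(label, command string) {
	out, err := exec.Command("/bin/bash", "-c", command).CombinedOutput()
	if err != nil {
		fmt.Printf("== %s (error: %v) ==\n%s\n", label, err, out)
		return
	}
	fmt.Printf("== %s ==\n%s\n", label, out)
}

func main() {
	gather("kubelet", `sudo journalctl -u kubelet -n 400`)
	gather("CRI-O", `sudo journalctl -u crio -n 400`)
	gather("dmesg", `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`)
}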
	I0920 18:20:35.192949   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:37.693384   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:38.526219   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:41.024309   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:39.301511   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:41.801558   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:38.985359   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:38.998817   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:38.998898   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:39.036705   75577 cri.go:89] found id: ""
	I0920 18:20:39.036737   75577 logs.go:276] 0 containers: []
	W0920 18:20:39.036747   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:39.036755   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:39.036815   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:39.074254   75577 cri.go:89] found id: ""
	I0920 18:20:39.074285   75577 logs.go:276] 0 containers: []
	W0920 18:20:39.074294   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:39.074300   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:39.074367   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:39.108385   75577 cri.go:89] found id: ""
	I0920 18:20:39.108420   75577 logs.go:276] 0 containers: []
	W0920 18:20:39.108493   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:39.108506   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:39.108561   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:39.142283   75577 cri.go:89] found id: ""
	I0920 18:20:39.142313   75577 logs.go:276] 0 containers: []
	W0920 18:20:39.142325   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:39.142332   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:39.142396   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:39.176881   75577 cri.go:89] found id: ""
	I0920 18:20:39.176918   75577 logs.go:276] 0 containers: []
	W0920 18:20:39.176933   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:39.176941   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:39.177002   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:39.217734   75577 cri.go:89] found id: ""
	I0920 18:20:39.217759   75577 logs.go:276] 0 containers: []
	W0920 18:20:39.217767   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:39.217773   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:39.217852   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:39.250823   75577 cri.go:89] found id: ""
	I0920 18:20:39.250852   75577 logs.go:276] 0 containers: []
	W0920 18:20:39.250860   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:39.250868   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:39.250936   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:39.287494   75577 cri.go:89] found id: ""
	I0920 18:20:39.287519   75577 logs.go:276] 0 containers: []
	W0920 18:20:39.287528   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:39.287540   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:39.287552   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:39.343091   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:39.343126   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:39.359028   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:39.359063   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:39.425293   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:39.425321   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:39.425336   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:39.503303   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:39.503350   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:42.044784   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:42.057764   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:42.057876   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:42.091798   75577 cri.go:89] found id: ""
	I0920 18:20:42.091825   75577 logs.go:276] 0 containers: []
	W0920 18:20:42.091833   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:42.091839   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:42.091888   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:42.125325   75577 cri.go:89] found id: ""
	I0920 18:20:42.125352   75577 logs.go:276] 0 containers: []
	W0920 18:20:42.125362   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:42.125370   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:42.125423   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:42.158641   75577 cri.go:89] found id: ""
	I0920 18:20:42.158680   75577 logs.go:276] 0 containers: []
	W0920 18:20:42.158692   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:42.158701   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:42.158771   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:42.194065   75577 cri.go:89] found id: ""
	I0920 18:20:42.194090   75577 logs.go:276] 0 containers: []
	W0920 18:20:42.194101   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:42.194109   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:42.194174   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:42.227237   75577 cri.go:89] found id: ""
	I0920 18:20:42.227262   75577 logs.go:276] 0 containers: []
	W0920 18:20:42.227270   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:42.227275   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:42.227324   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:42.260205   75577 cri.go:89] found id: ""
	I0920 18:20:42.260240   75577 logs.go:276] 0 containers: []
	W0920 18:20:42.260251   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:42.260258   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:42.260325   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:42.300124   75577 cri.go:89] found id: ""
	I0920 18:20:42.300159   75577 logs.go:276] 0 containers: []
	W0920 18:20:42.300169   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:42.300175   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:42.300273   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:42.335638   75577 cri.go:89] found id: ""
	I0920 18:20:42.335674   75577 logs.go:276] 0 containers: []
	W0920 18:20:42.335685   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:42.335695   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:42.335710   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:42.386176   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:42.386214   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:42.400113   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:42.400147   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:42.479877   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:42.479900   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:42.479912   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:42.554654   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:42.554701   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:40.191251   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:42.192910   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:43.026185   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:45.525362   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:43.801927   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:45.802309   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:45.093962   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:45.107747   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:45.107811   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:45.147091   75577 cri.go:89] found id: ""
	I0920 18:20:45.147120   75577 logs.go:276] 0 containers: []
	W0920 18:20:45.147132   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:45.147140   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:45.147205   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:45.185332   75577 cri.go:89] found id: ""
	I0920 18:20:45.185396   75577 logs.go:276] 0 containers: []
	W0920 18:20:45.185431   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:45.185446   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:45.185523   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:45.224496   75577 cri.go:89] found id: ""
	I0920 18:20:45.224525   75577 logs.go:276] 0 containers: []
	W0920 18:20:45.224535   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:45.224542   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:45.224612   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:45.260686   75577 cri.go:89] found id: ""
	I0920 18:20:45.260718   75577 logs.go:276] 0 containers: []
	W0920 18:20:45.260729   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:45.260737   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:45.260801   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:45.301231   75577 cri.go:89] found id: ""
	I0920 18:20:45.301259   75577 logs.go:276] 0 containers: []
	W0920 18:20:45.301269   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:45.301277   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:45.301343   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:45.346441   75577 cri.go:89] found id: ""
	I0920 18:20:45.346468   75577 logs.go:276] 0 containers: []
	W0920 18:20:45.346476   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:45.346482   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:45.346537   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:45.385042   75577 cri.go:89] found id: ""
	I0920 18:20:45.385071   75577 logs.go:276] 0 containers: []
	W0920 18:20:45.385082   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:45.385090   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:45.385149   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:45.422208   75577 cri.go:89] found id: ""
	I0920 18:20:45.422238   75577 logs.go:276] 0 containers: []
	W0920 18:20:45.422249   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:45.422260   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:45.422275   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:45.472692   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:45.472742   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:45.487981   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:45.488011   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:45.563012   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:45.563035   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:45.563051   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:45.640750   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:45.640786   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:44.691791   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:46.692359   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:48.693165   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:48.024959   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:50.025944   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:48.302699   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:50.801740   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:48.181093   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:48.194433   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:48.194516   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:48.234349   75577 cri.go:89] found id: ""
	I0920 18:20:48.234383   75577 logs.go:276] 0 containers: []
	W0920 18:20:48.234394   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:48.234403   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:48.234467   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:48.269424   75577 cri.go:89] found id: ""
	I0920 18:20:48.269450   75577 logs.go:276] 0 containers: []
	W0920 18:20:48.269457   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:48.269462   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:48.269514   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:48.303968   75577 cri.go:89] found id: ""
	I0920 18:20:48.303990   75577 logs.go:276] 0 containers: []
	W0920 18:20:48.303997   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:48.304002   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:48.304061   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:48.339149   75577 cri.go:89] found id: ""
	I0920 18:20:48.339180   75577 logs.go:276] 0 containers: []
	W0920 18:20:48.339191   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:48.339198   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:48.339259   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:48.374529   75577 cri.go:89] found id: ""
	I0920 18:20:48.374559   75577 logs.go:276] 0 containers: []
	W0920 18:20:48.374571   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:48.374578   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:48.374644   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:48.409170   75577 cri.go:89] found id: ""
	I0920 18:20:48.409203   75577 logs.go:276] 0 containers: []
	W0920 18:20:48.409211   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:48.409217   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:48.409292   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:48.443960   75577 cri.go:89] found id: ""
	I0920 18:20:48.443990   75577 logs.go:276] 0 containers: []
	W0920 18:20:48.444009   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:48.444017   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:48.444074   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:48.480934   75577 cri.go:89] found id: ""
	I0920 18:20:48.480965   75577 logs.go:276] 0 containers: []
	W0920 18:20:48.480978   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:48.480990   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:48.481006   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:48.533261   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:48.533295   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:48.546460   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:48.546488   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:48.615183   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:48.615206   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:48.615219   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:48.695299   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:48.695337   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:51.240343   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:51.255327   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:51.255411   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:51.294611   75577 cri.go:89] found id: ""
	I0920 18:20:51.294642   75577 logs.go:276] 0 containers: []
	W0920 18:20:51.294650   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:51.294656   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:51.294710   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:51.328569   75577 cri.go:89] found id: ""
	I0920 18:20:51.328598   75577 logs.go:276] 0 containers: []
	W0920 18:20:51.328613   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:51.328621   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:51.328675   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:51.364255   75577 cri.go:89] found id: ""
	I0920 18:20:51.364283   75577 logs.go:276] 0 containers: []
	W0920 18:20:51.364291   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:51.364297   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:51.364347   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:51.406178   75577 cri.go:89] found id: ""
	I0920 18:20:51.406204   75577 logs.go:276] 0 containers: []
	W0920 18:20:51.406215   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:51.406223   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:51.406284   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:51.449491   75577 cri.go:89] found id: ""
	I0920 18:20:51.449519   75577 logs.go:276] 0 containers: []
	W0920 18:20:51.449529   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:51.449536   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:51.449600   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:51.483243   75577 cri.go:89] found id: ""
	I0920 18:20:51.483269   75577 logs.go:276] 0 containers: []
	W0920 18:20:51.483278   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:51.483284   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:51.483334   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:51.517280   75577 cri.go:89] found id: ""
	I0920 18:20:51.517304   75577 logs.go:276] 0 containers: []
	W0920 18:20:51.517311   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:51.517316   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:51.517378   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:51.553517   75577 cri.go:89] found id: ""
	I0920 18:20:51.553545   75577 logs.go:276] 0 containers: []
	W0920 18:20:51.553556   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:51.553565   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:51.553576   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:51.607330   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:51.607369   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:51.620628   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:51.620662   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:51.687586   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:51.687618   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:51.687638   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:51.764882   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:51.764924   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:51.191732   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:53.193078   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:52.524529   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:54.526138   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:57.025365   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:52.802328   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:54.804955   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:57.301987   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:54.307276   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:54.321777   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:54.321860   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:54.355565   75577 cri.go:89] found id: ""
	I0920 18:20:54.355598   75577 logs.go:276] 0 containers: []
	W0920 18:20:54.355609   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:54.355618   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:54.355680   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:54.391184   75577 cri.go:89] found id: ""
	I0920 18:20:54.391216   75577 logs.go:276] 0 containers: []
	W0920 18:20:54.391227   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:54.391236   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:54.391301   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:54.425680   75577 cri.go:89] found id: ""
	I0920 18:20:54.425709   75577 logs.go:276] 0 containers: []
	W0920 18:20:54.425718   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:54.425725   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:54.425775   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:54.461514   75577 cri.go:89] found id: ""
	I0920 18:20:54.461541   75577 logs.go:276] 0 containers: []
	W0920 18:20:54.461552   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:54.461559   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:54.461625   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:54.495290   75577 cri.go:89] found id: ""
	I0920 18:20:54.495319   75577 logs.go:276] 0 containers: []
	W0920 18:20:54.495327   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:54.495333   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:54.495384   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:54.530014   75577 cri.go:89] found id: ""
	I0920 18:20:54.530038   75577 logs.go:276] 0 containers: []
	W0920 18:20:54.530046   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:54.530052   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:54.530103   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:54.562559   75577 cri.go:89] found id: ""
	I0920 18:20:54.562597   75577 logs.go:276] 0 containers: []
	W0920 18:20:54.562611   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:54.562621   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:54.562694   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:54.599894   75577 cri.go:89] found id: ""
	I0920 18:20:54.599920   75577 logs.go:276] 0 containers: []
	W0920 18:20:54.599928   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:54.599946   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:54.599982   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:54.636853   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:54.636880   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:54.687932   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:54.687978   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:54.701642   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:54.701673   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:54.769649   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:54.769678   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:54.769695   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:57.356015   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:57.368860   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:57.368923   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:57.409339   75577 cri.go:89] found id: ""
	I0920 18:20:57.409365   75577 logs.go:276] 0 containers: []
	W0920 18:20:57.409375   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:57.409382   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:57.409444   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:57.443055   75577 cri.go:89] found id: ""
	I0920 18:20:57.443085   75577 logs.go:276] 0 containers: []
	W0920 18:20:57.443095   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:57.443102   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:57.443158   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:57.481816   75577 cri.go:89] found id: ""
	I0920 18:20:57.481859   75577 logs.go:276] 0 containers: []
	W0920 18:20:57.481871   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:57.481879   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:57.481942   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:57.517327   75577 cri.go:89] found id: ""
	I0920 18:20:57.517361   75577 logs.go:276] 0 containers: []
	W0920 18:20:57.517372   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:57.517379   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:57.517442   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:57.555121   75577 cri.go:89] found id: ""
	I0920 18:20:57.555151   75577 logs.go:276] 0 containers: []
	W0920 18:20:57.555159   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:57.555164   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:57.555222   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:57.592632   75577 cri.go:89] found id: ""
	I0920 18:20:57.592666   75577 logs.go:276] 0 containers: []
	W0920 18:20:57.592679   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:57.592685   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:57.592734   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:57.627519   75577 cri.go:89] found id: ""
	I0920 18:20:57.627556   75577 logs.go:276] 0 containers: []
	W0920 18:20:57.627567   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:57.627574   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:57.627636   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:57.663817   75577 cri.go:89] found id: ""
	I0920 18:20:57.663844   75577 logs.go:276] 0 containers: []
	W0920 18:20:57.663853   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:57.663862   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:57.663877   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:57.704896   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:57.704924   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:55.692746   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:58.191973   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:59.524732   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:01.525346   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:59.801537   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:01.802088   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:57.757933   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:57.757972   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:57.772646   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:57.772673   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:57.850634   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:57.850665   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:57.850681   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:00.431504   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:00.445175   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:00.445241   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:00.482050   75577 cri.go:89] found id: ""
	I0920 18:21:00.482076   75577 logs.go:276] 0 containers: []
	W0920 18:21:00.482088   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:00.482095   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:00.482150   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:00.515793   75577 cri.go:89] found id: ""
	I0920 18:21:00.515822   75577 logs.go:276] 0 containers: []
	W0920 18:21:00.515833   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:00.515841   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:00.515903   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:00.550250   75577 cri.go:89] found id: ""
	I0920 18:21:00.550278   75577 logs.go:276] 0 containers: []
	W0920 18:21:00.550288   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:00.550296   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:00.550374   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:00.585977   75577 cri.go:89] found id: ""
	I0920 18:21:00.586011   75577 logs.go:276] 0 containers: []
	W0920 18:21:00.586034   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:00.586051   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:00.586118   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:00.621828   75577 cri.go:89] found id: ""
	I0920 18:21:00.621869   75577 logs.go:276] 0 containers: []
	W0920 18:21:00.621879   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:00.621886   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:00.621937   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:00.655492   75577 cri.go:89] found id: ""
	I0920 18:21:00.655528   75577 logs.go:276] 0 containers: []
	W0920 18:21:00.655540   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:00.655600   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:00.655673   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:00.709445   75577 cri.go:89] found id: ""
	I0920 18:21:00.709476   75577 logs.go:276] 0 containers: []
	W0920 18:21:00.709488   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:00.709496   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:00.709566   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:00.744098   75577 cri.go:89] found id: ""
	I0920 18:21:00.744123   75577 logs.go:276] 0 containers: []
	W0920 18:21:00.744134   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:00.744144   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:00.744164   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:00.792576   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:00.792612   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:00.807766   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:00.807794   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:00.883626   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:00.883649   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:00.883663   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:00.966233   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:00.966269   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:00.692181   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:03.192227   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:04.024582   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:06.029376   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:04.301078   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:06.301727   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:03.505502   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:03.520407   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:03.520534   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:03.557891   75577 cri.go:89] found id: ""
	I0920 18:21:03.557926   75577 logs.go:276] 0 containers: []
	W0920 18:21:03.557936   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:03.557944   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:03.558006   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:03.593869   75577 cri.go:89] found id: ""
	I0920 18:21:03.593898   75577 logs.go:276] 0 containers: []
	W0920 18:21:03.593908   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:03.593916   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:03.593982   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:03.629270   75577 cri.go:89] found id: ""
	I0920 18:21:03.629302   75577 logs.go:276] 0 containers: []
	W0920 18:21:03.629311   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:03.629317   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:03.629366   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:03.664658   75577 cri.go:89] found id: ""
	I0920 18:21:03.664688   75577 logs.go:276] 0 containers: []
	W0920 18:21:03.664699   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:03.664706   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:03.664769   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:03.700846   75577 cri.go:89] found id: ""
	I0920 18:21:03.700868   75577 logs.go:276] 0 containers: []
	W0920 18:21:03.700875   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:03.700882   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:03.700941   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:03.736311   75577 cri.go:89] found id: ""
	I0920 18:21:03.736345   75577 logs.go:276] 0 containers: []
	W0920 18:21:03.736355   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:03.736363   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:03.736421   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:03.770760   75577 cri.go:89] found id: ""
	I0920 18:21:03.770788   75577 logs.go:276] 0 containers: []
	W0920 18:21:03.770800   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:03.770808   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:03.770868   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:03.808724   75577 cri.go:89] found id: ""
	I0920 18:21:03.808749   75577 logs.go:276] 0 containers: []
	W0920 18:21:03.808756   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:03.808764   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:03.808775   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:03.851231   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:03.851265   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:03.899607   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:03.899641   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:03.915051   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:03.915079   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:03.984016   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:03.984038   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:03.984053   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:06.564776   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:06.578524   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:06.578604   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:06.614305   75577 cri.go:89] found id: ""
	I0920 18:21:06.614340   75577 logs.go:276] 0 containers: []
	W0920 18:21:06.614351   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:06.614365   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:06.614427   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:06.648929   75577 cri.go:89] found id: ""
	I0920 18:21:06.648958   75577 logs.go:276] 0 containers: []
	W0920 18:21:06.648968   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:06.648976   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:06.649036   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:06.689975   75577 cri.go:89] found id: ""
	I0920 18:21:06.690016   75577 logs.go:276] 0 containers: []
	W0920 18:21:06.690027   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:06.690034   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:06.690092   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:06.729721   75577 cri.go:89] found id: ""
	I0920 18:21:06.729747   75577 logs.go:276] 0 containers: []
	W0920 18:21:06.729755   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:06.729762   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:06.729808   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:06.766310   75577 cri.go:89] found id: ""
	I0920 18:21:06.766343   75577 logs.go:276] 0 containers: []
	W0920 18:21:06.766354   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:06.766361   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:06.766437   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:06.800801   75577 cri.go:89] found id: ""
	I0920 18:21:06.800829   75577 logs.go:276] 0 containers: []
	W0920 18:21:06.800839   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:06.800847   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:06.800909   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:06.836391   75577 cri.go:89] found id: ""
	I0920 18:21:06.836429   75577 logs.go:276] 0 containers: []
	W0920 18:21:06.836447   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:06.836455   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:06.836521   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:06.873014   75577 cri.go:89] found id: ""
	I0920 18:21:06.873041   75577 logs.go:276] 0 containers: []
	W0920 18:21:06.873049   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:06.873057   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:06.873070   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:06.953084   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:06.953110   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:06.953122   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:07.032398   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:07.032434   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:07.082196   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:07.082224   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:07.159604   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:07.159640   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:05.691350   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:07.692127   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:08.526757   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:10.526990   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:08.801736   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:11.301001   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:09.675022   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:09.687924   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:09.687999   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:09.722963   75577 cri.go:89] found id: ""
	I0920 18:21:09.722994   75577 logs.go:276] 0 containers: []
	W0920 18:21:09.723005   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:09.723017   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:09.723084   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:09.756433   75577 cri.go:89] found id: ""
	I0920 18:21:09.756458   75577 logs.go:276] 0 containers: []
	W0920 18:21:09.756472   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:09.756486   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:09.756549   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:09.792200   75577 cri.go:89] found id: ""
	I0920 18:21:09.792232   75577 logs.go:276] 0 containers: []
	W0920 18:21:09.792248   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:09.792256   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:09.792323   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:09.829058   75577 cri.go:89] found id: ""
	I0920 18:21:09.829081   75577 logs.go:276] 0 containers: []
	W0920 18:21:09.829098   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:09.829104   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:09.829150   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:09.863225   75577 cri.go:89] found id: ""
	I0920 18:21:09.863252   75577 logs.go:276] 0 containers: []
	W0920 18:21:09.863259   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:09.863265   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:09.863312   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:09.897682   75577 cri.go:89] found id: ""
	I0920 18:21:09.897708   75577 logs.go:276] 0 containers: []
	W0920 18:21:09.897725   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:09.897731   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:09.897789   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:09.936799   75577 cri.go:89] found id: ""
	I0920 18:21:09.936826   75577 logs.go:276] 0 containers: []
	W0920 18:21:09.936836   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:09.936843   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:09.936904   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:09.970502   75577 cri.go:89] found id: ""
	I0920 18:21:09.970539   75577 logs.go:276] 0 containers: []
	W0920 18:21:09.970550   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:09.970560   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:09.970574   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:09.983676   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:09.983703   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:10.058844   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:10.058865   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:10.058874   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:10.136998   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:10.137040   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:10.178902   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:10.178933   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:12.729619   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:09.692361   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:12.192257   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:13.025403   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:15.025667   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:13.302087   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:15.802019   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:12.743816   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:12.743876   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:12.779407   75577 cri.go:89] found id: ""
	I0920 18:21:12.779431   75577 logs.go:276] 0 containers: []
	W0920 18:21:12.779439   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:12.779446   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:12.779503   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:12.820700   75577 cri.go:89] found id: ""
	I0920 18:21:12.820730   75577 logs.go:276] 0 containers: []
	W0920 18:21:12.820740   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:12.820748   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:12.820812   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:12.862179   75577 cri.go:89] found id: ""
	I0920 18:21:12.862210   75577 logs.go:276] 0 containers: []
	W0920 18:21:12.862221   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:12.862228   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:12.862284   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:12.908976   75577 cri.go:89] found id: ""
	I0920 18:21:12.908999   75577 logs.go:276] 0 containers: []
	W0920 18:21:12.909007   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:12.909013   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:12.909072   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:12.953653   75577 cri.go:89] found id: ""
	I0920 18:21:12.953688   75577 logs.go:276] 0 containers: []
	W0920 18:21:12.953696   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:12.953702   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:12.953757   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:12.998519   75577 cri.go:89] found id: ""
	I0920 18:21:12.998546   75577 logs.go:276] 0 containers: []
	W0920 18:21:12.998557   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:12.998564   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:12.998640   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:13.035518   75577 cri.go:89] found id: ""
	I0920 18:21:13.035541   75577 logs.go:276] 0 containers: []
	W0920 18:21:13.035549   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:13.035555   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:13.035609   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:13.073551   75577 cri.go:89] found id: ""
	I0920 18:21:13.073591   75577 logs.go:276] 0 containers: []
	W0920 18:21:13.073605   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:13.073617   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:13.073638   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:13.125118   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:13.125155   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:13.139384   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:13.139415   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:13.204980   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:13.205006   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:13.205021   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:13.289400   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:13.289446   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:15.829405   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:15.842291   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:15.842354   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:15.875886   75577 cri.go:89] found id: ""
	I0920 18:21:15.875921   75577 logs.go:276] 0 containers: []
	W0920 18:21:15.875930   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:15.875937   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:15.875990   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:15.911706   75577 cri.go:89] found id: ""
	I0920 18:21:15.911746   75577 logs.go:276] 0 containers: []
	W0920 18:21:15.911760   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:15.911768   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:15.911831   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:15.950117   75577 cri.go:89] found id: ""
	I0920 18:21:15.950150   75577 logs.go:276] 0 containers: []
	W0920 18:21:15.950161   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:15.950170   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:15.950243   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:15.986029   75577 cri.go:89] found id: ""
	I0920 18:21:15.986066   75577 logs.go:276] 0 containers: []
	W0920 18:21:15.986083   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:15.986091   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:15.986159   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:16.020307   75577 cri.go:89] found id: ""
	I0920 18:21:16.020335   75577 logs.go:276] 0 containers: []
	W0920 18:21:16.020346   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:16.020354   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:16.020412   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:16.054744   75577 cri.go:89] found id: ""
	I0920 18:21:16.054769   75577 logs.go:276] 0 containers: []
	W0920 18:21:16.054777   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:16.054782   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:16.054831   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:16.091756   75577 cri.go:89] found id: ""
	I0920 18:21:16.091789   75577 logs.go:276] 0 containers: []
	W0920 18:21:16.091800   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:16.091807   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:16.091868   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:16.127818   75577 cri.go:89] found id: ""
	I0920 18:21:16.127843   75577 logs.go:276] 0 containers: []
	W0920 18:21:16.127851   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:16.127861   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:16.127877   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:16.200114   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:16.200138   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:16.200149   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:16.279473   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:16.279508   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:16.319139   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:16.319171   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:16.370721   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:16.370769   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:14.192782   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:16.192866   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:18.691599   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:17.524026   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:19.524385   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:21.525244   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:18.301696   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:20.301993   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:18.884966   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:18.900270   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:18.900355   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:18.938541   75577 cri.go:89] found id: ""
	I0920 18:21:18.938582   75577 logs.go:276] 0 containers: []
	W0920 18:21:18.938594   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:18.938602   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:18.938673   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:18.972343   75577 cri.go:89] found id: ""
	I0920 18:21:18.972380   75577 logs.go:276] 0 containers: []
	W0920 18:21:18.972391   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:18.972400   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:18.972458   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:19.011996   75577 cri.go:89] found id: ""
	I0920 18:21:19.012037   75577 logs.go:276] 0 containers: []
	W0920 18:21:19.012048   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:19.012054   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:19.012105   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:19.053789   75577 cri.go:89] found id: ""
	I0920 18:21:19.053818   75577 logs.go:276] 0 containers: []
	W0920 18:21:19.053839   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:19.053849   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:19.053907   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:19.094830   75577 cri.go:89] found id: ""
	I0920 18:21:19.094862   75577 logs.go:276] 0 containers: []
	W0920 18:21:19.094872   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:19.094881   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:19.094941   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:19.133894   75577 cri.go:89] found id: ""
	I0920 18:21:19.133923   75577 logs.go:276] 0 containers: []
	W0920 18:21:19.133934   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:19.133943   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:19.134001   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:19.171640   75577 cri.go:89] found id: ""
	I0920 18:21:19.171662   75577 logs.go:276] 0 containers: []
	W0920 18:21:19.171670   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:19.171676   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:19.171730   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:19.207821   75577 cri.go:89] found id: ""
	I0920 18:21:19.207852   75577 logs.go:276] 0 containers: []
	W0920 18:21:19.207861   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:19.207869   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:19.207880   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:19.292486   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:19.292530   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:19.332664   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:19.332695   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:19.383793   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:19.383828   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:19.399234   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:19.399269   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:19.470636   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:21.971772   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:21.985558   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:21.985630   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:22.018745   75577 cri.go:89] found id: ""
	I0920 18:21:22.018775   75577 logs.go:276] 0 containers: []
	W0920 18:21:22.018785   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:22.018795   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:22.018854   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:22.053586   75577 cri.go:89] found id: ""
	I0920 18:21:22.053617   75577 logs.go:276] 0 containers: []
	W0920 18:21:22.053627   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:22.053635   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:22.053697   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:22.088290   75577 cri.go:89] found id: ""
	I0920 18:21:22.088320   75577 logs.go:276] 0 containers: []
	W0920 18:21:22.088337   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:22.088344   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:22.088394   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:22.121026   75577 cri.go:89] found id: ""
	I0920 18:21:22.121050   75577 logs.go:276] 0 containers: []
	W0920 18:21:22.121057   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:22.121063   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:22.121117   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:22.153669   75577 cri.go:89] found id: ""
	I0920 18:21:22.153702   75577 logs.go:276] 0 containers: []
	W0920 18:21:22.153715   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:22.153725   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:22.153793   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:22.192179   75577 cri.go:89] found id: ""
	I0920 18:21:22.192208   75577 logs.go:276] 0 containers: []
	W0920 18:21:22.192218   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:22.192226   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:22.192294   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:22.226115   75577 cri.go:89] found id: ""
	I0920 18:21:22.226142   75577 logs.go:276] 0 containers: []
	W0920 18:21:22.226153   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:22.226161   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:22.226231   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:22.259032   75577 cri.go:89] found id: ""
	I0920 18:21:22.259059   75577 logs.go:276] 0 containers: []
	W0920 18:21:22.259070   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:22.259080   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:22.259094   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:22.308989   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:22.309020   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:22.322084   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:22.322113   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:22.397567   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:22.397587   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:22.397598   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:22.476551   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:22.476596   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:20.691837   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:23.191939   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:24.024785   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:26.024824   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:22.801102   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:24.801697   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:26.801789   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:25.017659   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:25.032637   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:25.032698   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:25.067671   75577 cri.go:89] found id: ""
	I0920 18:21:25.067701   75577 logs.go:276] 0 containers: []
	W0920 18:21:25.067711   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:25.067718   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:25.067774   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:25.109354   75577 cri.go:89] found id: ""
	I0920 18:21:25.109385   75577 logs.go:276] 0 containers: []
	W0920 18:21:25.109396   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:25.109403   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:25.109463   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:25.142888   75577 cri.go:89] found id: ""
	I0920 18:21:25.142924   75577 logs.go:276] 0 containers: []
	W0920 18:21:25.142935   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:25.142942   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:25.143004   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:25.179005   75577 cri.go:89] found id: ""
	I0920 18:21:25.179032   75577 logs.go:276] 0 containers: []
	W0920 18:21:25.179043   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:25.179050   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:25.179114   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:25.211639   75577 cri.go:89] found id: ""
	I0920 18:21:25.211662   75577 logs.go:276] 0 containers: []
	W0920 18:21:25.211669   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:25.211676   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:25.211729   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:25.246678   75577 cri.go:89] found id: ""
	I0920 18:21:25.246709   75577 logs.go:276] 0 containers: []
	W0920 18:21:25.246718   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:25.246724   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:25.246780   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:25.284187   75577 cri.go:89] found id: ""
	I0920 18:21:25.284216   75577 logs.go:276] 0 containers: []
	W0920 18:21:25.284240   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:25.284247   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:25.284309   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:25.318882   75577 cri.go:89] found id: ""
	I0920 18:21:25.318917   75577 logs.go:276] 0 containers: []
	W0920 18:21:25.318929   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:25.318940   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:25.318954   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:25.357017   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:25.357046   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:25.407584   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:25.407627   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:25.421405   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:25.421437   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:25.492849   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:25.492870   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:25.492881   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:25.192551   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:27.691730   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:28.025042   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:30.025806   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:29.301637   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:31.302080   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:28.076101   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:28.088741   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:28.088805   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:28.124889   75577 cri.go:89] found id: ""
	I0920 18:21:28.124925   75577 logs.go:276] 0 containers: []
	W0920 18:21:28.124944   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:28.124952   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:28.125013   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:28.158590   75577 cri.go:89] found id: ""
	I0920 18:21:28.158615   75577 logs.go:276] 0 containers: []
	W0920 18:21:28.158623   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:28.158629   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:28.158677   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:28.195636   75577 cri.go:89] found id: ""
	I0920 18:21:28.195670   75577 logs.go:276] 0 containers: []
	W0920 18:21:28.195683   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:28.195692   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:28.195762   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:28.228714   75577 cri.go:89] found id: ""
	I0920 18:21:28.228759   75577 logs.go:276] 0 containers: []
	W0920 18:21:28.228771   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:28.228780   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:28.228840   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:28.261488   75577 cri.go:89] found id: ""
	I0920 18:21:28.261519   75577 logs.go:276] 0 containers: []
	W0920 18:21:28.261531   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:28.261538   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:28.261606   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:28.293534   75577 cri.go:89] found id: ""
	I0920 18:21:28.293560   75577 logs.go:276] 0 containers: []
	W0920 18:21:28.293568   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:28.293573   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:28.293629   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:28.330095   75577 cri.go:89] found id: ""
	I0920 18:21:28.330120   75577 logs.go:276] 0 containers: []
	W0920 18:21:28.330128   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:28.330134   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:28.330191   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:28.365643   75577 cri.go:89] found id: ""
	I0920 18:21:28.365675   75577 logs.go:276] 0 containers: []
	W0920 18:21:28.365684   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:28.365693   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:28.365712   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:28.379982   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:28.380007   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:28.456355   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:28.456386   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:28.456402   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:28.538725   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:28.538759   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:28.576067   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:28.576104   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:31.129705   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:31.145814   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:31.145919   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:31.181469   75577 cri.go:89] found id: ""
	I0920 18:21:31.181497   75577 logs.go:276] 0 containers: []
	W0920 18:21:31.181507   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:31.181514   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:31.181562   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:31.216391   75577 cri.go:89] found id: ""
	I0920 18:21:31.216416   75577 logs.go:276] 0 containers: []
	W0920 18:21:31.216426   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:31.216433   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:31.216492   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:31.250236   75577 cri.go:89] found id: ""
	I0920 18:21:31.250266   75577 logs.go:276] 0 containers: []
	W0920 18:21:31.250277   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:31.250285   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:31.250357   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:31.283406   75577 cri.go:89] found id: ""
	I0920 18:21:31.283430   75577 logs.go:276] 0 containers: []
	W0920 18:21:31.283439   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:31.283446   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:31.283519   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:31.317276   75577 cri.go:89] found id: ""
	I0920 18:21:31.317308   75577 logs.go:276] 0 containers: []
	W0920 18:21:31.317327   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:31.317335   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:31.317400   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:31.351563   75577 cri.go:89] found id: ""
	I0920 18:21:31.351588   75577 logs.go:276] 0 containers: []
	W0920 18:21:31.351595   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:31.351600   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:31.351652   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:31.385932   75577 cri.go:89] found id: ""
	I0920 18:21:31.385972   75577 logs.go:276] 0 containers: []
	W0920 18:21:31.385984   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:31.385992   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:31.386056   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:31.422679   75577 cri.go:89] found id: ""
	I0920 18:21:31.422718   75577 logs.go:276] 0 containers: []
	W0920 18:21:31.422729   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:31.422740   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:31.422755   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:31.475827   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:31.475867   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:31.489324   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:31.489354   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:31.560357   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:31.560377   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:31.560390   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:31.642137   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:31.642171   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:30.191820   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:32.692178   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:32.524953   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:35.025736   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:33.801586   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:36.301145   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:34.179063   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:34.193077   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:34.193157   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:34.227451   75577 cri.go:89] found id: ""
	I0920 18:21:34.227484   75577 logs.go:276] 0 containers: []
	W0920 18:21:34.227495   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:34.227503   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:34.227571   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:34.263740   75577 cri.go:89] found id: ""
	I0920 18:21:34.263766   75577 logs.go:276] 0 containers: []
	W0920 18:21:34.263774   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:34.263779   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:34.263838   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:34.298094   75577 cri.go:89] found id: ""
	I0920 18:21:34.298121   75577 logs.go:276] 0 containers: []
	W0920 18:21:34.298132   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:34.298139   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:34.298202   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:34.334253   75577 cri.go:89] found id: ""
	I0920 18:21:34.334281   75577 logs.go:276] 0 containers: []
	W0920 18:21:34.334290   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:34.334297   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:34.334361   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:34.370656   75577 cri.go:89] found id: ""
	I0920 18:21:34.370688   75577 logs.go:276] 0 containers: []
	W0920 18:21:34.370699   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:34.370706   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:34.370759   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:34.405658   75577 cri.go:89] found id: ""
	I0920 18:21:34.405681   75577 logs.go:276] 0 containers: []
	W0920 18:21:34.405689   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:34.405695   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:34.405748   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:34.442284   75577 cri.go:89] found id: ""
	I0920 18:21:34.442311   75577 logs.go:276] 0 containers: []
	W0920 18:21:34.442319   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:34.442325   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:34.442394   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:34.477895   75577 cri.go:89] found id: ""
	I0920 18:21:34.477927   75577 logs.go:276] 0 containers: []
	W0920 18:21:34.477939   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:34.477950   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:34.477964   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:34.531225   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:34.531256   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:34.544325   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:34.544359   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:34.615914   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:34.615933   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:34.615948   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:34.692119   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:34.692160   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:37.228930   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:37.242070   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:37.242151   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:37.276489   75577 cri.go:89] found id: ""
	I0920 18:21:37.276531   75577 logs.go:276] 0 containers: []
	W0920 18:21:37.276543   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:37.276551   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:37.276617   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:37.311106   75577 cri.go:89] found id: ""
	I0920 18:21:37.311140   75577 logs.go:276] 0 containers: []
	W0920 18:21:37.311148   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:37.311156   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:37.311217   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:37.343147   75577 cri.go:89] found id: ""
	I0920 18:21:37.343173   75577 logs.go:276] 0 containers: []
	W0920 18:21:37.343181   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:37.343188   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:37.343237   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:37.378858   75577 cri.go:89] found id: ""
	I0920 18:21:37.378934   75577 logs.go:276] 0 containers: []
	W0920 18:21:37.378947   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:37.378955   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:37.379005   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:37.412321   75577 cri.go:89] found id: ""
	I0920 18:21:37.412355   75577 logs.go:276] 0 containers: []
	W0920 18:21:37.412366   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:37.412374   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:37.412443   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:37.447478   75577 cri.go:89] found id: ""
	I0920 18:21:37.447510   75577 logs.go:276] 0 containers: []
	W0920 18:21:37.447520   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:37.447526   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:37.447580   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:37.481172   75577 cri.go:89] found id: ""
	I0920 18:21:37.481201   75577 logs.go:276] 0 containers: []
	W0920 18:21:37.481209   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:37.481216   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:37.481269   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:37.518001   75577 cri.go:89] found id: ""
	I0920 18:21:37.518032   75577 logs.go:276] 0 containers: []
	W0920 18:21:37.518041   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:37.518050   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:37.518062   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:37.567675   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:37.567707   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:37.582279   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:37.582308   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:37.654514   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:37.654546   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:37.654563   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:37.735895   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:37.735929   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:35.192406   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:37.692460   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:37.524744   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:39.526068   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:42.024627   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:38.301432   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:40.302489   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:40.276416   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:40.291651   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:40.291713   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:40.328309   75577 cri.go:89] found id: ""
	I0920 18:21:40.328346   75577 logs.go:276] 0 containers: []
	W0920 18:21:40.328359   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:40.328378   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:40.328441   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:40.368126   75577 cri.go:89] found id: ""
	I0920 18:21:40.368154   75577 logs.go:276] 0 containers: []
	W0920 18:21:40.368162   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:40.368167   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:40.368217   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:40.408324   75577 cri.go:89] found id: ""
	I0920 18:21:40.408359   75577 logs.go:276] 0 containers: []
	W0920 18:21:40.408371   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:40.408380   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:40.408448   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:40.453867   75577 cri.go:89] found id: ""
	I0920 18:21:40.453892   75577 logs.go:276] 0 containers: []
	W0920 18:21:40.453900   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:40.453906   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:40.453969   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:40.500625   75577 cri.go:89] found id: ""
	I0920 18:21:40.500660   75577 logs.go:276] 0 containers: []
	W0920 18:21:40.500670   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:40.500678   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:40.500750   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:40.533998   75577 cri.go:89] found id: ""
	I0920 18:21:40.534028   75577 logs.go:276] 0 containers: []
	W0920 18:21:40.534039   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:40.534048   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:40.534111   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:40.566279   75577 cri.go:89] found id: ""
	I0920 18:21:40.566308   75577 logs.go:276] 0 containers: []
	W0920 18:21:40.566319   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:40.566326   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:40.566392   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:40.599154   75577 cri.go:89] found id: ""
	I0920 18:21:40.599179   75577 logs.go:276] 0 containers: []
	W0920 18:21:40.599186   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:40.599194   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:40.599210   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:40.668568   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:40.668596   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:40.668608   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:40.747895   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:40.747969   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:40.789568   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:40.789604   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:40.839816   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:40.839852   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:39.693537   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:42.192617   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:44.025059   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:46.524700   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:42.801135   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:44.802083   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:46.802537   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:43.354847   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:43.369077   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:43.369167   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:43.404667   75577 cri.go:89] found id: ""
	I0920 18:21:43.404698   75577 logs.go:276] 0 containers: []
	W0920 18:21:43.404710   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:43.404718   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:43.404778   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:43.440968   75577 cri.go:89] found id: ""
	I0920 18:21:43.441001   75577 logs.go:276] 0 containers: []
	W0920 18:21:43.441012   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:43.441021   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:43.441088   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:43.474481   75577 cri.go:89] found id: ""
	I0920 18:21:43.474511   75577 logs.go:276] 0 containers: []
	W0920 18:21:43.474520   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:43.474529   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:43.474592   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:43.508196   75577 cri.go:89] found id: ""
	I0920 18:21:43.508228   75577 logs.go:276] 0 containers: []
	W0920 18:21:43.508241   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:43.508248   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:43.508307   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:43.543679   75577 cri.go:89] found id: ""
	I0920 18:21:43.543710   75577 logs.go:276] 0 containers: []
	W0920 18:21:43.543721   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:43.543728   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:43.543788   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:43.577115   75577 cri.go:89] found id: ""
	I0920 18:21:43.577138   75577 logs.go:276] 0 containers: []
	W0920 18:21:43.577145   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:43.577152   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:43.577198   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:43.611552   75577 cri.go:89] found id: ""
	I0920 18:21:43.611588   75577 logs.go:276] 0 containers: []
	W0920 18:21:43.611602   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:43.611616   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:43.611685   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:43.647913   75577 cri.go:89] found id: ""
	I0920 18:21:43.647946   75577 logs.go:276] 0 containers: []
	W0920 18:21:43.647957   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:43.647969   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:43.647983   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:43.661014   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:43.661043   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:43.736584   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:43.736607   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:43.736620   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:43.814340   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:43.814380   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:43.855968   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:43.855996   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:46.410577   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:46.423733   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:46.423800   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:46.456819   75577 cri.go:89] found id: ""
	I0920 18:21:46.456847   75577 logs.go:276] 0 containers: []
	W0920 18:21:46.456855   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:46.456861   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:46.456927   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:46.493259   75577 cri.go:89] found id: ""
	I0920 18:21:46.493291   75577 logs.go:276] 0 containers: []
	W0920 18:21:46.493301   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:46.493307   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:46.493372   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:46.531200   75577 cri.go:89] found id: ""
	I0920 18:21:46.531233   75577 logs.go:276] 0 containers: []
	W0920 18:21:46.531241   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:46.531247   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:46.531294   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:46.567508   75577 cri.go:89] found id: ""
	I0920 18:21:46.567530   75577 logs.go:276] 0 containers: []
	W0920 18:21:46.567538   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:46.567544   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:46.567601   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:46.600257   75577 cri.go:89] found id: ""
	I0920 18:21:46.600290   75577 logs.go:276] 0 containers: []
	W0920 18:21:46.600303   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:46.600311   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:46.600375   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:46.635568   75577 cri.go:89] found id: ""
	I0920 18:21:46.635598   75577 logs.go:276] 0 containers: []
	W0920 18:21:46.635606   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:46.635612   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:46.635668   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:46.668257   75577 cri.go:89] found id: ""
	I0920 18:21:46.668304   75577 logs.go:276] 0 containers: []
	W0920 18:21:46.668316   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:46.668324   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:46.668392   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:46.703631   75577 cri.go:89] found id: ""
	I0920 18:21:46.703654   75577 logs.go:276] 0 containers: []
	W0920 18:21:46.703662   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:46.703671   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:46.703680   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:46.780232   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:46.780295   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:46.816623   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:46.816647   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:46.868911   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:46.868954   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:46.882716   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:46.882743   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:46.965166   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:44.692655   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:47.192969   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:48.524828   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:50.525045   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:49.301263   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:51.802343   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:49.466238   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:49.480167   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:49.480228   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:49.513507   75577 cri.go:89] found id: ""
	I0920 18:21:49.513537   75577 logs.go:276] 0 containers: []
	W0920 18:21:49.513549   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:49.513558   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:49.513620   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:49.548708   75577 cri.go:89] found id: ""
	I0920 18:21:49.548739   75577 logs.go:276] 0 containers: []
	W0920 18:21:49.548750   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:49.548758   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:49.548810   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:49.583586   75577 cri.go:89] found id: ""
	I0920 18:21:49.583614   75577 logs.go:276] 0 containers: []
	W0920 18:21:49.583625   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:49.583633   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:49.583695   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:49.617574   75577 cri.go:89] found id: ""
	I0920 18:21:49.617605   75577 logs.go:276] 0 containers: []
	W0920 18:21:49.617616   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:49.617623   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:49.617684   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:49.656517   75577 cri.go:89] found id: ""
	I0920 18:21:49.656552   75577 logs.go:276] 0 containers: []
	W0920 18:21:49.656563   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:49.656571   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:49.656634   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:49.692848   75577 cri.go:89] found id: ""
	I0920 18:21:49.692870   75577 logs.go:276] 0 containers: []
	W0920 18:21:49.692877   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:49.692883   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:49.692950   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:49.728593   75577 cri.go:89] found id: ""
	I0920 18:21:49.728620   75577 logs.go:276] 0 containers: []
	W0920 18:21:49.728630   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:49.728643   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:49.728705   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:49.764187   75577 cri.go:89] found id: ""
	I0920 18:21:49.764218   75577 logs.go:276] 0 containers: []
	W0920 18:21:49.764229   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:49.764242   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:49.764260   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:49.837741   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:49.837764   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:49.837777   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:49.914941   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:49.914978   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:49.957609   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:49.957639   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:50.012075   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:50.012115   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:52.527722   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:52.542125   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:52.542206   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:52.579884   75577 cri.go:89] found id: ""
	I0920 18:21:52.579910   75577 logs.go:276] 0 containers: []
	W0920 18:21:52.579919   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:52.579924   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:52.579982   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:52.619138   75577 cri.go:89] found id: ""
	I0920 18:21:52.619167   75577 logs.go:276] 0 containers: []
	W0920 18:21:52.619180   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:52.619188   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:52.619246   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:52.654460   75577 cri.go:89] found id: ""
	I0920 18:21:52.654498   75577 logs.go:276] 0 containers: []
	W0920 18:21:52.654508   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:52.654515   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:52.654578   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:52.708015   75577 cri.go:89] found id: ""
	I0920 18:21:52.708042   75577 logs.go:276] 0 containers: []
	W0920 18:21:52.708051   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:52.708057   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:52.708127   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:49.692309   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:52.192466   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:53.025700   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:55.026088   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:53.802620   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:55.802856   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:52.743917   75577 cri.go:89] found id: ""
	I0920 18:21:52.743952   75577 logs.go:276] 0 containers: []
	W0920 18:21:52.743964   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:52.743972   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:52.744025   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:52.783458   75577 cri.go:89] found id: ""
	I0920 18:21:52.783481   75577 logs.go:276] 0 containers: []
	W0920 18:21:52.783488   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:52.783495   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:52.783552   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:52.817716   75577 cri.go:89] found id: ""
	I0920 18:21:52.817749   75577 logs.go:276] 0 containers: []
	W0920 18:21:52.817762   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:52.817771   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:52.817882   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:52.857141   75577 cri.go:89] found id: ""
	I0920 18:21:52.857169   75577 logs.go:276] 0 containers: []
	W0920 18:21:52.857180   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:52.857190   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:52.857204   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:52.910555   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:52.910597   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:52.923843   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:52.923873   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:52.994263   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:52.994296   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:52.994313   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:53.079782   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:53.079829   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:55.619418   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:55.633854   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:55.633922   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:55.669778   75577 cri.go:89] found id: ""
	I0920 18:21:55.669813   75577 logs.go:276] 0 containers: []
	W0920 18:21:55.669824   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:55.669853   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:55.669920   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:55.705687   75577 cri.go:89] found id: ""
	I0920 18:21:55.705724   75577 logs.go:276] 0 containers: []
	W0920 18:21:55.705736   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:55.705746   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:55.705811   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:55.739249   75577 cri.go:89] found id: ""
	I0920 18:21:55.739288   75577 logs.go:276] 0 containers: []
	W0920 18:21:55.739296   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:55.739302   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:55.739438   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:55.775077   75577 cri.go:89] found id: ""
	I0920 18:21:55.775109   75577 logs.go:276] 0 containers: []
	W0920 18:21:55.775121   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:55.775130   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:55.775181   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:55.810291   75577 cri.go:89] found id: ""
	I0920 18:21:55.810329   75577 logs.go:276] 0 containers: []
	W0920 18:21:55.810340   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:55.810349   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:55.810415   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:55.844440   75577 cri.go:89] found id: ""
	I0920 18:21:55.844468   75577 logs.go:276] 0 containers: []
	W0920 18:21:55.844476   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:55.844482   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:55.844551   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:55.878611   75577 cri.go:89] found id: ""
	I0920 18:21:55.878647   75577 logs.go:276] 0 containers: []
	W0920 18:21:55.878659   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:55.878667   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:55.878733   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:55.922064   75577 cri.go:89] found id: ""
	I0920 18:21:55.922101   75577 logs.go:276] 0 containers: []
	W0920 18:21:55.922114   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:55.922127   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:55.922147   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:55.979406   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:55.979445   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:55.993082   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:55.993111   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:56.065650   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:56.065701   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:56.065717   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:56.142640   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:56.142684   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:54.691482   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:56.691912   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:57.525280   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:00.025224   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:02.025722   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:58.300359   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:00.301252   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:02.302209   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:58.683524   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:58.698775   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:58.698833   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:58.737369   75577 cri.go:89] found id: ""
	I0920 18:21:58.737398   75577 logs.go:276] 0 containers: []
	W0920 18:21:58.737409   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:58.737417   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:58.737476   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:58.779504   75577 cri.go:89] found id: ""
	I0920 18:21:58.779533   75577 logs.go:276] 0 containers: []
	W0920 18:21:58.779544   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:58.779552   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:58.779610   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:58.814363   75577 cri.go:89] found id: ""
	I0920 18:21:58.814393   75577 logs.go:276] 0 containers: []
	W0920 18:21:58.814401   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:58.814407   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:58.814454   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:58.846219   75577 cri.go:89] found id: ""
	I0920 18:21:58.846242   75577 logs.go:276] 0 containers: []
	W0920 18:21:58.846251   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:58.846256   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:58.846307   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:58.882387   75577 cri.go:89] found id: ""
	I0920 18:21:58.882423   75577 logs.go:276] 0 containers: []
	W0920 18:21:58.882431   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:58.882437   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:58.882497   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:58.918632   75577 cri.go:89] found id: ""
	I0920 18:21:58.918668   75577 logs.go:276] 0 containers: []
	W0920 18:21:58.918679   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:58.918686   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:58.918751   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:58.953521   75577 cri.go:89] found id: ""
	I0920 18:21:58.953547   75577 logs.go:276] 0 containers: []
	W0920 18:21:58.953557   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:58.953564   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:58.953624   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:58.987412   75577 cri.go:89] found id: ""
	I0920 18:21:58.987439   75577 logs.go:276] 0 containers: []
	W0920 18:21:58.987447   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:58.987457   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:58.987471   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:59.077108   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:59.077169   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:59.141723   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:59.141758   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:59.208035   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:59.208081   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:59.221380   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:59.221404   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:59.297151   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:22:01.797684   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:22:01.812275   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:22:01.812338   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:22:01.846431   75577 cri.go:89] found id: ""
	I0920 18:22:01.846459   75577 logs.go:276] 0 containers: []
	W0920 18:22:01.846477   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:22:01.846483   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:22:01.846535   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:22:01.889016   75577 cri.go:89] found id: ""
	I0920 18:22:01.889049   75577 logs.go:276] 0 containers: []
	W0920 18:22:01.889061   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:22:01.889069   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:22:01.889144   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:22:01.926048   75577 cri.go:89] found id: ""
	I0920 18:22:01.926078   75577 logs.go:276] 0 containers: []
	W0920 18:22:01.926090   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:22:01.926108   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:22:01.926185   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:22:01.964510   75577 cri.go:89] found id: ""
	I0920 18:22:01.964536   75577 logs.go:276] 0 containers: []
	W0920 18:22:01.964547   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:22:01.964555   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:22:01.964624   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:22:02.005549   75577 cri.go:89] found id: ""
	I0920 18:22:02.005575   75577 logs.go:276] 0 containers: []
	W0920 18:22:02.005583   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:22:02.005588   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:22:02.005642   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:22:02.047359   75577 cri.go:89] found id: ""
	I0920 18:22:02.047385   75577 logs.go:276] 0 containers: []
	W0920 18:22:02.047393   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:22:02.047399   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:22:02.047455   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:22:02.082961   75577 cri.go:89] found id: ""
	I0920 18:22:02.082999   75577 logs.go:276] 0 containers: []
	W0920 18:22:02.083008   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:22:02.083015   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:22:02.083062   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:22:02.117696   75577 cri.go:89] found id: ""
	I0920 18:22:02.117732   75577 logs.go:276] 0 containers: []
	W0920 18:22:02.117741   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:22:02.117753   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:22:02.117769   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:22:02.131282   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:22:02.131311   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:22:02.200738   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:22:02.200760   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:22:02.200776   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:22:02.281162   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:22:02.281200   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:22:02.321854   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:22:02.321888   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:59.191434   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:01.192475   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:01.685900   75086 pod_ready.go:82] duration metric: took 4m0.000752648s for pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace to be "Ready" ...
	E0920 18:22:01.685945   75086 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace to be "Ready" (will not retry!)
	I0920 18:22:01.685972   75086 pod_ready.go:39] duration metric: took 4m13.051752581s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:22:01.686033   75086 kubeadm.go:597] duration metric: took 4m20.498687791s to restartPrimaryControlPlane
	W0920 18:22:01.686114   75086 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0920 18:22:01.686150   75086 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0920 18:22:04.026729   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:06.524404   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:04.803249   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:07.301696   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:04.874353   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:22:04.889710   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:22:04.889773   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:22:04.945719   75577 cri.go:89] found id: ""
	I0920 18:22:04.945745   75577 logs.go:276] 0 containers: []
	W0920 18:22:04.945753   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:22:04.945759   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:22:04.945802   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:22:04.995492   75577 cri.go:89] found id: ""
	I0920 18:22:04.995535   75577 logs.go:276] 0 containers: []
	W0920 18:22:04.995547   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:22:04.995555   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:22:04.995615   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:22:05.037827   75577 cri.go:89] found id: ""
	I0920 18:22:05.037871   75577 logs.go:276] 0 containers: []
	W0920 18:22:05.037882   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:22:05.037888   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:22:05.037935   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:22:05.077652   75577 cri.go:89] found id: ""
	I0920 18:22:05.077691   75577 logs.go:276] 0 containers: []
	W0920 18:22:05.077704   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:22:05.077712   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:22:05.077772   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:22:05.114472   75577 cri.go:89] found id: ""
	I0920 18:22:05.114509   75577 logs.go:276] 0 containers: []
	W0920 18:22:05.114520   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:22:05.114527   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:22:05.114590   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:22:05.150809   75577 cri.go:89] found id: ""
	I0920 18:22:05.150841   75577 logs.go:276] 0 containers: []
	W0920 18:22:05.150853   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:22:05.150860   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:22:05.150908   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:22:05.188880   75577 cri.go:89] found id: ""
	I0920 18:22:05.188910   75577 logs.go:276] 0 containers: []
	W0920 18:22:05.188921   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:22:05.188929   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:22:05.188990   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:22:05.224990   75577 cri.go:89] found id: ""
	I0920 18:22:05.225015   75577 logs.go:276] 0 containers: []
	W0920 18:22:05.225023   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:22:05.225032   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:22:05.225043   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:22:05.298000   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:22:05.298023   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:22:05.298037   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:22:05.382969   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:22:05.383008   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:22:05.424911   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:22:05.424940   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:22:05.475988   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:22:05.476024   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:22:08.525098   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:10.525321   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:09.801936   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:12.301073   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:07.990660   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:22:08.005572   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:22:08.005637   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:22:08.044354   75577 cri.go:89] found id: ""
	I0920 18:22:08.044381   75577 logs.go:276] 0 containers: []
	W0920 18:22:08.044390   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:22:08.044401   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:22:08.044449   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:22:08.082899   75577 cri.go:89] found id: ""
	I0920 18:22:08.082928   75577 logs.go:276] 0 containers: []
	W0920 18:22:08.082939   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:22:08.082948   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:22:08.083009   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:22:08.129297   75577 cri.go:89] found id: ""
	I0920 18:22:08.129325   75577 logs.go:276] 0 containers: []
	W0920 18:22:08.129335   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:22:08.129343   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:22:08.129404   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:22:08.168744   75577 cri.go:89] found id: ""
	I0920 18:22:08.168774   75577 logs.go:276] 0 containers: []
	W0920 18:22:08.168787   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:22:08.168792   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:22:08.168849   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:22:08.203830   75577 cri.go:89] found id: ""
	I0920 18:22:08.203860   75577 logs.go:276] 0 containers: []
	W0920 18:22:08.203871   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:22:08.203879   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:22:08.203940   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:22:08.238226   75577 cri.go:89] found id: ""
	I0920 18:22:08.238250   75577 logs.go:276] 0 containers: []
	W0920 18:22:08.238258   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:22:08.238263   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:22:08.238331   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:22:08.271457   75577 cri.go:89] found id: ""
	I0920 18:22:08.271485   75577 logs.go:276] 0 containers: []
	W0920 18:22:08.271495   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:22:08.271502   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:22:08.271563   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:22:08.306661   75577 cri.go:89] found id: ""
	I0920 18:22:08.306692   75577 logs.go:276] 0 containers: []
	W0920 18:22:08.306703   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:22:08.306712   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:22:08.306730   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:22:08.357760   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:22:08.357793   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:22:08.372625   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:22:08.372660   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:22:08.442477   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:22:08.442501   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:22:08.442517   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:22:08.528381   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:22:08.528412   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:22:11.064842   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:22:11.078086   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:22:11.078158   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:22:11.113454   75577 cri.go:89] found id: ""
	I0920 18:22:11.113485   75577 logs.go:276] 0 containers: []
	W0920 18:22:11.113497   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:22:11.113511   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:22:11.113575   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:22:11.148039   75577 cri.go:89] found id: ""
	I0920 18:22:11.148064   75577 logs.go:276] 0 containers: []
	W0920 18:22:11.148072   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:22:11.148078   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:22:11.148127   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:22:11.182667   75577 cri.go:89] found id: ""
	I0920 18:22:11.182697   75577 logs.go:276] 0 containers: []
	W0920 18:22:11.182708   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:22:11.182715   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:22:11.182776   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:22:11.217022   75577 cri.go:89] found id: ""
	I0920 18:22:11.217055   75577 logs.go:276] 0 containers: []
	W0920 18:22:11.217067   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:22:11.217075   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:22:11.217141   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:22:11.253970   75577 cri.go:89] found id: ""
	I0920 18:22:11.254001   75577 logs.go:276] 0 containers: []
	W0920 18:22:11.254012   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:22:11.254019   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:22:11.254085   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:22:11.288070   75577 cri.go:89] found id: ""
	I0920 18:22:11.288103   75577 logs.go:276] 0 containers: []
	W0920 18:22:11.288114   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:22:11.288121   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:22:11.288189   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:22:11.324215   75577 cri.go:89] found id: ""
	I0920 18:22:11.324240   75577 logs.go:276] 0 containers: []
	W0920 18:22:11.324254   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:22:11.324261   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:22:11.324319   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:22:11.359377   75577 cri.go:89] found id: ""
	I0920 18:22:11.359406   75577 logs.go:276] 0 containers: []
	W0920 18:22:11.359414   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:22:11.359423   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:22:11.359433   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:22:11.416479   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:22:11.416520   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:22:11.430240   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:22:11.430269   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:22:11.501531   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:22:11.501553   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:22:11.501565   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:22:11.580748   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:22:11.580787   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:22:13.024330   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:15.025065   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:17.026642   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:14.301652   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:16.802017   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:14.119084   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:22:14.132428   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:22:14.132505   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:22:14.167674   75577 cri.go:89] found id: ""
	I0920 18:22:14.167699   75577 logs.go:276] 0 containers: []
	W0920 18:22:14.167710   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:22:14.167717   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:22:14.167772   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:22:14.204570   75577 cri.go:89] found id: ""
	I0920 18:22:14.204595   75577 logs.go:276] 0 containers: []
	W0920 18:22:14.204603   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:22:14.204608   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:22:14.204655   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:22:14.241687   75577 cri.go:89] found id: ""
	I0920 18:22:14.241716   75577 logs.go:276] 0 containers: []
	W0920 18:22:14.241724   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:22:14.241731   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:22:14.241789   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:22:14.275776   75577 cri.go:89] found id: ""
	I0920 18:22:14.275802   75577 logs.go:276] 0 containers: []
	W0920 18:22:14.275810   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:22:14.275816   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:22:14.275872   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:22:14.309565   75577 cri.go:89] found id: ""
	I0920 18:22:14.309589   75577 logs.go:276] 0 containers: []
	W0920 18:22:14.309596   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:22:14.309602   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:22:14.309662   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:22:14.341858   75577 cri.go:89] found id: ""
	I0920 18:22:14.341884   75577 logs.go:276] 0 containers: []
	W0920 18:22:14.341892   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:22:14.341898   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:22:14.341963   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:22:14.377863   75577 cri.go:89] found id: ""
	I0920 18:22:14.377896   75577 logs.go:276] 0 containers: []
	W0920 18:22:14.377906   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:22:14.377912   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:22:14.377988   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:22:14.413182   75577 cri.go:89] found id: ""
	I0920 18:22:14.413207   75577 logs.go:276] 0 containers: []
	W0920 18:22:14.413214   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:22:14.413236   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:22:14.413253   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:22:14.466782   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:22:14.466820   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:22:14.480627   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:22:14.480668   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:22:14.552071   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:22:14.552104   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:22:14.552121   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:22:14.626481   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:22:14.626518   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:22:17.163264   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:22:17.181609   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:22:17.181684   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:22:17.219871   75577 cri.go:89] found id: ""
	I0920 18:22:17.219899   75577 logs.go:276] 0 containers: []
	W0920 18:22:17.219923   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:22:17.219932   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:22:17.220013   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:22:17.256315   75577 cri.go:89] found id: ""
	I0920 18:22:17.256346   75577 logs.go:276] 0 containers: []
	W0920 18:22:17.256356   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:22:17.256364   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:22:17.256434   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:22:17.294326   75577 cri.go:89] found id: ""
	I0920 18:22:17.294352   75577 logs.go:276] 0 containers: []
	W0920 18:22:17.294360   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:22:17.294366   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:22:17.294421   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:22:17.336461   75577 cri.go:89] found id: ""
	I0920 18:22:17.336491   75577 logs.go:276] 0 containers: []
	W0920 18:22:17.336500   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:22:17.336506   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:22:17.336568   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:22:17.376242   75577 cri.go:89] found id: ""
	I0920 18:22:17.376283   75577 logs.go:276] 0 containers: []
	W0920 18:22:17.376295   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:22:17.376302   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:22:17.376363   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:22:17.409147   75577 cri.go:89] found id: ""
	I0920 18:22:17.409180   75577 logs.go:276] 0 containers: []
	W0920 18:22:17.409190   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:22:17.409198   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:22:17.409269   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:22:17.444698   75577 cri.go:89] found id: ""
	I0920 18:22:17.444725   75577 logs.go:276] 0 containers: []
	W0920 18:22:17.444734   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:22:17.444738   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:22:17.444791   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:22:17.485914   75577 cri.go:89] found id: ""
	I0920 18:22:17.485948   75577 logs.go:276] 0 containers: []
	W0920 18:22:17.485959   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:22:17.485970   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:22:17.485983   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:22:17.523567   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:22:17.523601   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:22:17.588864   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:22:17.588905   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:22:17.605052   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:22:17.605092   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:22:17.677012   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:22:17.677039   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:22:17.677056   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:22:19.525154   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:22.025422   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:18.802052   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:20.804024   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:20.252258   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:22:20.267607   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:22:20.267698   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:22:20.302260   75577 cri.go:89] found id: ""
	I0920 18:22:20.302291   75577 logs.go:276] 0 containers: []
	W0920 18:22:20.302301   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:22:20.302309   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:22:20.302373   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:22:20.335343   75577 cri.go:89] found id: ""
	I0920 18:22:20.335377   75577 logs.go:276] 0 containers: []
	W0920 18:22:20.335389   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:22:20.335397   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:22:20.335460   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:22:20.369612   75577 cri.go:89] found id: ""
	I0920 18:22:20.369641   75577 logs.go:276] 0 containers: []
	W0920 18:22:20.369649   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:22:20.369655   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:22:20.369703   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:22:20.403703   75577 cri.go:89] found id: ""
	I0920 18:22:20.403732   75577 logs.go:276] 0 containers: []
	W0920 18:22:20.403740   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:22:20.403746   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:22:20.403804   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:22:20.436288   75577 cri.go:89] found id: ""
	I0920 18:22:20.436316   75577 logs.go:276] 0 containers: []
	W0920 18:22:20.436328   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:22:20.436336   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:22:20.436399   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:22:20.471536   75577 cri.go:89] found id: ""
	I0920 18:22:20.471572   75577 logs.go:276] 0 containers: []
	W0920 18:22:20.471584   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:22:20.471593   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:22:20.471657   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:22:20.506012   75577 cri.go:89] found id: ""
	I0920 18:22:20.506107   75577 logs.go:276] 0 containers: []
	W0920 18:22:20.506134   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:22:20.506143   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:22:20.506199   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:22:20.542614   75577 cri.go:89] found id: ""
	I0920 18:22:20.542650   75577 logs.go:276] 0 containers: []
	W0920 18:22:20.542660   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:22:20.542671   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:22:20.542687   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:22:20.596316   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:22:20.596357   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:22:20.610438   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:22:20.610465   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:22:20.688061   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:22:20.688079   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:22:20.688091   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:22:20.768249   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:22:20.768296   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:22:24.525259   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:25.524857   75264 pod_ready.go:82] duration metric: took 4m0.006563805s for pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace to be "Ready" ...
	E0920 18:22:25.524883   75264 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0920 18:22:25.524891   75264 pod_ready.go:39] duration metric: took 4m8.542422056s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:22:25.524906   75264 api_server.go:52] waiting for apiserver process to appear ...
	I0920 18:22:25.524977   75264 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:22:25.525029   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:22:25.579894   75264 cri.go:89] found id: "0ea0cfbd9902ae8e3797c77592bfe05a5f8162f75a391f5c2dc992eb5154c4ad"
	I0920 18:22:25.579914   75264 cri.go:89] found id: ""
	I0920 18:22:25.579923   75264 logs.go:276] 1 containers: [0ea0cfbd9902ae8e3797c77592bfe05a5f8162f75a391f5c2dc992eb5154c4ad]
	I0920 18:22:25.579979   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:25.585095   75264 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:22:25.585176   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:22:25.621769   75264 cri.go:89] found id: "65da0bae1c849a395cc241962bb6cbdf9f3edf11cdc95f4495ac2e427f9602d4"
	I0920 18:22:25.621796   75264 cri.go:89] found id: ""
	I0920 18:22:25.621805   75264 logs.go:276] 1 containers: [65da0bae1c849a395cc241962bb6cbdf9f3edf11cdc95f4495ac2e427f9602d4]
	I0920 18:22:25.621881   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:25.626318   75264 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:22:25.626400   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:22:25.662732   75264 cri.go:89] found id: "606f7c8a9a095f499ae422de76f76105607db72e57b9b0a6db3e3c5a81656c5c"
	I0920 18:22:25.662753   75264 cri.go:89] found id: ""
	I0920 18:22:25.662760   75264 logs.go:276] 1 containers: [606f7c8a9a095f499ae422de76f76105607db72e57b9b0a6db3e3c5a81656c5c]
	I0920 18:22:25.662818   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:25.667212   75264 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:22:25.667299   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:22:25.703639   75264 cri.go:89] found id: "6ba313deffc6185e59dac50051e933f6b2ea70d630cf36b2b8c2c07042f793ba"
	I0920 18:22:25.703663   75264 cri.go:89] found id: ""
	I0920 18:22:25.703670   75264 logs.go:276] 1 containers: [6ba313deffc6185e59dac50051e933f6b2ea70d630cf36b2b8c2c07042f793ba]
	I0920 18:22:25.703721   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:25.708034   75264 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:22:25.708115   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:22:25.745358   75264 cri.go:89] found id: "702f7f440eb6016c2c758a6be4e1072274ab6dcd169d7a081e5464869a7c299c"
	I0920 18:22:25.745387   75264 cri.go:89] found id: ""
	I0920 18:22:25.745397   75264 logs.go:276] 1 containers: [702f7f440eb6016c2c758a6be4e1072274ab6dcd169d7a081e5464869a7c299c]
	I0920 18:22:25.745455   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:25.749702   75264 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:22:25.749787   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:22:25.790592   75264 cri.go:89] found id: "4ca303b795795bd920f261c88630f30fd51a91b9694a9a6e67af8164bab8242b"
	I0920 18:22:25.790615   75264 cri.go:89] found id: ""
	I0920 18:22:25.790623   75264 logs.go:276] 1 containers: [4ca303b795795bd920f261c88630f30fd51a91b9694a9a6e67af8164bab8242b]
	I0920 18:22:25.790686   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:25.794993   75264 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:22:25.795062   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:22:25.836621   75264 cri.go:89] found id: ""
	I0920 18:22:25.836646   75264 logs.go:276] 0 containers: []
	W0920 18:22:25.836654   75264 logs.go:278] No container was found matching "kindnet"
	I0920 18:22:25.836661   75264 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0920 18:22:25.836723   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0920 18:22:25.874191   75264 cri.go:89] found id: "001bdc98537f0917f021c14a5d0733bd31cf19be1b4862502458a7a90e4300da"
	I0920 18:22:25.874215   75264 cri.go:89] found id: "c42201b6e3d55b6fd8beaf04d416658b228bf627e519ddb022dad6d17219cc1c"
	I0920 18:22:25.874220   75264 cri.go:89] found id: ""
	I0920 18:22:25.874229   75264 logs.go:276] 2 containers: [001bdc98537f0917f021c14a5d0733bd31cf19be1b4862502458a7a90e4300da c42201b6e3d55b6fd8beaf04d416658b228bf627e519ddb022dad6d17219cc1c]
	I0920 18:22:25.874316   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:25.878788   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:25.882840   75264 logs.go:123] Gathering logs for kube-proxy [702f7f440eb6016c2c758a6be4e1072274ab6dcd169d7a081e5464869a7c299c] ...
	I0920 18:22:25.882869   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 702f7f440eb6016c2c758a6be4e1072274ab6dcd169d7a081e5464869a7c299c"
	I0920 18:22:25.921609   75264 logs.go:123] Gathering logs for kube-controller-manager [4ca303b795795bd920f261c88630f30fd51a91b9694a9a6e67af8164bab8242b] ...
	I0920 18:22:25.921640   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ca303b795795bd920f261c88630f30fd51a91b9694a9a6e67af8164bab8242b"
	I0920 18:22:25.987199   75264 logs.go:123] Gathering logs for kubelet ...
	I0920 18:22:25.987236   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:22:26.061857   75264 logs.go:123] Gathering logs for kube-apiserver [0ea0cfbd9902ae8e3797c77592bfe05a5f8162f75a391f5c2dc992eb5154c4ad] ...
	I0920 18:22:26.061897   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ea0cfbd9902ae8e3797c77592bfe05a5f8162f75a391f5c2dc992eb5154c4ad"
	I0920 18:22:26.104901   75264 logs.go:123] Gathering logs for etcd [65da0bae1c849a395cc241962bb6cbdf9f3edf11cdc95f4495ac2e427f9602d4] ...
	I0920 18:22:26.104935   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 65da0bae1c849a395cc241962bb6cbdf9f3edf11cdc95f4495ac2e427f9602d4"
	I0920 18:22:26.160795   75264 logs.go:123] Gathering logs for kube-scheduler [6ba313deffc6185e59dac50051e933f6b2ea70d630cf36b2b8c2c07042f793ba] ...
	I0920 18:22:26.160833   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6ba313deffc6185e59dac50051e933f6b2ea70d630cf36b2b8c2c07042f793ba"
	I0920 18:22:26.199286   75264 logs.go:123] Gathering logs for storage-provisioner [001bdc98537f0917f021c14a5d0733bd31cf19be1b4862502458a7a90e4300da] ...
	I0920 18:22:26.199316   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 001bdc98537f0917f021c14a5d0733bd31cf19be1b4862502458a7a90e4300da"
	I0920 18:22:26.239560   75264 logs.go:123] Gathering logs for storage-provisioner [c42201b6e3d55b6fd8beaf04d416658b228bf627e519ddb022dad6d17219cc1c] ...
	I0920 18:22:26.239591   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c42201b6e3d55b6fd8beaf04d416658b228bf627e519ddb022dad6d17219cc1c"
	I0920 18:22:26.275424   75264 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:22:26.275450   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:22:26.786849   75264 logs.go:123] Gathering logs for container status ...
	I0920 18:22:26.786895   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:22:26.848751   75264 logs.go:123] Gathering logs for dmesg ...
	I0920 18:22:26.848783   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:22:26.867329   75264 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:22:26.867375   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 18:22:27.017385   75264 logs.go:123] Gathering logs for coredns [606f7c8a9a095f499ae422de76f76105607db72e57b9b0a6db3e3c5a81656c5c] ...
	I0920 18:22:27.017419   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 606f7c8a9a095f499ae422de76f76105607db72e57b9b0a6db3e3c5a81656c5c"
	I0920 18:22:23.300658   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:25.302131   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:27.303929   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:23.307149   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:22:23.319889   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:22:23.319972   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:22:23.356613   75577 cri.go:89] found id: ""
	I0920 18:22:23.356645   75577 logs.go:276] 0 containers: []
	W0920 18:22:23.356656   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:22:23.356663   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:22:23.356728   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:22:23.391632   75577 cri.go:89] found id: ""
	I0920 18:22:23.391663   75577 logs.go:276] 0 containers: []
	W0920 18:22:23.391675   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:22:23.391683   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:22:23.391748   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:22:23.426908   75577 cri.go:89] found id: ""
	I0920 18:22:23.426936   75577 logs.go:276] 0 containers: []
	W0920 18:22:23.426946   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:22:23.426952   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:22:23.427013   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:22:23.461890   75577 cri.go:89] found id: ""
	I0920 18:22:23.461925   75577 logs.go:276] 0 containers: []
	W0920 18:22:23.461938   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:22:23.461947   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:22:23.462014   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:22:23.503517   75577 cri.go:89] found id: ""
	I0920 18:22:23.503549   75577 logs.go:276] 0 containers: []
	W0920 18:22:23.503560   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:22:23.503566   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:22:23.503615   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:22:23.540699   75577 cri.go:89] found id: ""
	I0920 18:22:23.540722   75577 logs.go:276] 0 containers: []
	W0920 18:22:23.540731   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:22:23.540736   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:22:23.540783   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:22:23.575485   75577 cri.go:89] found id: ""
	I0920 18:22:23.575509   75577 logs.go:276] 0 containers: []
	W0920 18:22:23.575517   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:22:23.575523   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:22:23.575576   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:22:23.610998   75577 cri.go:89] found id: ""
	I0920 18:22:23.611028   75577 logs.go:276] 0 containers: []
	W0920 18:22:23.611039   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:22:23.611051   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:22:23.611065   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:22:23.687072   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:22:23.687109   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:22:23.724790   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:22:23.724824   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:22:23.778905   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:22:23.778945   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:22:23.794298   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:22:23.794329   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:22:23.873341   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:22:26.374541   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:22:26.388151   75577 kubeadm.go:597] duration metric: took 4m2.956358154s to restartPrimaryControlPlane
	W0920 18:22:26.388229   75577 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0920 18:22:26.388260   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0920 18:22:27.454521   75577 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.066231187s)
	I0920 18:22:27.454602   75577 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:22:27.469409   75577 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 18:22:27.480314   75577 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 18:22:27.491389   75577 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 18:22:27.491416   75577 kubeadm.go:157] found existing configuration files:
	
	I0920 18:22:27.491468   75577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 18:22:27.501448   75577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 18:22:27.501534   75577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 18:22:27.511370   75577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 18:22:27.521767   75577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 18:22:27.521855   75577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 18:22:27.531717   75577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 18:22:27.540517   75577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 18:22:27.540575   75577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 18:22:27.549644   75577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 18:22:27.558412   75577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 18:22:27.558480   75577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 18:22:27.567702   75577 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
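Before the `kubeadm init` just launched above, the grep/rm sequence checks each /etc/kubernetes/*.conf for the expected control-plane endpoint and deletes any file that is missing or does not mention it. A minimal Go sketch of that cleanup, assuming the same four paths and endpoint shown in the log:

```go
// Sketch of the stale-kubeconfig cleanup logged above: each /etc/kubernetes/*.conf
// is kept only if it already points at the expected control-plane endpoint.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Mirrors the `sudo rm -f <file>` steps above: a missing or
			// mismatched file is removed before kubeadm init rewrites it.
			os.Remove(f)
			fmt.Println("removed stale config:", f)
			continue
		}
		fmt.Println("kept:", f)
	}
}
```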
	I0920 18:22:28.070939   75086 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.384759009s)
	I0920 18:22:28.071025   75086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:22:28.087712   75086 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 18:22:28.097709   75086 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 18:22:28.107217   75086 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 18:22:28.107243   75086 kubeadm.go:157] found existing configuration files:
	
	I0920 18:22:28.107297   75086 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 18:22:28.116947   75086 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 18:22:28.117013   75086 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 18:22:28.126559   75086 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 18:22:28.135261   75086 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 18:22:28.135338   75086 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 18:22:28.144456   75086 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 18:22:28.153412   75086 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 18:22:28.153465   75086 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 18:22:28.165825   75086 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 18:22:28.175003   75086 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 18:22:28.175068   75086 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 18:22:28.184917   75086 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 18:22:28.236825   75086 kubeadm.go:310] W0920 18:22:28.216092    2540 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 18:22:28.237654   75086 kubeadm.go:310] W0920 18:22:28.216986    2540 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 18:22:28.352695   75086 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 18:22:29.557496   75264 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:22:29.578981   75264 api_server.go:72] duration metric: took 4m20.322735753s to wait for apiserver process to appear ...
	I0920 18:22:29.579009   75264 api_server.go:88] waiting for apiserver healthz status ...
	I0920 18:22:29.579044   75264 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:22:29.579093   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:22:29.616252   75264 cri.go:89] found id: "0ea0cfbd9902ae8e3797c77592bfe05a5f8162f75a391f5c2dc992eb5154c4ad"
	I0920 18:22:29.616281   75264 cri.go:89] found id: ""
	I0920 18:22:29.616292   75264 logs.go:276] 1 containers: [0ea0cfbd9902ae8e3797c77592bfe05a5f8162f75a391f5c2dc992eb5154c4ad]
	I0920 18:22:29.616345   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:29.620605   75264 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:22:29.620678   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:22:29.658077   75264 cri.go:89] found id: "65da0bae1c849a395cc241962bb6cbdf9f3edf11cdc95f4495ac2e427f9602d4"
	I0920 18:22:29.658106   75264 cri.go:89] found id: ""
	I0920 18:22:29.658114   75264 logs.go:276] 1 containers: [65da0bae1c849a395cc241962bb6cbdf9f3edf11cdc95f4495ac2e427f9602d4]
	I0920 18:22:29.658170   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:29.662227   75264 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:22:29.662288   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:22:29.695906   75264 cri.go:89] found id: "606f7c8a9a095f499ae422de76f76105607db72e57b9b0a6db3e3c5a81656c5c"
	I0920 18:22:29.695927   75264 cri.go:89] found id: ""
	I0920 18:22:29.695934   75264 logs.go:276] 1 containers: [606f7c8a9a095f499ae422de76f76105607db72e57b9b0a6db3e3c5a81656c5c]
	I0920 18:22:29.695979   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:29.699986   75264 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:22:29.700083   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:22:29.736560   75264 cri.go:89] found id: "6ba313deffc6185e59dac50051e933f6b2ea70d630cf36b2b8c2c07042f793ba"
	I0920 18:22:29.736589   75264 cri.go:89] found id: ""
	I0920 18:22:29.736600   75264 logs.go:276] 1 containers: [6ba313deffc6185e59dac50051e933f6b2ea70d630cf36b2b8c2c07042f793ba]
	I0920 18:22:29.736660   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:29.740751   75264 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:22:29.740803   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:22:29.778930   75264 cri.go:89] found id: "702f7f440eb6016c2c758a6be4e1072274ab6dcd169d7a081e5464869a7c299c"
	I0920 18:22:29.778952   75264 cri.go:89] found id: ""
	I0920 18:22:29.778959   75264 logs.go:276] 1 containers: [702f7f440eb6016c2c758a6be4e1072274ab6dcd169d7a081e5464869a7c299c]
	I0920 18:22:29.779011   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:29.783022   75264 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:22:29.783092   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:22:29.829491   75264 cri.go:89] found id: "4ca303b795795bd920f261c88630f30fd51a91b9694a9a6e67af8164bab8242b"
	I0920 18:22:29.829521   75264 cri.go:89] found id: ""
	I0920 18:22:29.829531   75264 logs.go:276] 1 containers: [4ca303b795795bd920f261c88630f30fd51a91b9694a9a6e67af8164bab8242b]
	I0920 18:22:29.829597   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:29.833853   75264 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:22:29.833924   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:22:29.872376   75264 cri.go:89] found id: ""
	I0920 18:22:29.872404   75264 logs.go:276] 0 containers: []
	W0920 18:22:29.872412   75264 logs.go:278] No container was found matching "kindnet"
	I0920 18:22:29.872419   75264 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0920 18:22:29.872482   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0920 18:22:29.923096   75264 cri.go:89] found id: "001bdc98537f0917f021c14a5d0733bd31cf19be1b4862502458a7a90e4300da"
	I0920 18:22:29.923136   75264 cri.go:89] found id: "c42201b6e3d55b6fd8beaf04d416658b228bf627e519ddb022dad6d17219cc1c"
	I0920 18:22:29.923143   75264 cri.go:89] found id: ""
	I0920 18:22:29.923152   75264 logs.go:276] 2 containers: [001bdc98537f0917f021c14a5d0733bd31cf19be1b4862502458a7a90e4300da c42201b6e3d55b6fd8beaf04d416658b228bf627e519ddb022dad6d17219cc1c]
	I0920 18:22:29.923226   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:29.930023   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:29.935091   75264 logs.go:123] Gathering logs for coredns [606f7c8a9a095f499ae422de76f76105607db72e57b9b0a6db3e3c5a81656c5c] ...
	I0920 18:22:29.935119   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 606f7c8a9a095f499ae422de76f76105607db72e57b9b0a6db3e3c5a81656c5c"
	I0920 18:22:29.974064   75264 logs.go:123] Gathering logs for kube-scheduler [6ba313deffc6185e59dac50051e933f6b2ea70d630cf36b2b8c2c07042f793ba] ...
	I0920 18:22:29.974101   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6ba313deffc6185e59dac50051e933f6b2ea70d630cf36b2b8c2c07042f793ba"
	I0920 18:22:30.018770   75264 logs.go:123] Gathering logs for kube-controller-manager [4ca303b795795bd920f261c88630f30fd51a91b9694a9a6e67af8164bab8242b] ...
	I0920 18:22:30.018805   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ca303b795795bd920f261c88630f30fd51a91b9694a9a6e67af8164bab8242b"
	I0920 18:22:30.077578   75264 logs.go:123] Gathering logs for storage-provisioner [c42201b6e3d55b6fd8beaf04d416658b228bf627e519ddb022dad6d17219cc1c] ...
	I0920 18:22:30.077625   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c42201b6e3d55b6fd8beaf04d416658b228bf627e519ddb022dad6d17219cc1c"
	I0920 18:22:30.123874   75264 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:22:30.123910   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:22:30.553215   75264 logs.go:123] Gathering logs for kube-proxy [702f7f440eb6016c2c758a6be4e1072274ab6dcd169d7a081e5464869a7c299c] ...
	I0920 18:22:30.553261   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 702f7f440eb6016c2c758a6be4e1072274ab6dcd169d7a081e5464869a7c299c"
	I0920 18:22:30.597397   75264 logs.go:123] Gathering logs for storage-provisioner [001bdc98537f0917f021c14a5d0733bd31cf19be1b4862502458a7a90e4300da] ...
	I0920 18:22:30.597428   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 001bdc98537f0917f021c14a5d0733bd31cf19be1b4862502458a7a90e4300da"
	I0920 18:22:30.643777   75264 logs.go:123] Gathering logs for container status ...
	I0920 18:22:30.643807   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:22:30.688174   75264 logs.go:123] Gathering logs for kubelet ...
	I0920 18:22:30.688205   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:22:30.760547   75264 logs.go:123] Gathering logs for dmesg ...
	I0920 18:22:30.760591   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:22:30.776702   75264 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:22:30.776729   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 18:22:30.913200   75264 logs.go:123] Gathering logs for kube-apiserver [0ea0cfbd9902ae8e3797c77592bfe05a5f8162f75a391f5c2dc992eb5154c4ad] ...
	I0920 18:22:30.913240   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ea0cfbd9902ae8e3797c77592bfe05a5f8162f75a391f5c2dc992eb5154c4ad"
	I0920 18:22:30.957548   75264 logs.go:123] Gathering logs for etcd [65da0bae1c849a395cc241962bb6cbdf9f3edf11cdc95f4495ac2e427f9602d4] ...
	I0920 18:22:30.957589   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 65da0bae1c849a395cc241962bb6cbdf9f3edf11cdc95f4495ac2e427f9602d4"
	I0920 18:22:29.803036   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:32.301986   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
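Interleaved with the restarts above, another profile keeps polling the metrics-server pod and reporting `"Ready":"False"`. A hedged client-go sketch of reading that Ready condition; the namespace and pod name come from the log, while the kubeconfig path and the rest of the wiring are placeholders:

```go
// Illustrative only: check a pod's Ready condition the way the
// pod_ready.go poll lines above report it.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(kubeconfig, namespace, name string) (bool, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return false, err
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return false, err
	}
	pod, err := clientset.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	// Kubeconfig path is a placeholder; pod name is the one polled above.
	ready, err := podReady("/home/jenkins/.kube/config", "kube-system", "metrics-server-6867b74b74-tfsff")
	fmt.Println("ready:", ready, "err:", err)
}
```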
	I0920 18:22:27.790386   75577 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 18:22:36.572379   75086 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0920 18:22:36.572452   75086 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 18:22:36.572558   75086 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 18:22:36.572702   75086 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 18:22:36.572826   75086 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0920 18:22:36.572926   75086 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 18:22:36.574903   75086 out.go:235]   - Generating certificates and keys ...
	I0920 18:22:36.575032   75086 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 18:22:36.575117   75086 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 18:22:36.575224   75086 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0920 18:22:36.575342   75086 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0920 18:22:36.575446   75086 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0920 18:22:36.575535   75086 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0920 18:22:36.575606   75086 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0920 18:22:36.575687   75086 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0920 18:22:36.575790   75086 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0920 18:22:36.575867   75086 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0920 18:22:36.575903   75086 kubeadm.go:310] [certs] Using the existing "sa" key
	I0920 18:22:36.575970   75086 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 18:22:36.576054   75086 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 18:22:36.576141   75086 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0920 18:22:36.576223   75086 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 18:22:36.576317   75086 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 18:22:36.576373   75086 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 18:22:36.576442   75086 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 18:22:36.576506   75086 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 18:22:36.577982   75086 out.go:235]   - Booting up control plane ...
	I0920 18:22:36.578071   75086 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 18:22:36.578147   75086 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 18:22:36.578204   75086 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 18:22:36.578312   75086 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 18:22:36.578400   75086 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 18:22:36.578438   75086 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 18:22:36.578568   75086 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0920 18:22:36.578660   75086 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0920 18:22:36.578719   75086 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.174126ms
	I0920 18:22:36.578788   75086 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0920 18:22:36.578839   75086 kubeadm.go:310] [api-check] The API server is healthy after 5.501599925s
	I0920 18:22:36.578933   75086 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0920 18:22:36.579037   75086 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0920 18:22:36.579090   75086 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0920 18:22:36.579268   75086 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-768431 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0920 18:22:36.579335   75086 kubeadm.go:310] [bootstrap-token] Using token: junoe1.zmowu95mn9yb1z1f
	I0920 18:22:36.580810   75086 out.go:235]   - Configuring RBAC rules ...
	I0920 18:22:36.580919   75086 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0920 18:22:36.580989   75086 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0920 18:22:36.581127   75086 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0920 18:22:36.581290   75086 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0920 18:22:36.581456   75086 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0920 18:22:36.581589   75086 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0920 18:22:36.581742   75086 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0920 18:22:36.581827   75086 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0920 18:22:36.581916   75086 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0920 18:22:36.581925   75086 kubeadm.go:310] 
	I0920 18:22:36.581980   75086 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0920 18:22:36.581986   75086 kubeadm.go:310] 
	I0920 18:22:36.582086   75086 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0920 18:22:36.582097   75086 kubeadm.go:310] 
	I0920 18:22:36.582137   75086 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0920 18:22:36.582204   75086 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0920 18:22:36.582287   75086 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0920 18:22:36.582297   75086 kubeadm.go:310] 
	I0920 18:22:36.582389   75086 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0920 18:22:36.582398   75086 kubeadm.go:310] 
	I0920 18:22:36.582479   75086 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0920 18:22:36.582495   75086 kubeadm.go:310] 
	I0920 18:22:36.582540   75086 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0920 18:22:36.582605   75086 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0920 18:22:36.582682   75086 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0920 18:22:36.582697   75086 kubeadm.go:310] 
	I0920 18:22:36.582796   75086 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0920 18:22:36.582892   75086 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0920 18:22:36.582901   75086 kubeadm.go:310] 
	I0920 18:22:36.583026   75086 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token junoe1.zmowu95mn9yb1z1f \
	I0920 18:22:36.583163   75086 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:736f3fb521855813decb4a7214f2b8beff5f81c3971b60decfa20a8807626a2d \
	I0920 18:22:36.583199   75086 kubeadm.go:310] 	--control-plane 
	I0920 18:22:36.583208   75086 kubeadm.go:310] 
	I0920 18:22:36.583316   75086 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0920 18:22:36.583335   75086 kubeadm.go:310] 
	I0920 18:22:36.583489   75086 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token junoe1.zmowu95mn9yb1z1f \
	I0920 18:22:36.583648   75086 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:736f3fb521855813decb4a7214f2b8beff5f81c3971b60decfa20a8807626a2d 
	I0920 18:22:36.583664   75086 cni.go:84] Creating CNI manager for ""
	I0920 18:22:36.583673   75086 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 18:22:36.585378   75086 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 18:22:33.523585   75264 api_server.go:253] Checking apiserver healthz at https://192.168.72.190:8444/healthz ...
	I0920 18:22:33.528658   75264 api_server.go:279] https://192.168.72.190:8444/healthz returned 200:
	ok
	I0920 18:22:33.529826   75264 api_server.go:141] control plane version: v1.31.1
	I0920 18:22:33.529861   75264 api_server.go:131] duration metric: took 3.95084435s to wait for apiserver health ...
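The healthz check above hits `https://192.168.72.190:8444/healthz` directly and treats a 200 response with body `ok` as healthy. A minimal Go sketch of such a probe; the log does not show how the real check handles the cluster CA, so TLS verification is simply skipped here for illustration:

```go
// Hedged sketch of an apiserver /healthz probe like the one logged above.
// TLS verification is skipped purely for illustration.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func apiserverHealthy(url string) (bool, error) {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// A healthy apiserver answers 200 with the body "ok", as in the log above.
	return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
}

func main() {
	ok, err := apiserverHealthy("https://192.168.72.190:8444/healthz")
	fmt.Println("healthy:", ok, "err:", err)
}
```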
	I0920 18:22:33.529870   75264 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 18:22:33.529894   75264 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:22:33.529938   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:22:33.571762   75264 cri.go:89] found id: "0ea0cfbd9902ae8e3797c77592bfe05a5f8162f75a391f5c2dc992eb5154c4ad"
	I0920 18:22:33.571783   75264 cri.go:89] found id: ""
	I0920 18:22:33.571789   75264 logs.go:276] 1 containers: [0ea0cfbd9902ae8e3797c77592bfe05a5f8162f75a391f5c2dc992eb5154c4ad]
	I0920 18:22:33.571840   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:33.576180   75264 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:22:33.576268   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:22:33.621887   75264 cri.go:89] found id: "65da0bae1c849a395cc241962bb6cbdf9f3edf11cdc95f4495ac2e427f9602d4"
	I0920 18:22:33.621915   75264 cri.go:89] found id: ""
	I0920 18:22:33.621926   75264 logs.go:276] 1 containers: [65da0bae1c849a395cc241962bb6cbdf9f3edf11cdc95f4495ac2e427f9602d4]
	I0920 18:22:33.622013   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:33.626552   75264 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:22:33.626609   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:22:33.664879   75264 cri.go:89] found id: "606f7c8a9a095f499ae422de76f76105607db72e57b9b0a6db3e3c5a81656c5c"
	I0920 18:22:33.664902   75264 cri.go:89] found id: ""
	I0920 18:22:33.664912   75264 logs.go:276] 1 containers: [606f7c8a9a095f499ae422de76f76105607db72e57b9b0a6db3e3c5a81656c5c]
	I0920 18:22:33.664980   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:33.669155   75264 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:22:33.669231   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:22:33.710930   75264 cri.go:89] found id: "6ba313deffc6185e59dac50051e933f6b2ea70d630cf36b2b8c2c07042f793ba"
	I0920 18:22:33.710957   75264 cri.go:89] found id: ""
	I0920 18:22:33.710968   75264 logs.go:276] 1 containers: [6ba313deffc6185e59dac50051e933f6b2ea70d630cf36b2b8c2c07042f793ba]
	I0920 18:22:33.711030   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:33.715618   75264 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:22:33.715689   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:22:33.751646   75264 cri.go:89] found id: "702f7f440eb6016c2c758a6be4e1072274ab6dcd169d7a081e5464869a7c299c"
	I0920 18:22:33.751672   75264 cri.go:89] found id: ""
	I0920 18:22:33.751690   75264 logs.go:276] 1 containers: [702f7f440eb6016c2c758a6be4e1072274ab6dcd169d7a081e5464869a7c299c]
	I0920 18:22:33.751758   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:33.756376   75264 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:22:33.756441   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:22:33.803246   75264 cri.go:89] found id: "4ca303b795795bd920f261c88630f30fd51a91b9694a9a6e67af8164bab8242b"
	I0920 18:22:33.803267   75264 cri.go:89] found id: ""
	I0920 18:22:33.803276   75264 logs.go:276] 1 containers: [4ca303b795795bd920f261c88630f30fd51a91b9694a9a6e67af8164bab8242b]
	I0920 18:22:33.803334   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:33.807825   75264 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:22:33.807892   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:22:33.849531   75264 cri.go:89] found id: ""
	I0920 18:22:33.849559   75264 logs.go:276] 0 containers: []
	W0920 18:22:33.849569   75264 logs.go:278] No container was found matching "kindnet"
	I0920 18:22:33.849583   75264 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0920 18:22:33.849645   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0920 18:22:33.891058   75264 cri.go:89] found id: "001bdc98537f0917f021c14a5d0733bd31cf19be1b4862502458a7a90e4300da"
	I0920 18:22:33.891084   75264 cri.go:89] found id: "c42201b6e3d55b6fd8beaf04d416658b228bf627e519ddb022dad6d17219cc1c"
	I0920 18:22:33.891090   75264 cri.go:89] found id: ""
	I0920 18:22:33.891099   75264 logs.go:276] 2 containers: [001bdc98537f0917f021c14a5d0733bd31cf19be1b4862502458a7a90e4300da c42201b6e3d55b6fd8beaf04d416658b228bf627e519ddb022dad6d17219cc1c]
	I0920 18:22:33.891157   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:33.896028   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:33.900174   75264 logs.go:123] Gathering logs for container status ...
	I0920 18:22:33.900209   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:22:33.944777   75264 logs.go:123] Gathering logs for dmesg ...
	I0920 18:22:33.944803   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:22:33.960062   75264 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:22:33.960094   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 18:22:34.087507   75264 logs.go:123] Gathering logs for etcd [65da0bae1c849a395cc241962bb6cbdf9f3edf11cdc95f4495ac2e427f9602d4] ...
	I0920 18:22:34.087556   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 65da0bae1c849a395cc241962bb6cbdf9f3edf11cdc95f4495ac2e427f9602d4"
	I0920 18:22:34.140051   75264 logs.go:123] Gathering logs for kube-proxy [702f7f440eb6016c2c758a6be4e1072274ab6dcd169d7a081e5464869a7c299c] ...
	I0920 18:22:34.140088   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 702f7f440eb6016c2c758a6be4e1072274ab6dcd169d7a081e5464869a7c299c"
	I0920 18:22:34.183540   75264 logs.go:123] Gathering logs for kube-controller-manager [4ca303b795795bd920f261c88630f30fd51a91b9694a9a6e67af8164bab8242b] ...
	I0920 18:22:34.183582   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ca303b795795bd920f261c88630f30fd51a91b9694a9a6e67af8164bab8242b"
	I0920 18:22:34.251978   75264 logs.go:123] Gathering logs for storage-provisioner [001bdc98537f0917f021c14a5d0733bd31cf19be1b4862502458a7a90e4300da] ...
	I0920 18:22:34.252025   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 001bdc98537f0917f021c14a5d0733bd31cf19be1b4862502458a7a90e4300da"
	I0920 18:22:34.296522   75264 logs.go:123] Gathering logs for storage-provisioner [c42201b6e3d55b6fd8beaf04d416658b228bf627e519ddb022dad6d17219cc1c] ...
	I0920 18:22:34.296562   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c42201b6e3d55b6fd8beaf04d416658b228bf627e519ddb022dad6d17219cc1c"
	I0920 18:22:34.338227   75264 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:22:34.338254   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:22:34.703986   75264 logs.go:123] Gathering logs for kubelet ...
	I0920 18:22:34.704037   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:22:34.788091   75264 logs.go:123] Gathering logs for kube-apiserver [0ea0cfbd9902ae8e3797c77592bfe05a5f8162f75a391f5c2dc992eb5154c4ad] ...
	I0920 18:22:34.788139   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ea0cfbd9902ae8e3797c77592bfe05a5f8162f75a391f5c2dc992eb5154c4ad"
	I0920 18:22:34.861403   75264 logs.go:123] Gathering logs for coredns [606f7c8a9a095f499ae422de76f76105607db72e57b9b0a6db3e3c5a81656c5c] ...
	I0920 18:22:34.861435   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 606f7c8a9a095f499ae422de76f76105607db72e57b9b0a6db3e3c5a81656c5c"
	I0920 18:22:34.906740   75264 logs.go:123] Gathering logs for kube-scheduler [6ba313deffc6185e59dac50051e933f6b2ea70d630cf36b2b8c2c07042f793ba] ...
	I0920 18:22:34.906773   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6ba313deffc6185e59dac50051e933f6b2ea70d630cf36b2b8c2c07042f793ba"
	I0920 18:22:37.454205   75264 system_pods.go:59] 8 kube-system pods found
	I0920 18:22:37.454243   75264 system_pods.go:61] "coredns-7c65d6cfc9-dmdfb" [8e4ffdb3-37e0-4498-bdf5-d6e7dbcad020] Running
	I0920 18:22:37.454252   75264 system_pods.go:61] "etcd-default-k8s-diff-port-553719" [e8de0e96-5f3a-4f3f-ae69-c5da7d3c2eb7] Running
	I0920 18:22:37.454257   75264 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-553719" [5164e26c-7fcd-41dc-865f-429046a9ad61] Running
	I0920 18:22:37.454263   75264 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-553719" [b2c00a76-9645-4c63-9812-43e6ed9f1d4f] Running
	I0920 18:22:37.454268   75264 system_pods.go:61] "kube-proxy-p9crq" [83e0f53d-6960-42c4-904d-ea85ba9160f4] Running
	I0920 18:22:37.454273   75264 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-553719" [42155f7b-95bb-467b-acf5-89eb4f80cf76] Running
	I0920 18:22:37.454289   75264 system_pods.go:61] "metrics-server-6867b74b74-vtl79" [29e0b6eb-22a9-4e37-97f9-83b48cc38193] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 18:22:37.454296   75264 system_pods.go:61] "storage-provisioner" [6fad2d07-f99e-45ac-9657-bce6d73d7fce] Running
	I0920 18:22:37.454310   75264 system_pods.go:74] duration metric: took 3.924431145s to wait for pod list to return data ...
	I0920 18:22:37.454324   75264 default_sa.go:34] waiting for default service account to be created ...
	I0920 18:22:37.457599   75264 default_sa.go:45] found service account: "default"
	I0920 18:22:37.457619   75264 default_sa.go:55] duration metric: took 3.289936ms for default service account to be created ...
	I0920 18:22:37.457627   75264 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 18:22:37.461977   75264 system_pods.go:86] 8 kube-system pods found
	I0920 18:22:37.462001   75264 system_pods.go:89] "coredns-7c65d6cfc9-dmdfb" [8e4ffdb3-37e0-4498-bdf5-d6e7dbcad020] Running
	I0920 18:22:37.462006   75264 system_pods.go:89] "etcd-default-k8s-diff-port-553719" [e8de0e96-5f3a-4f3f-ae69-c5da7d3c2eb7] Running
	I0920 18:22:37.462011   75264 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-553719" [5164e26c-7fcd-41dc-865f-429046a9ad61] Running
	I0920 18:22:37.462015   75264 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-553719" [b2c00a76-9645-4c63-9812-43e6ed9f1d4f] Running
	I0920 18:22:37.462019   75264 system_pods.go:89] "kube-proxy-p9crq" [83e0f53d-6960-42c4-904d-ea85ba9160f4] Running
	I0920 18:22:37.462022   75264 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-553719" [42155f7b-95bb-467b-acf5-89eb4f80cf76] Running
	I0920 18:22:37.462028   75264 system_pods.go:89] "metrics-server-6867b74b74-vtl79" [29e0b6eb-22a9-4e37-97f9-83b48cc38193] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 18:22:37.462032   75264 system_pods.go:89] "storage-provisioner" [6fad2d07-f99e-45ac-9657-bce6d73d7fce] Running
	I0920 18:22:37.462040   75264 system_pods.go:126] duration metric: took 4.409042ms to wait for k8s-apps to be running ...
	I0920 18:22:37.462047   75264 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 18:22:37.462087   75264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:22:37.479350   75264 system_svc.go:56] duration metric: took 17.294926ms WaitForService to wait for kubelet
	I0920 18:22:37.479380   75264 kubeadm.go:582] duration metric: took 4m28.223143222s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 18:22:37.479402   75264 node_conditions.go:102] verifying NodePressure condition ...
	I0920 18:22:37.482497   75264 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 18:22:37.482517   75264 node_conditions.go:123] node cpu capacity is 2
	I0920 18:22:37.482532   75264 node_conditions.go:105] duration metric: took 3.125658ms to run NodePressure ...
	I0920 18:22:37.482543   75264 start.go:241] waiting for startup goroutines ...
	I0920 18:22:37.482549   75264 start.go:246] waiting for cluster config update ...
	I0920 18:22:37.482561   75264 start.go:255] writing updated cluster config ...
	I0920 18:22:37.482817   75264 ssh_runner.go:195] Run: rm -f paused
	I0920 18:22:37.532982   75264 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 18:22:37.534936   75264 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-553719" cluster and "default" namespace by default
	I0920 18:22:34.302204   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:36.802991   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:36.586809   75086 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 18:22:36.599904   75086 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
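The scp step above writes a 496-byte `1-k8s.conflist` into /etc/cni/net.d for the bridge CNI, but the file's content is not part of the log. Purely as an assumed example, a generic bridge + host-local conflist of the same general shape can be written like this (subnet, network name, and plugin list are illustrative, not the file minikube actually ships):

```go
// The actual /etc/cni/net.d/1-k8s.conflist is not shown in the log; this
// writes a generic bridge + host-local config of the same general shape.
package main

import (
	"fmt"
	"os"
)

const conflist = `{
  "cniVersion": "0.3.1",
  "name": "k8s-pod-network",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
`

func main() {
	// Assumed path from the mkdir/scp steps above; the JSON values are illustrative.
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		fmt.Println("write failed:", err)
	}
}
```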
	I0920 18:22:36.620050   75086 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 18:22:36.620133   75086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:22:36.620149   75086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-768431 minikube.k8s.io/updated_at=2024_09_20T18_22_36_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=0626f22cf0d915d75e291a5bce701f94395056e1 minikube.k8s.io/name=embed-certs-768431 minikube.k8s.io/primary=true
	I0920 18:22:36.636625   75086 ops.go:34] apiserver oom_adj: -16
	I0920 18:22:36.817434   75086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:22:37.318133   75086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:22:37.818515   75086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:22:38.318005   75086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:22:38.817759   75086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:22:39.318481   75086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:22:39.817943   75086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:22:40.317957   75086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:22:40.423342   75086 kubeadm.go:1113] duration metric: took 3.803277177s to wait for elevateKubeSystemPrivileges
	I0920 18:22:40.423389   75086 kubeadm.go:394] duration metric: took 4m59.290098215s to StartCluster
	I0920 18:22:40.423413   75086 settings.go:142] acquiring lock: {Name:mk2a13c58cdc0faf8cddca5d6716175d45db9bfd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:22:40.423516   75086 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19672-8777/kubeconfig
	I0920 18:22:40.426054   75086 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/kubeconfig: {Name:mkf32a4c736808e023459b2f0e40188618a38db1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:22:40.426360   75086 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.202 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 18:22:40.426456   75086 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0920 18:22:40.426546   75086 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-768431"
	I0920 18:22:40.426566   75086 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-768431"
	W0920 18:22:40.426578   75086 addons.go:243] addon storage-provisioner should already be in state true
	I0920 18:22:40.426576   75086 addons.go:69] Setting default-storageclass=true in profile "embed-certs-768431"
	I0920 18:22:40.426608   75086 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-768431"
	I0920 18:22:40.426622   75086 host.go:66] Checking if "embed-certs-768431" exists ...
	I0920 18:22:40.426621   75086 addons.go:69] Setting metrics-server=true in profile "embed-certs-768431"
	I0920 18:22:40.426662   75086 config.go:182] Loaded profile config "embed-certs-768431": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:22:40.426669   75086 addons.go:234] Setting addon metrics-server=true in "embed-certs-768431"
	W0920 18:22:40.426699   75086 addons.go:243] addon metrics-server should already be in state true
	I0920 18:22:40.426739   75086 host.go:66] Checking if "embed-certs-768431" exists ...
	I0920 18:22:40.427075   75086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:22:40.427116   75086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:22:40.427120   75086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:22:40.427155   75086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:22:40.427161   75086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:22:40.427251   75086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:22:40.427977   75086 out.go:177] * Verifying Kubernetes components...
	I0920 18:22:40.429647   75086 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:22:40.443468   75086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40103
	I0920 18:22:40.443663   75086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37871
	I0920 18:22:40.444055   75086 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:22:40.444062   75086 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:22:40.444562   75086 main.go:141] libmachine: Using API Version  1
	I0920 18:22:40.444586   75086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:22:40.444732   75086 main.go:141] libmachine: Using API Version  1
	I0920 18:22:40.444750   75086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:22:40.445016   75086 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:22:40.445110   75086 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:22:40.445584   75086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:22:40.445634   75086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:22:40.445652   75086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:22:40.445654   75086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38771
	I0920 18:22:40.445684   75086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:22:40.446227   75086 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:22:40.446872   75086 main.go:141] libmachine: Using API Version  1
	I0920 18:22:40.446892   75086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:22:40.447280   75086 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:22:40.447692   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetState
	I0920 18:22:40.451332   75086 addons.go:234] Setting addon default-storageclass=true in "embed-certs-768431"
	W0920 18:22:40.451355   75086 addons.go:243] addon default-storageclass should already be in state true
	I0920 18:22:40.451382   75086 host.go:66] Checking if "embed-certs-768431" exists ...
	I0920 18:22:40.451753   75086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:22:40.451803   75086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:22:40.463317   75086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39353
	I0920 18:22:40.463816   75086 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:22:40.464544   75086 main.go:141] libmachine: Using API Version  1
	I0920 18:22:40.464577   75086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:22:40.464961   75086 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:22:40.467445   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetState
	I0920 18:22:40.469290   75086 main.go:141] libmachine: (embed-certs-768431) Calling .DriverName
	I0920 18:22:40.471212   75086 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0920 18:22:40.471862   75086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40689
	I0920 18:22:40.472254   75086 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:22:40.472515   75086 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0920 18:22:40.472535   75086 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0920 18:22:40.472557   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHHostname
	I0920 18:22:40.472718   75086 main.go:141] libmachine: Using API Version  1
	I0920 18:22:40.472741   75086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:22:40.473259   75086 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:22:40.473477   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetState
	I0920 18:22:40.473753   75086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36495
	I0920 18:22:40.474173   75086 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:22:40.474778   75086 main.go:141] libmachine: Using API Version  1
	I0920 18:22:40.474796   75086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:22:40.475166   75086 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:22:40.475726   75086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:22:40.475766   75086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:22:40.475984   75086 main.go:141] libmachine: (embed-certs-768431) Calling .DriverName
	I0920 18:22:40.476244   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:22:40.476583   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:22:40.476605   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:22:40.476773   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHPort
	I0920 18:22:40.476953   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHKeyPath
	I0920 18:22:40.477053   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHUsername
	I0920 18:22:40.477196   75086 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/embed-certs-768431/id_rsa Username:docker}
	I0920 18:22:40.478244   75086 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:22:40.479443   75086 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 18:22:40.479459   75086 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 18:22:40.479476   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHHostname
	I0920 18:22:40.482119   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:22:40.482400   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:22:40.482423   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:22:40.482639   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHPort
	I0920 18:22:40.482840   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHKeyPath
	I0920 18:22:40.482968   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHUsername
	I0920 18:22:40.483145   75086 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/embed-certs-768431/id_rsa Username:docker}
	I0920 18:22:40.494454   75086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35971
	I0920 18:22:40.494948   75086 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:22:40.495425   75086 main.go:141] libmachine: Using API Version  1
	I0920 18:22:40.495444   75086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:22:40.495798   75086 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:22:40.495990   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetState
	I0920 18:22:40.497591   75086 main.go:141] libmachine: (embed-certs-768431) Calling .DriverName
	I0920 18:22:40.497878   75086 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 18:22:40.497894   75086 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 18:22:40.497913   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHHostname
	I0920 18:22:40.500755   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:22:40.501130   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:22:40.501153   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:22:40.501355   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHPort
	I0920 18:22:40.501548   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHKeyPath
	I0920 18:22:40.501725   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHUsername
	I0920 18:22:40.502091   75086 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/embed-certs-768431/id_rsa Username:docker}
	I0920 18:22:40.646738   75086 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:22:40.673285   75086 node_ready.go:35] waiting up to 6m0s for node "embed-certs-768431" to be "Ready" ...
	I0920 18:22:40.684618   75086 node_ready.go:49] node "embed-certs-768431" has status "Ready":"True"
	I0920 18:22:40.684643   75086 node_ready.go:38] duration metric: took 11.298825ms for node "embed-certs-768431" to be "Ready" ...
	I0920 18:22:40.684653   75086 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:22:40.689151   75086 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-768431" in "kube-system" namespace to be "Ready" ...
	I0920 18:22:40.734882   75086 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 18:22:40.744479   75086 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 18:22:40.884534   75086 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0920 18:22:40.884556   75086 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0920 18:22:41.018287   75086 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0920 18:22:41.018313   75086 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0920 18:22:41.091216   75086 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 18:22:41.091247   75086 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0920 18:22:41.132887   75086 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 18:22:41.538445   75086 main.go:141] libmachine: Making call to close driver server
	I0920 18:22:41.538472   75086 main.go:141] libmachine: (embed-certs-768431) Calling .Close
	I0920 18:22:41.538774   75086 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:22:41.538795   75086 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:22:41.538806   75086 main.go:141] libmachine: Making call to close driver server
	I0920 18:22:41.538815   75086 main.go:141] libmachine: (embed-certs-768431) Calling .Close
	I0920 18:22:41.539010   75086 main.go:141] libmachine: Making call to close driver server
	I0920 18:22:41.539027   75086 main.go:141] libmachine: (embed-certs-768431) Calling .Close
	I0920 18:22:41.539118   75086 main.go:141] libmachine: (embed-certs-768431) DBG | Closing plugin on server side
	I0920 18:22:41.539137   75086 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:22:41.539205   75086 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:22:41.539273   75086 main.go:141] libmachine: (embed-certs-768431) DBG | Closing plugin on server side
	I0920 18:22:41.539286   75086 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:22:41.539300   75086 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:22:41.539309   75086 main.go:141] libmachine: Making call to close driver server
	I0920 18:22:41.539348   75086 main.go:141] libmachine: (embed-certs-768431) Calling .Close
	I0920 18:22:41.539629   75086 main.go:141] libmachine: (embed-certs-768431) DBG | Closing plugin on server side
	I0920 18:22:41.539693   75086 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:22:41.539712   75086 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:22:41.559164   75086 main.go:141] libmachine: Making call to close driver server
	I0920 18:22:41.559191   75086 main.go:141] libmachine: (embed-certs-768431) Calling .Close
	I0920 18:22:41.559517   75086 main.go:141] libmachine: (embed-certs-768431) DBG | Closing plugin on server side
	I0920 18:22:41.559580   75086 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:22:41.559596   75086 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:22:41.880601   75086 main.go:141] libmachine: Making call to close driver server
	I0920 18:22:41.880640   75086 main.go:141] libmachine: (embed-certs-768431) Calling .Close
	I0920 18:22:41.880998   75086 main.go:141] libmachine: (embed-certs-768431) DBG | Closing plugin on server side
	I0920 18:22:41.881026   75086 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:22:41.881039   75086 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:22:41.881047   75086 main.go:141] libmachine: Making call to close driver server
	I0920 18:22:41.881052   75086 main.go:141] libmachine: (embed-certs-768431) Calling .Close
	I0920 18:22:41.881304   75086 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:22:41.881322   75086 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:22:41.881336   75086 addons.go:475] Verifying addon metrics-server=true in "embed-certs-768431"
	I0920 18:22:41.883315   75086 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0920 18:22:39.302453   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:41.802428   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:41.885367   75086 addons.go:510] duration metric: took 1.45891561s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0920 18:22:42.696266   75086 pod_ready.go:103] pod "etcd-embed-certs-768431" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:44.300965   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:46.301969   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:45.195160   75086 pod_ready.go:103] pod "etcd-embed-certs-768431" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:46.200142   75086 pod_ready.go:93] pod "etcd-embed-certs-768431" in "kube-system" namespace has status "Ready":"True"
	I0920 18:22:46.200166   75086 pod_ready.go:82] duration metric: took 5.510989222s for pod "etcd-embed-certs-768431" in "kube-system" namespace to be "Ready" ...
	I0920 18:22:46.200175   75086 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-768431" in "kube-system" namespace to be "Ready" ...
	I0920 18:22:46.205619   75086 pod_ready.go:93] pod "kube-apiserver-embed-certs-768431" in "kube-system" namespace has status "Ready":"True"
	I0920 18:22:46.205647   75086 pod_ready.go:82] duration metric: took 5.464252ms for pod "kube-apiserver-embed-certs-768431" in "kube-system" namespace to be "Ready" ...
	I0920 18:22:46.205659   75086 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-768431" in "kube-system" namespace to be "Ready" ...
	I0920 18:22:46.210040   75086 pod_ready.go:93] pod "kube-controller-manager-embed-certs-768431" in "kube-system" namespace has status "Ready":"True"
	I0920 18:22:46.210066   75086 pod_ready.go:82] duration metric: took 4.397776ms for pod "kube-controller-manager-embed-certs-768431" in "kube-system" namespace to be "Ready" ...
	I0920 18:22:46.210079   75086 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-768431" in "kube-system" namespace to be "Ready" ...
	I0920 18:22:48.216587   75086 pod_ready.go:103] pod "kube-scheduler-embed-certs-768431" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:49.217996   75086 pod_ready.go:93] pod "kube-scheduler-embed-certs-768431" in "kube-system" namespace has status "Ready":"True"
	I0920 18:22:49.218019   75086 pod_ready.go:82] duration metric: took 3.007931797s for pod "kube-scheduler-embed-certs-768431" in "kube-system" namespace to be "Ready" ...
	I0920 18:22:49.218025   75086 pod_ready.go:39] duration metric: took 8.533358652s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:22:49.218039   75086 api_server.go:52] waiting for apiserver process to appear ...
	I0920 18:22:49.218100   75086 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:22:49.235456   75086 api_server.go:72] duration metric: took 8.809056301s to wait for apiserver process to appear ...
	I0920 18:22:49.235480   75086 api_server.go:88] waiting for apiserver healthz status ...
	I0920 18:22:49.235499   75086 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0920 18:22:49.239562   75086 api_server.go:279] https://192.168.61.202:8443/healthz returned 200:
	ok
	I0920 18:22:49.240999   75086 api_server.go:141] control plane version: v1.31.1
	I0920 18:22:49.241024   75086 api_server.go:131] duration metric: took 5.537177ms to wait for apiserver health ...
	I0920 18:22:49.241033   75086 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 18:22:49.246297   75086 system_pods.go:59] 9 kube-system pods found
	I0920 18:22:49.246335   75086 system_pods.go:61] "coredns-7c65d6cfc9-g5tkc" [8877c0e8-6c8f-4a62-94bd-508982faee3c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0920 18:22:49.246342   75086 system_pods.go:61] "coredns-7c65d6cfc9-jkkdn" [f64288e4-009c-4ba8-93e7-30ca5296af46] Running
	I0920 18:22:49.246348   75086 system_pods.go:61] "etcd-embed-certs-768431" [f0ba7c9a-cc8b-43d4-aa48-a8f923d3c515] Running
	I0920 18:22:49.246352   75086 system_pods.go:61] "kube-apiserver-embed-certs-768431" [dba3b6cc-95db-47a0-ba41-3aa3ed3e8adb] Running
	I0920 18:22:49.246356   75086 system_pods.go:61] "kube-controller-manager-embed-certs-768431" [ff933e33-4c5c-4a8c-8ee4-6f0cb3ea4c3a] Running
	I0920 18:22:49.246359   75086 system_pods.go:61] "kube-proxy-c4527" [2e2d5102-0c42-4b87-8a27-dd53b8eb41f9] Running
	I0920 18:22:49.246362   75086 system_pods.go:61] "kube-scheduler-embed-certs-768431" [8cca3406-a395-4dcd-97ac-d8ce3e8deac6] Running
	I0920 18:22:49.246367   75086 system_pods.go:61] "metrics-server-6867b74b74-9snmf" [5fb654f5-5e73-436e-bc9d-04ef5077deb4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 18:22:49.246373   75086 system_pods.go:61] "storage-provisioner" [09227b45-ea2a-4fcc-b082-9978e5f00a4b] Running
	I0920 18:22:49.246379   75086 system_pods.go:74] duration metric: took 5.340615ms to wait for pod list to return data ...
	I0920 18:22:49.246388   75086 default_sa.go:34] waiting for default service account to be created ...
	I0920 18:22:49.249593   75086 default_sa.go:45] found service account: "default"
	I0920 18:22:49.249617   75086 default_sa.go:55] duration metric: took 3.222486ms for default service account to be created ...
	I0920 18:22:49.249626   75086 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 18:22:49.255747   75086 system_pods.go:86] 9 kube-system pods found
	I0920 18:22:49.255780   75086 system_pods.go:89] "coredns-7c65d6cfc9-g5tkc" [8877c0e8-6c8f-4a62-94bd-508982faee3c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0920 18:22:49.255790   75086 system_pods.go:89] "coredns-7c65d6cfc9-jkkdn" [f64288e4-009c-4ba8-93e7-30ca5296af46] Running
	I0920 18:22:49.255799   75086 system_pods.go:89] "etcd-embed-certs-768431" [f0ba7c9a-cc8b-43d4-aa48-a8f923d3c515] Running
	I0920 18:22:49.255805   75086 system_pods.go:89] "kube-apiserver-embed-certs-768431" [dba3b6cc-95db-47a0-ba41-3aa3ed3e8adb] Running
	I0920 18:22:49.255811   75086 system_pods.go:89] "kube-controller-manager-embed-certs-768431" [ff933e33-4c5c-4a8c-8ee4-6f0cb3ea4c3a] Running
	I0920 18:22:49.255817   75086 system_pods.go:89] "kube-proxy-c4527" [2e2d5102-0c42-4b87-8a27-dd53b8eb41f9] Running
	I0920 18:22:49.255826   75086 system_pods.go:89] "kube-scheduler-embed-certs-768431" [8cca3406-a395-4dcd-97ac-d8ce3e8deac6] Running
	I0920 18:22:49.255834   75086 system_pods.go:89] "metrics-server-6867b74b74-9snmf" [5fb654f5-5e73-436e-bc9d-04ef5077deb4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 18:22:49.255845   75086 system_pods.go:89] "storage-provisioner" [09227b45-ea2a-4fcc-b082-9978e5f00a4b] Running
	I0920 18:22:49.255852   75086 system_pods.go:126] duration metric: took 6.220419ms to wait for k8s-apps to be running ...
	I0920 18:22:49.255863   75086 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 18:22:49.255918   75086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:22:49.271916   75086 system_svc.go:56] duration metric: took 16.046857ms WaitForService to wait for kubelet
	I0920 18:22:49.271946   75086 kubeadm.go:582] duration metric: took 8.845551531s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 18:22:49.271962   75086 node_conditions.go:102] verifying NodePressure condition ...
	I0920 18:22:49.277150   75086 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 18:22:49.277190   75086 node_conditions.go:123] node cpu capacity is 2
	I0920 18:22:49.277203   75086 node_conditions.go:105] duration metric: took 5.234938ms to run NodePressure ...
	I0920 18:22:49.277216   75086 start.go:241] waiting for startup goroutines ...
	I0920 18:22:49.277226   75086 start.go:246] waiting for cluster config update ...
	I0920 18:22:49.277240   75086 start.go:255] writing updated cluster config ...
	I0920 18:22:49.278062   75086 ssh_runner.go:195] Run: rm -f paused
	I0920 18:22:49.329596   75086 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 18:22:49.331593   75086 out.go:177] * Done! kubectl is now configured to use "embed-certs-768431" cluster and "default" namespace by default
	I0920 18:22:48.802304   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:51.301288   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:53.801957   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:56.301310   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:58.302165   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:23:00.801931   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:23:03.301289   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:23:05.801777   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:23:08.301539   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:23:08.801990   74753 pod_ready.go:82] duration metric: took 4m0.006972987s for pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace to be "Ready" ...
	E0920 18:23:08.802010   74753 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0920 18:23:08.802019   74753 pod_ready.go:39] duration metric: took 4m2.40678087s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:23:08.802035   74753 api_server.go:52] waiting for apiserver process to appear ...
	I0920 18:23:08.802062   74753 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:23:08.802110   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:23:08.848687   74753 cri.go:89] found id: "334e4df5baa4f8f7d58af2308fab7f78de22f7e07d6dfbcbeaa0df2a52f6b207"
	I0920 18:23:08.848709   74753 cri.go:89] found id: ""
	I0920 18:23:08.848716   74753 logs.go:276] 1 containers: [334e4df5baa4f8f7d58af2308fab7f78de22f7e07d6dfbcbeaa0df2a52f6b207]
	I0920 18:23:08.848764   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:08.853180   74753 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:23:08.853239   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:23:08.889768   74753 cri.go:89] found id: "98aa96314cf8f863121b9953ad6335628481f536fcab49b7c00505a17eb35ff2"
	I0920 18:23:08.889793   74753 cri.go:89] found id: ""
	I0920 18:23:08.889803   74753 logs.go:276] 1 containers: [98aa96314cf8f863121b9953ad6335628481f536fcab49b7c00505a17eb35ff2]
	I0920 18:23:08.889869   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:08.893865   74753 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:23:08.893937   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:23:08.936592   74753 cri.go:89] found id: "35f0d8dd053d4c376ccf21215492d8509ff701e7d09b345fdbe06d505ba2d6b4"
	I0920 18:23:08.936619   74753 cri.go:89] found id: ""
	I0920 18:23:08.936629   74753 logs.go:276] 1 containers: [35f0d8dd053d4c376ccf21215492d8509ff701e7d09b345fdbe06d505ba2d6b4]
	I0920 18:23:08.936768   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:08.941222   74753 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:23:08.941311   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:23:08.978894   74753 cri.go:89] found id: "8153479cebb05353bec31c41778746c485cb294bf41c3f98e30559ad7da64c64"
	I0920 18:23:08.978918   74753 cri.go:89] found id: ""
	I0920 18:23:08.978929   74753 logs.go:276] 1 containers: [8153479cebb05353bec31c41778746c485cb294bf41c3f98e30559ad7da64c64]
	I0920 18:23:08.978977   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:08.982783   74753 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:23:08.982845   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:23:09.018400   74753 cri.go:89] found id: "6df198ca54e8004a532de9b15cfe7a48a659ae06623c00a78cad528762944497"
	I0920 18:23:09.018422   74753 cri.go:89] found id: ""
	I0920 18:23:09.018430   74753 logs.go:276] 1 containers: [6df198ca54e8004a532de9b15cfe7a48a659ae06623c00a78cad528762944497]
	I0920 18:23:09.018485   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:09.022403   74753 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:23:09.022470   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:23:09.062315   74753 cri.go:89] found id: "3ebf4c520d684127cd0b6146d44e863090b9c2db3ac866e9ce3ef2d10d2d6da1"
	I0920 18:23:09.062385   74753 cri.go:89] found id: ""
	I0920 18:23:09.062397   74753 logs.go:276] 1 containers: [3ebf4c520d684127cd0b6146d44e863090b9c2db3ac866e9ce3ef2d10d2d6da1]
	I0920 18:23:09.062453   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:09.066976   74753 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:23:09.067049   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:23:09.107467   74753 cri.go:89] found id: ""
	I0920 18:23:09.107504   74753 logs.go:276] 0 containers: []
	W0920 18:23:09.107515   74753 logs.go:278] No container was found matching "kindnet"
	I0920 18:23:09.107523   74753 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0920 18:23:09.107583   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0920 18:23:09.150489   74753 cri.go:89] found id: "179e4a02f3459a613d3307806b477d54581555b18babca62d0c5e553b9562442"
	I0920 18:23:09.150519   74753 cri.go:89] found id: "3eb9abdf57de51979118cfa77b613c22e28122fcb1da9e72fbf6f3a946ed3531"
	I0920 18:23:09.150526   74753 cri.go:89] found id: ""
	I0920 18:23:09.150536   74753 logs.go:276] 2 containers: [179e4a02f3459a613d3307806b477d54581555b18babca62d0c5e553b9562442 3eb9abdf57de51979118cfa77b613c22e28122fcb1da9e72fbf6f3a946ed3531]
	I0920 18:23:09.150589   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:09.154618   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:09.158682   74753 logs.go:123] Gathering logs for container status ...
	I0920 18:23:09.158708   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:23:09.198393   74753 logs.go:123] Gathering logs for kubelet ...
	I0920 18:23:09.198420   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:23:09.266161   74753 logs.go:123] Gathering logs for kube-apiserver [334e4df5baa4f8f7d58af2308fab7f78de22f7e07d6dfbcbeaa0df2a52f6b207] ...
	I0920 18:23:09.266197   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 334e4df5baa4f8f7d58af2308fab7f78de22f7e07d6dfbcbeaa0df2a52f6b207"
	I0920 18:23:09.311406   74753 logs.go:123] Gathering logs for etcd [98aa96314cf8f863121b9953ad6335628481f536fcab49b7c00505a17eb35ff2] ...
	I0920 18:23:09.311438   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 98aa96314cf8f863121b9953ad6335628481f536fcab49b7c00505a17eb35ff2"
	I0920 18:23:09.353233   74753 logs.go:123] Gathering logs for kube-controller-manager [3ebf4c520d684127cd0b6146d44e863090b9c2db3ac866e9ce3ef2d10d2d6da1] ...
	I0920 18:23:09.353271   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ebf4c520d684127cd0b6146d44e863090b9c2db3ac866e9ce3ef2d10d2d6da1"
	I0920 18:23:09.415966   74753 logs.go:123] Gathering logs for storage-provisioner [3eb9abdf57de51979118cfa77b613c22e28122fcb1da9e72fbf6f3a946ed3531] ...
	I0920 18:23:09.416010   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3eb9abdf57de51979118cfa77b613c22e28122fcb1da9e72fbf6f3a946ed3531"
	I0920 18:23:09.462018   74753 logs.go:123] Gathering logs for storage-provisioner [179e4a02f3459a613d3307806b477d54581555b18babca62d0c5e553b9562442] ...
	I0920 18:23:09.462054   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 179e4a02f3459a613d3307806b477d54581555b18babca62d0c5e553b9562442"
	I0920 18:23:09.499851   74753 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:23:09.499883   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:23:10.018536   74753 logs.go:123] Gathering logs for dmesg ...
	I0920 18:23:10.018586   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:23:10.033607   74753 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:23:10.033639   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 18:23:10.167598   74753 logs.go:123] Gathering logs for coredns [35f0d8dd053d4c376ccf21215492d8509ff701e7d09b345fdbe06d505ba2d6b4] ...
	I0920 18:23:10.167645   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35f0d8dd053d4c376ccf21215492d8509ff701e7d09b345fdbe06d505ba2d6b4"
	I0920 18:23:10.201819   74753 logs.go:123] Gathering logs for kube-scheduler [8153479cebb05353bec31c41778746c485cb294bf41c3f98e30559ad7da64c64] ...
	I0920 18:23:10.201856   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8153479cebb05353bec31c41778746c485cb294bf41c3f98e30559ad7da64c64"
	I0920 18:23:10.237079   74753 logs.go:123] Gathering logs for kube-proxy [6df198ca54e8004a532de9b15cfe7a48a659ae06623c00a78cad528762944497] ...
	I0920 18:23:10.237108   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6df198ca54e8004a532de9b15cfe7a48a659ae06623c00a78cad528762944497"
	I0920 18:23:12.778360   74753 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:23:12.796411   74753 api_server.go:72] duration metric: took 4m13.647076581s to wait for apiserver process to appear ...
	I0920 18:23:12.796438   74753 api_server.go:88] waiting for apiserver healthz status ...
	I0920 18:23:12.796475   74753 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:23:12.796525   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:23:12.833259   74753 cri.go:89] found id: "334e4df5baa4f8f7d58af2308fab7f78de22f7e07d6dfbcbeaa0df2a52f6b207"
	I0920 18:23:12.833279   74753 cri.go:89] found id: ""
	I0920 18:23:12.833287   74753 logs.go:276] 1 containers: [334e4df5baa4f8f7d58af2308fab7f78de22f7e07d6dfbcbeaa0df2a52f6b207]
	I0920 18:23:12.833334   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:12.837497   74753 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:23:12.837568   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:23:12.877602   74753 cri.go:89] found id: "98aa96314cf8f863121b9953ad6335628481f536fcab49b7c00505a17eb35ff2"
	I0920 18:23:12.877628   74753 cri.go:89] found id: ""
	I0920 18:23:12.877639   74753 logs.go:276] 1 containers: [98aa96314cf8f863121b9953ad6335628481f536fcab49b7c00505a17eb35ff2]
	I0920 18:23:12.877688   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:12.881869   74753 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:23:12.881951   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:23:12.924617   74753 cri.go:89] found id: "35f0d8dd053d4c376ccf21215492d8509ff701e7d09b345fdbe06d505ba2d6b4"
	I0920 18:23:12.924641   74753 cri.go:89] found id: ""
	I0920 18:23:12.924650   74753 logs.go:276] 1 containers: [35f0d8dd053d4c376ccf21215492d8509ff701e7d09b345fdbe06d505ba2d6b4]
	I0920 18:23:12.924710   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:12.929317   74753 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:23:12.929381   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:23:12.968642   74753 cri.go:89] found id: "8153479cebb05353bec31c41778746c485cb294bf41c3f98e30559ad7da64c64"
	I0920 18:23:12.968672   74753 cri.go:89] found id: ""
	I0920 18:23:12.968682   74753 logs.go:276] 1 containers: [8153479cebb05353bec31c41778746c485cb294bf41c3f98e30559ad7da64c64]
	I0920 18:23:12.968745   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:12.974588   74753 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:23:12.974665   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:23:13.010036   74753 cri.go:89] found id: "6df198ca54e8004a532de9b15cfe7a48a659ae06623c00a78cad528762944497"
	I0920 18:23:13.010067   74753 cri.go:89] found id: ""
	I0920 18:23:13.010078   74753 logs.go:276] 1 containers: [6df198ca54e8004a532de9b15cfe7a48a659ae06623c00a78cad528762944497]
	I0920 18:23:13.010136   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:13.014300   74753 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:23:13.014358   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:23:13.055560   74753 cri.go:89] found id: "3ebf4c520d684127cd0b6146d44e863090b9c2db3ac866e9ce3ef2d10d2d6da1"
	I0920 18:23:13.055591   74753 cri.go:89] found id: ""
	I0920 18:23:13.055601   74753 logs.go:276] 1 containers: [3ebf4c520d684127cd0b6146d44e863090b9c2db3ac866e9ce3ef2d10d2d6da1]
	I0920 18:23:13.055663   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:13.059828   74753 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:23:13.059887   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:23:13.100161   74753 cri.go:89] found id: ""
	I0920 18:23:13.100189   74753 logs.go:276] 0 containers: []
	W0920 18:23:13.100199   74753 logs.go:278] No container was found matching "kindnet"
	I0920 18:23:13.100207   74753 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0920 18:23:13.100269   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0920 18:23:13.136788   74753 cri.go:89] found id: "179e4a02f3459a613d3307806b477d54581555b18babca62d0c5e553b9562442"
	I0920 18:23:13.136818   74753 cri.go:89] found id: "3eb9abdf57de51979118cfa77b613c22e28122fcb1da9e72fbf6f3a946ed3531"
	I0920 18:23:13.136824   74753 cri.go:89] found id: ""
	I0920 18:23:13.136831   74753 logs.go:276] 2 containers: [179e4a02f3459a613d3307806b477d54581555b18babca62d0c5e553b9562442 3eb9abdf57de51979118cfa77b613c22e28122fcb1da9e72fbf6f3a946ed3531]
	I0920 18:23:13.136894   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:13.140956   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:13.145991   74753 logs.go:123] Gathering logs for container status ...
	I0920 18:23:13.146023   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:23:13.187274   74753 logs.go:123] Gathering logs for kubelet ...
	I0920 18:23:13.187303   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:23:13.257111   74753 logs.go:123] Gathering logs for dmesg ...
	I0920 18:23:13.257147   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:23:13.271678   74753 logs.go:123] Gathering logs for kube-controller-manager [3ebf4c520d684127cd0b6146d44e863090b9c2db3ac866e9ce3ef2d10d2d6da1] ...
	I0920 18:23:13.271704   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ebf4c520d684127cd0b6146d44e863090b9c2db3ac866e9ce3ef2d10d2d6da1"
	I0920 18:23:13.327163   74753 logs.go:123] Gathering logs for storage-provisioner [179e4a02f3459a613d3307806b477d54581555b18babca62d0c5e553b9562442] ...
	I0920 18:23:13.327191   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 179e4a02f3459a613d3307806b477d54581555b18babca62d0c5e553b9562442"
	I0920 18:23:13.362002   74753 logs.go:123] Gathering logs for storage-provisioner [3eb9abdf57de51979118cfa77b613c22e28122fcb1da9e72fbf6f3a946ed3531] ...
	I0920 18:23:13.362039   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3eb9abdf57de51979118cfa77b613c22e28122fcb1da9e72fbf6f3a946ed3531"
	I0920 18:23:13.409907   74753 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:23:13.409938   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:23:13.840233   74753 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:23:13.840275   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 18:23:13.956466   74753 logs.go:123] Gathering logs for kube-apiserver [334e4df5baa4f8f7d58af2308fab7f78de22f7e07d6dfbcbeaa0df2a52f6b207] ...
	I0920 18:23:13.956499   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 334e4df5baa4f8f7d58af2308fab7f78de22f7e07d6dfbcbeaa0df2a52f6b207"
	I0920 18:23:14.006028   74753 logs.go:123] Gathering logs for etcd [98aa96314cf8f863121b9953ad6335628481f536fcab49b7c00505a17eb35ff2] ...
	I0920 18:23:14.006063   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 98aa96314cf8f863121b9953ad6335628481f536fcab49b7c00505a17eb35ff2"
	I0920 18:23:14.051354   74753 logs.go:123] Gathering logs for coredns [35f0d8dd053d4c376ccf21215492d8509ff701e7d09b345fdbe06d505ba2d6b4] ...
	I0920 18:23:14.051382   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35f0d8dd053d4c376ccf21215492d8509ff701e7d09b345fdbe06d505ba2d6b4"
	I0920 18:23:14.086486   74753 logs.go:123] Gathering logs for kube-scheduler [8153479cebb05353bec31c41778746c485cb294bf41c3f98e30559ad7da64c64] ...
	I0920 18:23:14.086518   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8153479cebb05353bec31c41778746c485cb294bf41c3f98e30559ad7da64c64"
	I0920 18:23:14.128119   74753 logs.go:123] Gathering logs for kube-proxy [6df198ca54e8004a532de9b15cfe7a48a659ae06623c00a78cad528762944497] ...
	I0920 18:23:14.128149   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6df198ca54e8004a532de9b15cfe7a48a659ae06623c00a78cad528762944497"
	I0920 18:23:16.668901   74753 api_server.go:253] Checking apiserver healthz at https://192.168.50.47:8443/healthz ...
	I0920 18:23:16.674235   74753 api_server.go:279] https://192.168.50.47:8443/healthz returned 200:
	ok
	I0920 18:23:16.675431   74753 api_server.go:141] control plane version: v1.31.1
	I0920 18:23:16.675449   74753 api_server.go:131] duration metric: took 3.879005012s to wait for apiserver health ...
	I0920 18:23:16.675456   74753 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 18:23:16.675481   74753 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:23:16.675527   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:23:16.711645   74753 cri.go:89] found id: "334e4df5baa4f8f7d58af2308fab7f78de22f7e07d6dfbcbeaa0df2a52f6b207"
	I0920 18:23:16.711673   74753 cri.go:89] found id: ""
	I0920 18:23:16.711683   74753 logs.go:276] 1 containers: [334e4df5baa4f8f7d58af2308fab7f78de22f7e07d6dfbcbeaa0df2a52f6b207]
	I0920 18:23:16.711743   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:16.716466   74753 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:23:16.716536   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:23:16.763061   74753 cri.go:89] found id: "98aa96314cf8f863121b9953ad6335628481f536fcab49b7c00505a17eb35ff2"
	I0920 18:23:16.763085   74753 cri.go:89] found id: ""
	I0920 18:23:16.763095   74753 logs.go:276] 1 containers: [98aa96314cf8f863121b9953ad6335628481f536fcab49b7c00505a17eb35ff2]
	I0920 18:23:16.763155   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:16.767018   74753 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:23:16.767075   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:23:16.801111   74753 cri.go:89] found id: "35f0d8dd053d4c376ccf21215492d8509ff701e7d09b345fdbe06d505ba2d6b4"
	I0920 18:23:16.801131   74753 cri.go:89] found id: ""
	I0920 18:23:16.801138   74753 logs.go:276] 1 containers: [35f0d8dd053d4c376ccf21215492d8509ff701e7d09b345fdbe06d505ba2d6b4]
	I0920 18:23:16.801192   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:16.805372   74753 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:23:16.805436   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:23:16.843368   74753 cri.go:89] found id: "8153479cebb05353bec31c41778746c485cb294bf41c3f98e30559ad7da64c64"
	I0920 18:23:16.843399   74753 cri.go:89] found id: ""
	I0920 18:23:16.843410   74753 logs.go:276] 1 containers: [8153479cebb05353bec31c41778746c485cb294bf41c3f98e30559ad7da64c64]
	I0920 18:23:16.843461   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:16.847284   74753 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:23:16.847341   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:23:16.882749   74753 cri.go:89] found id: "6df198ca54e8004a532de9b15cfe7a48a659ae06623c00a78cad528762944497"
	I0920 18:23:16.882770   74753 cri.go:89] found id: ""
	I0920 18:23:16.882777   74753 logs.go:276] 1 containers: [6df198ca54e8004a532de9b15cfe7a48a659ae06623c00a78cad528762944497]
	I0920 18:23:16.882838   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:16.887003   74753 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:23:16.887083   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:23:16.923261   74753 cri.go:89] found id: "3ebf4c520d684127cd0b6146d44e863090b9c2db3ac866e9ce3ef2d10d2d6da1"
	I0920 18:23:16.923289   74753 cri.go:89] found id: ""
	I0920 18:23:16.923303   74753 logs.go:276] 1 containers: [3ebf4c520d684127cd0b6146d44e863090b9c2db3ac866e9ce3ef2d10d2d6da1]
	I0920 18:23:16.923357   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:16.927722   74753 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:23:16.927782   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:23:16.963753   74753 cri.go:89] found id: ""
	I0920 18:23:16.963780   74753 logs.go:276] 0 containers: []
	W0920 18:23:16.963791   74753 logs.go:278] No container was found matching "kindnet"
	I0920 18:23:16.963799   74753 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0920 18:23:16.963866   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0920 18:23:17.000548   74753 cri.go:89] found id: "179e4a02f3459a613d3307806b477d54581555b18babca62d0c5e553b9562442"
	I0920 18:23:17.000566   74753 cri.go:89] found id: "3eb9abdf57de51979118cfa77b613c22e28122fcb1da9e72fbf6f3a946ed3531"
	I0920 18:23:17.000571   74753 cri.go:89] found id: ""
	I0920 18:23:17.000578   74753 logs.go:276] 2 containers: [179e4a02f3459a613d3307806b477d54581555b18babca62d0c5e553b9562442 3eb9abdf57de51979118cfa77b613c22e28122fcb1da9e72fbf6f3a946ed3531]
	I0920 18:23:17.000627   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:17.004708   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:17.008489   74753 logs.go:123] Gathering logs for coredns [35f0d8dd053d4c376ccf21215492d8509ff701e7d09b345fdbe06d505ba2d6b4] ...
	I0920 18:23:17.008518   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35f0d8dd053d4c376ccf21215492d8509ff701e7d09b345fdbe06d505ba2d6b4"
	I0920 18:23:17.045971   74753 logs.go:123] Gathering logs for kube-scheduler [8153479cebb05353bec31c41778746c485cb294bf41c3f98e30559ad7da64c64] ...
	I0920 18:23:17.046005   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8153479cebb05353bec31c41778746c485cb294bf41c3f98e30559ad7da64c64"
	I0920 18:23:17.083976   74753 logs.go:123] Gathering logs for kube-controller-manager [3ebf4c520d684127cd0b6146d44e863090b9c2db3ac866e9ce3ef2d10d2d6da1] ...
	I0920 18:23:17.084005   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ebf4c520d684127cd0b6146d44e863090b9c2db3ac866e9ce3ef2d10d2d6da1"
	I0920 18:23:17.137192   74753 logs.go:123] Gathering logs for storage-provisioner [179e4a02f3459a613d3307806b477d54581555b18babca62d0c5e553b9562442] ...
	I0920 18:23:17.137228   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 179e4a02f3459a613d3307806b477d54581555b18babca62d0c5e553b9562442"
	I0920 18:23:17.177203   74753 logs.go:123] Gathering logs for dmesg ...
	I0920 18:23:17.177234   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:23:17.192612   74753 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:23:17.192639   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 18:23:17.300681   74753 logs.go:123] Gathering logs for kube-apiserver [334e4df5baa4f8f7d58af2308fab7f78de22f7e07d6dfbcbeaa0df2a52f6b207] ...
	I0920 18:23:17.300714   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 334e4df5baa4f8f7d58af2308fab7f78de22f7e07d6dfbcbeaa0df2a52f6b207"
	I0920 18:23:17.346835   74753 logs.go:123] Gathering logs for storage-provisioner [3eb9abdf57de51979118cfa77b613c22e28122fcb1da9e72fbf6f3a946ed3531] ...
	I0920 18:23:17.346866   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3eb9abdf57de51979118cfa77b613c22e28122fcb1da9e72fbf6f3a946ed3531"
	I0920 18:23:17.382625   74753 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:23:17.382650   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:23:17.750803   74753 logs.go:123] Gathering logs for container status ...
	I0920 18:23:17.750853   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:23:17.801114   74753 logs.go:123] Gathering logs for kubelet ...
	I0920 18:23:17.801142   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:23:17.866547   74753 logs.go:123] Gathering logs for etcd [98aa96314cf8f863121b9953ad6335628481f536fcab49b7c00505a17eb35ff2] ...
	I0920 18:23:17.866589   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 98aa96314cf8f863121b9953ad6335628481f536fcab49b7c00505a17eb35ff2"
	I0920 18:23:17.909489   74753 logs.go:123] Gathering logs for kube-proxy [6df198ca54e8004a532de9b15cfe7a48a659ae06623c00a78cad528762944497] ...
	I0920 18:23:17.909519   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6df198ca54e8004a532de9b15cfe7a48a659ae06623c00a78cad528762944497"
	I0920 18:23:20.456014   74753 system_pods.go:59] 8 kube-system pods found
	I0920 18:23:20.456055   74753 system_pods.go:61] "coredns-7c65d6cfc9-j2t5h" [dd2636f1-3200-4f22-957c-046277c9be8c] Running
	I0920 18:23:20.456063   74753 system_pods.go:61] "etcd-no-preload-956403" [daf84be0-e673-4685-a998-47ec64664484] Running
	I0920 18:23:20.456068   74753 system_pods.go:61] "kube-apiserver-no-preload-956403" [416217e6-3c91-49ec-bd69-812a49b4afd9] Running
	I0920 18:23:20.456074   74753 system_pods.go:61] "kube-controller-manager-no-preload-956403" [30fae740-b073-4619-bca5-010ca90b2667] Running
	I0920 18:23:20.456079   74753 system_pods.go:61] "kube-proxy-sz4bm" [269600fb-ef65-4b17-8c07-76c79e35f5a8] Running
	I0920 18:23:20.456085   74753 system_pods.go:61] "kube-scheduler-no-preload-956403" [46432927-e506-4665-8825-c5f6a2ed0458] Running
	I0920 18:23:20.456095   74753 system_pods.go:61] "metrics-server-6867b74b74-tfsff" [599ba06a-6d4d-483b-b390-a3595a814757] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 18:23:20.456101   74753 system_pods.go:61] "storage-provisioner" [df661627-7d32-4467-9805-1ae65d4fa35c] Running
	I0920 18:23:20.456109   74753 system_pods.go:74] duration metric: took 3.780646412s to wait for pod list to return data ...
	I0920 18:23:20.456116   74753 default_sa.go:34] waiting for default service account to be created ...
	I0920 18:23:20.459571   74753 default_sa.go:45] found service account: "default"
	I0920 18:23:20.459598   74753 default_sa.go:55] duration metric: took 3.475906ms for default service account to be created ...
	I0920 18:23:20.459607   74753 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 18:23:20.464453   74753 system_pods.go:86] 8 kube-system pods found
	I0920 18:23:20.464489   74753 system_pods.go:89] "coredns-7c65d6cfc9-j2t5h" [dd2636f1-3200-4f22-957c-046277c9be8c] Running
	I0920 18:23:20.464495   74753 system_pods.go:89] "etcd-no-preload-956403" [daf84be0-e673-4685-a998-47ec64664484] Running
	I0920 18:23:20.464499   74753 system_pods.go:89] "kube-apiserver-no-preload-956403" [416217e6-3c91-49ec-bd69-812a49b4afd9] Running
	I0920 18:23:20.464544   74753 system_pods.go:89] "kube-controller-manager-no-preload-956403" [30fae740-b073-4619-bca5-010ca90b2667] Running
	I0920 18:23:20.464559   74753 system_pods.go:89] "kube-proxy-sz4bm" [269600fb-ef65-4b17-8c07-76c79e35f5a8] Running
	I0920 18:23:20.464563   74753 system_pods.go:89] "kube-scheduler-no-preload-956403" [46432927-e506-4665-8825-c5f6a2ed0458] Running
	I0920 18:23:20.464570   74753 system_pods.go:89] "metrics-server-6867b74b74-tfsff" [599ba06a-6d4d-483b-b390-a3595a814757] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 18:23:20.464577   74753 system_pods.go:89] "storage-provisioner" [df661627-7d32-4467-9805-1ae65d4fa35c] Running
	I0920 18:23:20.464583   74753 system_pods.go:126] duration metric: took 4.971732ms to wait for k8s-apps to be running ...
	I0920 18:23:20.464592   74753 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 18:23:20.464637   74753 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:23:20.483425   74753 system_svc.go:56] duration metric: took 18.826162ms WaitForService to wait for kubelet
	I0920 18:23:20.483452   74753 kubeadm.go:582] duration metric: took 4m21.334124064s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 18:23:20.483469   74753 node_conditions.go:102] verifying NodePressure condition ...
	I0920 18:23:20.486703   74753 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 18:23:20.486725   74753 node_conditions.go:123] node cpu capacity is 2
	I0920 18:23:20.486736   74753 node_conditions.go:105] duration metric: took 3.263035ms to run NodePressure ...
	I0920 18:23:20.486745   74753 start.go:241] waiting for startup goroutines ...
	I0920 18:23:20.486752   74753 start.go:246] waiting for cluster config update ...
	I0920 18:23:20.486763   74753 start.go:255] writing updated cluster config ...
	I0920 18:23:20.487023   74753 ssh_runner.go:195] Run: rm -f paused
	I0920 18:23:20.538569   74753 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 18:23:20.540783   74753 out.go:177] * Done! kubectl is now configured to use "no-preload-956403" cluster and "default" namespace by default
	I0920 18:24:23.891635   75577 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0920 18:24:23.891735   75577 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0920 18:24:23.893591   75577 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0920 18:24:23.893681   75577 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 18:24:23.893785   75577 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 18:24:23.893913   75577 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 18:24:23.894056   75577 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0920 18:24:23.894134   75577 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 18:24:23.895929   75577 out.go:235]   - Generating certificates and keys ...
	I0920 18:24:23.896024   75577 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 18:24:23.896127   75577 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 18:24:23.896245   75577 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0920 18:24:23.896329   75577 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0920 18:24:23.896426   75577 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0920 18:24:23.896510   75577 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0920 18:24:23.896603   75577 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0920 18:24:23.896686   75577 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0920 18:24:23.896783   75577 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0920 18:24:23.896865   75577 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0920 18:24:23.896916   75577 kubeadm.go:310] [certs] Using the existing "sa" key
	I0920 18:24:23.896984   75577 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 18:24:23.897054   75577 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 18:24:23.897112   75577 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 18:24:23.897174   75577 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 18:24:23.897237   75577 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 18:24:23.897367   75577 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 18:24:23.897488   75577 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 18:24:23.897554   75577 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 18:24:23.897660   75577 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 18:24:23.899056   75577 out.go:235]   - Booting up control plane ...
	I0920 18:24:23.899173   75577 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 18:24:23.899287   75577 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 18:24:23.899371   75577 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 18:24:23.899460   75577 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 18:24:23.899639   75577 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0920 18:24:23.899719   75577 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0920 18:24:23.899823   75577 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:24:23.900047   75577 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:24:23.900124   75577 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:24:23.900322   75577 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:24:23.900392   75577 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:24:23.900556   75577 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:24:23.900634   75577 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:24:23.900826   75577 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:24:23.900884   75577 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:24:23.901044   75577 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:24:23.901052   75577 kubeadm.go:310] 
	I0920 18:24:23.901094   75577 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0920 18:24:23.901129   75577 kubeadm.go:310] 		timed out waiting for the condition
	I0920 18:24:23.901135   75577 kubeadm.go:310] 
	I0920 18:24:23.901179   75577 kubeadm.go:310] 	This error is likely caused by:
	I0920 18:24:23.901209   75577 kubeadm.go:310] 		- The kubelet is not running
	I0920 18:24:23.901314   75577 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0920 18:24:23.901335   75577 kubeadm.go:310] 
	I0920 18:24:23.901458   75577 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0920 18:24:23.901515   75577 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0920 18:24:23.901547   75577 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0920 18:24:23.901556   75577 kubeadm.go:310] 
	I0920 18:24:23.901719   75577 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0920 18:24:23.901859   75577 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0920 18:24:23.901870   75577 kubeadm.go:310] 
	I0920 18:24:23.902030   75577 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0920 18:24:23.902122   75577 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0920 18:24:23.902186   75577 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0920 18:24:23.902264   75577 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0920 18:24:23.902299   75577 kubeadm.go:310] 
	W0920 18:24:23.902388   75577 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0920 18:24:23.902435   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0920 18:24:24.370490   75577 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:24:24.385383   75577 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 18:24:24.395443   75577 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 18:24:24.395463   75577 kubeadm.go:157] found existing configuration files:
	
	I0920 18:24:24.395505   75577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 18:24:24.404947   75577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 18:24:24.405021   75577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 18:24:24.415145   75577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 18:24:24.424199   75577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 18:24:24.424279   75577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 18:24:24.435108   75577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 18:24:24.446684   75577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 18:24:24.446742   75577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 18:24:24.457199   75577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 18:24:24.467240   75577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 18:24:24.467315   75577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 18:24:24.479293   75577 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 18:24:24.704601   75577 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 18:26:21.167677   75577 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0920 18:26:21.167760   75577 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0920 18:26:21.170176   75577 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0920 18:26:21.170249   75577 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 18:26:21.170353   75577 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 18:26:21.170485   75577 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 18:26:21.170653   75577 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0920 18:26:21.170778   75577 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 18:26:21.172479   75577 out.go:235]   - Generating certificates and keys ...
	I0920 18:26:21.172559   75577 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 18:26:21.172623   75577 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 18:26:21.172734   75577 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0920 18:26:21.172820   75577 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0920 18:26:21.172901   75577 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0920 18:26:21.172948   75577 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0920 18:26:21.173054   75577 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0920 18:26:21.173145   75577 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0920 18:26:21.173238   75577 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0920 18:26:21.173325   75577 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0920 18:26:21.173381   75577 kubeadm.go:310] [certs] Using the existing "sa" key
	I0920 18:26:21.173468   75577 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 18:26:21.173517   75577 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 18:26:21.173562   75577 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 18:26:21.173657   75577 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 18:26:21.173753   75577 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 18:26:21.173931   75577 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 18:26:21.174042   75577 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 18:26:21.174120   75577 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 18:26:21.174226   75577 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 18:26:21.175785   75577 out.go:235]   - Booting up control plane ...
	I0920 18:26:21.175897   75577 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 18:26:21.176062   75577 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 18:26:21.176179   75577 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 18:26:21.176283   75577 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 18:26:21.176482   75577 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0920 18:26:21.176567   75577 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0920 18:26:21.176654   75577 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:26:21.176850   75577 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:26:21.176952   75577 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:26:21.177146   75577 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:26:21.177255   75577 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:26:21.177460   75577 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:26:21.177527   75577 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:26:21.177681   75577 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:26:21.177758   75577 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:26:21.177952   75577 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:26:21.177966   75577 kubeadm.go:310] 
	I0920 18:26:21.178001   75577 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0920 18:26:21.178035   75577 kubeadm.go:310] 		timed out waiting for the condition
	I0920 18:26:21.178041   75577 kubeadm.go:310] 
	I0920 18:26:21.178070   75577 kubeadm.go:310] 	This error is likely caused by:
	I0920 18:26:21.178099   75577 kubeadm.go:310] 		- The kubelet is not running
	I0920 18:26:21.178239   75577 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0920 18:26:21.178251   75577 kubeadm.go:310] 
	I0920 18:26:21.178348   75577 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0920 18:26:21.178378   75577 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0920 18:26:21.178415   75577 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0920 18:26:21.178421   75577 kubeadm.go:310] 
	I0920 18:26:21.178505   75577 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0920 18:26:21.178581   75577 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0920 18:26:21.178590   75577 kubeadm.go:310] 
	I0920 18:26:21.178695   75577 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0920 18:26:21.178770   75577 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0920 18:26:21.178832   75577 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0920 18:26:21.178893   75577 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0920 18:26:21.178950   75577 kubeadm.go:394] duration metric: took 7m57.80706374s to StartCluster
	I0920 18:26:21.178961   75577 kubeadm.go:310] 
	I0920 18:26:21.178989   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:26:21.179047   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:26:21.229528   75577 cri.go:89] found id: ""
	I0920 18:26:21.229554   75577 logs.go:276] 0 containers: []
	W0920 18:26:21.229564   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:26:21.229572   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:26:21.229644   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:26:21.265997   75577 cri.go:89] found id: ""
	I0920 18:26:21.266021   75577 logs.go:276] 0 containers: []
	W0920 18:26:21.266029   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:26:21.266034   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:26:21.266081   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:26:21.309358   75577 cri.go:89] found id: ""
	I0920 18:26:21.309391   75577 logs.go:276] 0 containers: []
	W0920 18:26:21.309402   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:26:21.309409   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:26:21.309469   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:26:21.349418   75577 cri.go:89] found id: ""
	I0920 18:26:21.349442   75577 logs.go:276] 0 containers: []
	W0920 18:26:21.349451   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:26:21.349457   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:26:21.349537   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:26:21.388067   75577 cri.go:89] found id: ""
	I0920 18:26:21.388092   75577 logs.go:276] 0 containers: []
	W0920 18:26:21.388105   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:26:21.388111   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:26:21.388165   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:26:21.428474   75577 cri.go:89] found id: ""
	I0920 18:26:21.428501   75577 logs.go:276] 0 containers: []
	W0920 18:26:21.428512   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:26:21.428519   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:26:21.428580   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:26:21.466188   75577 cri.go:89] found id: ""
	I0920 18:26:21.466215   75577 logs.go:276] 0 containers: []
	W0920 18:26:21.466225   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:26:21.466232   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:26:21.466291   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:26:21.504396   75577 cri.go:89] found id: ""
	I0920 18:26:21.504422   75577 logs.go:276] 0 containers: []
	W0920 18:26:21.504432   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:26:21.504443   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:26:21.504457   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:26:21.608057   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:26:21.608098   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:26:21.672536   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:26:21.672564   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:26:21.723219   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:26:21.723251   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:26:21.736436   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:26:21.736463   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:26:21.809055   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0920 18:26:21.809079   75577 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0920 18:26:21.809128   75577 out.go:270] * 
	W0920 18:26:21.809197   75577 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0920 18:26:21.809217   75577 out.go:270] * 
	W0920 18:26:21.810193   75577 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 18:26:21.814385   75577 out.go:201] 
	W0920 18:26:21.816118   75577 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0920 18:26:21.816186   75577 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0920 18:26:21.816225   75577 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0920 18:26:21.818478   75577 out.go:201] 
	
	
	==> CRI-O <==
	Sep 20 18:35:27 old-k8s-version-744025 crio[628]: time="2024-09-20 18:35:27.395990713Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857327395954805,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3f4fd30c-9b4b-4a83-8763-de6187638af9 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:35:27 old-k8s-version-744025 crio[628]: time="2024-09-20 18:35:27.396558755Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=74105360-156d-4859-9ebf-a6b85a3327f7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:35:27 old-k8s-version-744025 crio[628]: time="2024-09-20 18:35:27.396614855Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=74105360-156d-4859-9ebf-a6b85a3327f7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:35:27 old-k8s-version-744025 crio[628]: time="2024-09-20 18:35:27.396661436Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=74105360-156d-4859-9ebf-a6b85a3327f7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:35:27 old-k8s-version-744025 crio[628]: time="2024-09-20 18:35:27.447238932Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c848808f-b84b-4b8e-b2e1-0597e4fb3d96 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:35:27 old-k8s-version-744025 crio[628]: time="2024-09-20 18:35:27.447326205Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c848808f-b84b-4b8e-b2e1-0597e4fb3d96 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:35:27 old-k8s-version-744025 crio[628]: time="2024-09-20 18:35:27.448556650Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=10aa7a0b-c224-41ee-84c3-4c547ee70d55 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:35:27 old-k8s-version-744025 crio[628]: time="2024-09-20 18:35:27.449040151Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857327448991438,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=10aa7a0b-c224-41ee-84c3-4c547ee70d55 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:35:27 old-k8s-version-744025 crio[628]: time="2024-09-20 18:35:27.449555397Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e3375405-03f8-4777-99b5-865a1bbea876 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:35:27 old-k8s-version-744025 crio[628]: time="2024-09-20 18:35:27.449622795Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e3375405-03f8-4777-99b5-865a1bbea876 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:35:27 old-k8s-version-744025 crio[628]: time="2024-09-20 18:35:27.449662496Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=e3375405-03f8-4777-99b5-865a1bbea876 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:35:27 old-k8s-version-744025 crio[628]: time="2024-09-20 18:35:27.481665999Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bbdfc248-ae29-4f82-a7d9-6ad49d334e4d name=/runtime.v1.RuntimeService/Version
	Sep 20 18:35:27 old-k8s-version-744025 crio[628]: time="2024-09-20 18:35:27.481764838Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bbdfc248-ae29-4f82-a7d9-6ad49d334e4d name=/runtime.v1.RuntimeService/Version
	Sep 20 18:35:27 old-k8s-version-744025 crio[628]: time="2024-09-20 18:35:27.482909797Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a4fe2791-53a8-4c93-b9dc-60859a81cc6e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:35:27 old-k8s-version-744025 crio[628]: time="2024-09-20 18:35:27.483446010Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857327483415536,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a4fe2791-53a8-4c93-b9dc-60859a81cc6e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:35:27 old-k8s-version-744025 crio[628]: time="2024-09-20 18:35:27.484177914Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cf9ed54d-6820-4baf-8966-aafa6fea04ba name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:35:27 old-k8s-version-744025 crio[628]: time="2024-09-20 18:35:27.484235604Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cf9ed54d-6820-4baf-8966-aafa6fea04ba name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:35:27 old-k8s-version-744025 crio[628]: time="2024-09-20 18:35:27.484281467Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=cf9ed54d-6820-4baf-8966-aafa6fea04ba name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:35:27 old-k8s-version-744025 crio[628]: time="2024-09-20 18:35:27.515705901Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b8496450-d4a2-4b1e-9f3a-fada84e84efb name=/runtime.v1.RuntimeService/Version
	Sep 20 18:35:27 old-k8s-version-744025 crio[628]: time="2024-09-20 18:35:27.515792880Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b8496450-d4a2-4b1e-9f3a-fada84e84efb name=/runtime.v1.RuntimeService/Version
	Sep 20 18:35:27 old-k8s-version-744025 crio[628]: time="2024-09-20 18:35:27.517273579Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f0176da9-eba7-475c-bb63-bad110c5fed7 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:35:27 old-k8s-version-744025 crio[628]: time="2024-09-20 18:35:27.517680303Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857327517653476,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f0176da9-eba7-475c-bb63-bad110c5fed7 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:35:27 old-k8s-version-744025 crio[628]: time="2024-09-20 18:35:27.518317147Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5d8aa9f3-1663-4922-b809-18d4517cb6ff name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:35:27 old-k8s-version-744025 crio[628]: time="2024-09-20 18:35:27.518370335Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5d8aa9f3-1663-4922-b809-18d4517cb6ff name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:35:27 old-k8s-version-744025 crio[628]: time="2024-09-20 18:35:27.518408483Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=5d8aa9f3-1663-4922-b809-18d4517cb6ff name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Sep20 18:17] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050746] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037709] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Sep20 18:18] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.985932] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.595098] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.239386] systemd-fstab-generator[555]: Ignoring "noauto" option for root device
	[  +0.063752] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.070306] systemd-fstab-generator[567]: Ignoring "noauto" option for root device
	[  +0.206728] systemd-fstab-generator[581]: Ignoring "noauto" option for root device
	[  +0.121183] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.259648] systemd-fstab-generator[619]: Ignoring "noauto" option for root device
	[  +6.744745] systemd-fstab-generator[874]: Ignoring "noauto" option for root device
	[  +0.100047] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.166710] systemd-fstab-generator[998]: Ignoring "noauto" option for root device
	[ +12.315396] kauditd_printk_skb: 46 callbacks suppressed
	[Sep20 18:22] systemd-fstab-generator[4992]: Ignoring "noauto" option for root device
	[Sep20 18:24] systemd-fstab-generator[5273]: Ignoring "noauto" option for root device
	[  +0.069847] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 18:35:27 up 17 min,  0 users,  load average: 0.00, 0.02, 0.04
	Linux old-k8s-version-744025 5.10.207 #1 SMP Fri Sep 20 03:13:51 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Sep 20 18:35:27 old-k8s-version-744025 kubelet[6466]: goroutine 157 [sleep]:
	Sep 20 18:35:27 old-k8s-version-744025 kubelet[6466]: time.Sleep(0xb94ae)
	Sep 20 18:35:27 old-k8s-version-744025 kubelet[6466]:         /usr/local/go/src/runtime/time.go:188 +0xbf
	Sep 20 18:35:27 old-k8s-version-744025 kubelet[6466]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.(*rudimentaryErrorBackoff).OnError(0xc0000acba0, 0x4f04d00, 0xc0003a7730)
	Sep 20 18:35:27 old-k8s-version-744025 kubelet[6466]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:133 +0xfa
	Sep 20 18:35:27 old-k8s-version-744025 kubelet[6466]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleError(0x4f04d00, 0xc0003a7730)
	Sep 20 18:35:27 old-k8s-version-744025 kubelet[6466]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:108 +0x66
	Sep 20 18:35:27 old-k8s-version-744025 kubelet[6466]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.DefaultWatchErrorHandler(0xc000254620, 0x4f04d00, 0xc0003a76d0)
	Sep 20 18:35:27 old-k8s-version-744025 kubelet[6466]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:138 +0x185
	Sep 20 18:35:27 old-k8s-version-744025 kubelet[6466]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run.func1()
	Sep 20 18:35:27 old-k8s-version-744025 kubelet[6466]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:222 +0x70
	Sep 20 18:35:27 old-k8s-version-744025 kubelet[6466]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc0006f5ef0)
	Sep 20 18:35:27 old-k8s-version-744025 kubelet[6466]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
	Sep 20 18:35:27 old-k8s-version-744025 kubelet[6466]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0007dbef0, 0x4f0ac20, 0xc00077def0, 0x1, 0xc00009e0c0)
	Sep 20 18:35:27 old-k8s-version-744025 kubelet[6466]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xad
	Sep 20 18:35:27 old-k8s-version-744025 kubelet[6466]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc000254620, 0xc00009e0c0)
	Sep 20 18:35:27 old-k8s-version-744025 kubelet[6466]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Sep 20 18:35:27 old-k8s-version-744025 kubelet[6466]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Sep 20 18:35:27 old-k8s-version-744025 kubelet[6466]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Sep 20 18:35:27 old-k8s-version-744025 kubelet[6466]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc0006ca6f0, 0xc0006ece00)
	Sep 20 18:35:27 old-k8s-version-744025 kubelet[6466]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Sep 20 18:35:27 old-k8s-version-744025 kubelet[6466]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Sep 20 18:35:27 old-k8s-version-744025 kubelet[6466]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Sep 20 18:35:27 old-k8s-version-744025 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Sep 20 18:35:27 old-k8s-version-744025 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-744025 -n old-k8s-version-744025
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-744025 -n old-k8s-version-744025: exit status 2 (236.337823ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-744025" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.74s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (455.45s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-553719 -n default-k8s-diff-port-553719
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-09-20 18:39:15.919751306 +0000 UTC m=+6937.563832851
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-553719 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-553719 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.566µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-553719 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-553719 -n default-k8s-diff-port-553719
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-553719 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-553719 logs -n 25: (1.145609734s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p                                                     | default-k8s-diff-port-553719 | jenkins | v1.34.0 | 20 Sep 24 18:09 UTC | 20 Sep 24 18:10 UTC |
	|         | default-k8s-diff-port-553719                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-956403             | no-preload-956403            | jenkins | v1.34.0 | 20 Sep 24 18:10 UTC | 20 Sep 24 18:10 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-956403                                   | no-preload-956403            | jenkins | v1.34.0 | 20 Sep 24 18:10 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-768431            | embed-certs-768431           | jenkins | v1.34.0 | 20 Sep 24 18:10 UTC | 20 Sep 24 18:10 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-768431                                  | embed-certs-768431           | jenkins | v1.34.0 | 20 Sep 24 18:10 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-553719  | default-k8s-diff-port-553719 | jenkins | v1.34.0 | 20 Sep 24 18:11 UTC | 20 Sep 24 18:11 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-553719 | jenkins | v1.34.0 | 20 Sep 24 18:11 UTC |                     |
	|         | default-k8s-diff-port-553719                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-956403                  | no-preload-956403            | jenkins | v1.34.0 | 20 Sep 24 18:12 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-744025        | old-k8s-version-744025       | jenkins | v1.34.0 | 20 Sep 24 18:12 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p no-preload-956403                                   | no-preload-956403            | jenkins | v1.34.0 | 20 Sep 24 18:12 UTC | 20 Sep 24 18:23 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-768431                 | embed-certs-768431           | jenkins | v1.34.0 | 20 Sep 24 18:13 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-768431                                  | embed-certs-768431           | jenkins | v1.34.0 | 20 Sep 24 18:13 UTC | 20 Sep 24 18:22 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-553719       | default-k8s-diff-port-553719 | jenkins | v1.34.0 | 20 Sep 24 18:13 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-553719 | jenkins | v1.34.0 | 20 Sep 24 18:13 UTC | 20 Sep 24 18:22 UTC |
	|         | default-k8s-diff-port-553719                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-744025                              | old-k8s-version-744025       | jenkins | v1.34.0 | 20 Sep 24 18:14 UTC | 20 Sep 24 18:14 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-744025             | old-k8s-version-744025       | jenkins | v1.34.0 | 20 Sep 24 18:14 UTC | 20 Sep 24 18:14 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-744025                              | old-k8s-version-744025       | jenkins | v1.34.0 | 20 Sep 24 18:14 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-744025                              | old-k8s-version-744025       | jenkins | v1.34.0 | 20 Sep 24 18:37 UTC | 20 Sep 24 18:37 UTC |
	| start   | -p newest-cni-803958 --memory=2200 --alsologtostderr   | newest-cni-803958            | jenkins | v1.34.0 | 20 Sep 24 18:37 UTC | 20 Sep 24 18:38 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| delete  | -p no-preload-956403                                   | no-preload-956403            | jenkins | v1.34.0 | 20 Sep 24 18:38 UTC | 20 Sep 24 18:38 UTC |
	| addons  | enable metrics-server -p newest-cni-803958             | newest-cni-803958            | jenkins | v1.34.0 | 20 Sep 24 18:38 UTC | 20 Sep 24 18:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-803958                                   | newest-cni-803958            | jenkins | v1.34.0 | 20 Sep 24 18:38 UTC | 20 Sep 24 18:39 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-768431                                  | embed-certs-768431           | jenkins | v1.34.0 | 20 Sep 24 18:39 UTC | 20 Sep 24 18:39 UTC |
	| addons  | enable dashboard -p newest-cni-803958                  | newest-cni-803958            | jenkins | v1.34.0 | 20 Sep 24 18:39 UTC | 20 Sep 24 18:39 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-803958 --memory=2200 --alsologtostderr   | newest-cni-803958            | jenkins | v1.34.0 | 20 Sep 24 18:39 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 18:39:01
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 18:39:01.166256   83216 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:39:01.166535   83216 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:39:01.166545   83216 out.go:358] Setting ErrFile to fd 2...
	I0920 18:39:01.166551   83216 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:39:01.166737   83216 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-8777/.minikube/bin
	I0920 18:39:01.167304   83216 out.go:352] Setting JSON to false
	I0920 18:39:01.168242   83216 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":8484,"bootTime":1726849057,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 18:39:01.168388   83216 start.go:139] virtualization: kvm guest
	I0920 18:39:01.170936   83216 out.go:177] * [newest-cni-803958] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 18:39:01.172464   83216 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 18:39:01.172453   83216 notify.go:220] Checking for updates...
	I0920 18:39:01.174171   83216 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 18:39:01.175822   83216 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19672-8777/kubeconfig
	I0920 18:39:01.177264   83216 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-8777/.minikube
	I0920 18:39:01.178611   83216 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 18:39:01.180211   83216 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 18:39:01.181986   83216 config.go:182] Loaded profile config "newest-cni-803958": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:39:01.182630   83216 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:39:01.182719   83216 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:39:01.200051   83216 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36507
	I0920 18:39:01.200555   83216 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:39:01.201133   83216 main.go:141] libmachine: Using API Version  1
	I0920 18:39:01.201153   83216 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:39:01.201454   83216 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:39:01.201614   83216 main.go:141] libmachine: (newest-cni-803958) Calling .DriverName
	I0920 18:39:01.201842   83216 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 18:39:01.202258   83216 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:39:01.202300   83216 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:39:01.218524   83216 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41483
	I0920 18:39:01.218983   83216 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:39:01.219455   83216 main.go:141] libmachine: Using API Version  1
	I0920 18:39:01.219477   83216 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:39:01.219850   83216 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:39:01.220079   83216 main.go:141] libmachine: (newest-cni-803958) Calling .DriverName
	I0920 18:39:01.700830   83216 out.go:177] * Using the kvm2 driver based on existing profile
	I0920 18:39:01.702043   83216 start.go:297] selected driver: kvm2
	I0920 18:39:01.702060   83216 start.go:901] validating driver "kvm2" against &{Name:newest-cni-803958 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-803958 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.85 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:39:01.702234   83216 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 18:39:01.703220   83216 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 18:39:01.703313   83216 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19672-8777/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 18:39:01.719413   83216 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 18:39:01.719812   83216 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0920 18:39:01.719843   83216 cni.go:84] Creating CNI manager for ""
	I0920 18:39:01.719886   83216 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 18:39:01.719924   83216 start.go:340] cluster config:
	{Name:newest-cni-803958 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-803958 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.85 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:39:01.720034   83216 iso.go:125] acquiring lock: {Name:mkba95ef0488e46f622333e9f317f43def93040b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 18:39:01.722165   83216 out.go:177] * Starting "newest-cni-803958" primary control-plane node in "newest-cni-803958" cluster
	I0920 18:39:01.723873   83216 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 18:39:01.723931   83216 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0920 18:39:01.723941   83216 cache.go:56] Caching tarball of preloaded images
	I0920 18:39:01.724031   83216 preload.go:172] Found /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 18:39:01.724042   83216 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0920 18:39:01.724156   83216 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/newest-cni-803958/config.json ...
	I0920 18:39:01.724376   83216 start.go:360] acquireMachinesLock for newest-cni-803958: {Name:mkfeedb385cf08b5d2aa00913e85815d02a180c2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 18:39:01.724425   83216 start.go:364] duration metric: took 27.037µs to acquireMachinesLock for "newest-cni-803958"
	I0920 18:39:01.724448   83216 start.go:96] Skipping create...Using existing machine configuration
	I0920 18:39:01.724456   83216 fix.go:54] fixHost starting: 
	I0920 18:39:01.724709   83216 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:39:01.724740   83216 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:39:01.740274   83216 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46007
	I0920 18:39:01.740794   83216 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:39:01.741402   83216 main.go:141] libmachine: Using API Version  1
	I0920 18:39:01.741422   83216 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:39:01.741794   83216 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:39:01.742045   83216 main.go:141] libmachine: (newest-cni-803958) Calling .DriverName
	I0920 18:39:01.742211   83216 main.go:141] libmachine: (newest-cni-803958) Calling .GetState
	I0920 18:39:01.744267   83216 fix.go:112] recreateIfNeeded on newest-cni-803958: state=Stopped err=<nil>
	I0920 18:39:01.744296   83216 main.go:141] libmachine: (newest-cni-803958) Calling .DriverName
	W0920 18:39:01.744508   83216 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 18:39:01.746986   83216 out.go:177] * Restarting existing kvm2 VM for "newest-cni-803958" ...
	I0920 18:39:01.748500   83216 main.go:141] libmachine: (newest-cni-803958) Calling .Start
	I0920 18:39:01.748726   83216 main.go:141] libmachine: (newest-cni-803958) Ensuring networks are active...
	I0920 18:39:01.749790   83216 main.go:141] libmachine: (newest-cni-803958) Ensuring network default is active
	I0920 18:39:01.750197   83216 main.go:141] libmachine: (newest-cni-803958) Ensuring network mk-newest-cni-803958 is active
	I0920 18:39:01.750634   83216 main.go:141] libmachine: (newest-cni-803958) Getting domain xml...
	I0920 18:39:01.751381   83216 main.go:141] libmachine: (newest-cni-803958) Creating domain...
	I0920 18:39:03.011411   83216 main.go:141] libmachine: (newest-cni-803958) Waiting to get IP...
	I0920 18:39:03.012452   83216 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:39:03.012889   83216 main.go:141] libmachine: (newest-cni-803958) DBG | unable to find current IP address of domain newest-cni-803958 in network mk-newest-cni-803958
	I0920 18:39:03.012933   83216 main.go:141] libmachine: (newest-cni-803958) DBG | I0920 18:39:03.012854   83265 retry.go:31] will retry after 310.296965ms: waiting for machine to come up
	I0920 18:39:03.324481   83216 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:39:03.324921   83216 main.go:141] libmachine: (newest-cni-803958) DBG | unable to find current IP address of domain newest-cni-803958 in network mk-newest-cni-803958
	I0920 18:39:03.324945   83216 main.go:141] libmachine: (newest-cni-803958) DBG | I0920 18:39:03.324877   83265 retry.go:31] will retry after 321.275042ms: waiting for machine to come up
	I0920 18:39:03.647835   83216 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:39:03.648456   83216 main.go:141] libmachine: (newest-cni-803958) DBG | unable to find current IP address of domain newest-cni-803958 in network mk-newest-cni-803958
	I0920 18:39:03.648487   83216 main.go:141] libmachine: (newest-cni-803958) DBG | I0920 18:39:03.648389   83265 retry.go:31] will retry after 401.205791ms: waiting for machine to come up
	I0920 18:39:04.051138   83216 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:39:04.051930   83216 main.go:141] libmachine: (newest-cni-803958) DBG | unable to find current IP address of domain newest-cni-803958 in network mk-newest-cni-803958
	I0920 18:39:04.051964   83216 main.go:141] libmachine: (newest-cni-803958) DBG | I0920 18:39:04.051844   83265 retry.go:31] will retry after 474.824003ms: waiting for machine to come up
	I0920 18:39:04.528504   83216 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:39:04.529052   83216 main.go:141] libmachine: (newest-cni-803958) DBG | unable to find current IP address of domain newest-cni-803958 in network mk-newest-cni-803958
	I0920 18:39:04.529078   83216 main.go:141] libmachine: (newest-cni-803958) DBG | I0920 18:39:04.529000   83265 retry.go:31] will retry after 467.945818ms: waiting for machine to come up
	I0920 18:39:04.998693   83216 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:39:04.999092   83216 main.go:141] libmachine: (newest-cni-803958) DBG | unable to find current IP address of domain newest-cni-803958 in network mk-newest-cni-803958
	I0920 18:39:04.999150   83216 main.go:141] libmachine: (newest-cni-803958) DBG | I0920 18:39:04.999071   83265 retry.go:31] will retry after 923.76212ms: waiting for machine to come up
	I0920 18:39:05.924079   83216 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:39:05.924628   83216 main.go:141] libmachine: (newest-cni-803958) DBG | unable to find current IP address of domain newest-cni-803958 in network mk-newest-cni-803958
	I0920 18:39:05.924649   83216 main.go:141] libmachine: (newest-cni-803958) DBG | I0920 18:39:05.924589   83265 retry.go:31] will retry after 774.550652ms: waiting for machine to come up
	I0920 18:39:06.700845   83216 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:39:06.701360   83216 main.go:141] libmachine: (newest-cni-803958) DBG | unable to find current IP address of domain newest-cni-803958 in network mk-newest-cni-803958
	I0920 18:39:06.701384   83216 main.go:141] libmachine: (newest-cni-803958) DBG | I0920 18:39:06.701301   83265 retry.go:31] will retry after 1.273967417s: waiting for machine to come up
	I0920 18:39:07.977357   83216 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:39:07.977829   83216 main.go:141] libmachine: (newest-cni-803958) DBG | unable to find current IP address of domain newest-cni-803958 in network mk-newest-cni-803958
	I0920 18:39:07.977884   83216 main.go:141] libmachine: (newest-cni-803958) DBG | I0920 18:39:07.977762   83265 retry.go:31] will retry after 1.159815592s: waiting for machine to come up
	I0920 18:39:09.139176   83216 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:39:09.139763   83216 main.go:141] libmachine: (newest-cni-803958) DBG | unable to find current IP address of domain newest-cni-803958 in network mk-newest-cni-803958
	I0920 18:39:09.139783   83216 main.go:141] libmachine: (newest-cni-803958) DBG | I0920 18:39:09.139710   83265 retry.go:31] will retry after 1.768746271s: waiting for machine to come up
	I0920 18:39:10.909778   83216 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:39:10.910331   83216 main.go:141] libmachine: (newest-cni-803958) DBG | unable to find current IP address of domain newest-cni-803958 in network mk-newest-cni-803958
	I0920 18:39:10.910363   83216 main.go:141] libmachine: (newest-cni-803958) DBG | I0920 18:39:10.910280   83265 retry.go:31] will retry after 2.564472315s: waiting for machine to come up
	I0920 18:39:13.476068   83216 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:39:13.476564   83216 main.go:141] libmachine: (newest-cni-803958) DBG | unable to find current IP address of domain newest-cni-803958 in network mk-newest-cni-803958
	I0920 18:39:13.476593   83216 main.go:141] libmachine: (newest-cni-803958) DBG | I0920 18:39:13.476521   83265 retry.go:31] will retry after 2.3087841s: waiting for machine to come up
	I0920 18:39:15.786902   83216 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:39:15.787445   83216 main.go:141] libmachine: (newest-cni-803958) DBG | unable to find current IP address of domain newest-cni-803958 in network mk-newest-cni-803958
	I0920 18:39:15.787476   83216 main.go:141] libmachine: (newest-cni-803958) DBG | I0920 18:39:15.787397   83265 retry.go:31] will retry after 3.025016647s: waiting for machine to come up
	
	
	==> CRI-O <==
	Sep 20 18:39:16 default-k8s-diff-port-553719 crio[709]: time="2024-09-20 18:39:16.538910603Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857556538883031,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f1e3183a-9eec-400d-81eb-4c1d6b46e1b5 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:39:16 default-k8s-diff-port-553719 crio[709]: time="2024-09-20 18:39:16.539610054Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3b879d33-1e67-46c6-8398-e7471c69b482 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:39:16 default-k8s-diff-port-553719 crio[709]: time="2024-09-20 18:39:16.539694605Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3b879d33-1e67-46c6-8398-e7471c69b482 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:39:16 default-k8s-diff-port-553719 crio[709]: time="2024-09-20 18:39:16.539927849Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:001bdc98537f0917f021c14a5d0733bd31cf19be1b4862502458a7a90e4300da,PodSandboxId:f599733da12b5d939c2c63ceb1397a444f3aa903f568890ba5592522225fb5ce,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726856317980466357,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fad2d07-f99e-45ac-9657-bce6d73d7fce,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b63423dedcc6b7a32bfef5f6d1056defef2c63f67e7da30aae922692de71bdf,PodSandboxId:ed8e8a17964af0e9308b4e677d4be559514ab91dea1b7a13613a5ccec8532128,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1726856306828572138,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 03376c58-8368-41cb-8d71-ec5f2ff84ab5,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:606f7c8a9a095f499ae422de76f76105607db72e57b9b0a6db3e3c5a81656c5c,PodSandboxId:03bb196d26977bda9b4de0f31d33c8af500a378ed1b27a610d9c2ce2ca86f7d4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726856303441887270,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dmdfb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e4ffdb3-37e0-4498-bdf5-d6e7dbcad020,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:702f7f440eb6016c2c758a6be4e1072274ab6dcd169d7a081e5464869a7c299c,PodSandboxId:8ea9454041ee2bbfdc6b68119211b17ce2e62e4bf161dccd83d58506e3f24b7e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726856287158667233,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p9crq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83e0f53d-6960-42c4-904d-ea85ba9160f4,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c42201b6e3d55b6fd8beaf04d416658b228bf627e519ddb022dad6d17219cc1c,PodSandboxId:f599733da12b5d939c2c63ceb1397a444f3aa903f568890ba5592522225fb5ce,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726856287149197810,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fad2d07-f99e-45ac-9657-bce6d73d7fce,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65da0bae1c849a395cc241962bb6cbdf9f3edf11cdc95f4495ac2e427f9602d4,PodSandboxId:3783189e1d427c6344484a14cff656ae057adde4f6c123f5e345f36ce2911459,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726856283616687970,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-553719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ed85fac33b111d1c67965836593508e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ba313deffc6185e59dac50051e933f6b2ea70d630cf36b2b8c2c07042f793ba,PodSandboxId:e6416e3470b426953107537063c2a9ee93b73ef1aacf09515dcd20ea9dda51b0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726856283624475103,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-553719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 739557da412bdc7964815ab846378cab,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ca303b795795bd920f261c88630f30fd51a91b9694a9a6e67af8164bab8242b,PodSandboxId:63f1d4cd5ff76e0d41b21c738ea89441a068893bacd49ef216250266d36bae56,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726856283617569700,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-553719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48787ec85035644941355902d7fc180b,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ea0cfbd9902ae8e3797c77592bfe05a5f8162f75a391f5c2dc992eb5154c4ad,PodSandboxId:becbb7375ce81ceda866b0f82a38d38a458ad272af5fa7787c5079ea228a565a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726856283606463117,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-553719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58340272c9cd93d514ace52e4571f9c1,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3b879d33-1e67-46c6-8398-e7471c69b482 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:39:16 default-k8s-diff-port-553719 crio[709]: time="2024-09-20 18:39:16.576804471Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2c239dfb-efda-4294-b35b-c084f1545868 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:39:16 default-k8s-diff-port-553719 crio[709]: time="2024-09-20 18:39:16.576887762Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2c239dfb-efda-4294-b35b-c084f1545868 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:39:16 default-k8s-diff-port-553719 crio[709]: time="2024-09-20 18:39:16.578141989Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=013aeecd-6523-465d-b4a9-05b3d6504700 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:39:16 default-k8s-diff-port-553719 crio[709]: time="2024-09-20 18:39:16.578608162Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857556578581366,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=013aeecd-6523-465d-b4a9-05b3d6504700 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:39:16 default-k8s-diff-port-553719 crio[709]: time="2024-09-20 18:39:16.579087603Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ca1446c9-35b1-4808-9c4e-8b4de21670c6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:39:16 default-k8s-diff-port-553719 crio[709]: time="2024-09-20 18:39:16.579145222Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ca1446c9-35b1-4808-9c4e-8b4de21670c6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:39:16 default-k8s-diff-port-553719 crio[709]: time="2024-09-20 18:39:16.579354619Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:001bdc98537f0917f021c14a5d0733bd31cf19be1b4862502458a7a90e4300da,PodSandboxId:f599733da12b5d939c2c63ceb1397a444f3aa903f568890ba5592522225fb5ce,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726856317980466357,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fad2d07-f99e-45ac-9657-bce6d73d7fce,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b63423dedcc6b7a32bfef5f6d1056defef2c63f67e7da30aae922692de71bdf,PodSandboxId:ed8e8a17964af0e9308b4e677d4be559514ab91dea1b7a13613a5ccec8532128,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1726856306828572138,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 03376c58-8368-41cb-8d71-ec5f2ff84ab5,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:606f7c8a9a095f499ae422de76f76105607db72e57b9b0a6db3e3c5a81656c5c,PodSandboxId:03bb196d26977bda9b4de0f31d33c8af500a378ed1b27a610d9c2ce2ca86f7d4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726856303441887270,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dmdfb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e4ffdb3-37e0-4498-bdf5-d6e7dbcad020,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:702f7f440eb6016c2c758a6be4e1072274ab6dcd169d7a081e5464869a7c299c,PodSandboxId:8ea9454041ee2bbfdc6b68119211b17ce2e62e4bf161dccd83d58506e3f24b7e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726856287158667233,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p9crq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83e0f53d-6
960-42c4-904d-ea85ba9160f4,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c42201b6e3d55b6fd8beaf04d416658b228bf627e519ddb022dad6d17219cc1c,PodSandboxId:f599733da12b5d939c2c63ceb1397a444f3aa903f568890ba5592522225fb5ce,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726856287149197810,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fad2d07-f99e-45ac-9657
-bce6d73d7fce,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65da0bae1c849a395cc241962bb6cbdf9f3edf11cdc95f4495ac2e427f9602d4,PodSandboxId:3783189e1d427c6344484a14cff656ae057adde4f6c123f5e345f36ce2911459,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726856283616687970,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-553719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ed85fac33b111d1c67965836593508e,},Annotations:map[
string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ba313deffc6185e59dac50051e933f6b2ea70d630cf36b2b8c2c07042f793ba,PodSandboxId:e6416e3470b426953107537063c2a9ee93b73ef1aacf09515dcd20ea9dda51b0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726856283624475103,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-553719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 739557da412bdc7964815ab846378cab,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ca303b795795bd920f261c88630f30fd51a91b9694a9a6e67af8164bab8242b,PodSandboxId:63f1d4cd5ff76e0d41b21c738ea89441a068893bacd49ef216250266d36bae56,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726856283617569700,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-553719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48787ec85035644941355902d7fc
180b,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ea0cfbd9902ae8e3797c77592bfe05a5f8162f75a391f5c2dc992eb5154c4ad,PodSandboxId:becbb7375ce81ceda866b0f82a38d38a458ad272af5fa7787c5079ea228a565a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726856283606463117,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-553719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58340272c9cd93d514ace52e4571f9
c1,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ca1446c9-35b1-4808-9c4e-8b4de21670c6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:39:16 default-k8s-diff-port-553719 crio[709]: time="2024-09-20 18:39:16.615575581Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=57930af2-201e-4cff-9113-ecf60f12b887 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:39:16 default-k8s-diff-port-553719 crio[709]: time="2024-09-20 18:39:16.615657599Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=57930af2-201e-4cff-9113-ecf60f12b887 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:39:16 default-k8s-diff-port-553719 crio[709]: time="2024-09-20 18:39:16.617046832Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b5d7415b-0202-4168-b746-d99ac1755cb5 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:39:16 default-k8s-diff-port-553719 crio[709]: time="2024-09-20 18:39:16.617546771Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857556617520843,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b5d7415b-0202-4168-b746-d99ac1755cb5 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:39:16 default-k8s-diff-port-553719 crio[709]: time="2024-09-20 18:39:16.618157202Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c31c5b17-40b3-4a81-a63f-c88244549388 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:39:16 default-k8s-diff-port-553719 crio[709]: time="2024-09-20 18:39:16.618213525Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c31c5b17-40b3-4a81-a63f-c88244549388 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:39:16 default-k8s-diff-port-553719 crio[709]: time="2024-09-20 18:39:16.618477062Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:001bdc98537f0917f021c14a5d0733bd31cf19be1b4862502458a7a90e4300da,PodSandboxId:f599733da12b5d939c2c63ceb1397a444f3aa903f568890ba5592522225fb5ce,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726856317980466357,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fad2d07-f99e-45ac-9657-bce6d73d7fce,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b63423dedcc6b7a32bfef5f6d1056defef2c63f67e7da30aae922692de71bdf,PodSandboxId:ed8e8a17964af0e9308b4e677d4be559514ab91dea1b7a13613a5ccec8532128,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1726856306828572138,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 03376c58-8368-41cb-8d71-ec5f2ff84ab5,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:606f7c8a9a095f499ae422de76f76105607db72e57b9b0a6db3e3c5a81656c5c,PodSandboxId:03bb196d26977bda9b4de0f31d33c8af500a378ed1b27a610d9c2ce2ca86f7d4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726856303441887270,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dmdfb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e4ffdb3-37e0-4498-bdf5-d6e7dbcad020,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:702f7f440eb6016c2c758a6be4e1072274ab6dcd169d7a081e5464869a7c299c,PodSandboxId:8ea9454041ee2bbfdc6b68119211b17ce2e62e4bf161dccd83d58506e3f24b7e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726856287158667233,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p9crq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83e0f53d-6
960-42c4-904d-ea85ba9160f4,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c42201b6e3d55b6fd8beaf04d416658b228bf627e519ddb022dad6d17219cc1c,PodSandboxId:f599733da12b5d939c2c63ceb1397a444f3aa903f568890ba5592522225fb5ce,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726856287149197810,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fad2d07-f99e-45ac-9657
-bce6d73d7fce,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65da0bae1c849a395cc241962bb6cbdf9f3edf11cdc95f4495ac2e427f9602d4,PodSandboxId:3783189e1d427c6344484a14cff656ae057adde4f6c123f5e345f36ce2911459,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726856283616687970,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-553719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ed85fac33b111d1c67965836593508e,},Annotations:map[
string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ba313deffc6185e59dac50051e933f6b2ea70d630cf36b2b8c2c07042f793ba,PodSandboxId:e6416e3470b426953107537063c2a9ee93b73ef1aacf09515dcd20ea9dda51b0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726856283624475103,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-553719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 739557da412bdc7964815ab846378cab,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ca303b795795bd920f261c88630f30fd51a91b9694a9a6e67af8164bab8242b,PodSandboxId:63f1d4cd5ff76e0d41b21c738ea89441a068893bacd49ef216250266d36bae56,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726856283617569700,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-553719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48787ec85035644941355902d7fc
180b,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ea0cfbd9902ae8e3797c77592bfe05a5f8162f75a391f5c2dc992eb5154c4ad,PodSandboxId:becbb7375ce81ceda866b0f82a38d38a458ad272af5fa7787c5079ea228a565a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726856283606463117,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-553719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58340272c9cd93d514ace52e4571f9
c1,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c31c5b17-40b3-4a81-a63f-c88244549388 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:39:16 default-k8s-diff-port-553719 crio[709]: time="2024-09-20 18:39:16.652708897Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=40e32d30-8df8-4d84-aa42-a01396367e6e name=/runtime.v1.RuntimeService/Version
	Sep 20 18:39:16 default-k8s-diff-port-553719 crio[709]: time="2024-09-20 18:39:16.652793386Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=40e32d30-8df8-4d84-aa42-a01396367e6e name=/runtime.v1.RuntimeService/Version
	Sep 20 18:39:16 default-k8s-diff-port-553719 crio[709]: time="2024-09-20 18:39:16.654188867Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e44305a8-047f-42b3-9e9a-33ccd4037c06 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:39:16 default-k8s-diff-port-553719 crio[709]: time="2024-09-20 18:39:16.654609162Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857556654585583,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e44305a8-047f-42b3-9e9a-33ccd4037c06 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:39:16 default-k8s-diff-port-553719 crio[709]: time="2024-09-20 18:39:16.655347819Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cac54499-3101-4605-8962-6f02629535a4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:39:16 default-k8s-diff-port-553719 crio[709]: time="2024-09-20 18:39:16.655415855Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cac54499-3101-4605-8962-6f02629535a4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:39:16 default-k8s-diff-port-553719 crio[709]: time="2024-09-20 18:39:16.655645705Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:001bdc98537f0917f021c14a5d0733bd31cf19be1b4862502458a7a90e4300da,PodSandboxId:f599733da12b5d939c2c63ceb1397a444f3aa903f568890ba5592522225fb5ce,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726856317980466357,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fad2d07-f99e-45ac-9657-bce6d73d7fce,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b63423dedcc6b7a32bfef5f6d1056defef2c63f67e7da30aae922692de71bdf,PodSandboxId:ed8e8a17964af0e9308b4e677d4be559514ab91dea1b7a13613a5ccec8532128,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1726856306828572138,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 03376c58-8368-41cb-8d71-ec5f2ff84ab5,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:606f7c8a9a095f499ae422de76f76105607db72e57b9b0a6db3e3c5a81656c5c,PodSandboxId:03bb196d26977bda9b4de0f31d33c8af500a378ed1b27a610d9c2ce2ca86f7d4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726856303441887270,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dmdfb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e4ffdb3-37e0-4498-bdf5-d6e7dbcad020,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:702f7f440eb6016c2c758a6be4e1072274ab6dcd169d7a081e5464869a7c299c,PodSandboxId:8ea9454041ee2bbfdc6b68119211b17ce2e62e4bf161dccd83d58506e3f24b7e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726856287158667233,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p9crq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83e0f53d-6
960-42c4-904d-ea85ba9160f4,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c42201b6e3d55b6fd8beaf04d416658b228bf627e519ddb022dad6d17219cc1c,PodSandboxId:f599733da12b5d939c2c63ceb1397a444f3aa903f568890ba5592522225fb5ce,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726856287149197810,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fad2d07-f99e-45ac-9657
-bce6d73d7fce,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65da0bae1c849a395cc241962bb6cbdf9f3edf11cdc95f4495ac2e427f9602d4,PodSandboxId:3783189e1d427c6344484a14cff656ae057adde4f6c123f5e345f36ce2911459,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726856283616687970,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-553719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ed85fac33b111d1c67965836593508e,},Annotations:map[
string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ba313deffc6185e59dac50051e933f6b2ea70d630cf36b2b8c2c07042f793ba,PodSandboxId:e6416e3470b426953107537063c2a9ee93b73ef1aacf09515dcd20ea9dda51b0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726856283624475103,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-553719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 739557da412bdc7964815ab846378cab,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ca303b795795bd920f261c88630f30fd51a91b9694a9a6e67af8164bab8242b,PodSandboxId:63f1d4cd5ff76e0d41b21c738ea89441a068893bacd49ef216250266d36bae56,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726856283617569700,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-553719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48787ec85035644941355902d7fc
180b,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ea0cfbd9902ae8e3797c77592bfe05a5f8162f75a391f5c2dc992eb5154c4ad,PodSandboxId:becbb7375ce81ceda866b0f82a38d38a458ad272af5fa7787c5079ea228a565a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726856283606463117,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-553719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58340272c9cd93d514ace52e4571f9
c1,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cac54499-3101-4605-8962-6f02629535a4 name=/runtime.v1.RuntimeService/ListContainers
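The crio debug entries above are the runtime answering the standard CRI RPCs (Version, ImageFsInfo, ListContainers) that get polled while this log dump is collected. As a rough sketch, the same endpoints can be queried by hand from inside the VM with crictl, assuming SSH access and that crictl in the minikube image is already pointed at the CRI-O socket:

  minikube ssh -p default-k8s-diff-port-553719 -- sudo crictl version
  minikube ssh -p default-k8s-diff-port-553719 -- sudo crictl imagefsinfo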
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	001bdc98537f0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      20 minutes ago      Running             storage-provisioner       2                   f599733da12b5       storage-provisioner
	8b63423dedcc6       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   20 minutes ago      Running             busybox                   1                   ed8e8a17964af       busybox
	606f7c8a9a095       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      20 minutes ago      Running             coredns                   1                   03bb196d26977       coredns-7c65d6cfc9-dmdfb
	702f7f440eb60       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      21 minutes ago      Running             kube-proxy                1                   8ea9454041ee2       kube-proxy-p9crq
	c42201b6e3d55       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      21 minutes ago      Exited              storage-provisioner       1                   f599733da12b5       storage-provisioner
	6ba313deffc61       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      21 minutes ago      Running             kube-scheduler            1                   e6416e3470b42       kube-scheduler-default-k8s-diff-port-553719
	4ca303b795795       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      21 minutes ago      Running             kube-controller-manager   1                   63f1d4cd5ff76       kube-controller-manager-default-k8s-diff-port-553719
	65da0bae1c849       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      21 minutes ago      Running             etcd                      1                   3783189e1d427       etcd-default-k8s-diff-port-553719
	0ea0cfbd9902a       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      21 minutes ago      Running             kube-apiserver            1                   becbb7375ce81       kube-apiserver-default-k8s-diff-port-553719
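The table condenses the same container list returned by the ListContainers responses above: the control-plane components and the workload pods are all on attempt 1 after the node restart, and only the first storage-provisioner attempt has exited. Assuming SSH access to the VM (profile name taken from the hostname in these logs), an equivalent listing comes from:

  minikube ssh -p default-k8s-diff-port-553719 -- sudo crictl ps -a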
	
	
	==> coredns [606f7c8a9a095f499ae422de76f76105607db72e57b9b0a6db3e3c5a81656c5c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:57797 - 33834 "HINFO IN 4768601586295699900.8450345759229431803. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.01441496s
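The single NXDOMAIN line appears to be CoreDNS's usual start-up HINFO self-query against the upstream resolver rather than an error. The full log for this pod can be pulled with kubectl, with the pod name taken from the header above and the kubectl context assumed to match the minikube profile:

  kubectl --context default-k8s-diff-port-553719 -n kube-system logs coredns-7c65d6cfc9-dmdfb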
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-553719
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-553719
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0626f22cf0d915d75e291a5bce701f94395056e1
	                    minikube.k8s.io/name=default-k8s-diff-port-553719
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_20T18_10_14_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 18:10:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-553719
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 18:39:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 18:39:02 +0000   Fri, 20 Sep 2024 18:10:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 18:39:02 +0000   Fri, 20 Sep 2024 18:10:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 18:39:02 +0000   Fri, 20 Sep 2024 18:10:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 18:39:02 +0000   Fri, 20 Sep 2024 18:18:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.190
	  Hostname:    default-k8s-diff-port-553719
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 409942eb36b647cf8b7f3a958cd0853b
	  System UUID:                409942eb-36b6-47cf-8b7f-3a958cd0853b
	  Boot ID:                    2ad31e65-3c83-4e1f-8488-097cec36a556
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-7c65d6cfc9-dmdfb                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 etcd-default-k8s-diff-port-553719                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kube-apiserver-default-k8s-diff-port-553719             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-553719    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-p9crq                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-default-k8s-diff-port-553719             100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 metrics-server-6867b74b74-vtl79                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28m                kube-proxy       
	  Normal  Starting                 21m                kube-proxy       
	  Normal  NodeHasSufficientPID     29m                kubelet          Node default-k8s-diff-port-553719 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  29m                kubelet          Node default-k8s-diff-port-553719 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m                kubelet          Node default-k8s-diff-port-553719 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  NodeReady                29m                kubelet          Node default-k8s-diff-port-553719 status is now: NodeReady
	  Normal  RegisteredNode           28m                node-controller  Node default-k8s-diff-port-553719 event: Registered Node default-k8s-diff-port-553719 in Controller
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-553719 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-553719 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node default-k8s-diff-port-553719 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           21m                node-controller  Node default-k8s-diff-port-553719 event: Registered Node default-k8s-diff-port-553719 in Controller
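This block is standard kubectl describe node output for the single control-plane node: 2 CPUs, roughly 2 GiB of memory, 9 non-terminated pods, and 850m of CPU already requested, so the VM has little headroom. It can be regenerated against this profile (context name assumed to match the profile) with:

  kubectl --context default-k8s-diff-port-553719 describe node default-k8s-diff-port-553719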
	
	
	==> dmesg <==
	[Sep20 18:17] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.048616] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038045] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.227511] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.104786] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.698927] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.243018] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	[  +0.067872] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.067359] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.239803] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[  +0.150943] systemd-fstab-generator[669]: Ignoring "noauto" option for root device
	[  +0.305895] systemd-fstab-generator[699]: Ignoring "noauto" option for root device
	[Sep20 18:18] systemd-fstab-generator[792]: Ignoring "noauto" option for root device
	[  +0.064154] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.684288] systemd-fstab-generator[911]: Ignoring "noauto" option for root device
	[  +4.753737] kauditd_printk_skb: 97 callbacks suppressed
	[  +2.199444] systemd-fstab-generator[1527]: Ignoring "noauto" option for root device
	[  +5.516050] kauditd_printk_skb: 64 callbacks suppressed
	[  +7.768687] kauditd_printk_skb: 11 callbacks suppressed
	[ +15.433149] kauditd_printk_skb: 28 callbacks suppressed
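The kernel messages are from the VM boot at 18:17 and show only the usual minikube/Buildroot noise (nomodeset, NFSD recovery directory, systemd-fstab-generator), nothing obviously tied to the failing tests. Assuming SSH access, the same buffer can be read with:

  minikube ssh -p default-k8s-diff-port-553719 -- dmesg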
	
	
	==> etcd [65da0bae1c849a395cc241962bb6cbdf9f3edf11cdc95f4495ac2e427f9602d4] <==
	{"level":"info","ts":"2024-09-20T18:18:38.303661Z","caller":"traceutil/trace.go:171","msg":"trace[782945959] linearizableReadLoop","detail":"{readStateIndex:680; appliedIndex:679; }","duration":"202.950028ms","start":"2024-09-20T18:18:38.100701Z","end":"2024-09-20T18:18:38.303651Z","steps":["trace[782945959] 'read index received'  (duration: 76.968908ms)","trace[782945959] 'applied index is now lower than readState.Index'  (duration: 125.97844ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-20T18:18:38.303749Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"203.040852ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1128"}
	{"level":"info","ts":"2024-09-20T18:18:38.303773Z","caller":"traceutil/trace.go:171","msg":"trace[695849660] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:632; }","duration":"203.067353ms","start":"2024-09-20T18:18:38.100697Z","end":"2024-09-20T18:18:38.303764Z","steps":["trace[695849660] 'agreement among raft nodes before linearized reading'  (duration: 202.982673ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T18:18:38.303933Z","caller":"traceutil/trace.go:171","msg":"trace[1594204947] transaction","detail":"{read_only:false; response_revision:632; number_of_response:1; }","duration":"206.991814ms","start":"2024-09-20T18:18:38.096926Z","end":"2024-09-20T18:18:38.303918Z","steps":["trace[1594204947] 'process raft request'  (duration: 80.704901ms)","trace[1594204947] 'compare'  (duration: 125.66549ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-20T18:18:44.135475Z","caller":"traceutil/trace.go:171","msg":"trace[319560462] linearizableReadLoop","detail":"{readStateIndex:684; appliedIndex:683; }","duration":"352.987998ms","start":"2024-09-20T18:18:43.782469Z","end":"2024-09-20T18:18:44.135457Z","steps":["trace[319560462] 'read index received'  (duration: 10.508544ms)","trace[319560462] 'applied index is now lower than readState.Index'  (duration: 342.478594ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-20T18:18:44.135695Z","caller":"traceutil/trace.go:171","msg":"trace[1768896368] transaction","detail":"{read_only:false; response_revision:635; number_of_response:1; }","duration":"366.400897ms","start":"2024-09-20T18:18:43.769283Z","end":"2024-09-20T18:18:44.135684Z","steps":["trace[1768896368] 'process raft request'  (duration: 364.968687ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T18:18:44.135803Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-20T18:18:43.769267Z","time spent":"366.46938ms","remote":"127.0.0.1:60346","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4381,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/metrics-server-6867b74b74-vtl79\" mod_revision:622 > success:<request_put:<key:\"/registry/pods/kube-system/metrics-server-6867b74b74-vtl79\" value_size:4315 >> failure:<request_range:<key:\"/registry/pods/kube-system/metrics-server-6867b74b74-vtl79\" > >"}
	{"level":"warn","ts":"2024-09-20T18:18:44.136003Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"353.506914ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T18:18:44.136118Z","caller":"traceutil/trace.go:171","msg":"trace[693549530] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:635; }","duration":"353.578002ms","start":"2024-09-20T18:18:43.782463Z","end":"2024-09-20T18:18:44.136041Z","steps":["trace[693549530] 'agreement among raft nodes before linearized reading'  (duration: 353.461415ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T18:18:44.136507Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"340.211474ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/metrics-server-6867b74b74-vtl79.17f706a6dd0688ed\" ","response":"range_response_count:1 size:804"}
	{"level":"info","ts":"2024-09-20T18:18:44.136599Z","caller":"traceutil/trace.go:171","msg":"trace[1776183942] range","detail":"{range_begin:/registry/events/kube-system/metrics-server-6867b74b74-vtl79.17f706a6dd0688ed; range_end:; response_count:1; response_revision:635; }","duration":"340.28596ms","start":"2024-09-20T18:18:43.796281Z","end":"2024-09-20T18:18:44.136567Z","steps":["trace[1776183942] 'agreement among raft nodes before linearized reading'  (duration: 340.143831ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T18:18:44.137422Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-20T18:18:43.796245Z","time spent":"341.16179ms","remote":"127.0.0.1:60234","response type":"/etcdserverpb.KV/Range","request count":0,"request size":79,"response count":1,"response size":826,"request content":"key:\"/registry/events/kube-system/metrics-server-6867b74b74-vtl79.17f706a6dd0688ed\" "}
	{"level":"warn","ts":"2024-09-20T18:18:44.136714Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"138.861532ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-6867b74b74-vtl79\" ","response":"range_response_count:1 size:4396"}
	{"level":"info","ts":"2024-09-20T18:18:44.138517Z","caller":"traceutil/trace.go:171","msg":"trace[1346530390] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-6867b74b74-vtl79; range_end:; response_count:1; response_revision:635; }","duration":"140.662896ms","start":"2024-09-20T18:18:43.997841Z","end":"2024-09-20T18:18:44.138503Z","steps":["trace[1346530390] 'agreement among raft nodes before linearized reading'  (duration: 138.836442ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T18:28:04.997193Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":859}
	{"level":"info","ts":"2024-09-20T18:28:05.008941Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":859,"took":"10.986725ms","hash":59549771,"current-db-size-bytes":2748416,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":2748416,"current-db-size-in-use":"2.7 MB"}
	{"level":"info","ts":"2024-09-20T18:28:05.009102Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":59549771,"revision":859,"compact-revision":-1}
	{"level":"info","ts":"2024-09-20T18:33:05.004336Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1102}
	{"level":"info","ts":"2024-09-20T18:33:05.009338Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1102,"took":"4.631977ms","hash":387418653,"current-db-size-bytes":2748416,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":1638400,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-09-20T18:33:05.009473Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":387418653,"revision":1102,"compact-revision":859}
	{"level":"info","ts":"2024-09-20T18:38:05.022806Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1345}
	{"level":"info","ts":"2024-09-20T18:38:05.029241Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1345,"took":"5.846161ms","hash":2227569408,"current-db-size-bytes":2748416,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":1638400,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-09-20T18:38:05.029338Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2227569408,"revision":1345,"compact-revision":1102}
	{"level":"warn","ts":"2024-09-20T18:38:33.245539Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"260.163927ms","expected-duration":"100ms","prefix":"","request":"header:<ID:3926736525345642504 > lease_revoke:<id:367e9210a7c757a9>","response":"size:27"}
	{"level":"info","ts":"2024-09-20T18:38:34.068747Z","caller":"traceutil/trace.go:171","msg":"trace[1501374784] transaction","detail":"{read_only:false; response_revision:1613; number_of_response:1; }","duration":"246.060836ms","start":"2024-09-20T18:38:33.822646Z","end":"2024-09-20T18:38:34.068707Z","steps":["trace[1501374784] 'process raft request'  (duration: 245.927614ms)"],"step_count":1}
	
	
	==> kernel <==
	 18:39:16 up 21 min,  0 users,  load average: 0.19, 0.16, 0.10
	Linux default-k8s-diff-port-553719 5.10.207 #1 SMP Fri Sep 20 03:13:51 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [0ea0cfbd9902ae8e3797c77592bfe05a5f8162f75a391f5c2dc992eb5154c4ad] <==
	I0920 18:36:07.437437       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0920 18:36:07.437568       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0920 18:38:06.433659       1 handler_proxy.go:99] no RequestInfo found in the context
	E0920 18:38:06.435279       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0920 18:38:07.438170       1 handler_proxy.go:99] no RequestInfo found in the context
	E0920 18:38:07.438242       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0920 18:38:07.438313       1 handler_proxy.go:99] no RequestInfo found in the context
	E0920 18:38:07.438402       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0920 18:38:07.439407       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0920 18:38:07.439503       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0920 18:39:07.439809       1 handler_proxy.go:99] no RequestInfo found in the context
	E0920 18:39:07.440112       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0920 18:39:07.439857       1 handler_proxy.go:99] no RequestInfo found in the context
	E0920 18:39:07.440220       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0920 18:39:07.441358       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0920 18:39:07.441434       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
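Every apiserver entry in this window is the aggregation layer failing to reach the v1beta1.metrics.k8s.io APIService (503 from the metrics-server backend), matching the repeated stale GroupVersion discovery errors in the controller-manager log below. A quick way to inspect the APIService and its backing pod (context name assumed to match the profile, pod name taken from the node description above):

  kubectl --context default-k8s-diff-port-553719 get apiservice v1beta1.metrics.k8s.io
  kubectl --context default-k8s-diff-port-553719 -n kube-system describe pod metrics-server-6867b74b74-vtl79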
	
	
	==> kube-controller-manager [4ca303b795795bd920f261c88630f30fd51a91b9694a9a6e67af8164bab8242b] <==
	E0920 18:34:10.160807       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 18:34:10.666915       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0920 18:34:33.777021       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="232.613µs"
	E0920 18:34:40.167471       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 18:34:40.675133       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0920 18:34:45.775500       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="122.996µs"
	E0920 18:35:10.174420       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 18:35:10.684920       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 18:35:40.182717       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 18:35:40.692979       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 18:36:10.189370       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 18:36:10.700867       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 18:36:40.196173       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 18:36:40.709598       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 18:37:10.202890       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 18:37:10.719411       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 18:37:40.210998       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 18:37:40.728483       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 18:38:10.218736       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 18:38:10.738794       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 18:38:40.227166       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 18:38:40.746853       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0920 18:39:02.367217       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-553719"
	E0920 18:39:10.233204       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 18:39:10.755315       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [702f7f440eb6016c2c758a6be4e1072274ab6dcd169d7a081e5464869a7c299c] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0920 18:18:07.395130       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0920 18:18:07.406475       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.72.190"]
	E0920 18:18:07.406555       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 18:18:07.456794       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0920 18:18:07.456844       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0920 18:18:07.456867       1 server_linux.go:169] "Using iptables Proxier"
	I0920 18:18:07.463431       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 18:18:07.463681       1 server.go:483] "Version info" version="v1.31.1"
	I0920 18:18:07.463709       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 18:18:07.466301       1 config.go:199] "Starting service config controller"
	I0920 18:18:07.466352       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 18:18:07.466381       1 config.go:105] "Starting endpoint slice config controller"
	I0920 18:18:07.466385       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 18:18:07.466878       1 config.go:328] "Starting node config controller"
	I0920 18:18:07.466908       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 18:18:07.567231       1 shared_informer.go:320] Caches are synced for node config
	I0920 18:18:07.567324       1 shared_informer.go:320] Caches are synced for service config
	I0920 18:18:07.567339       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [6ba313deffc6185e59dac50051e933f6b2ea70d630cf36b2b8c2c07042f793ba] <==
	I0920 18:18:04.966228       1 serving.go:386] Generated self-signed cert in-memory
	W0920 18:18:06.341785       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0920 18:18:06.341824       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0920 18:18:06.341859       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0920 18:18:06.341865       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0920 18:18:06.412319       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0920 18:18:06.412365       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 18:18:06.419170       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0920 18:18:06.419211       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0920 18:18:06.419606       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0920 18:18:06.419719       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0920 18:18:06.519864       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 20 18:38:08 default-k8s-diff-port-553719 kubelet[918]: E0920 18:38:08.760139     918 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-vtl79" podUID="29e0b6eb-22a9-4e37-97f9-83b48cc38193"
	Sep 20 18:38:13 default-k8s-diff-port-553719 kubelet[918]: E0920 18:38:13.108012     918 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857493107495539,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:38:13 default-k8s-diff-port-553719 kubelet[918]: E0920 18:38:13.108402     918 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857493107495539,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:38:20 default-k8s-diff-port-553719 kubelet[918]: E0920 18:38:20.760177     918 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-vtl79" podUID="29e0b6eb-22a9-4e37-97f9-83b48cc38193"
	Sep 20 18:38:23 default-k8s-diff-port-553719 kubelet[918]: E0920 18:38:23.110364     918 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857503110002440,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:38:23 default-k8s-diff-port-553719 kubelet[918]: E0920 18:38:23.110421     918 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857503110002440,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:38:32 default-k8s-diff-port-553719 kubelet[918]: E0920 18:38:32.762872     918 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-vtl79" podUID="29e0b6eb-22a9-4e37-97f9-83b48cc38193"
	Sep 20 18:38:33 default-k8s-diff-port-553719 kubelet[918]: E0920 18:38:33.112835     918 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857513111986425,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:38:33 default-k8s-diff-port-553719 kubelet[918]: E0920 18:38:33.112939     918 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857513111986425,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:38:43 default-k8s-diff-port-553719 kubelet[918]: E0920 18:38:43.116032     918 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857523115476760,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:38:43 default-k8s-diff-port-553719 kubelet[918]: E0920 18:38:43.116488     918 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857523115476760,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:38:43 default-k8s-diff-port-553719 kubelet[918]: E0920 18:38:43.761376     918 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-vtl79" podUID="29e0b6eb-22a9-4e37-97f9-83b48cc38193"
	Sep 20 18:38:53 default-k8s-diff-port-553719 kubelet[918]: E0920 18:38:53.120141     918 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857533119248190,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:38:53 default-k8s-diff-port-553719 kubelet[918]: E0920 18:38:53.120703     918 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857533119248190,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:38:58 default-k8s-diff-port-553719 kubelet[918]: E0920 18:38:58.760663     918 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-vtl79" podUID="29e0b6eb-22a9-4e37-97f9-83b48cc38193"
	Sep 20 18:39:02 default-k8s-diff-port-553719 kubelet[918]: E0920 18:39:02.779668     918 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 20 18:39:02 default-k8s-diff-port-553719 kubelet[918]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 20 18:39:02 default-k8s-diff-port-553719 kubelet[918]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 20 18:39:02 default-k8s-diff-port-553719 kubelet[918]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 20 18:39:02 default-k8s-diff-port-553719 kubelet[918]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 20 18:39:03 default-k8s-diff-port-553719 kubelet[918]: E0920 18:39:03.123814     918 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857543122979317,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:39:03 default-k8s-diff-port-553719 kubelet[918]: E0920 18:39:03.123898     918 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857543122979317,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:39:11 default-k8s-diff-port-553719 kubelet[918]: E0920 18:39:11.761035     918 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-vtl79" podUID="29e0b6eb-22a9-4e37-97f9-83b48cc38193"
	Sep 20 18:39:13 default-k8s-diff-port-553719 kubelet[918]: E0920 18:39:13.125678     918 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857553125291480,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:39:13 default-k8s-diff-port-553719 kubelet[918]: E0920 18:39:13.126125     918 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857553125291480,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [001bdc98537f0917f021c14a5d0733bd31cf19be1b4862502458a7a90e4300da] <==
	I0920 18:18:38.086146       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0920 18:18:38.099020       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0920 18:18:38.099239       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0920 18:18:55.707350       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0920 18:18:55.707572       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-553719_e6ae17bf-1a4f-4f05-8f9f-0545ec95b792!
	I0920 18:18:55.708512       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"112e5b22-6eac-4a3d-bf05-c16d06da4538", APIVersion:"v1", ResourceVersion:"640", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-553719_e6ae17bf-1a4f-4f05-8f9f-0545ec95b792 became leader
	I0920 18:18:55.809359       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-553719_e6ae17bf-1a4f-4f05-8f9f-0545ec95b792!
	
	
	==> storage-provisioner [c42201b6e3d55b6fd8beaf04d416658b228bf627e519ddb022dad6d17219cc1c] <==
	I0920 18:18:07.246782       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0920 18:18:37.250632       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-553719 -n default-k8s-diff-port-553719
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-553719 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-vtl79
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-553719 describe pod metrics-server-6867b74b74-vtl79
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-553719 describe pod metrics-server-6867b74b74-vtl79: exit status 1 (61.365259ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-vtl79" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-553719 describe pod metrics-server-6867b74b74-vtl79: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (455.45s)
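The failure above lines up with the logs: the metrics-server addon was enabled with its image redirected to fake.domain/registry.k8s.io/echoserver:1.4 (see the audit table below), so the pod stays in ImagePullBackOff and the v1beta1.metrics.k8s.io APIService keeps answering 503 in the kube-apiserver log. A minimal diagnostic sketch for confirming the same state by hand, not part of the test run, assuming the default-k8s-diff-port-553719 context is still reachable:

	# list pods that are not Running (mirrors the post-mortem check above)
	kubectl --context default-k8s-diff-port-553719 -n kube-system get pods --field-selector=status.phase!=Running
	# inspect the aggregated metrics APIService that the apiserver reports as unavailable
	kubectl --context default-k8s-diff-port-553719 get apiservice v1beta1.metrics.k8s.io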

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (426.45s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-768431 -n embed-certs-768431
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-09-20 18:38:58.519073366 +0000 UTC m=+6920.163154933
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-768431 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-768431 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.585µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-768431 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
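The check performed here waits up to 9m0s for a pod labelled k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace, then inspects the dashboard-metrics-scraper deployment for the overridden registry.k8s.io/echoserver:1.4 image. A rough manual equivalent, offered only as a sketch and assuming the embed-certs-768431 context is still reachable:

	# wait for the dashboard pods the test was polling for
	kubectl --context embed-certs-768431 -n kubernetes-dashboard wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m
	# print the image actually used by the scraper deployment the test describes
	kubectl --context embed-certs-768431 -n kubernetes-dashboard get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'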
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-768431 -n embed-certs-768431
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-768431 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-768431 logs -n 25: (1.322242405s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p flannel-833505 sudo crio                            | flannel-833505               | jenkins | v1.34.0 | 20 Sep 24 18:09 UTC | 20 Sep 24 18:09 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p flannel-833505                                      | flannel-833505               | jenkins | v1.34.0 | 20 Sep 24 18:09 UTC | 20 Sep 24 18:09 UTC |
	| delete  | -p                                                     | disable-driver-mounts-739804 | jenkins | v1.34.0 | 20 Sep 24 18:09 UTC | 20 Sep 24 18:09 UTC |
	|         | disable-driver-mounts-739804                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-553719 | jenkins | v1.34.0 | 20 Sep 24 18:09 UTC | 20 Sep 24 18:10 UTC |
	|         | default-k8s-diff-port-553719                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-956403             | no-preload-956403            | jenkins | v1.34.0 | 20 Sep 24 18:10 UTC | 20 Sep 24 18:10 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-956403                                   | no-preload-956403            | jenkins | v1.34.0 | 20 Sep 24 18:10 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-768431            | embed-certs-768431           | jenkins | v1.34.0 | 20 Sep 24 18:10 UTC | 20 Sep 24 18:10 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-768431                                  | embed-certs-768431           | jenkins | v1.34.0 | 20 Sep 24 18:10 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-553719  | default-k8s-diff-port-553719 | jenkins | v1.34.0 | 20 Sep 24 18:11 UTC | 20 Sep 24 18:11 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-553719 | jenkins | v1.34.0 | 20 Sep 24 18:11 UTC |                     |
	|         | default-k8s-diff-port-553719                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-956403                  | no-preload-956403            | jenkins | v1.34.0 | 20 Sep 24 18:12 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-744025        | old-k8s-version-744025       | jenkins | v1.34.0 | 20 Sep 24 18:12 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p no-preload-956403                                   | no-preload-956403            | jenkins | v1.34.0 | 20 Sep 24 18:12 UTC | 20 Sep 24 18:23 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-768431                 | embed-certs-768431           | jenkins | v1.34.0 | 20 Sep 24 18:13 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-768431                                  | embed-certs-768431           | jenkins | v1.34.0 | 20 Sep 24 18:13 UTC | 20 Sep 24 18:22 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-553719       | default-k8s-diff-port-553719 | jenkins | v1.34.0 | 20 Sep 24 18:13 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-553719 | jenkins | v1.34.0 | 20 Sep 24 18:13 UTC | 20 Sep 24 18:22 UTC |
	|         | default-k8s-diff-port-553719                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-744025                              | old-k8s-version-744025       | jenkins | v1.34.0 | 20 Sep 24 18:14 UTC | 20 Sep 24 18:14 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-744025             | old-k8s-version-744025       | jenkins | v1.34.0 | 20 Sep 24 18:14 UTC | 20 Sep 24 18:14 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-744025                              | old-k8s-version-744025       | jenkins | v1.34.0 | 20 Sep 24 18:14 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-744025                              | old-k8s-version-744025       | jenkins | v1.34.0 | 20 Sep 24 18:37 UTC | 20 Sep 24 18:37 UTC |
	| start   | -p newest-cni-803958 --memory=2200 --alsologtostderr   | newest-cni-803958            | jenkins | v1.34.0 | 20 Sep 24 18:37 UTC | 20 Sep 24 18:38 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| delete  | -p no-preload-956403                                   | no-preload-956403            | jenkins | v1.34.0 | 20 Sep 24 18:38 UTC | 20 Sep 24 18:38 UTC |
	| addons  | enable metrics-server -p newest-cni-803958             | newest-cni-803958            | jenkins | v1.34.0 | 20 Sep 24 18:38 UTC | 20 Sep 24 18:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-803958                                   | newest-cni-803958            | jenkins | v1.34.0 | 20 Sep 24 18:38 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 18:37:56
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 18:37:56.951782   82261 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:37:56.951923   82261 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:37:56.951934   82261 out.go:358] Setting ErrFile to fd 2...
	I0920 18:37:56.951940   82261 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:37:56.952133   82261 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-8777/.minikube/bin
	I0920 18:37:56.952754   82261 out.go:352] Setting JSON to false
	I0920 18:37:56.953897   82261 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":8420,"bootTime":1726849057,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 18:37:56.954001   82261 start.go:139] virtualization: kvm guest
	I0920 18:37:56.956508   82261 out.go:177] * [newest-cni-803958] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 18:37:56.958117   82261 notify.go:220] Checking for updates...
	I0920 18:37:56.958122   82261 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 18:37:56.960103   82261 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 18:37:56.961699   82261 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19672-8777/kubeconfig
	I0920 18:37:56.962987   82261 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-8777/.minikube
	I0920 18:37:56.964528   82261 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 18:37:56.965966   82261 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 18:37:56.968246   82261 config.go:182] Loaded profile config "default-k8s-diff-port-553719": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:37:56.968346   82261 config.go:182] Loaded profile config "embed-certs-768431": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:37:56.968460   82261 config.go:182] Loaded profile config "no-preload-956403": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:37:56.968576   82261 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 18:37:57.007772   82261 out.go:177] * Using the kvm2 driver based on user configuration
	I0920 18:37:57.009026   82261 start.go:297] selected driver: kvm2
	I0920 18:37:57.009042   82261 start.go:901] validating driver "kvm2" against <nil>
	I0920 18:37:57.009054   82261 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 18:37:57.009784   82261 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 18:37:57.009900   82261 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19672-8777/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 18:37:57.027671   82261 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 18:37:57.027721   82261 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0920 18:37:57.027786   82261 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0920 18:37:57.028015   82261 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0920 18:37:57.028044   82261 cni.go:84] Creating CNI manager for ""
	I0920 18:37:57.028098   82261 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 18:37:57.028109   82261 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0920 18:37:57.028163   82261 start.go:340] cluster config:
	{Name:newest-cni-803958 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-803958 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false C
ustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:37:57.028270   82261 iso.go:125] acquiring lock: {Name:mkba95ef0488e46f622333e9f317f43def93040b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 18:37:57.030587   82261 out.go:177] * Starting "newest-cni-803958" primary control-plane node in "newest-cni-803958" cluster
	I0920 18:37:57.031740   82261 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 18:37:57.031781   82261 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0920 18:37:57.031789   82261 cache.go:56] Caching tarball of preloaded images
	I0920 18:37:57.031894   82261 preload.go:172] Found /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 18:37:57.031908   82261 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0920 18:37:57.032007   82261 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/newest-cni-803958/config.json ...
	I0920 18:37:57.032031   82261 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/newest-cni-803958/config.json: {Name:mk3e9ea474cd2ad1e5bdf9973a52cf2546e74b56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:37:57.032203   82261 start.go:360] acquireMachinesLock for newest-cni-803958: {Name:mkfeedb385cf08b5d2aa00913e85815d02a180c2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 18:37:57.032237   82261 start.go:364] duration metric: took 17.97µs to acquireMachinesLock for "newest-cni-803958"
	I0920 18:37:57.032260   82261 start.go:93] Provisioning new machine with config: &{Name:newest-cni-803958 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.31.1 ClusterName:newest-cni-803958 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount
9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 18:37:57.032339   82261 start.go:125] createHost starting for "" (driver="kvm2")
	I0920 18:37:57.034006   82261 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0920 18:37:57.034142   82261 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:37:57.034181   82261 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:37:57.049138   82261 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36195
	I0920 18:37:57.049628   82261 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:37:57.050393   82261 main.go:141] libmachine: Using API Version  1
	I0920 18:37:57.050450   82261 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:37:57.050781   82261 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:37:57.050972   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetMachineName
	I0920 18:37:57.051127   82261 main.go:141] libmachine: (newest-cni-803958) Calling .DriverName
	I0920 18:37:57.051299   82261 start.go:159] libmachine.API.Create for "newest-cni-803958" (driver="kvm2")
	I0920 18:37:57.051340   82261 client.go:168] LocalClient.Create starting
	I0920 18:37:57.051376   82261 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem
	I0920 18:37:57.051414   82261 main.go:141] libmachine: Decoding PEM data...
	I0920 18:37:57.051440   82261 main.go:141] libmachine: Parsing certificate...
	I0920 18:37:57.051500   82261 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem
	I0920 18:37:57.051536   82261 main.go:141] libmachine: Decoding PEM data...
	I0920 18:37:57.051548   82261 main.go:141] libmachine: Parsing certificate...
	I0920 18:37:57.051572   82261 main.go:141] libmachine: Running pre-create checks...
	I0920 18:37:57.051578   82261 main.go:141] libmachine: (newest-cni-803958) Calling .PreCreateCheck
	I0920 18:37:57.051932   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetConfigRaw
	I0920 18:37:57.052330   82261 main.go:141] libmachine: Creating machine...
	I0920 18:37:57.052344   82261 main.go:141] libmachine: (newest-cni-803958) Calling .Create
	I0920 18:37:57.052447   82261 main.go:141] libmachine: (newest-cni-803958) Creating KVM machine...
	I0920 18:37:57.053864   82261 main.go:141] libmachine: (newest-cni-803958) DBG | found existing default KVM network
	I0920 18:37:57.055478   82261 main.go:141] libmachine: (newest-cni-803958) DBG | I0920 18:37:57.055355   82284 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0000155d0}
	I0920 18:37:57.055496   82261 main.go:141] libmachine: (newest-cni-803958) DBG | created network xml: 
	I0920 18:37:57.055511   82261 main.go:141] libmachine: (newest-cni-803958) DBG | <network>
	I0920 18:37:57.055520   82261 main.go:141] libmachine: (newest-cni-803958) DBG |   <name>mk-newest-cni-803958</name>
	I0920 18:37:57.055527   82261 main.go:141] libmachine: (newest-cni-803958) DBG |   <dns enable='no'/>
	I0920 18:37:57.055536   82261 main.go:141] libmachine: (newest-cni-803958) DBG |   
	I0920 18:37:57.055543   82261 main.go:141] libmachine: (newest-cni-803958) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0920 18:37:57.055552   82261 main.go:141] libmachine: (newest-cni-803958) DBG |     <dhcp>
	I0920 18:37:57.055558   82261 main.go:141] libmachine: (newest-cni-803958) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0920 18:37:57.055563   82261 main.go:141] libmachine: (newest-cni-803958) DBG |     </dhcp>
	I0920 18:37:57.055570   82261 main.go:141] libmachine: (newest-cni-803958) DBG |   </ip>
	I0920 18:37:57.055578   82261 main.go:141] libmachine: (newest-cni-803958) DBG |   
	I0920 18:37:57.055582   82261 main.go:141] libmachine: (newest-cni-803958) DBG | </network>
	I0920 18:37:57.055591   82261 main.go:141] libmachine: (newest-cni-803958) DBG | 
	I0920 18:37:57.060855   82261 main.go:141] libmachine: (newest-cni-803958) DBG | trying to create private KVM network mk-newest-cni-803958 192.168.39.0/24...
	I0920 18:37:57.140414   82261 main.go:141] libmachine: (newest-cni-803958) DBG | private KVM network mk-newest-cni-803958 192.168.39.0/24 created
	I0920 18:37:57.140448   82261 main.go:141] libmachine: (newest-cni-803958) Setting up store path in /home/jenkins/minikube-integration/19672-8777/.minikube/machines/newest-cni-803958 ...
	I0920 18:37:57.140463   82261 main.go:141] libmachine: (newest-cni-803958) DBG | I0920 18:37:57.140354   82284 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19672-8777/.minikube
	I0920 18:37:57.140482   82261 main.go:141] libmachine: (newest-cni-803958) Building disk image from file:///home/jenkins/minikube-integration/19672-8777/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso
	I0920 18:37:57.140497   82261 main.go:141] libmachine: (newest-cni-803958) Downloading /home/jenkins/minikube-integration/19672-8777/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19672-8777/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso...
	I0920 18:37:57.415083   82261 main.go:141] libmachine: (newest-cni-803958) DBG | I0920 18:37:57.414884   82284 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/newest-cni-803958/id_rsa...
	I0920 18:37:57.764596   82261 main.go:141] libmachine: (newest-cni-803958) DBG | I0920 18:37:57.764450   82284 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/newest-cni-803958/newest-cni-803958.rawdisk...
	I0920 18:37:57.764635   82261 main.go:141] libmachine: (newest-cni-803958) DBG | Writing magic tar header
	I0920 18:37:57.764664   82261 main.go:141] libmachine: (newest-cni-803958) DBG | Writing SSH key tar header
	I0920 18:37:57.764675   82261 main.go:141] libmachine: (newest-cni-803958) DBG | I0920 18:37:57.764570   82284 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19672-8777/.minikube/machines/newest-cni-803958 ...
	I0920 18:37:57.764703   82261 main.go:141] libmachine: (newest-cni-803958) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/newest-cni-803958
	I0920 18:37:57.764749   82261 main.go:141] libmachine: (newest-cni-803958) Setting executable bit set on /home/jenkins/minikube-integration/19672-8777/.minikube/machines/newest-cni-803958 (perms=drwx------)
	I0920 18:37:57.764769   82261 main.go:141] libmachine: (newest-cni-803958) Setting executable bit set on /home/jenkins/minikube-integration/19672-8777/.minikube/machines (perms=drwxr-xr-x)
	I0920 18:37:57.764777   82261 main.go:141] libmachine: (newest-cni-803958) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-8777/.minikube/machines
	I0920 18:37:57.764800   82261 main.go:141] libmachine: (newest-cni-803958) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-8777/.minikube
	I0920 18:37:57.764810   82261 main.go:141] libmachine: (newest-cni-803958) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-8777
	I0920 18:37:57.764825   82261 main.go:141] libmachine: (newest-cni-803958) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0920 18:37:57.764833   82261 main.go:141] libmachine: (newest-cni-803958) DBG | Checking permissions on dir: /home/jenkins
	I0920 18:37:57.764843   82261 main.go:141] libmachine: (newest-cni-803958) DBG | Checking permissions on dir: /home
	I0920 18:37:57.764848   82261 main.go:141] libmachine: (newest-cni-803958) DBG | Skipping /home - not owner
	I0920 18:37:57.764857   82261 main.go:141] libmachine: (newest-cni-803958) Setting executable bit set on /home/jenkins/minikube-integration/19672-8777/.minikube (perms=drwxr-xr-x)
	I0920 18:37:57.764865   82261 main.go:141] libmachine: (newest-cni-803958) Setting executable bit set on /home/jenkins/minikube-integration/19672-8777 (perms=drwxrwxr-x)
	I0920 18:37:57.764875   82261 main.go:141] libmachine: (newest-cni-803958) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0920 18:37:57.764882   82261 main.go:141] libmachine: (newest-cni-803958) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0920 18:37:57.764888   82261 main.go:141] libmachine: (newest-cni-803958) Creating domain...
	I0920 18:37:57.766060   82261 main.go:141] libmachine: (newest-cni-803958) define libvirt domain using xml: 
	I0920 18:37:57.766085   82261 main.go:141] libmachine: (newest-cni-803958) <domain type='kvm'>
	I0920 18:37:57.766095   82261 main.go:141] libmachine: (newest-cni-803958)   <name>newest-cni-803958</name>
	I0920 18:37:57.766104   82261 main.go:141] libmachine: (newest-cni-803958)   <memory unit='MiB'>2200</memory>
	I0920 18:37:57.766113   82261 main.go:141] libmachine: (newest-cni-803958)   <vcpu>2</vcpu>
	I0920 18:37:57.766122   82261 main.go:141] libmachine: (newest-cni-803958)   <features>
	I0920 18:37:57.766144   82261 main.go:141] libmachine: (newest-cni-803958)     <acpi/>
	I0920 18:37:57.766155   82261 main.go:141] libmachine: (newest-cni-803958)     <apic/>
	I0920 18:37:57.766165   82261 main.go:141] libmachine: (newest-cni-803958)     <pae/>
	I0920 18:37:57.766177   82261 main.go:141] libmachine: (newest-cni-803958)     
	I0920 18:37:57.766200   82261 main.go:141] libmachine: (newest-cni-803958)   </features>
	I0920 18:37:57.766224   82261 main.go:141] libmachine: (newest-cni-803958)   <cpu mode='host-passthrough'>
	I0920 18:37:57.766237   82261 main.go:141] libmachine: (newest-cni-803958)   
	I0920 18:37:57.766246   82261 main.go:141] libmachine: (newest-cni-803958)   </cpu>
	I0920 18:37:57.766254   82261 main.go:141] libmachine: (newest-cni-803958)   <os>
	I0920 18:37:57.766279   82261 main.go:141] libmachine: (newest-cni-803958)     <type>hvm</type>
	I0920 18:37:57.766289   82261 main.go:141] libmachine: (newest-cni-803958)     <boot dev='cdrom'/>
	I0920 18:37:57.766299   82261 main.go:141] libmachine: (newest-cni-803958)     <boot dev='hd'/>
	I0920 18:37:57.766313   82261 main.go:141] libmachine: (newest-cni-803958)     <bootmenu enable='no'/>
	I0920 18:37:57.766325   82261 main.go:141] libmachine: (newest-cni-803958)   </os>
	I0920 18:37:57.766334   82261 main.go:141] libmachine: (newest-cni-803958)   <devices>
	I0920 18:37:57.766342   82261 main.go:141] libmachine: (newest-cni-803958)     <disk type='file' device='cdrom'>
	I0920 18:37:57.766354   82261 main.go:141] libmachine: (newest-cni-803958)       <source file='/home/jenkins/minikube-integration/19672-8777/.minikube/machines/newest-cni-803958/boot2docker.iso'/>
	I0920 18:37:57.766363   82261 main.go:141] libmachine: (newest-cni-803958)       <target dev='hdc' bus='scsi'/>
	I0920 18:37:57.766371   82261 main.go:141] libmachine: (newest-cni-803958)       <readonly/>
	I0920 18:37:57.766380   82261 main.go:141] libmachine: (newest-cni-803958)     </disk>
	I0920 18:37:57.766388   82261 main.go:141] libmachine: (newest-cni-803958)     <disk type='file' device='disk'>
	I0920 18:37:57.766403   82261 main.go:141] libmachine: (newest-cni-803958)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0920 18:37:57.766441   82261 main.go:141] libmachine: (newest-cni-803958)       <source file='/home/jenkins/minikube-integration/19672-8777/.minikube/machines/newest-cni-803958/newest-cni-803958.rawdisk'/>
	I0920 18:37:57.766467   82261 main.go:141] libmachine: (newest-cni-803958)       <target dev='hda' bus='virtio'/>
	I0920 18:37:57.766478   82261 main.go:141] libmachine: (newest-cni-803958)     </disk>
	I0920 18:37:57.766489   82261 main.go:141] libmachine: (newest-cni-803958)     <interface type='network'>
	I0920 18:37:57.766503   82261 main.go:141] libmachine: (newest-cni-803958)       <source network='mk-newest-cni-803958'/>
	I0920 18:37:57.766512   82261 main.go:141] libmachine: (newest-cni-803958)       <model type='virtio'/>
	I0920 18:37:57.766520   82261 main.go:141] libmachine: (newest-cni-803958)     </interface>
	I0920 18:37:57.766535   82261 main.go:141] libmachine: (newest-cni-803958)     <interface type='network'>
	I0920 18:37:57.766554   82261 main.go:141] libmachine: (newest-cni-803958)       <source network='default'/>
	I0920 18:37:57.766571   82261 main.go:141] libmachine: (newest-cni-803958)       <model type='virtio'/>
	I0920 18:37:57.766582   82261 main.go:141] libmachine: (newest-cni-803958)     </interface>
	I0920 18:37:57.766592   82261 main.go:141] libmachine: (newest-cni-803958)     <serial type='pty'>
	I0920 18:37:57.766600   82261 main.go:141] libmachine: (newest-cni-803958)       <target port='0'/>
	I0920 18:37:57.766620   82261 main.go:141] libmachine: (newest-cni-803958)     </serial>
	I0920 18:37:57.766631   82261 main.go:141] libmachine: (newest-cni-803958)     <console type='pty'>
	I0920 18:37:57.766643   82261 main.go:141] libmachine: (newest-cni-803958)       <target type='serial' port='0'/>
	I0920 18:37:57.766664   82261 main.go:141] libmachine: (newest-cni-803958)     </console>
	I0920 18:37:57.766679   82261 main.go:141] libmachine: (newest-cni-803958)     <rng model='virtio'>
	I0920 18:37:57.766686   82261 main.go:141] libmachine: (newest-cni-803958)       <backend model='random'>/dev/random</backend>
	I0920 18:37:57.766692   82261 main.go:141] libmachine: (newest-cni-803958)     </rng>
	I0920 18:37:57.766709   82261 main.go:141] libmachine: (newest-cni-803958)     
	I0920 18:37:57.766717   82261 main.go:141] libmachine: (newest-cni-803958)     
	I0920 18:37:57.766727   82261 main.go:141] libmachine: (newest-cni-803958)   </devices>
	I0920 18:37:57.766736   82261 main.go:141] libmachine: (newest-cni-803958) </domain>
	I0920 18:37:57.766769   82261 main.go:141] libmachine: (newest-cni-803958) 
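
Note: the domain XML above is what libmachine feeds to libvirt when it defines the VM. For reference, an equivalent definition could be loaded by hand with virsh; the sketch below shells out to virsh from Go and is purely illustrative (minikube itself goes through the libvirt API, and the file name domain.xml is a placeholder).

    package main

    import (
        "fmt"
        "os/exec"
    )

    // defineAndStart registers a libvirt domain from an XML file and boots it.
    // It assumes virsh is installed and qemu:///system is reachable.
    func defineAndStart(xmlPath, name string) error {
        // "virsh define" persists the domain definition without starting it.
        if out, err := exec.Command("virsh", "-c", "qemu:///system", "define", xmlPath).CombinedOutput(); err != nil {
            return fmt.Errorf("define failed: %v: %s", err, out)
        }
        // "virsh start" boots the persisted domain.
        if out, err := exec.Command("virsh", "-c", "qemu:///system", "start", name).CombinedOutput(); err != nil {
            return fmt.Errorf("start failed: %v: %s", err, out)
        }
        return nil
    }

    func main() {
        if err := defineAndStart("domain.xml", "newest-cni-803958"); err != nil {
            fmt.Println(err)
        }
    }
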
	I0920 18:37:57.770991   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined MAC address 52:54:00:57:1d:39 in network default
	I0920 18:37:57.771627   82261 main.go:141] libmachine: (newest-cni-803958) Ensuring networks are active...
	I0920 18:37:57.771647   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:37:57.772487   82261 main.go:141] libmachine: (newest-cni-803958) Ensuring network default is active
	I0920 18:37:57.772827   82261 main.go:141] libmachine: (newest-cni-803958) Ensuring network mk-newest-cni-803958 is active
	I0920 18:37:57.773596   82261 main.go:141] libmachine: (newest-cni-803958) Getting domain xml...
	I0920 18:37:57.774426   82261 main.go:141] libmachine: (newest-cni-803958) Creating domain...
	I0920 18:37:59.072376   82261 main.go:141] libmachine: (newest-cni-803958) Waiting to get IP...
	I0920 18:37:59.073132   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:37:59.073572   82261 main.go:141] libmachine: (newest-cni-803958) DBG | unable to find current IP address of domain newest-cni-803958 in network mk-newest-cni-803958
	I0920 18:37:59.073624   82261 main.go:141] libmachine: (newest-cni-803958) DBG | I0920 18:37:59.073567   82284 retry.go:31] will retry after 237.200058ms: waiting for machine to come up
	I0920 18:37:59.312251   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:37:59.312903   82261 main.go:141] libmachine: (newest-cni-803958) DBG | unable to find current IP address of domain newest-cni-803958 in network mk-newest-cni-803958
	I0920 18:37:59.312935   82261 main.go:141] libmachine: (newest-cni-803958) DBG | I0920 18:37:59.312858   82284 retry.go:31] will retry after 299.515801ms: waiting for machine to come up
	I0920 18:37:59.614747   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:37:59.615346   82261 main.go:141] libmachine: (newest-cni-803958) DBG | unable to find current IP address of domain newest-cni-803958 in network mk-newest-cni-803958
	I0920 18:37:59.615387   82261 main.go:141] libmachine: (newest-cni-803958) DBG | I0920 18:37:59.615271   82284 retry.go:31] will retry after 467.17509ms: waiting for machine to come up
	I0920 18:38:00.083674   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:38:00.084289   82261 main.go:141] libmachine: (newest-cni-803958) DBG | unable to find current IP address of domain newest-cni-803958 in network mk-newest-cni-803958
	I0920 18:38:00.084327   82261 main.go:141] libmachine: (newest-cni-803958) DBG | I0920 18:38:00.084193   82284 retry.go:31] will retry after 553.911509ms: waiting for machine to come up
	I0920 18:38:00.640192   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:38:00.640741   82261 main.go:141] libmachine: (newest-cni-803958) DBG | unable to find current IP address of domain newest-cni-803958 in network mk-newest-cni-803958
	I0920 18:38:00.640766   82261 main.go:141] libmachine: (newest-cni-803958) DBG | I0920 18:38:00.640660   82284 retry.go:31] will retry after 464.879742ms: waiting for machine to come up
	I0920 18:38:01.106961   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:38:01.107580   82261 main.go:141] libmachine: (newest-cni-803958) DBG | unable to find current IP address of domain newest-cni-803958 in network mk-newest-cni-803958
	I0920 18:38:01.107608   82261 main.go:141] libmachine: (newest-cni-803958) DBG | I0920 18:38:01.107506   82284 retry.go:31] will retry after 825.510996ms: waiting for machine to come up
	I0920 18:38:01.934403   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:38:01.934992   82261 main.go:141] libmachine: (newest-cni-803958) DBG | unable to find current IP address of domain newest-cni-803958 in network mk-newest-cni-803958
	I0920 18:38:01.935033   82261 main.go:141] libmachine: (newest-cni-803958) DBG | I0920 18:38:01.934952   82284 retry.go:31] will retry after 1.031655257s: waiting for machine to come up
	I0920 18:38:02.968058   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:38:02.968485   82261 main.go:141] libmachine: (newest-cni-803958) DBG | unable to find current IP address of domain newest-cni-803958 in network mk-newest-cni-803958
	I0920 18:38:02.968513   82261 main.go:141] libmachine: (newest-cni-803958) DBG | I0920 18:38:02.968432   82284 retry.go:31] will retry after 1.023055382s: waiting for machine to come up
	I0920 18:38:03.993778   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:38:03.994340   82261 main.go:141] libmachine: (newest-cni-803958) DBG | unable to find current IP address of domain newest-cni-803958 in network mk-newest-cni-803958
	I0920 18:38:03.994374   82261 main.go:141] libmachine: (newest-cni-803958) DBG | I0920 18:38:03.994281   82284 retry.go:31] will retry after 1.777461501s: waiting for machine to come up
	I0920 18:38:05.773880   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:38:05.774332   82261 main.go:141] libmachine: (newest-cni-803958) DBG | unable to find current IP address of domain newest-cni-803958 in network mk-newest-cni-803958
	I0920 18:38:05.774355   82261 main.go:141] libmachine: (newest-cni-803958) DBG | I0920 18:38:05.774304   82284 retry.go:31] will retry after 1.64509249s: waiting for machine to come up
	I0920 18:38:07.420629   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:38:07.421163   82261 main.go:141] libmachine: (newest-cni-803958) DBG | unable to find current IP address of domain newest-cni-803958 in network mk-newest-cni-803958
	I0920 18:38:07.421190   82261 main.go:141] libmachine: (newest-cni-803958) DBG | I0920 18:38:07.421109   82284 retry.go:31] will retry after 2.52757328s: waiting for machine to come up
	I0920 18:38:09.951030   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:38:09.951652   82261 main.go:141] libmachine: (newest-cni-803958) DBG | unable to find current IP address of domain newest-cni-803958 in network mk-newest-cni-803958
	I0920 18:38:09.951683   82261 main.go:141] libmachine: (newest-cni-803958) DBG | I0920 18:38:09.951578   82284 retry.go:31] will retry after 2.321470741s: waiting for machine to come up
	I0920 18:38:12.274645   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:38:12.275279   82261 main.go:141] libmachine: (newest-cni-803958) DBG | unable to find current IP address of domain newest-cni-803958 in network mk-newest-cni-803958
	I0920 18:38:12.275307   82261 main.go:141] libmachine: (newest-cni-803958) DBG | I0920 18:38:12.275215   82284 retry.go:31] will retry after 3.8979126s: waiting for machine to come up
	I0920 18:38:16.175587   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:38:16.175982   82261 main.go:141] libmachine: (newest-cni-803958) DBG | unable to find current IP address of domain newest-cni-803958 in network mk-newest-cni-803958
	I0920 18:38:16.176003   82261 main.go:141] libmachine: (newest-cni-803958) DBG | I0920 18:38:16.175948   82284 retry.go:31] will retry after 5.497884921s: waiting for machine to come up
	I0920 18:38:21.679259   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:38:21.679768   82261 main.go:141] libmachine: (newest-cni-803958) Found IP for machine: 192.168.39.85
	I0920 18:38:21.679820   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has current primary IP address 192.168.39.85 and MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:38:21.679827   82261 main.go:141] libmachine: (newest-cni-803958) Reserving static IP address...
	I0920 18:38:21.680243   82261 main.go:141] libmachine: (newest-cni-803958) DBG | unable to find host DHCP lease matching {name: "newest-cni-803958", mac: "52:54:00:7e:f0:0f", ip: "192.168.39.85"} in network mk-newest-cni-803958
	I0920 18:38:21.766201   82261 main.go:141] libmachine: (newest-cni-803958) Reserved static IP address: 192.168.39.85
	I0920 18:38:21.766264   82261 main.go:141] libmachine: (newest-cni-803958) Waiting for SSH to be available...
	I0920 18:38:21.766276   82261 main.go:141] libmachine: (newest-cni-803958) DBG | Getting to WaitForSSH function...
	I0920 18:38:21.768692   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:38:21.768958   82261 main.go:141] libmachine: (newest-cni-803958) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:7e:f0:0f", ip: ""} in network mk-newest-cni-803958
	I0920 18:38:21.768979   82261 main.go:141] libmachine: (newest-cni-803958) DBG | unable to find defined IP address of network mk-newest-cni-803958 interface with MAC address 52:54:00:7e:f0:0f
	I0920 18:38:21.769168   82261 main.go:141] libmachine: (newest-cni-803958) DBG | Using SSH client type: external
	I0920 18:38:21.769192   82261 main.go:141] libmachine: (newest-cni-803958) DBG | Using SSH private key: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/newest-cni-803958/id_rsa (-rw-------)
	I0920 18:38:21.769225   82261 main.go:141] libmachine: (newest-cni-803958) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19672-8777/.minikube/machines/newest-cni-803958/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 18:38:21.769237   82261 main.go:141] libmachine: (newest-cni-803958) DBG | About to run SSH command:
	I0920 18:38:21.769246   82261 main.go:141] libmachine: (newest-cni-803958) DBG | exit 0
	I0920 18:38:21.773420   82261 main.go:141] libmachine: (newest-cni-803958) DBG | SSH cmd err, output: exit status 255: 
	I0920 18:38:21.773449   82261 main.go:141] libmachine: (newest-cni-803958) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0920 18:38:21.773476   82261 main.go:141] libmachine: (newest-cni-803958) DBG | command : exit 0
	I0920 18:38:21.773484   82261 main.go:141] libmachine: (newest-cni-803958) DBG | err     : exit status 255
	I0920 18:38:21.773520   82261 main.go:141] libmachine: (newest-cni-803958) DBG | output  : 
	I0920 18:38:24.776090   82261 main.go:141] libmachine: (newest-cni-803958) DBG | Getting to WaitForSSH function...
	I0920 18:38:24.778556   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:38:24.778989   82261 main.go:141] libmachine: (newest-cni-803958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:f0:0f", ip: ""} in network mk-newest-cni-803958: {Iface:virbr1 ExpiryTime:2024-09-20 19:38:12 +0000 UTC Type:0 Mac:52:54:00:7e:f0:0f Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:newest-cni-803958 Clientid:01:52:54:00:7e:f0:0f}
	I0920 18:38:24.779037   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined IP address 192.168.39.85 and MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:38:24.779118   82261 main.go:141] libmachine: (newest-cni-803958) DBG | Using SSH client type: external
	I0920 18:38:24.779142   82261 main.go:141] libmachine: (newest-cni-803958) DBG | Using SSH private key: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/newest-cni-803958/id_rsa (-rw-------)
	I0920 18:38:24.779173   82261 main.go:141] libmachine: (newest-cni-803958) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.85 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19672-8777/.minikube/machines/newest-cni-803958/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 18:38:24.779194   82261 main.go:141] libmachine: (newest-cni-803958) DBG | About to run SSH command:
	I0920 18:38:24.779208   82261 main.go:141] libmachine: (newest-cni-803958) DBG | exit 0
	I0920 18:38:24.910058   82261 main.go:141] libmachine: (newest-cni-803958) DBG | SSH cmd err, output: <nil>: 
	I0920 18:38:24.910350   82261 main.go:141] libmachine: (newest-cni-803958) KVM machine creation complete!
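
Note: the repeated "will retry after …: waiting for machine to come up" lines above come from minikube's retry helper, which polls the libvirt DHCP leases with a growing, jittered delay until the VM reports an IP. The following is a simplified stand-in for that pattern; the function names and the lookupIP stub are illustrative, not the actual retry.go API.

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    var errNoIP = errors.New("unable to find current IP address")

    // lookupIP stands in for querying the libvirt DHCP leases; it is a stub here.
    func lookupIP() (string, error) { return "", errNoIP }

    // waitForIP retries lookupIP with an increasing, jittered delay until it
    // succeeds or the deadline expires, mirroring the "will retry after" lines.
    func waitForIP(timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 200 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookupIP(); err == nil {
                return ip, nil
            }
            wait := delay + time.Duration(rand.Int63n(int64(delay))) // add jitter
            fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
            time.Sleep(wait)
            if delay < 5*time.Second {
                delay *= 2 // back off between attempts
            }
        }
        return "", fmt.Errorf("timed out after %v: %w", timeout, errNoIP)
    }

    func main() {
        if _, err := waitForIP(3 * time.Second); err != nil {
            fmt.Println(err)
        }
    }
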
	I0920 18:38:24.910690   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetConfigRaw
	I0920 18:38:24.911287   82261 main.go:141] libmachine: (newest-cni-803958) Calling .DriverName
	I0920 18:38:24.911488   82261 main.go:141] libmachine: (newest-cni-803958) Calling .DriverName
	I0920 18:38:24.911635   82261 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0920 18:38:24.911655   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetState
	I0920 18:38:24.913258   82261 main.go:141] libmachine: Detecting operating system of created instance...
	I0920 18:38:24.913274   82261 main.go:141] libmachine: Waiting for SSH to be available...
	I0920 18:38:24.913287   82261 main.go:141] libmachine: Getting to WaitForSSH function...
	I0920 18:38:24.913293   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetSSHHostname
	I0920 18:38:24.916198   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:38:24.916667   82261 main.go:141] libmachine: (newest-cni-803958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:f0:0f", ip: ""} in network mk-newest-cni-803958: {Iface:virbr1 ExpiryTime:2024-09-20 19:38:12 +0000 UTC Type:0 Mac:52:54:00:7e:f0:0f Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:newest-cni-803958 Clientid:01:52:54:00:7e:f0:0f}
	I0920 18:38:24.916700   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined IP address 192.168.39.85 and MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:38:24.916885   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetSSHPort
	I0920 18:38:24.917077   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetSSHKeyPath
	I0920 18:38:24.917286   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetSSHKeyPath
	I0920 18:38:24.917424   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetSSHUsername
	I0920 18:38:24.917596   82261 main.go:141] libmachine: Using SSH client type: native
	I0920 18:38:24.917774   82261 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.85 22 <nil> <nil>}
	I0920 18:38:24.917789   82261 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0920 18:38:25.033274   82261 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 18:38:25.033302   82261 main.go:141] libmachine: Detecting the provisioner...
	I0920 18:38:25.033313   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetSSHHostname
	I0920 18:38:25.036007   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:38:25.036374   82261 main.go:141] libmachine: (newest-cni-803958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:f0:0f", ip: ""} in network mk-newest-cni-803958: {Iface:virbr1 ExpiryTime:2024-09-20 19:38:12 +0000 UTC Type:0 Mac:52:54:00:7e:f0:0f Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:newest-cni-803958 Clientid:01:52:54:00:7e:f0:0f}
	I0920 18:38:25.036420   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined IP address 192.168.39.85 and MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:38:25.036591   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetSSHPort
	I0920 18:38:25.036793   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetSSHKeyPath
	I0920 18:38:25.037002   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetSSHKeyPath
	I0920 18:38:25.037186   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetSSHUsername
	I0920 18:38:25.037355   82261 main.go:141] libmachine: Using SSH client type: native
	I0920 18:38:25.037555   82261 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.85 22 <nil> <nil>}
	I0920 18:38:25.037569   82261 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0920 18:38:25.150640   82261 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0920 18:38:25.150710   82261 main.go:141] libmachine: found compatible host: buildroot
	I0920 18:38:25.150720   82261 main.go:141] libmachine: Provisioning with buildroot...
	I0920 18:38:25.150731   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetMachineName
	I0920 18:38:25.150976   82261 buildroot.go:166] provisioning hostname "newest-cni-803958"
	I0920 18:38:25.150999   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetMachineName
	I0920 18:38:25.151167   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetSSHHostname
	I0920 18:38:25.153988   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:38:25.154385   82261 main.go:141] libmachine: (newest-cni-803958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:f0:0f", ip: ""} in network mk-newest-cni-803958: {Iface:virbr1 ExpiryTime:2024-09-20 19:38:12 +0000 UTC Type:0 Mac:52:54:00:7e:f0:0f Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:newest-cni-803958 Clientid:01:52:54:00:7e:f0:0f}
	I0920 18:38:25.154411   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined IP address 192.168.39.85 and MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:38:25.154515   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetSSHPort
	I0920 18:38:25.154699   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetSSHKeyPath
	I0920 18:38:25.154895   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetSSHKeyPath
	I0920 18:38:25.155025   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetSSHUsername
	I0920 18:38:25.155206   82261 main.go:141] libmachine: Using SSH client type: native
	I0920 18:38:25.155413   82261 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.85 22 <nil> <nil>}
	I0920 18:38:25.155428   82261 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-803958 && echo "newest-cni-803958" | sudo tee /etc/hostname
	I0920 18:38:25.286069   82261 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-803958
	
	I0920 18:38:25.286102   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetSSHHostname
	I0920 18:38:25.289052   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:38:25.289386   82261 main.go:141] libmachine: (newest-cni-803958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:f0:0f", ip: ""} in network mk-newest-cni-803958: {Iface:virbr1 ExpiryTime:2024-09-20 19:38:12 +0000 UTC Type:0 Mac:52:54:00:7e:f0:0f Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:newest-cni-803958 Clientid:01:52:54:00:7e:f0:0f}
	I0920 18:38:25.289408   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined IP address 192.168.39.85 and MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:38:25.289580   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetSSHPort
	I0920 18:38:25.289768   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetSSHKeyPath
	I0920 18:38:25.289969   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetSSHKeyPath
	I0920 18:38:25.290156   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetSSHUsername
	I0920 18:38:25.290301   82261 main.go:141] libmachine: Using SSH client type: native
	I0920 18:38:25.290482   82261 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.85 22 <nil> <nil>}
	I0920 18:38:25.290500   82261 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-803958' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-803958/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-803958' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 18:38:25.420728   82261 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 18:38:25.420761   82261 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19672-8777/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-8777/.minikube}
	I0920 18:38:25.420829   82261 buildroot.go:174] setting up certificates
	I0920 18:38:25.420843   82261 provision.go:84] configureAuth start
	I0920 18:38:25.420869   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetMachineName
	I0920 18:38:25.421184   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetIP
	I0920 18:38:25.424498   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:38:25.424910   82261 main.go:141] libmachine: (newest-cni-803958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:f0:0f", ip: ""} in network mk-newest-cni-803958: {Iface:virbr1 ExpiryTime:2024-09-20 19:38:12 +0000 UTC Type:0 Mac:52:54:00:7e:f0:0f Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:newest-cni-803958 Clientid:01:52:54:00:7e:f0:0f}
	I0920 18:38:25.424938   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined IP address 192.168.39.85 and MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:38:25.425122   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetSSHHostname
	I0920 18:38:25.427889   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:38:25.428336   82261 main.go:141] libmachine: (newest-cni-803958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:f0:0f", ip: ""} in network mk-newest-cni-803958: {Iface:virbr1 ExpiryTime:2024-09-20 19:38:12 +0000 UTC Type:0 Mac:52:54:00:7e:f0:0f Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:newest-cni-803958 Clientid:01:52:54:00:7e:f0:0f}
	I0920 18:38:25.428364   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined IP address 192.168.39.85 and MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:38:25.428587   82261 provision.go:143] copyHostCerts
	I0920 18:38:25.428641   82261 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem, removing ...
	I0920 18:38:25.428664   82261 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem
	I0920 18:38:25.428747   82261 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem (1082 bytes)
	I0920 18:38:25.428878   82261 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem, removing ...
	I0920 18:38:25.428890   82261 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem
	I0920 18:38:25.428931   82261 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem (1123 bytes)
	I0920 18:38:25.429022   82261 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem, removing ...
	I0920 18:38:25.429033   82261 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem
	I0920 18:38:25.429065   82261 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem (1675 bytes)
	I0920 18:38:25.429172   82261 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem org=jenkins.newest-cni-803958 san=[127.0.0.1 192.168.39.85 localhost minikube newest-cni-803958]
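
Note: the server certificate generated above carries the SANs listed in the log (127.0.0.1, 192.168.39.85, localhost, minikube, newest-cni-803958). The sketch below produces a certificate with that SAN set using Go's standard library; it is self-signed for brevity, whereas minikube signs the server cert with its own CA, so treat it only as an illustration of the SAN handling.

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-803958"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs as they appear in the log line above.
            DNSNames:    []string{"localhost", "minikube", "newest-cni-803958"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.85")},
        }
        // Self-signed: template doubles as the issuer certificate.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
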
	I0920 18:38:25.592648   82261 provision.go:177] copyRemoteCerts
	I0920 18:38:25.592719   82261 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 18:38:25.592749   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetSSHHostname
	I0920 18:38:25.595867   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:38:25.596183   82261 main.go:141] libmachine: (newest-cni-803958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:f0:0f", ip: ""} in network mk-newest-cni-803958: {Iface:virbr1 ExpiryTime:2024-09-20 19:38:12 +0000 UTC Type:0 Mac:52:54:00:7e:f0:0f Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:newest-cni-803958 Clientid:01:52:54:00:7e:f0:0f}
	I0920 18:38:25.596206   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined IP address 192.168.39.85 and MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:38:25.596530   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetSSHPort
	I0920 18:38:25.596762   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetSSHKeyPath
	I0920 18:38:25.596921   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetSSHUsername
	I0920 18:38:25.597069   82261 sshutil.go:53] new ssh client: &{IP:192.168.39.85 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/newest-cni-803958/id_rsa Username:docker}
	I0920 18:38:25.685188   82261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0920 18:38:25.712846   82261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0920 18:38:25.737263   82261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0920 18:38:25.763101   82261 provision.go:87] duration metric: took 342.230185ms to configureAuth
	I0920 18:38:25.763141   82261 buildroot.go:189] setting minikube options for container-runtime
	I0920 18:38:25.763347   82261 config.go:182] Loaded profile config "newest-cni-803958": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:38:25.763471   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetSSHHostname
	I0920 18:38:25.766313   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:38:25.766586   82261 main.go:141] libmachine: (newest-cni-803958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:f0:0f", ip: ""} in network mk-newest-cni-803958: {Iface:virbr1 ExpiryTime:2024-09-20 19:38:12 +0000 UTC Type:0 Mac:52:54:00:7e:f0:0f Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:newest-cni-803958 Clientid:01:52:54:00:7e:f0:0f}
	I0920 18:38:25.766613   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined IP address 192.168.39.85 and MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:38:25.766801   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetSSHPort
	I0920 18:38:25.767019   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetSSHKeyPath
	I0920 18:38:25.767237   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetSSHKeyPath
	I0920 18:38:25.767425   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetSSHUsername
	I0920 18:38:25.767623   82261 main.go:141] libmachine: Using SSH client type: native
	I0920 18:38:25.767792   82261 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.85 22 <nil> <nil>}
	I0920 18:38:25.767809   82261 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 18:38:26.019706   82261 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 18:38:26.019734   82261 main.go:141] libmachine: Checking connection to Docker...
	I0920 18:38:26.019748   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetURL
	I0920 18:38:26.021030   82261 main.go:141] libmachine: (newest-cni-803958) DBG | Using libvirt version 6000000
	I0920 18:38:26.023358   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:38:26.023654   82261 main.go:141] libmachine: (newest-cni-803958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:f0:0f", ip: ""} in network mk-newest-cni-803958: {Iface:virbr1 ExpiryTime:2024-09-20 19:38:12 +0000 UTC Type:0 Mac:52:54:00:7e:f0:0f Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:newest-cni-803958 Clientid:01:52:54:00:7e:f0:0f}
	I0920 18:38:26.023684   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined IP address 192.168.39.85 and MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:38:26.023830   82261 main.go:141] libmachine: Docker is up and running!
	I0920 18:38:26.023853   82261 main.go:141] libmachine: Reticulating splines...
	I0920 18:38:26.023860   82261 client.go:171] duration metric: took 28.972508997s to LocalClient.Create
	I0920 18:38:26.023889   82261 start.go:167] duration metric: took 28.972590244s to libmachine.API.Create "newest-cni-803958"
	I0920 18:38:26.023902   82261 start.go:293] postStartSetup for "newest-cni-803958" (driver="kvm2")
	I0920 18:38:26.023920   82261 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 18:38:26.023945   82261 main.go:141] libmachine: (newest-cni-803958) Calling .DriverName
	I0920 18:38:26.024189   82261 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 18:38:26.024213   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetSSHHostname
	I0920 18:38:26.026338   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:38:26.026670   82261 main.go:141] libmachine: (newest-cni-803958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:f0:0f", ip: ""} in network mk-newest-cni-803958: {Iface:virbr1 ExpiryTime:2024-09-20 19:38:12 +0000 UTC Type:0 Mac:52:54:00:7e:f0:0f Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:newest-cni-803958 Clientid:01:52:54:00:7e:f0:0f}
	I0920 18:38:26.026697   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined IP address 192.168.39.85 and MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:38:26.026891   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetSSHPort
	I0920 18:38:26.027049   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetSSHKeyPath
	I0920 18:38:26.027154   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetSSHUsername
	I0920 18:38:26.027297   82261 sshutil.go:53] new ssh client: &{IP:192.168.39.85 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/newest-cni-803958/id_rsa Username:docker}
	I0920 18:38:26.121614   82261 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 18:38:26.126041   82261 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 18:38:26.126072   82261 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8777/.minikube/addons for local assets ...
	I0920 18:38:26.126153   82261 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8777/.minikube/files for local assets ...
	I0920 18:38:26.126265   82261 filesync.go:149] local asset: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem -> 159732.pem in /etc/ssl/certs
	I0920 18:38:26.126386   82261 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 18:38:26.137427   82261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem --> /etc/ssl/certs/159732.pem (1708 bytes)
	I0920 18:38:26.162221   82261 start.go:296] duration metric: took 138.302639ms for postStartSetup
	I0920 18:38:26.162282   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetConfigRaw
	I0920 18:38:26.163084   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetIP
	I0920 18:38:26.165826   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:38:26.166201   82261 main.go:141] libmachine: (newest-cni-803958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:f0:0f", ip: ""} in network mk-newest-cni-803958: {Iface:virbr1 ExpiryTime:2024-09-20 19:38:12 +0000 UTC Type:0 Mac:52:54:00:7e:f0:0f Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:newest-cni-803958 Clientid:01:52:54:00:7e:f0:0f}
	I0920 18:38:26.166236   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined IP address 192.168.39.85 and MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:38:26.166548   82261 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/newest-cni-803958/config.json ...
	I0920 18:38:26.166804   82261 start.go:128] duration metric: took 29.134455512s to createHost
	I0920 18:38:26.166836   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetSSHHostname
	I0920 18:38:26.169598   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:38:26.169980   82261 main.go:141] libmachine: (newest-cni-803958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:f0:0f", ip: ""} in network mk-newest-cni-803958: {Iface:virbr1 ExpiryTime:2024-09-20 19:38:12 +0000 UTC Type:0 Mac:52:54:00:7e:f0:0f Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:newest-cni-803958 Clientid:01:52:54:00:7e:f0:0f}
	I0920 18:38:26.170007   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined IP address 192.168.39.85 and MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:38:26.170159   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetSSHPort
	I0920 18:38:26.170329   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetSSHKeyPath
	I0920 18:38:26.170491   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetSSHKeyPath
	I0920 18:38:26.170647   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetSSHUsername
	I0920 18:38:26.170838   82261 main.go:141] libmachine: Using SSH client type: native
	I0920 18:38:26.171058   82261 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.85 22 <nil> <nil>}
	I0920 18:38:26.171074   82261 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 18:38:26.290856   82261 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726857506.270942557
	
	I0920 18:38:26.290885   82261 fix.go:216] guest clock: 1726857506.270942557
	I0920 18:38:26.290900   82261 fix.go:229] Guest: 2024-09-20 18:38:26.270942557 +0000 UTC Remote: 2024-09-20 18:38:26.166820782 +0000 UTC m=+29.255343882 (delta=104.121775ms)
	I0920 18:38:26.290956   82261 fix.go:200] guest clock delta is within tolerance: 104.121775ms
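
Note: the clock check above runs `date +%s.%N` on the guest and compares it with the host clock, accepting the machine only if the delta stays within a tolerance. A minimal sketch of that comparison is below; the one-second tolerance is an illustrative value, not minikube's actual setting.

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // parseGuestClock converts the output of `date +%s.%N` into a time.Time.
    func parseGuestClock(out string) (time.Time, error) {
        parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        var nsec int64
        if len(parts) == 2 {
            if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
                return time.Time{}, err
            }
        }
        return time.Unix(sec, nsec), nil
    }

    func main() {
        // Example value taken from the log line above.
        guest, err := parseGuestClock("1726857506.270942557")
        if err != nil {
            panic(err)
        }
        delta := time.Since(guest)
        if delta < 0 {
            delta = -delta
        }
        const tolerance = time.Second // illustrative threshold only
        fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta <= tolerance)
    }
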
	I0920 18:38:26.290965   82261 start.go:83] releasing machines lock for "newest-cni-803958", held for 29.258716585s
	I0920 18:38:26.290995   82261 main.go:141] libmachine: (newest-cni-803958) Calling .DriverName
	I0920 18:38:26.291288   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetIP
	I0920 18:38:26.293955   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:38:26.294300   82261 main.go:141] libmachine: (newest-cni-803958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:f0:0f", ip: ""} in network mk-newest-cni-803958: {Iface:virbr1 ExpiryTime:2024-09-20 19:38:12 +0000 UTC Type:0 Mac:52:54:00:7e:f0:0f Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:newest-cni-803958 Clientid:01:52:54:00:7e:f0:0f}
	I0920 18:38:26.294329   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined IP address 192.168.39.85 and MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:38:26.294495   82261 main.go:141] libmachine: (newest-cni-803958) Calling .DriverName
	I0920 18:38:26.294975   82261 main.go:141] libmachine: (newest-cni-803958) Calling .DriverName
	I0920 18:38:26.295158   82261 main.go:141] libmachine: (newest-cni-803958) Calling .DriverName
	I0920 18:38:26.295227   82261 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 18:38:26.295292   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetSSHHostname
	I0920 18:38:26.295400   82261 ssh_runner.go:195] Run: cat /version.json
	I0920 18:38:26.295425   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetSSHHostname
	I0920 18:38:26.298391   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:38:26.298419   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:38:26.298790   82261 main.go:141] libmachine: (newest-cni-803958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:f0:0f", ip: ""} in network mk-newest-cni-803958: {Iface:virbr1 ExpiryTime:2024-09-20 19:38:12 +0000 UTC Type:0 Mac:52:54:00:7e:f0:0f Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:newest-cni-803958 Clientid:01:52:54:00:7e:f0:0f}
	I0920 18:38:26.298814   82261 main.go:141] libmachine: (newest-cni-803958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:f0:0f", ip: ""} in network mk-newest-cni-803958: {Iface:virbr1 ExpiryTime:2024-09-20 19:38:12 +0000 UTC Type:0 Mac:52:54:00:7e:f0:0f Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:newest-cni-803958 Clientid:01:52:54:00:7e:f0:0f}
	I0920 18:38:26.298835   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined IP address 192.168.39.85 and MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:38:26.298851   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined IP address 192.168.39.85 and MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:38:26.299021   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetSSHPort
	I0920 18:38:26.299156   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetSSHPort
	I0920 18:38:26.299225   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetSSHKeyPath
	I0920 18:38:26.299303   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetSSHKeyPath
	I0920 18:38:26.299377   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetSSHUsername
	I0920 18:38:26.299456   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetSSHUsername
	I0920 18:38:26.299567   82261 sshutil.go:53] new ssh client: &{IP:192.168.39.85 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/newest-cni-803958/id_rsa Username:docker}
	I0920 18:38:26.299675   82261 sshutil.go:53] new ssh client: &{IP:192.168.39.85 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/newest-cni-803958/id_rsa Username:docker}
	I0920 18:38:26.425051   82261 ssh_runner.go:195] Run: systemctl --version
	I0920 18:38:26.431233   82261 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 18:38:26.591767   82261 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 18:38:26.598196   82261 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 18:38:26.598287   82261 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 18:38:26.615629   82261 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 18:38:26.615658   82261 start.go:495] detecting cgroup driver to use...
	I0920 18:38:26.615734   82261 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 18:38:26.634598   82261 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 18:38:26.649412   82261 docker.go:217] disabling cri-docker service (if available) ...
	I0920 18:38:26.649504   82261 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 18:38:26.665910   82261 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 18:38:26.682068   82261 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 18:38:26.800891   82261 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 18:38:26.946298   82261 docker.go:233] disabling docker service ...
	I0920 18:38:26.946365   82261 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 18:38:26.961974   82261 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 18:38:26.976551   82261 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 18:38:27.119201   82261 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 18:38:27.239780   82261 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 18:38:27.255749   82261 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 18:38:27.278193   82261 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 18:38:27.278289   82261 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:38:27.288996   82261 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 18:38:27.289074   82261 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:38:27.299340   82261 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:38:27.309820   82261 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:38:27.320755   82261 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 18:38:27.331653   82261 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:38:27.343288   82261 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:38:27.361186   82261 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:38:27.372287   82261 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 18:38:27.382571   82261 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 18:38:27.382633   82261 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 18:38:27.395555   82261 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 18:38:27.405520   82261 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:38:27.526941   82261 ssh_runner.go:195] Run: sudo systemctl restart crio
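
Note: the sed invocations above rewrite CRI-O's drop-in config (/etc/crio/crio.conf.d/02-crio.conf) to pin the pause image and force the cgroupfs cgroup manager before crio is restarted. The helper below applies the same two substitutions with Go's regexp package; it mirrors what the log shows but is a sketch, not minikube's actual implementation.

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    // rewriteCrioConf performs the same edits as the sed commands in the log:
    // it pins the pause image and sets the cgroup manager in the drop-in file.
    func rewriteCrioConf(path, pauseImage, cgroupManager string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAll(data, []byte(fmt.Sprintf("pause_image = %q", pauseImage)))
        out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAll(out, []byte(fmt.Sprintf("cgroup_manager = %q", cgroupManager)))
        return os.WriteFile(path, out, 0o644)
    }

    func main() {
        // Values taken from the log above.
        err := rewriteCrioConf("/etc/crio/crio.conf.d/02-crio.conf",
            "registry.k8s.io/pause:3.10", "cgroupfs")
        if err != nil {
            fmt.Println(err)
        }
    }
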
	I0920 18:38:27.632461   82261 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 18:38:27.632559   82261 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 18:38:27.637880   82261 start.go:563] Will wait 60s for crictl version
	I0920 18:38:27.637944   82261 ssh_runner.go:195] Run: which crictl
	I0920 18:38:27.642074   82261 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 18:38:27.680753   82261 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 18:38:27.680834   82261 ssh_runner.go:195] Run: crio --version
	I0920 18:38:27.713153   82261 ssh_runner.go:195] Run: crio --version
	I0920 18:38:27.743920   82261 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 18:38:27.745056   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetIP
	I0920 18:38:27.748131   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:38:27.748539   82261 main.go:141] libmachine: (newest-cni-803958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:f0:0f", ip: ""} in network mk-newest-cni-803958: {Iface:virbr1 ExpiryTime:2024-09-20 19:38:12 +0000 UTC Type:0 Mac:52:54:00:7e:f0:0f Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:newest-cni-803958 Clientid:01:52:54:00:7e:f0:0f}
	I0920 18:38:27.748568   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined IP address 192.168.39.85 and MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:38:27.748766   82261 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0920 18:38:27.753220   82261 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 18:38:27.768091   82261 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0920 18:38:27.769379   82261 kubeadm.go:883] updating cluster {Name:newest-cni-803958 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-803958 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.85 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 18:38:27.769531   82261 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 18:38:27.769590   82261 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:38:27.802246   82261 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0920 18:38:27.802320   82261 ssh_runner.go:195] Run: which lz4
	I0920 18:38:27.806895   82261 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 18:38:27.811200   82261 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 18:38:27.811241   82261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0920 18:38:29.229504   82261 crio.go:462] duration metric: took 1.422636323s to copy over tarball
	I0920 18:38:29.229588   82261 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0920 18:38:31.428260   82261 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.198641757s)
	I0920 18:38:31.428292   82261 crio.go:469] duration metric: took 2.198756082s to extract the tarball
	I0920 18:38:31.428302   82261 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0920 18:38:31.465017   82261 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:38:31.516128   82261 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 18:38:31.516154   82261 cache_images.go:84] Images are preloaded, skipping loading
	I0920 18:38:31.516166   82261 kubeadm.go:934] updating node { 192.168.39.85 8443 v1.31.1 crio true true} ...
	I0920 18:38:31.516311   82261 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-803958 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.85
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:newest-cni-803958 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 18:38:31.516409   82261 ssh_runner.go:195] Run: crio config
	I0920 18:38:31.568776   82261 cni.go:84] Creating CNI manager for ""
	I0920 18:38:31.568800   82261 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 18:38:31.568809   82261 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0920 18:38:31.568837   82261 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.39.85 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-803958 NodeName:newest-cni-803958 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.85"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.85 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 18:38:31.568964   82261 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.85
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-803958"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.85
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.85"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 18:38:31.569019   82261 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 18:38:31.580521   82261 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 18:38:31.580597   82261 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 18:38:31.590482   82261 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (353 bytes)
	I0920 18:38:31.606765   82261 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 18:38:31.625149   82261 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2282 bytes)
	I0920 18:38:31.641805   82261 ssh_runner.go:195] Run: grep 192.168.39.85	control-plane.minikube.internal$ /etc/hosts
	I0920 18:38:31.645653   82261 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.85	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 18:38:31.659393   82261 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:38:31.796650   82261 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:38:31.817136   82261 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/newest-cni-803958 for IP: 192.168.39.85
	I0920 18:38:31.817158   82261 certs.go:194] generating shared ca certs ...
	I0920 18:38:31.817172   82261 certs.go:226] acquiring lock for ca certs: {Name:mkc7ef6c737c6bdc3fdd9dcff8f57029c020d8f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:38:31.817356   82261 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key
	I0920 18:38:31.817428   82261 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key
	I0920 18:38:31.817443   82261 certs.go:256] generating profile certs ...
	I0920 18:38:31.817512   82261 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/newest-cni-803958/client.key
	I0920 18:38:31.817531   82261 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/newest-cni-803958/client.crt with IP's: []
	I0920 18:38:32.099303   82261 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/newest-cni-803958/client.crt ...
	I0920 18:38:32.099330   82261 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/newest-cni-803958/client.crt: {Name:mkc0509632bac01b37ddfb2e5cb4a2d46207f579 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:38:32.099524   82261 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/newest-cni-803958/client.key ...
	I0920 18:38:32.099538   82261 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/newest-cni-803958/client.key: {Name:mk277425cc0210ea6909cc503c589492fa38e42c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:38:32.099647   82261 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/newest-cni-803958/apiserver.key.9173e10f
	I0920 18:38:32.099663   82261 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/newest-cni-803958/apiserver.crt.9173e10f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.85]
	I0920 18:38:32.381287   82261 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/newest-cni-803958/apiserver.crt.9173e10f ...
	I0920 18:38:32.381326   82261 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/newest-cni-803958/apiserver.crt.9173e10f: {Name:mk95e305edd410ad9b1b1c6dd16892eb1c7adab8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:38:32.381546   82261 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/newest-cni-803958/apiserver.key.9173e10f ...
	I0920 18:38:32.381571   82261 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/newest-cni-803958/apiserver.key.9173e10f: {Name:mk0c0d6ab59ef72540509294642a86bb4920f48d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:38:32.381668   82261 certs.go:381] copying /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/newest-cni-803958/apiserver.crt.9173e10f -> /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/newest-cni-803958/apiserver.crt
	I0920 18:38:32.381780   82261 certs.go:385] copying /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/newest-cni-803958/apiserver.key.9173e10f -> /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/newest-cni-803958/apiserver.key
	I0920 18:38:32.381909   82261 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/newest-cni-803958/proxy-client.key
	I0920 18:38:32.381954   82261 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/newest-cni-803958/proxy-client.crt with IP's: []
	I0920 18:38:32.504014   82261 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/newest-cni-803958/proxy-client.crt ...
	I0920 18:38:32.504054   82261 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/newest-cni-803958/proxy-client.crt: {Name:mkb07a5275c60df8501e1c65b053e29cf1c51d9d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:38:32.504242   82261 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/newest-cni-803958/proxy-client.key ...
	I0920 18:38:32.504255   82261 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/newest-cni-803958/proxy-client.key: {Name:mkbca65bcec315b403427144037935f7416c9282 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:38:32.504440   82261 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973.pem (1338 bytes)
	W0920 18:38:32.504488   82261 certs.go:480] ignoring /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973_empty.pem, impossibly tiny 0 bytes
	I0920 18:38:32.504500   82261 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 18:38:32.504526   82261 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem (1082 bytes)
	I0920 18:38:32.504553   82261 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem (1123 bytes)
	I0920 18:38:32.504578   82261 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem (1675 bytes)
	I0920 18:38:32.504620   82261 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem (1708 bytes)
	I0920 18:38:32.505171   82261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 18:38:32.531564   82261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 18:38:32.555630   82261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 18:38:32.580896   82261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0920 18:38:32.609306   82261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/newest-cni-803958/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0920 18:38:32.652518   82261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/newest-cni-803958/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0920 18:38:32.679239   82261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/newest-cni-803958/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 18:38:32.705161   82261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/newest-cni-803958/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 18:38:32.730458   82261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973.pem --> /usr/share/ca-certificates/15973.pem (1338 bytes)
	I0920 18:38:32.756733   82261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem --> /usr/share/ca-certificates/159732.pem (1708 bytes)
	I0920 18:38:32.786377   82261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 18:38:32.813410   82261 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 18:38:32.832093   82261 ssh_runner.go:195] Run: openssl version
	I0920 18:38:32.838727   82261 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/159732.pem && ln -fs /usr/share/ca-certificates/159732.pem /etc/ssl/certs/159732.pem"
	I0920 18:38:32.852905   82261 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/159732.pem
	I0920 18:38:32.857533   82261 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 17:04 /usr/share/ca-certificates/159732.pem
	I0920 18:38:32.857602   82261 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/159732.pem
	I0920 18:38:32.864185   82261 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/159732.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 18:38:32.875812   82261 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 18:38:32.888703   82261 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:38:32.893860   82261 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:38:32.893928   82261 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:38:32.900285   82261 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 18:38:32.912099   82261 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15973.pem && ln -fs /usr/share/ca-certificates/15973.pem /etc/ssl/certs/15973.pem"
	I0920 18:38:32.923386   82261 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15973.pem
	I0920 18:38:32.928227   82261 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 17:04 /usr/share/ca-certificates/15973.pem
	I0920 18:38:32.928302   82261 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15973.pem
	I0920 18:38:32.934528   82261 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15973.pem /etc/ssl/certs/51391683.0"
	I0920 18:38:32.946210   82261 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 18:38:32.950649   82261 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0920 18:38:32.950720   82261 kubeadm.go:392] StartCluster: {Name:newest-cni-803958 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-803958 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.85 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:38:32.950797   82261 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 18:38:32.950846   82261 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 18:38:32.993421   82261 cri.go:89] found id: ""
	I0920 18:38:32.993501   82261 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 18:38:33.004345   82261 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 18:38:33.016576   82261 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 18:38:33.027101   82261 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 18:38:33.027126   82261 kubeadm.go:157] found existing configuration files:
	
	I0920 18:38:33.027173   82261 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 18:38:33.037237   82261 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 18:38:33.037308   82261 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 18:38:33.047186   82261 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 18:38:33.057573   82261 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 18:38:33.057644   82261 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 18:38:33.069042   82261 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 18:38:33.080112   82261 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 18:38:33.080179   82261 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 18:38:33.091475   82261 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 18:38:33.101722   82261 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 18:38:33.101798   82261 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 18:38:33.112565   82261 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 18:38:33.219047   82261 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0920 18:38:33.219171   82261 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 18:38:33.350514   82261 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 18:38:33.350622   82261 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 18:38:33.350758   82261 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0920 18:38:33.361616   82261 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 18:38:33.613283   82261 out.go:235]   - Generating certificates and keys ...
	I0920 18:38:33.613501   82261 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 18:38:33.613620   82261 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 18:38:33.668035   82261 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0920 18:38:33.838213   82261 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0920 18:38:33.902918   82261 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0920 18:38:33.986075   82261 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0920 18:38:34.190710   82261 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0920 18:38:34.190888   82261 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-803958] and IPs [192.168.39.85 127.0.0.1 ::1]
	I0920 18:38:34.267495   82261 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0920 18:38:34.267692   82261 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-803958] and IPs [192.168.39.85 127.0.0.1 ::1]
	I0920 18:38:34.367171   82261 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0920 18:38:34.592865   82261 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0920 18:38:34.729073   82261 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0920 18:38:34.729205   82261 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 18:38:34.832379   82261 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 18:38:35.157102   82261 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0920 18:38:35.343426   82261 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 18:38:35.769077   82261 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 18:38:35.964992   82261 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 18:38:35.965665   82261 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 18:38:35.969677   82261 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 18:38:35.971610   82261 out.go:235]   - Booting up control plane ...
	I0920 18:38:35.971748   82261 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 18:38:35.972004   82261 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 18:38:35.973576   82261 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 18:38:35.999498   82261 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 18:38:36.010634   82261 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 18:38:36.010703   82261 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 18:38:36.184547   82261 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0920 18:38:36.184671   82261 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0920 18:38:36.686938   82261 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.273542ms
	I0920 18:38:36.687072   82261 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0920 18:38:42.187029   82261 kubeadm.go:310] [api-check] The API server is healthy after 5.503031979s
	I0920 18:38:42.210855   82261 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0920 18:38:42.224877   82261 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0920 18:38:42.259945   82261 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0920 18:38:42.260170   82261 kubeadm.go:310] [mark-control-plane] Marking the node newest-cni-803958 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0920 18:38:42.273856   82261 kubeadm.go:310] [bootstrap-token] Using token: vf6hsv.spzkpac5cjlkb3e5
	I0920 18:38:42.275392   82261 out.go:235]   - Configuring RBAC rules ...
	I0920 18:38:42.275533   82261 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0920 18:38:42.281503   82261 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0920 18:38:42.290232   82261 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0920 18:38:42.295249   82261 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0920 18:38:42.299469   82261 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0920 18:38:42.305908   82261 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0920 18:38:42.602609   82261 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0920 18:38:43.033547   82261 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0920 18:38:43.595683   82261 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0920 18:38:43.595712   82261 kubeadm.go:310] 
	I0920 18:38:43.595790   82261 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0920 18:38:43.595806   82261 kubeadm.go:310] 
	I0920 18:38:43.595971   82261 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0920 18:38:43.595996   82261 kubeadm.go:310] 
	I0920 18:38:43.596031   82261 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0920 18:38:43.596131   82261 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0920 18:38:43.596208   82261 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0920 18:38:43.596231   82261 kubeadm.go:310] 
	I0920 18:38:43.596322   82261 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0920 18:38:43.596336   82261 kubeadm.go:310] 
	I0920 18:38:43.596403   82261 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0920 18:38:43.596414   82261 kubeadm.go:310] 
	I0920 18:38:43.596480   82261 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0920 18:38:43.596580   82261 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0920 18:38:43.596682   82261 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0920 18:38:43.596693   82261 kubeadm.go:310] 
	I0920 18:38:43.596801   82261 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0920 18:38:43.596917   82261 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0920 18:38:43.596933   82261 kubeadm.go:310] 
	I0920 18:38:43.597018   82261 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token vf6hsv.spzkpac5cjlkb3e5 \
	I0920 18:38:43.597157   82261 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:736f3fb521855813decb4a7214f2b8beff5f81c3971b60decfa20a8807626a2d \
	I0920 18:38:43.597186   82261 kubeadm.go:310] 	--control-plane 
	I0920 18:38:43.597195   82261 kubeadm.go:310] 
	I0920 18:38:43.597297   82261 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0920 18:38:43.597307   82261 kubeadm.go:310] 
	I0920 18:38:43.597430   82261 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token vf6hsv.spzkpac5cjlkb3e5 \
	I0920 18:38:43.597570   82261 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:736f3fb521855813decb4a7214f2b8beff5f81c3971b60decfa20a8807626a2d 
	I0920 18:38:43.598694   82261 kubeadm.go:310] W0920 18:38:33.202523     828 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 18:38:43.599120   82261 kubeadm.go:310] W0920 18:38:33.203400     828 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 18:38:43.599268   82261 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 18:38:43.599300   82261 cni.go:84] Creating CNI manager for ""
	I0920 18:38:43.599313   82261 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 18:38:43.601421   82261 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 18:38:43.602633   82261 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 18:38:43.616071   82261 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0920 18:38:43.638171   82261 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 18:38:43.638255   82261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:38:43.638266   82261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-803958 minikube.k8s.io/updated_at=2024_09_20T18_38_43_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=0626f22cf0d915d75e291a5bce701f94395056e1 minikube.k8s.io/name=newest-cni-803958 minikube.k8s.io/primary=true
	I0920 18:38:43.657158   82261 ops.go:34] apiserver oom_adj: -16
	I0920 18:38:43.842417   82261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:38:44.343480   82261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:38:44.843428   82261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:38:45.343230   82261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:38:45.843402   82261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:38:46.342854   82261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:38:46.843062   82261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:38:47.343445   82261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:38:47.438286   82261 kubeadm.go:1113] duration metric: took 3.800098195s to wait for elevateKubeSystemPrivileges
	I0920 18:38:47.438327   82261 kubeadm.go:394] duration metric: took 14.487612758s to StartCluster
	I0920 18:38:47.438356   82261 settings.go:142] acquiring lock: {Name:mk2a13c58cdc0faf8cddca5d6716175d45db9bfd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:38:47.438444   82261 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19672-8777/kubeconfig
	I0920 18:38:47.440118   82261 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/kubeconfig: {Name:mkf32a4c736808e023459b2f0e40188618a38db1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:38:47.440369   82261 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.85 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 18:38:47.440392   82261 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0920 18:38:47.440482   82261 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-803958"
	I0920 18:38:47.440504   82261 addons.go:234] Setting addon storage-provisioner=true in "newest-cni-803958"
	I0920 18:38:47.440381   82261 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0920 18:38:47.440539   82261 addons.go:69] Setting default-storageclass=true in profile "newest-cni-803958"
	I0920 18:38:47.440572   82261 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-803958"
	I0920 18:38:47.440570   82261 config.go:182] Loaded profile config "newest-cni-803958": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:38:47.440535   82261 host.go:66] Checking if "newest-cni-803958" exists ...
	I0920 18:38:47.441060   82261 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:38:47.441064   82261 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:38:47.441095   82261 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:38:47.441122   82261 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:38:47.441975   82261 out.go:177] * Verifying Kubernetes components...
	I0920 18:38:47.443253   82261 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:38:47.457335   82261 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41323
	I0920 18:38:47.457893   82261 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:38:47.458514   82261 main.go:141] libmachine: Using API Version  1
	I0920 18:38:47.458542   82261 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:38:47.458550   82261 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35105
	I0920 18:38:47.458940   82261 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:38:47.459017   82261 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:38:47.459554   82261 main.go:141] libmachine: Using API Version  1
	I0920 18:38:47.459587   82261 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:38:47.459607   82261 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:38:47.459643   82261 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:38:47.460149   82261 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:38:47.460404   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetState
	I0920 18:38:47.464330   82261 addons.go:234] Setting addon default-storageclass=true in "newest-cni-803958"
	I0920 18:38:47.464374   82261 host.go:66] Checking if "newest-cni-803958" exists ...
	I0920 18:38:47.464696   82261 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:38:47.464721   82261 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:38:47.477333   82261 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46031
	I0920 18:38:47.477859   82261 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:38:47.478436   82261 main.go:141] libmachine: Using API Version  1
	I0920 18:38:47.478456   82261 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:38:47.478817   82261 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:38:47.479014   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetState
	I0920 18:38:47.481319   82261 main.go:141] libmachine: (newest-cni-803958) Calling .DriverName
	I0920 18:38:47.482619   82261 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46605
	I0920 18:38:47.483118   82261 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:38:47.483415   82261 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:38:47.483668   82261 main.go:141] libmachine: Using API Version  1
	I0920 18:38:47.483693   82261 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:38:47.484050   82261 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:38:47.484582   82261 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:38:47.484657   82261 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:38:47.484782   82261 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 18:38:47.484797   82261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 18:38:47.484813   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetSSHHostname
	I0920 18:38:47.488004   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:38:47.488482   82261 main.go:141] libmachine: (newest-cni-803958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:f0:0f", ip: ""} in network mk-newest-cni-803958: {Iface:virbr1 ExpiryTime:2024-09-20 19:38:12 +0000 UTC Type:0 Mac:52:54:00:7e:f0:0f Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:newest-cni-803958 Clientid:01:52:54:00:7e:f0:0f}
	I0920 18:38:47.488507   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined IP address 192.168.39.85 and MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:38:47.488787   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetSSHPort
	I0920 18:38:47.488939   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetSSHKeyPath
	I0920 18:38:47.489051   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetSSHUsername
	I0920 18:38:47.489202   82261 sshutil.go:53] new ssh client: &{IP:192.168.39.85 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/newest-cni-803958/id_rsa Username:docker}
	I0920 18:38:47.500799   82261 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45549
	I0920 18:38:47.501272   82261 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:38:47.501892   82261 main.go:141] libmachine: Using API Version  1
	I0920 18:38:47.501923   82261 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:38:47.502266   82261 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:38:47.502471   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetState
	I0920 18:38:47.504459   82261 main.go:141] libmachine: (newest-cni-803958) Calling .DriverName
	I0920 18:38:47.504647   82261 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 18:38:47.504666   82261 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 18:38:47.504688   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetSSHHostname
	I0920 18:38:47.507537   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:38:47.508112   82261 main.go:141] libmachine: (newest-cni-803958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:f0:0f", ip: ""} in network mk-newest-cni-803958: {Iface:virbr1 ExpiryTime:2024-09-20 19:38:12 +0000 UTC Type:0 Mac:52:54:00:7e:f0:0f Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:newest-cni-803958 Clientid:01:52:54:00:7e:f0:0f}
	I0920 18:38:47.508142   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined IP address 192.168.39.85 and MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:38:47.508387   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetSSHPort
	I0920 18:38:47.508564   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetSSHKeyPath
	I0920 18:38:47.508839   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetSSHUsername
	I0920 18:38:47.508979   82261 sshutil.go:53] new ssh client: &{IP:192.168.39.85 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/newest-cni-803958/id_rsa Username:docker}
	I0920 18:38:47.751437   82261 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:38:47.751485   82261 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0920 18:38:47.775023   82261 api_server.go:52] waiting for apiserver process to appear ...
	I0920 18:38:47.775093   82261 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:38:47.868303   82261 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 18:38:47.934010   82261 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 18:38:48.515373   82261 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0920 18:38:48.515409   82261 api_server.go:72] duration metric: took 1.075005663s to wait for apiserver process to appear ...
	I0920 18:38:48.515430   82261 api_server.go:88] waiting for apiserver healthz status ...
	I0920 18:38:48.515445   82261 main.go:141] libmachine: Making call to close driver server
	I0920 18:38:48.515467   82261 main.go:141] libmachine: (newest-cni-803958) Calling .Close
	I0920 18:38:48.515454   82261 api_server.go:253] Checking apiserver healthz at https://192.168.39.85:8443/healthz ...
	I0920 18:38:48.515791   82261 main.go:141] libmachine: (newest-cni-803958) DBG | Closing plugin on server side
	I0920 18:38:48.515838   82261 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:38:48.515852   82261 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:38:48.515895   82261 main.go:141] libmachine: Making call to close driver server
	I0920 18:38:48.515917   82261 main.go:141] libmachine: (newest-cni-803958) Calling .Close
	I0920 18:38:48.516357   82261 main.go:141] libmachine: (newest-cni-803958) DBG | Closing plugin on server side
	I0920 18:38:48.516374   82261 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:38:48.516382   82261 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:38:48.532833   82261 api_server.go:279] https://192.168.39.85:8443/healthz returned 200:
	ok
	I0920 18:38:48.534003   82261 api_server.go:141] control plane version: v1.31.1
	I0920 18:38:48.534023   82261 api_server.go:131] duration metric: took 18.586496ms to wait for apiserver health ...
	I0920 18:38:48.534031   82261 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 18:38:48.546913   82261 system_pods.go:59] 7 kube-system pods found
	I0920 18:38:48.546954   82261 system_pods.go:61] "coredns-7c65d6cfc9-bpk4q" [79d0a6cb-600b-4c81-90e6-7b8740d6fe1f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0920 18:38:48.546965   82261 system_pods.go:61] "coredns-7c65d6cfc9-d4r9j" [e78d81d6-b2f0-4ecf-97dd-e08c966c61cd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0920 18:38:48.546976   82261 system_pods.go:61] "etcd-newest-cni-803958" [165dda54-b846-4b0b-9d54-685b1beb16ce] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0920 18:38:48.546986   82261 system_pods.go:61] "kube-apiserver-newest-cni-803958" [03870f3a-2806-485c-9f48-e0a8dc6b1cd6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0920 18:38:48.546995   82261 system_pods.go:61] "kube-controller-manager-newest-cni-803958" [214b301a-af1d-4091-a7a2-8089e5802db8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0920 18:38:48.547005   82261 system_pods.go:61] "kube-proxy-8xcwl" [0fbde85c-c844-4c68-abf2-44ffe35e51c3] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0920 18:38:48.547014   82261 system_pods.go:61] "kube-scheduler-newest-cni-803958" [a9de0e76-911b-4db7-a681-5d51344791bf] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0920 18:38:48.547028   82261 system_pods.go:74] duration metric: took 12.988463ms to wait for pod list to return data ...
	I0920 18:38:48.547042   82261 default_sa.go:34] waiting for default service account to be created ...
	I0920 18:38:48.547328   82261 main.go:141] libmachine: Making call to close driver server
	I0920 18:38:48.547354   82261 main.go:141] libmachine: (newest-cni-803958) Calling .Close
	I0920 18:38:48.547672   82261 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:38:48.547690   82261 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:38:48.573987   82261 default_sa.go:45] found service account: "default"
	I0920 18:38:48.574021   82261 default_sa.go:55] duration metric: took 26.972911ms for default service account to be created ...
	I0920 18:38:48.574035   82261 kubeadm.go:582] duration metric: took 1.133637317s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0920 18:38:48.574060   82261 node_conditions.go:102] verifying NodePressure condition ...
	I0920 18:38:48.591376   82261 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 18:38:48.591412   82261 node_conditions.go:123] node cpu capacity is 2
	I0920 18:38:48.591425   82261 node_conditions.go:105] duration metric: took 17.361083ms to run NodePressure ...
	I0920 18:38:48.591446   82261 start.go:241] waiting for startup goroutines ...
	I0920 18:38:49.039307   82261 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-803958" context rescaled to 1 replicas
	I0920 18:38:49.053581   82261 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.119516542s)
	I0920 18:38:49.053636   82261 main.go:141] libmachine: Making call to close driver server
	I0920 18:38:49.053651   82261 main.go:141] libmachine: (newest-cni-803958) Calling .Close
	I0920 18:38:49.053953   82261 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:38:49.054003   82261 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:38:49.054016   82261 main.go:141] libmachine: Making call to close driver server
	I0920 18:38:49.054024   82261 main.go:141] libmachine: (newest-cni-803958) Calling .Close
	I0920 18:38:49.054259   82261 main.go:141] libmachine: (newest-cni-803958) DBG | Closing plugin on server side
	I0920 18:38:49.054295   82261 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:38:49.054328   82261 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:38:49.057218   82261 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0920 18:38:49.058706   82261 addons.go:510] duration metric: took 1.618303588s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0920 18:38:49.058772   82261 start.go:246] waiting for cluster config update ...
	I0920 18:38:49.058787   82261 start.go:255] writing updated cluster config ...
	I0920 18:38:49.059080   82261 ssh_runner.go:195] Run: rm -f paused
	I0920 18:38:49.157409   82261 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 18:38:49.159293   82261 out.go:177] * Done! kubectl is now configured to use "newest-cni-803958" cluster and "default" namespace by default
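The 18:38:48 entries above show the apiserver health wait: api_server.go polls https://192.168.39.85:8443/healthz until it returns 200 "ok" before moving on to the kube-system pod and default service-account checks. The Go snippet below is a minimal sketch of such a polling loop, assuming nothing beyond what the log shows; it is not minikube's actual implementation, the URL is copied from the log, and TLS verification is skipped purely for illustration.

    // healthzwait.go: hedged sketch of an apiserver /healthz wait loop.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        // Skip certificate verification only because this is an illustration;
        // a real client would trust the cluster CA instead.
        client := &http.Client{
            Timeout:   2 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        url := "https://192.168.39.85:8443/healthz" // endpoint taken from the log above
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
                    return
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("apiserver did not become healthy before the deadline")
    }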
	
	
	==> CRI-O <==
	Sep 20 18:38:59 embed-certs-768431 crio[711]: time="2024-09-20 18:38:59.151376994Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857539151353938,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=00906714-06e0-4b7f-8d3c-3863d1cdc1bd name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:38:59 embed-certs-768431 crio[711]: time="2024-09-20 18:38:59.152029989Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4cc2a39c-6ee4-47a8-8e7d-59a772bd8770 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:38:59 embed-certs-768431 crio[711]: time="2024-09-20 18:38:59.152097529Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4cc2a39c-6ee4-47a8-8e7d-59a772bd8770 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:38:59 embed-certs-768431 crio[711]: time="2024-09-20 18:38:59.152329669Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:94ae620077d52dd151fad67882e6e730211f6413dee3d748910740faa4183eac,PodSandboxId:eb56e696cc808b397bd7b0318a5de8e65c328a6f94d327f20bec50c7d414fffb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726856563038879412,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jkkdn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f64288e4-009c-4ba8-93e7-30ca5296af46,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79bcf7932ed8f05d3557ec43dc4b30eb901ae603c23ec95cdb26fcff309f8bf0,PodSandboxId:94fd14f7e592bbf392bd3d41fb6cc0d0712ba25d0f849314481ee6df7e221be6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726856562997339560,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-g5tkc,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 8877c0e8-6c8f-4a62-94bd-508982faee3c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7de5f69693ad1e4952f1f37d279b3885941bdfe611a578c407409e1f682c233d,PodSandboxId:c6470c7fa90ed125ade07fc6f5a938d9d40009cf95e18feca7f18fe38f1229fa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAI
NER_RUNNING,CreatedAt:1726856561945979039,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09227b45-ea2a-4fcc-b082-9978e5f00a4b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0b5138d088184ba891b7358eb30006ce43b5efaaf9d1bdd2683bffd77d6f7a3,PodSandboxId:6998435c3f51dc14a100845aeaf9dfa662b47774b2295de912617af08804ba63,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1726856561286972333,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c4527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e2d5102-0c42-4b87-8a27-dd53b8eb41f9,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95d89e4642aec6217d5404f63a2c94888e755e70ff067dca9c79dbd4d8ff401f,PodSandboxId:1da58d69dc8baa1a260a18189c4e6103d9c9c8511ede32eceea749c375d1f4b8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726856550114256246,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-768431,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dff06b5ea7279b1405d52ac7628b7439,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34a824c120f70ecac74b6f896fb9ee5b8c6212488ff5593479b43fd125e17797,PodSandboxId:d11b4cd4f6d8e14717fd212f4b2d15ca8acef5c22f1ac534c092b1b8de59fa91,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726856550115101066,Labels:map[
string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-768431,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d4e18090a0918ea03f4cdf2c5f86d9c,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2f83bd27b1b0afdbcf06db3ba261f996796b94d246b84e0c8a0155a8e789c0b,PodSandboxId:338d7a9191196f3ced0ba70ed6fd9e5039e63752cdeb27ece11219098ded4e8b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726856550084803387,Label
s:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-768431,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71c26280473a70f3d3118a2c74ae46b8,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f67d435a1e16fdbe13c5f48c2f8afb534654e4177178bb9d93b86881fc88d98d,PodSandboxId:436834be7dfc5eb64d8cb3c0af7fc2e5fdab7e70dd273d6dafabf6bf4cccd47e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726856550036607735,Labels:map[string]string{io
.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-768431,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bcaa81b6e93977e61634073ae28c68a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4a6e3230e7a6913391ee20bb77bdd367ee8233dfbe564b36049d0b4fdae6268,PodSandboxId:db2e24821ff52de88228e023487aa88fbdfd03452708024d27425c33cf87f9ab,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726856263910944947,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-768431,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71c26280473a70f3d3118a2c74ae46b8,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4cc2a39c-6ee4-47a8-8e7d-59a772bd8770 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:38:59 embed-certs-768431 crio[711]: time="2024-09-20 18:38:59.193078545Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1fa6b1f0-4b12-44a9-b3ac-d9ec7c04c008 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:38:59 embed-certs-768431 crio[711]: time="2024-09-20 18:38:59.193205116Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1fa6b1f0-4b12-44a9-b3ac-d9ec7c04c008 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:38:59 embed-certs-768431 crio[711]: time="2024-09-20 18:38:59.194035282Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8d2a91ce-780b-4349-887a-12d2932712c9 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:38:59 embed-certs-768431 crio[711]: time="2024-09-20 18:38:59.194748428Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857539194717279,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8d2a91ce-780b-4349-887a-12d2932712c9 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:38:59 embed-certs-768431 crio[711]: time="2024-09-20 18:38:59.195781209Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=621a951b-7f18-4e00-b68a-1550a618fb41 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:38:59 embed-certs-768431 crio[711]: time="2024-09-20 18:38:59.195892177Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=621a951b-7f18-4e00-b68a-1550a618fb41 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:38:59 embed-certs-768431 crio[711]: time="2024-09-20 18:38:59.196332025Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:94ae620077d52dd151fad67882e6e730211f6413dee3d748910740faa4183eac,PodSandboxId:eb56e696cc808b397bd7b0318a5de8e65c328a6f94d327f20bec50c7d414fffb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726856563038879412,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jkkdn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f64288e4-009c-4ba8-93e7-30ca5296af46,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79bcf7932ed8f05d3557ec43dc4b30eb901ae603c23ec95cdb26fcff309f8bf0,PodSandboxId:94fd14f7e592bbf392bd3d41fb6cc0d0712ba25d0f849314481ee6df7e221be6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726856562997339560,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-g5tkc,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 8877c0e8-6c8f-4a62-94bd-508982faee3c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7de5f69693ad1e4952f1f37d279b3885941bdfe611a578c407409e1f682c233d,PodSandboxId:c6470c7fa90ed125ade07fc6f5a938d9d40009cf95e18feca7f18fe38f1229fa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAI
NER_RUNNING,CreatedAt:1726856561945979039,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09227b45-ea2a-4fcc-b082-9978e5f00a4b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0b5138d088184ba891b7358eb30006ce43b5efaaf9d1bdd2683bffd77d6f7a3,PodSandboxId:6998435c3f51dc14a100845aeaf9dfa662b47774b2295de912617af08804ba63,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1726856561286972333,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c4527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e2d5102-0c42-4b87-8a27-dd53b8eb41f9,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95d89e4642aec6217d5404f63a2c94888e755e70ff067dca9c79dbd4d8ff401f,PodSandboxId:1da58d69dc8baa1a260a18189c4e6103d9c9c8511ede32eceea749c375d1f4b8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726856550114256246,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-768431,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dff06b5ea7279b1405d52ac7628b7439,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34a824c120f70ecac74b6f896fb9ee5b8c6212488ff5593479b43fd125e17797,PodSandboxId:d11b4cd4f6d8e14717fd212f4b2d15ca8acef5c22f1ac534c092b1b8de59fa91,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726856550115101066,Labels:map[
string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-768431,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d4e18090a0918ea03f4cdf2c5f86d9c,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2f83bd27b1b0afdbcf06db3ba261f996796b94d246b84e0c8a0155a8e789c0b,PodSandboxId:338d7a9191196f3ced0ba70ed6fd9e5039e63752cdeb27ece11219098ded4e8b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726856550084803387,Label
s:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-768431,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71c26280473a70f3d3118a2c74ae46b8,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f67d435a1e16fdbe13c5f48c2f8afb534654e4177178bb9d93b86881fc88d98d,PodSandboxId:436834be7dfc5eb64d8cb3c0af7fc2e5fdab7e70dd273d6dafabf6bf4cccd47e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726856550036607735,Labels:map[string]string{io
.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-768431,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bcaa81b6e93977e61634073ae28c68a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4a6e3230e7a6913391ee20bb77bdd367ee8233dfbe564b36049d0b4fdae6268,PodSandboxId:db2e24821ff52de88228e023487aa88fbdfd03452708024d27425c33cf87f9ab,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726856263910944947,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-768431,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71c26280473a70f3d3118a2c74ae46b8,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=621a951b-7f18-4e00-b68a-1550a618fb41 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:38:59 embed-certs-768431 crio[711]: time="2024-09-20 18:38:59.237117812Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=75de0abb-6238-43e4-b600-f5fdc5970ea1 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:38:59 embed-certs-768431 crio[711]: time="2024-09-20 18:38:59.237255580Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=75de0abb-6238-43e4-b600-f5fdc5970ea1 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:38:59 embed-certs-768431 crio[711]: time="2024-09-20 18:38:59.238506620Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=78b4829d-a851-4697-8792-490125b16df2 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:38:59 embed-certs-768431 crio[711]: time="2024-09-20 18:38:59.239112041Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857539239077388,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=78b4829d-a851-4697-8792-490125b16df2 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:38:59 embed-certs-768431 crio[711]: time="2024-09-20 18:38:59.239784076Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=176de30f-85db-46cb-8b26-7840ccfcfeb7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:38:59 embed-certs-768431 crio[711]: time="2024-09-20 18:38:59.239868599Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=176de30f-85db-46cb-8b26-7840ccfcfeb7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:38:59 embed-certs-768431 crio[711]: time="2024-09-20 18:38:59.240211811Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:94ae620077d52dd151fad67882e6e730211f6413dee3d748910740faa4183eac,PodSandboxId:eb56e696cc808b397bd7b0318a5de8e65c328a6f94d327f20bec50c7d414fffb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726856563038879412,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jkkdn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f64288e4-009c-4ba8-93e7-30ca5296af46,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79bcf7932ed8f05d3557ec43dc4b30eb901ae603c23ec95cdb26fcff309f8bf0,PodSandboxId:94fd14f7e592bbf392bd3d41fb6cc0d0712ba25d0f849314481ee6df7e221be6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726856562997339560,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-g5tkc,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 8877c0e8-6c8f-4a62-94bd-508982faee3c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7de5f69693ad1e4952f1f37d279b3885941bdfe611a578c407409e1f682c233d,PodSandboxId:c6470c7fa90ed125ade07fc6f5a938d9d40009cf95e18feca7f18fe38f1229fa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAI
NER_RUNNING,CreatedAt:1726856561945979039,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09227b45-ea2a-4fcc-b082-9978e5f00a4b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0b5138d088184ba891b7358eb30006ce43b5efaaf9d1bdd2683bffd77d6f7a3,PodSandboxId:6998435c3f51dc14a100845aeaf9dfa662b47774b2295de912617af08804ba63,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1726856561286972333,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c4527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e2d5102-0c42-4b87-8a27-dd53b8eb41f9,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95d89e4642aec6217d5404f63a2c94888e755e70ff067dca9c79dbd4d8ff401f,PodSandboxId:1da58d69dc8baa1a260a18189c4e6103d9c9c8511ede32eceea749c375d1f4b8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726856550114256246,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-768431,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dff06b5ea7279b1405d52ac7628b7439,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34a824c120f70ecac74b6f896fb9ee5b8c6212488ff5593479b43fd125e17797,PodSandboxId:d11b4cd4f6d8e14717fd212f4b2d15ca8acef5c22f1ac534c092b1b8de59fa91,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726856550115101066,Labels:map[
string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-768431,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d4e18090a0918ea03f4cdf2c5f86d9c,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2f83bd27b1b0afdbcf06db3ba261f996796b94d246b84e0c8a0155a8e789c0b,PodSandboxId:338d7a9191196f3ced0ba70ed6fd9e5039e63752cdeb27ece11219098ded4e8b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726856550084803387,Label
s:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-768431,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71c26280473a70f3d3118a2c74ae46b8,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f67d435a1e16fdbe13c5f48c2f8afb534654e4177178bb9d93b86881fc88d98d,PodSandboxId:436834be7dfc5eb64d8cb3c0af7fc2e5fdab7e70dd273d6dafabf6bf4cccd47e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726856550036607735,Labels:map[string]string{io
.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-768431,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bcaa81b6e93977e61634073ae28c68a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4a6e3230e7a6913391ee20bb77bdd367ee8233dfbe564b36049d0b4fdae6268,PodSandboxId:db2e24821ff52de88228e023487aa88fbdfd03452708024d27425c33cf87f9ab,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726856263910944947,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-768431,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71c26280473a70f3d3118a2c74ae46b8,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=176de30f-85db-46cb-8b26-7840ccfcfeb7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:38:59 embed-certs-768431 crio[711]: time="2024-09-20 18:38:59.277904890Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7abd77b8-1ef2-4e59-be29-fa6cf38882a8 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:38:59 embed-certs-768431 crio[711]: time="2024-09-20 18:38:59.277991339Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7abd77b8-1ef2-4e59-be29-fa6cf38882a8 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:38:59 embed-certs-768431 crio[711]: time="2024-09-20 18:38:59.280115310Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=102dea2e-2e9d-44fd-ae2c-1b5ae7509b68 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:38:59 embed-certs-768431 crio[711]: time="2024-09-20 18:38:59.280588660Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857539280562939,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=102dea2e-2e9d-44fd-ae2c-1b5ae7509b68 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:38:59 embed-certs-768431 crio[711]: time="2024-09-20 18:38:59.281298339Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b851623e-d809-45d7-8d90-d08617fe7e66 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:38:59 embed-certs-768431 crio[711]: time="2024-09-20 18:38:59.281447427Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b851623e-d809-45d7-8d90-d08617fe7e66 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:38:59 embed-certs-768431 crio[711]: time="2024-09-20 18:38:59.281948894Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:94ae620077d52dd151fad67882e6e730211f6413dee3d748910740faa4183eac,PodSandboxId:eb56e696cc808b397bd7b0318a5de8e65c328a6f94d327f20bec50c7d414fffb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726856563038879412,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jkkdn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f64288e4-009c-4ba8-93e7-30ca5296af46,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79bcf7932ed8f05d3557ec43dc4b30eb901ae603c23ec95cdb26fcff309f8bf0,PodSandboxId:94fd14f7e592bbf392bd3d41fb6cc0d0712ba25d0f849314481ee6df7e221be6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726856562997339560,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-g5tkc,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 8877c0e8-6c8f-4a62-94bd-508982faee3c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7de5f69693ad1e4952f1f37d279b3885941bdfe611a578c407409e1f682c233d,PodSandboxId:c6470c7fa90ed125ade07fc6f5a938d9d40009cf95e18feca7f18fe38f1229fa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAI
NER_RUNNING,CreatedAt:1726856561945979039,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09227b45-ea2a-4fcc-b082-9978e5f00a4b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0b5138d088184ba891b7358eb30006ce43b5efaaf9d1bdd2683bffd77d6f7a3,PodSandboxId:6998435c3f51dc14a100845aeaf9dfa662b47774b2295de912617af08804ba63,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1726856561286972333,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c4527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e2d5102-0c42-4b87-8a27-dd53b8eb41f9,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95d89e4642aec6217d5404f63a2c94888e755e70ff067dca9c79dbd4d8ff401f,PodSandboxId:1da58d69dc8baa1a260a18189c4e6103d9c9c8511ede32eceea749c375d1f4b8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726856550114256246,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-768431,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dff06b5ea7279b1405d52ac7628b7439,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34a824c120f70ecac74b6f896fb9ee5b8c6212488ff5593479b43fd125e17797,PodSandboxId:d11b4cd4f6d8e14717fd212f4b2d15ca8acef5c22f1ac534c092b1b8de59fa91,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726856550115101066,Labels:map[
string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-768431,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d4e18090a0918ea03f4cdf2c5f86d9c,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2f83bd27b1b0afdbcf06db3ba261f996796b94d246b84e0c8a0155a8e789c0b,PodSandboxId:338d7a9191196f3ced0ba70ed6fd9e5039e63752cdeb27ece11219098ded4e8b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726856550084803387,Label
s:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-768431,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71c26280473a70f3d3118a2c74ae46b8,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f67d435a1e16fdbe13c5f48c2f8afb534654e4177178bb9d93b86881fc88d98d,PodSandboxId:436834be7dfc5eb64d8cb3c0af7fc2e5fdab7e70dd273d6dafabf6bf4cccd47e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726856550036607735,Labels:map[string]string{io
.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-768431,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bcaa81b6e93977e61634073ae28c68a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4a6e3230e7a6913391ee20bb77bdd367ee8233dfbe564b36049d0b4fdae6268,PodSandboxId:db2e24821ff52de88228e023487aa88fbdfd03452708024d27425c33cf87f9ab,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726856263910944947,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-768431,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71c26280473a70f3d3118a2c74ae46b8,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b851623e-d809-45d7-8d90-d08617fe7e66 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	94ae620077d52       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   16 minutes ago      Running             coredns                   0                   eb56e696cc808       coredns-7c65d6cfc9-jkkdn
	79bcf7932ed8f       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   16 minutes ago      Running             coredns                   0                   94fd14f7e592b       coredns-7c65d6cfc9-g5tkc
	7de5f69693ad1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 minutes ago      Running             storage-provisioner       0                   c6470c7fa90ed       storage-provisioner
	f0b5138d08818       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   16 minutes ago      Running             kube-proxy                0                   6998435c3f51d       kube-proxy-c4527
	34a824c120f70       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   16 minutes ago      Running             kube-controller-manager   2                   d11b4cd4f6d8e       kube-controller-manager-embed-certs-768431
	95d89e4642aec       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   16 minutes ago      Running             kube-scheduler            2                   1da58d69dc8ba       kube-scheduler-embed-certs-768431
	d2f83bd27b1b0       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   16 minutes ago      Running             kube-apiserver            2                   338d7a9191196       kube-apiserver-embed-certs-768431
	f67d435a1e16f       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   16 minutes ago      Running             etcd                      2                   436834be7dfc5       etcd-embed-certs-768431
	d4a6e3230e7a6       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   21 minutes ago      Exited              kube-apiserver            1                   db2e24821ff52       kube-apiserver-embed-certs-768431
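The table above summarizes the same data that the CRI-O debug log reports for each /runtime.v1.RuntimeService/ListContainers RPC. As a rough illustration only, the following Go sketch issues that RPC directly against the CRI-O socket named in the node annotations (unix:///var/run/crio/crio.sock); it is an assumption-laden example, not part of the test suite, and uses the k8s.io/cri-api client the same way a generic CRI consumer would.

    // crilist.go: hedged sketch of a ListContainers call against CRI-O.
    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // Socket path from the node's kubeadm cri-socket annotation above.
        conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        client := runtimeapi.NewRuntimeServiceClient(conn)
        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        // Same RPC as the ListContainers requests in the CRI-O log; an empty
        // filter returns the full container list.
        resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
        if err != nil {
            panic(err)
        }
        for _, c := range resp.Containers {
            // Truncate the 64-character ID to match the table's short IDs.
            fmt.Printf("%s  %-25s  %v  attempt=%d\n", c.Id[:13], c.Metadata.Name, c.State, c.Metadata.Attempt)
        }
    }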
	
	
	==> coredns [79bcf7932ed8f05d3557ec43dc4b30eb901ae603c23ec95cdb26fcff309f8bf0] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [94ae620077d52dd151fad67882e6e730211f6413dee3d748910740faa4183eac] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               embed-certs-768431
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-768431
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0626f22cf0d915d75e291a5bce701f94395056e1
	                    minikube.k8s.io/name=embed-certs-768431
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_20T18_22_36_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 18:22:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-768431
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 18:38:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 18:38:04 +0000   Fri, 20 Sep 2024 18:22:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 18:38:04 +0000   Fri, 20 Sep 2024 18:22:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 18:38:04 +0000   Fri, 20 Sep 2024 18:22:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 18:38:04 +0000   Fri, 20 Sep 2024 18:22:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.202
	  Hostname:    embed-certs-768431
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 2cf4f703da584eb5a439ae2a45e7a9e9
	  System UUID:                2cf4f703-da58-4eb5-a439-ae2a45e7a9e9
	  Boot ID:                    e3ce8ed5-feb9-44fd-a7f0-77f81b6c7830
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-g5tkc                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-7c65d6cfc9-jkkdn                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-embed-certs-768431                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kube-apiserver-embed-certs-768431             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-embed-certs-768431    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-c4527                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-embed-certs-768431             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 metrics-server-6867b74b74-9snmf               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         16m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 16m                kube-proxy       
	  Normal  Starting                 16m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  16m (x8 over 16m)  kubelet          Node embed-certs-768431 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m (x8 over 16m)  kubelet          Node embed-certs-768431 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m (x7 over 16m)  kubelet          Node embed-certs-768431 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 16m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  16m                kubelet          Node embed-certs-768431 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m                kubelet          Node embed-certs-768431 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m                kubelet          Node embed-certs-768431 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           16m                node-controller  Node embed-certs-768431 event: Registered Node embed-certs-768431 in Controller
	
	
	==> dmesg <==
	[  +0.053776] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036914] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.023671] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.943145] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.537384] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.243094] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +0.061102] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058421] systemd-fstab-generator[645]: Ignoring "noauto" option for root device
	[  +0.201206] systemd-fstab-generator[659]: Ignoring "noauto" option for root device
	[  +0.129814] systemd-fstab-generator[671]: Ignoring "noauto" option for root device
	[  +0.289179] systemd-fstab-generator[701]: Ignoring "noauto" option for root device
	[  +4.224186] systemd-fstab-generator[794]: Ignoring "noauto" option for root device
	[  +2.382925] systemd-fstab-generator[913]: Ignoring "noauto" option for root device
	[  +0.070571] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.052501] kauditd_printk_skb: 92 callbacks suppressed
	[  +6.312397] kauditd_printk_skb: 62 callbacks suppressed
	[Sep20 18:22] kauditd_printk_skb: 6 callbacks suppressed
	[  +1.026058] systemd-fstab-generator[2566]: Ignoring "noauto" option for root device
	[  +4.418753] kauditd_printk_skb: 54 callbacks suppressed
	[  +2.136478] systemd-fstab-generator[2890]: Ignoring "noauto" option for root device
	[  +4.889484] systemd-fstab-generator[3017]: Ignoring "noauto" option for root device
	[  +0.114917] kauditd_printk_skb: 14 callbacks suppressed
	[  +9.211258] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [f67d435a1e16fdbe13c5f48c2f8afb534654e4177178bb9d93b86881fc88d98d] <==
	{"level":"info","ts":"2024-09-20T18:22:30.663375Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"db33251a0b9c6fb3 became leader at term 2"}
	{"level":"info","ts":"2024-09-20T18:22:30.663382Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: db33251a0b9c6fb3 elected leader db33251a0b9c6fb3 at term 2"}
	{"level":"info","ts":"2024-09-20T18:22:30.667635Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T18:22:30.670603Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"db33251a0b9c6fb3","local-member-attributes":"{Name:embed-certs-768431 ClientURLs:[https://192.168.61.202:2379]}","request-path":"/0/members/db33251a0b9c6fb3/attributes","cluster-id":"834577a0a9e3ba88","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-20T18:22:30.670768Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T18:22:30.671482Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T18:22:30.671599Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"834577a0a9e3ba88","local-member-id":"db33251a0b9c6fb3","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T18:22:30.671766Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T18:22:30.671812Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T18:22:30.674217Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-20T18:22:30.674248Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-20T18:22:30.674582Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T18:22:30.677957Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.202:2379"}
	{"level":"info","ts":"2024-09-20T18:22:30.674798Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T18:22:30.680982Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-20T18:32:31.661006Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":722}
	{"level":"info","ts":"2024-09-20T18:32:31.670881Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":722,"took":"9.261811ms","hash":3014203659,"current-db-size-bytes":2289664,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":2289664,"current-db-size-in-use":"2.3 MB"}
	{"level":"info","ts":"2024-09-20T18:32:31.671001Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3014203659,"revision":722,"compact-revision":-1}
	{"level":"info","ts":"2024-09-20T18:37:31.669345Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":966}
	{"level":"info","ts":"2024-09-20T18:37:31.673937Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":966,"took":"4.187763ms","hash":1622958906,"current-db-size-bytes":2289664,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":1630208,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-09-20T18:37:31.674001Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1622958906,"revision":966,"compact-revision":722}
	{"level":"warn","ts":"2024-09-20T18:38:32.940544Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"180.874582ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ingressclasses/\" range_end:\"/registry/ingressclasses0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T18:38:32.940941Z","caller":"traceutil/trace.go:171","msg":"trace[2037467888] range","detail":"{range_begin:/registry/ingressclasses/; range_end:/registry/ingressclasses0; response_count:0; response_revision:1260; }","duration":"181.372846ms","start":"2024-09-20T18:38:32.759533Z","end":"2024-09-20T18:38:32.940906Z","steps":["trace[2037467888] 'count revisions from in-memory index tree'  (duration: 180.809471ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T18:38:33.711138Z","caller":"traceutil/trace.go:171","msg":"trace[89774927] transaction","detail":"{read_only:false; response_revision:1261; number_of_response:1; }","duration":"470.815836ms","start":"2024-09-20T18:38:33.240237Z","end":"2024-09-20T18:38:33.711053Z","steps":["trace[89774927] 'process raft request'  (duration: 470.427845ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T18:38:33.712128Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-20T18:38:33.240214Z","time spent":"471.172691ms","remote":"127.0.0.1:36056","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1103,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1259 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1030 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	
	
	==> kernel <==
	 18:38:59 up 21 min,  0 users,  load average: 0.15, 0.11, 0.10
	Linux embed-certs-768431 5.10.207 #1 SMP Fri Sep 20 03:13:51 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [d2f83bd27b1b0afdbcf06db3ba261f996796b94d246b84e0c8a0155a8e789c0b] <==
	 > logger="UnhandledError"
	I0920 18:35:34.044981       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0920 18:37:33.042311       1 handler_proxy.go:99] no RequestInfo found in the context
	E0920 18:37:33.042527       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0920 18:37:34.044354       1 handler_proxy.go:99] no RequestInfo found in the context
	W0920 18:37:34.044484       1 handler_proxy.go:99] no RequestInfo found in the context
	E0920 18:37:34.044676       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0920 18:37:34.044791       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0920 18:37:34.045862       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0920 18:37:34.045944       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0920 18:38:34.046989       1 handler_proxy.go:99] no RequestInfo found in the context
	W0920 18:38:34.047352       1 handler_proxy.go:99] no RequestInfo found in the context
	E0920 18:38:34.047433       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0920 18:38:34.047447       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0920 18:38:34.048608       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0920 18:38:34.048676       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [d4a6e3230e7a6913391ee20bb77bdd367ee8233dfbe564b36049d0b4fdae6268] <==
	W0920 18:22:23.994436       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:22:24.037644       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:22:24.045245       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:22:24.052931       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:22:24.069361       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:22:24.110357       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:22:24.113224       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:22:24.126889       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:22:24.162604       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:22:24.192594       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:22:24.268655       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:22:24.281644       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:22:24.303566       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:22:24.322714       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:22:24.353294       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:22:24.365938       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:22:24.385661       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:22:24.403391       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:22:24.403394       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:22:24.496921       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:22:24.663767       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:22:24.846566       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:22:24.991989       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:22:25.117547       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:22:25.207107       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [34a824c120f70ecac74b6f896fb9ee5b8c6212488ff5593479b43fd125e17797] <==
	I0920 18:33:40.645542       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0920 18:33:48.929253       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="192.492µs"
	E0920 18:34:10.168748       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 18:34:10.661312       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 18:34:40.177729       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 18:34:40.670340       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 18:35:10.183948       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 18:35:10.679592       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 18:35:40.193694       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 18:35:40.687573       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 18:36:10.200247       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 18:36:10.701385       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 18:36:40.208058       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 18:36:40.709852       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 18:37:10.215031       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 18:37:10.718464       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 18:37:40.221633       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 18:37:40.726338       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0920 18:38:04.843733       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-768431"
	E0920 18:38:10.228411       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 18:38:10.743262       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 18:38:40.236102       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 18:38:40.753114       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0920 18:38:40.934695       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="320.492µs"
	I0920 18:38:51.938365       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="107.745µs"
	
	
	==> kube-proxy [f0b5138d088184ba891b7358eb30006ce43b5efaaf9d1bdd2683bffd77d6f7a3] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0920 18:22:41.618782       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0920 18:22:41.632584       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.202"]
	E0920 18:22:41.632673       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 18:22:41.707326       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0920 18:22:41.707379       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0920 18:22:41.707406       1 server_linux.go:169] "Using iptables Proxier"
	I0920 18:22:41.756034       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 18:22:41.766034       1 server.go:483] "Version info" version="v1.31.1"
	I0920 18:22:41.766067       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 18:22:41.768942       1 config.go:199] "Starting service config controller"
	I0920 18:22:41.769038       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 18:22:41.769118       1 config.go:105] "Starting endpoint slice config controller"
	I0920 18:22:41.769135       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 18:22:41.771399       1 config.go:328] "Starting node config controller"
	I0920 18:22:41.771801       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 18:22:41.869610       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0920 18:22:41.869698       1 shared_informer.go:320] Caches are synced for service config
	I0920 18:22:41.873272       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [95d89e4642aec6217d5404f63a2c94888e755e70ff067dca9c79dbd4d8ff401f] <==
	W0920 18:22:34.030430       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0920 18:22:34.030488       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 18:22:34.140465       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0920 18:22:34.140515       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 18:22:34.194928       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0920 18:22:34.195227       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 18:22:34.229504       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0920 18:22:34.230300       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 18:22:34.258599       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0920 18:22:34.258703       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0920 18:22:34.360720       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0920 18:22:34.360808       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 18:22:34.400548       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0920 18:22:34.400594       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 18:22:34.433905       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0920 18:22:34.433956       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 18:22:34.434011       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0920 18:22:34.434021       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 18:22:34.434038       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0920 18:22:34.434049       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 18:22:34.465961       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0920 18:22:34.466868       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0920 18:22:34.498968       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0920 18:22:34.499017       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0920 18:22:37.282805       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 20 18:37:59 embed-certs-768431 kubelet[2897]: E0920 18:37:59.911585    2897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-9snmf" podUID="5fb654f5-5e73-436e-bc9d-04ef5077deb4"
	Sep 20 18:38:06 embed-certs-768431 kubelet[2897]: E0920 18:38:06.177007    2897 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857486176779547,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:38:06 embed-certs-768431 kubelet[2897]: E0920 18:38:06.177048    2897 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857486176779547,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:38:13 embed-certs-768431 kubelet[2897]: E0920 18:38:13.911728    2897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-9snmf" podUID="5fb654f5-5e73-436e-bc9d-04ef5077deb4"
	Sep 20 18:38:16 embed-certs-768431 kubelet[2897]: E0920 18:38:16.181694    2897 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857496181252901,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:38:16 embed-certs-768431 kubelet[2897]: E0920 18:38:16.181781    2897 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857496181252901,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:38:25 embed-certs-768431 kubelet[2897]: E0920 18:38:25.925774    2897 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Sep 20 18:38:25 embed-certs-768431 kubelet[2897]: E0920 18:38:25.925863    2897 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Sep 20 18:38:25 embed-certs-768431 kubelet[2897]: E0920 18:38:25.926084    2897 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-djxzr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation
:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Std
in:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-6867b74b74-9snmf_kube-system(5fb654f5-5e73-436e-bc9d-04ef5077deb4): ErrImagePull: pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" logger="UnhandledError"
	Sep 20 18:38:25 embed-certs-768431 kubelet[2897]: E0920 18:38:25.927679    2897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-6867b74b74-9snmf" podUID="5fb654f5-5e73-436e-bc9d-04ef5077deb4"
	Sep 20 18:38:26 embed-certs-768431 kubelet[2897]: E0920 18:38:26.184531    2897 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857506183833438,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:38:26 embed-certs-768431 kubelet[2897]: E0920 18:38:26.184613    2897 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857506183833438,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:38:35 embed-certs-768431 kubelet[2897]: E0920 18:38:35.952103    2897 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 20 18:38:35 embed-certs-768431 kubelet[2897]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 20 18:38:35 embed-certs-768431 kubelet[2897]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 20 18:38:35 embed-certs-768431 kubelet[2897]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 20 18:38:35 embed-certs-768431 kubelet[2897]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 20 18:38:36 embed-certs-768431 kubelet[2897]: E0920 18:38:36.187340    2897 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857516186860704,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:38:36 embed-certs-768431 kubelet[2897]: E0920 18:38:36.187383    2897 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857516186860704,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:38:40 embed-certs-768431 kubelet[2897]: E0920 18:38:40.912098    2897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-9snmf" podUID="5fb654f5-5e73-436e-bc9d-04ef5077deb4"
	Sep 20 18:38:46 embed-certs-768431 kubelet[2897]: E0920 18:38:46.189200    2897 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857526188739480,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:38:46 embed-certs-768431 kubelet[2897]: E0920 18:38:46.189625    2897 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857526188739480,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:38:51 embed-certs-768431 kubelet[2897]: E0920 18:38:51.911470    2897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-9snmf" podUID="5fb654f5-5e73-436e-bc9d-04ef5077deb4"
	Sep 20 18:38:56 embed-certs-768431 kubelet[2897]: E0920 18:38:56.192357    2897 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857536191804603,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:38:56 embed-certs-768431 kubelet[2897]: E0920 18:38:56.192687    2897 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857536191804603,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [7de5f69693ad1e4952f1f37d279b3885941bdfe611a578c407409e1f682c233d] <==
	I0920 18:22:42.070964       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0920 18:22:42.091225       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0920 18:22:42.091432       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0920 18:22:42.112357       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0920 18:22:42.113365       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"495a6000-21f4-4e58-bb3e-d8c4065c9026", APIVersion:"v1", ResourceVersion:"425", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-768431_34d83d57-5df8-4e34-b342-b5bf32fafb69 became leader
	I0920 18:22:42.121048       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-768431_34d83d57-5df8-4e34-b342-b5bf32fafb69!
	I0920 18:22:42.221966       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-768431_34d83d57-5df8-4e34-b342-b5bf32fafb69!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-768431 -n embed-certs-768431
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-768431 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-9snmf
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-768431 describe pod metrics-server-6867b74b74-9snmf
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-768431 describe pod metrics-server-6867b74b74-9snmf: exit status 1 (117.630231ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-9snmf" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-768431 describe pod metrics-server-6867b74b74-9snmf: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (426.45s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (371.98s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-956403 -n no-preload-956403
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-09-20 18:38:35.100948504 +0000 UTC m=+6896.745030057
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-956403 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-956403 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.943µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-956403 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
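For reference, a manual way to reproduce the image check above (illustrative only, not part of the test output; assumes the no-preload-956403 context is still reachable and that the dashboard-metrics-scraper deployment exists) is to query the deployment's container images directly:

	kubectl --context no-preload-956403 -n kubernetes-dashboard get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'

The test asserts that this image list contains registry.k8s.io/echoserver:1.4.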
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-956403 -n no-preload-956403
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-956403 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-956403 logs -n 25: (1.27712677s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p flannel-833505 sudo                                 | flannel-833505               | jenkins | v1.34.0 | 20 Sep 24 18:09 UTC | 20 Sep 24 18:09 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p flannel-833505 sudo                                 | flannel-833505               | jenkins | v1.34.0 | 20 Sep 24 18:09 UTC | 20 Sep 24 18:09 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p flannel-833505 sudo find                            | flannel-833505               | jenkins | v1.34.0 | 20 Sep 24 18:09 UTC | 20 Sep 24 18:09 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p flannel-833505 sudo crio                            | flannel-833505               | jenkins | v1.34.0 | 20 Sep 24 18:09 UTC | 20 Sep 24 18:09 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p flannel-833505                                      | flannel-833505               | jenkins | v1.34.0 | 20 Sep 24 18:09 UTC | 20 Sep 24 18:09 UTC |
	| delete  | -p                                                     | disable-driver-mounts-739804 | jenkins | v1.34.0 | 20 Sep 24 18:09 UTC | 20 Sep 24 18:09 UTC |
	|         | disable-driver-mounts-739804                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-553719 | jenkins | v1.34.0 | 20 Sep 24 18:09 UTC | 20 Sep 24 18:10 UTC |
	|         | default-k8s-diff-port-553719                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-956403             | no-preload-956403            | jenkins | v1.34.0 | 20 Sep 24 18:10 UTC | 20 Sep 24 18:10 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-956403                                   | no-preload-956403            | jenkins | v1.34.0 | 20 Sep 24 18:10 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-768431            | embed-certs-768431           | jenkins | v1.34.0 | 20 Sep 24 18:10 UTC | 20 Sep 24 18:10 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-768431                                  | embed-certs-768431           | jenkins | v1.34.0 | 20 Sep 24 18:10 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-553719  | default-k8s-diff-port-553719 | jenkins | v1.34.0 | 20 Sep 24 18:11 UTC | 20 Sep 24 18:11 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-553719 | jenkins | v1.34.0 | 20 Sep 24 18:11 UTC |                     |
	|         | default-k8s-diff-port-553719                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-956403                  | no-preload-956403            | jenkins | v1.34.0 | 20 Sep 24 18:12 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-744025        | old-k8s-version-744025       | jenkins | v1.34.0 | 20 Sep 24 18:12 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p no-preload-956403                                   | no-preload-956403            | jenkins | v1.34.0 | 20 Sep 24 18:12 UTC | 20 Sep 24 18:23 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-768431                 | embed-certs-768431           | jenkins | v1.34.0 | 20 Sep 24 18:13 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-768431                                  | embed-certs-768431           | jenkins | v1.34.0 | 20 Sep 24 18:13 UTC | 20 Sep 24 18:22 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-553719       | default-k8s-diff-port-553719 | jenkins | v1.34.0 | 20 Sep 24 18:13 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-553719 | jenkins | v1.34.0 | 20 Sep 24 18:13 UTC | 20 Sep 24 18:22 UTC |
	|         | default-k8s-diff-port-553719                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-744025                              | old-k8s-version-744025       | jenkins | v1.34.0 | 20 Sep 24 18:14 UTC | 20 Sep 24 18:14 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-744025             | old-k8s-version-744025       | jenkins | v1.34.0 | 20 Sep 24 18:14 UTC | 20 Sep 24 18:14 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-744025                              | old-k8s-version-744025       | jenkins | v1.34.0 | 20 Sep 24 18:14 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-744025                              | old-k8s-version-744025       | jenkins | v1.34.0 | 20 Sep 24 18:37 UTC | 20 Sep 24 18:37 UTC |
	| start   | -p newest-cni-803958 --memory=2200 --alsologtostderr   | newest-cni-803958            | jenkins | v1.34.0 | 20 Sep 24 18:37 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 18:37:56
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 18:37:56.951782   82261 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:37:56.951923   82261 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:37:56.951934   82261 out.go:358] Setting ErrFile to fd 2...
	I0920 18:37:56.951940   82261 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:37:56.952133   82261 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-8777/.minikube/bin
	I0920 18:37:56.952754   82261 out.go:352] Setting JSON to false
	I0920 18:37:56.953897   82261 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":8420,"bootTime":1726849057,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 18:37:56.954001   82261 start.go:139] virtualization: kvm guest
	I0920 18:37:56.956508   82261 out.go:177] * [newest-cni-803958] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 18:37:56.958117   82261 notify.go:220] Checking for updates...
	I0920 18:37:56.958122   82261 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 18:37:56.960103   82261 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 18:37:56.961699   82261 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19672-8777/kubeconfig
	I0920 18:37:56.962987   82261 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-8777/.minikube
	I0920 18:37:56.964528   82261 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 18:37:56.965966   82261 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 18:37:56.968246   82261 config.go:182] Loaded profile config "default-k8s-diff-port-553719": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:37:56.968346   82261 config.go:182] Loaded profile config "embed-certs-768431": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:37:56.968460   82261 config.go:182] Loaded profile config "no-preload-956403": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:37:56.968576   82261 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 18:37:57.007772   82261 out.go:177] * Using the kvm2 driver based on user configuration
	I0920 18:37:57.009026   82261 start.go:297] selected driver: kvm2
	I0920 18:37:57.009042   82261 start.go:901] validating driver "kvm2" against <nil>
	I0920 18:37:57.009054   82261 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 18:37:57.009784   82261 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 18:37:57.009900   82261 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19672-8777/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 18:37:57.027671   82261 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 18:37:57.027721   82261 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0920 18:37:57.027786   82261 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0920 18:37:57.028015   82261 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0920 18:37:57.028044   82261 cni.go:84] Creating CNI manager for ""
	I0920 18:37:57.028098   82261 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 18:37:57.028109   82261 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0920 18:37:57.028163   82261 start.go:340] cluster config:
	{Name:newest-cni-803958 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-803958 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:37:57.028270   82261 iso.go:125] acquiring lock: {Name:mkba95ef0488e46f622333e9f317f43def93040b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 18:37:57.030587   82261 out.go:177] * Starting "newest-cni-803958" primary control-plane node in "newest-cni-803958" cluster
	I0920 18:37:57.031740   82261 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 18:37:57.031781   82261 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0920 18:37:57.031789   82261 cache.go:56] Caching tarball of preloaded images
	I0920 18:37:57.031894   82261 preload.go:172] Found /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 18:37:57.031908   82261 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0920 18:37:57.032007   82261 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/newest-cni-803958/config.json ...
	I0920 18:37:57.032031   82261 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/newest-cni-803958/config.json: {Name:mk3e9ea474cd2ad1e5bdf9973a52cf2546e74b56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:37:57.032203   82261 start.go:360] acquireMachinesLock for newest-cni-803958: {Name:mkfeedb385cf08b5d2aa00913e85815d02a180c2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 18:37:57.032237   82261 start.go:364] duration metric: took 17.97µs to acquireMachinesLock for "newest-cni-803958"
	I0920 18:37:57.032260   82261 start.go:93] Provisioning new machine with config: &{Name:newest-cni-803958 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-803958 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 18:37:57.032339   82261 start.go:125] createHost starting for "" (driver="kvm2")
	I0920 18:37:57.034006   82261 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0920 18:37:57.034142   82261 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:37:57.034181   82261 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:37:57.049138   82261 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36195
	I0920 18:37:57.049628   82261 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:37:57.050393   82261 main.go:141] libmachine: Using API Version  1
	I0920 18:37:57.050450   82261 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:37:57.050781   82261 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:37:57.050972   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetMachineName
	I0920 18:37:57.051127   82261 main.go:141] libmachine: (newest-cni-803958) Calling .DriverName
	I0920 18:37:57.051299   82261 start.go:159] libmachine.API.Create for "newest-cni-803958" (driver="kvm2")
	I0920 18:37:57.051340   82261 client.go:168] LocalClient.Create starting
	I0920 18:37:57.051376   82261 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem
	I0920 18:37:57.051414   82261 main.go:141] libmachine: Decoding PEM data...
	I0920 18:37:57.051440   82261 main.go:141] libmachine: Parsing certificate...
	I0920 18:37:57.051500   82261 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem
	I0920 18:37:57.051536   82261 main.go:141] libmachine: Decoding PEM data...
	I0920 18:37:57.051548   82261 main.go:141] libmachine: Parsing certificate...
	I0920 18:37:57.051572   82261 main.go:141] libmachine: Running pre-create checks...
	I0920 18:37:57.051578   82261 main.go:141] libmachine: (newest-cni-803958) Calling .PreCreateCheck
	I0920 18:37:57.051932   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetConfigRaw
	I0920 18:37:57.052330   82261 main.go:141] libmachine: Creating machine...
	I0920 18:37:57.052344   82261 main.go:141] libmachine: (newest-cni-803958) Calling .Create
	I0920 18:37:57.052447   82261 main.go:141] libmachine: (newest-cni-803958) Creating KVM machine...
	I0920 18:37:57.053864   82261 main.go:141] libmachine: (newest-cni-803958) DBG | found existing default KVM network
	I0920 18:37:57.055478   82261 main.go:141] libmachine: (newest-cni-803958) DBG | I0920 18:37:57.055355   82284 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0000155d0}
	I0920 18:37:57.055496   82261 main.go:141] libmachine: (newest-cni-803958) DBG | created network xml: 
	I0920 18:37:57.055511   82261 main.go:141] libmachine: (newest-cni-803958) DBG | <network>
	I0920 18:37:57.055520   82261 main.go:141] libmachine: (newest-cni-803958) DBG |   <name>mk-newest-cni-803958</name>
	I0920 18:37:57.055527   82261 main.go:141] libmachine: (newest-cni-803958) DBG |   <dns enable='no'/>
	I0920 18:37:57.055536   82261 main.go:141] libmachine: (newest-cni-803958) DBG |   
	I0920 18:37:57.055543   82261 main.go:141] libmachine: (newest-cni-803958) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0920 18:37:57.055552   82261 main.go:141] libmachine: (newest-cni-803958) DBG |     <dhcp>
	I0920 18:37:57.055558   82261 main.go:141] libmachine: (newest-cni-803958) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0920 18:37:57.055563   82261 main.go:141] libmachine: (newest-cni-803958) DBG |     </dhcp>
	I0920 18:37:57.055570   82261 main.go:141] libmachine: (newest-cni-803958) DBG |   </ip>
	I0920 18:37:57.055578   82261 main.go:141] libmachine: (newest-cni-803958) DBG |   
	I0920 18:37:57.055582   82261 main.go:141] libmachine: (newest-cni-803958) DBG | </network>
	I0920 18:37:57.055591   82261 main.go:141] libmachine: (newest-cni-803958) DBG | 
	I0920 18:37:57.060855   82261 main.go:141] libmachine: (newest-cni-803958) DBG | trying to create private KVM network mk-newest-cni-803958 192.168.39.0/24...
	I0920 18:37:57.140414   82261 main.go:141] libmachine: (newest-cni-803958) DBG | private KVM network mk-newest-cni-803958 192.168.39.0/24 created
	I0920 18:37:57.140448   82261 main.go:141] libmachine: (newest-cni-803958) Setting up store path in /home/jenkins/minikube-integration/19672-8777/.minikube/machines/newest-cni-803958 ...
	I0920 18:37:57.140463   82261 main.go:141] libmachine: (newest-cni-803958) DBG | I0920 18:37:57.140354   82284 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19672-8777/.minikube
	I0920 18:37:57.140482   82261 main.go:141] libmachine: (newest-cni-803958) Building disk image from file:///home/jenkins/minikube-integration/19672-8777/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso
	I0920 18:37:57.140497   82261 main.go:141] libmachine: (newest-cni-803958) Downloading /home/jenkins/minikube-integration/19672-8777/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19672-8777/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso...
	I0920 18:37:57.415083   82261 main.go:141] libmachine: (newest-cni-803958) DBG | I0920 18:37:57.414884   82284 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/newest-cni-803958/id_rsa...
	I0920 18:37:57.764596   82261 main.go:141] libmachine: (newest-cni-803958) DBG | I0920 18:37:57.764450   82284 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/newest-cni-803958/newest-cni-803958.rawdisk...
	I0920 18:37:57.764635   82261 main.go:141] libmachine: (newest-cni-803958) DBG | Writing magic tar header
	I0920 18:37:57.764664   82261 main.go:141] libmachine: (newest-cni-803958) DBG | Writing SSH key tar header
	I0920 18:37:57.764675   82261 main.go:141] libmachine: (newest-cni-803958) DBG | I0920 18:37:57.764570   82284 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19672-8777/.minikube/machines/newest-cni-803958 ...
	I0920 18:37:57.764703   82261 main.go:141] libmachine: (newest-cni-803958) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/newest-cni-803958
	I0920 18:37:57.764749   82261 main.go:141] libmachine: (newest-cni-803958) Setting executable bit set on /home/jenkins/minikube-integration/19672-8777/.minikube/machines/newest-cni-803958 (perms=drwx------)
	I0920 18:37:57.764769   82261 main.go:141] libmachine: (newest-cni-803958) Setting executable bit set on /home/jenkins/minikube-integration/19672-8777/.minikube/machines (perms=drwxr-xr-x)
	I0920 18:37:57.764777   82261 main.go:141] libmachine: (newest-cni-803958) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-8777/.minikube/machines
	I0920 18:37:57.764800   82261 main.go:141] libmachine: (newest-cni-803958) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-8777/.minikube
	I0920 18:37:57.764810   82261 main.go:141] libmachine: (newest-cni-803958) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-8777
	I0920 18:37:57.764825   82261 main.go:141] libmachine: (newest-cni-803958) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0920 18:37:57.764833   82261 main.go:141] libmachine: (newest-cni-803958) DBG | Checking permissions on dir: /home/jenkins
	I0920 18:37:57.764843   82261 main.go:141] libmachine: (newest-cni-803958) DBG | Checking permissions on dir: /home
	I0920 18:37:57.764848   82261 main.go:141] libmachine: (newest-cni-803958) DBG | Skipping /home - not owner
	I0920 18:37:57.764857   82261 main.go:141] libmachine: (newest-cni-803958) Setting executable bit set on /home/jenkins/minikube-integration/19672-8777/.minikube (perms=drwxr-xr-x)
	I0920 18:37:57.764865   82261 main.go:141] libmachine: (newest-cni-803958) Setting executable bit set on /home/jenkins/minikube-integration/19672-8777 (perms=drwxrwxr-x)
	I0920 18:37:57.764875   82261 main.go:141] libmachine: (newest-cni-803958) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0920 18:37:57.764882   82261 main.go:141] libmachine: (newest-cni-803958) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0920 18:37:57.764888   82261 main.go:141] libmachine: (newest-cni-803958) Creating domain...
	I0920 18:37:57.766060   82261 main.go:141] libmachine: (newest-cni-803958) define libvirt domain using xml: 
	I0920 18:37:57.766085   82261 main.go:141] libmachine: (newest-cni-803958) <domain type='kvm'>
	I0920 18:37:57.766095   82261 main.go:141] libmachine: (newest-cni-803958)   <name>newest-cni-803958</name>
	I0920 18:37:57.766104   82261 main.go:141] libmachine: (newest-cni-803958)   <memory unit='MiB'>2200</memory>
	I0920 18:37:57.766113   82261 main.go:141] libmachine: (newest-cni-803958)   <vcpu>2</vcpu>
	I0920 18:37:57.766122   82261 main.go:141] libmachine: (newest-cni-803958)   <features>
	I0920 18:37:57.766144   82261 main.go:141] libmachine: (newest-cni-803958)     <acpi/>
	I0920 18:37:57.766155   82261 main.go:141] libmachine: (newest-cni-803958)     <apic/>
	I0920 18:37:57.766165   82261 main.go:141] libmachine: (newest-cni-803958)     <pae/>
	I0920 18:37:57.766177   82261 main.go:141] libmachine: (newest-cni-803958)     
	I0920 18:37:57.766200   82261 main.go:141] libmachine: (newest-cni-803958)   </features>
	I0920 18:37:57.766224   82261 main.go:141] libmachine: (newest-cni-803958)   <cpu mode='host-passthrough'>
	I0920 18:37:57.766237   82261 main.go:141] libmachine: (newest-cni-803958)   
	I0920 18:37:57.766246   82261 main.go:141] libmachine: (newest-cni-803958)   </cpu>
	I0920 18:37:57.766254   82261 main.go:141] libmachine: (newest-cni-803958)   <os>
	I0920 18:37:57.766279   82261 main.go:141] libmachine: (newest-cni-803958)     <type>hvm</type>
	I0920 18:37:57.766289   82261 main.go:141] libmachine: (newest-cni-803958)     <boot dev='cdrom'/>
	I0920 18:37:57.766299   82261 main.go:141] libmachine: (newest-cni-803958)     <boot dev='hd'/>
	I0920 18:37:57.766313   82261 main.go:141] libmachine: (newest-cni-803958)     <bootmenu enable='no'/>
	I0920 18:37:57.766325   82261 main.go:141] libmachine: (newest-cni-803958)   </os>
	I0920 18:37:57.766334   82261 main.go:141] libmachine: (newest-cni-803958)   <devices>
	I0920 18:37:57.766342   82261 main.go:141] libmachine: (newest-cni-803958)     <disk type='file' device='cdrom'>
	I0920 18:37:57.766354   82261 main.go:141] libmachine: (newest-cni-803958)       <source file='/home/jenkins/minikube-integration/19672-8777/.minikube/machines/newest-cni-803958/boot2docker.iso'/>
	I0920 18:37:57.766363   82261 main.go:141] libmachine: (newest-cni-803958)       <target dev='hdc' bus='scsi'/>
	I0920 18:37:57.766371   82261 main.go:141] libmachine: (newest-cni-803958)       <readonly/>
	I0920 18:37:57.766380   82261 main.go:141] libmachine: (newest-cni-803958)     </disk>
	I0920 18:37:57.766388   82261 main.go:141] libmachine: (newest-cni-803958)     <disk type='file' device='disk'>
	I0920 18:37:57.766403   82261 main.go:141] libmachine: (newest-cni-803958)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0920 18:37:57.766441   82261 main.go:141] libmachine: (newest-cni-803958)       <source file='/home/jenkins/minikube-integration/19672-8777/.minikube/machines/newest-cni-803958/newest-cni-803958.rawdisk'/>
	I0920 18:37:57.766467   82261 main.go:141] libmachine: (newest-cni-803958)       <target dev='hda' bus='virtio'/>
	I0920 18:37:57.766478   82261 main.go:141] libmachine: (newest-cni-803958)     </disk>
	I0920 18:37:57.766489   82261 main.go:141] libmachine: (newest-cni-803958)     <interface type='network'>
	I0920 18:37:57.766503   82261 main.go:141] libmachine: (newest-cni-803958)       <source network='mk-newest-cni-803958'/>
	I0920 18:37:57.766512   82261 main.go:141] libmachine: (newest-cni-803958)       <model type='virtio'/>
	I0920 18:37:57.766520   82261 main.go:141] libmachine: (newest-cni-803958)     </interface>
	I0920 18:37:57.766535   82261 main.go:141] libmachine: (newest-cni-803958)     <interface type='network'>
	I0920 18:37:57.766554   82261 main.go:141] libmachine: (newest-cni-803958)       <source network='default'/>
	I0920 18:37:57.766571   82261 main.go:141] libmachine: (newest-cni-803958)       <model type='virtio'/>
	I0920 18:37:57.766582   82261 main.go:141] libmachine: (newest-cni-803958)     </interface>
	I0920 18:37:57.766592   82261 main.go:141] libmachine: (newest-cni-803958)     <serial type='pty'>
	I0920 18:37:57.766600   82261 main.go:141] libmachine: (newest-cni-803958)       <target port='0'/>
	I0920 18:37:57.766620   82261 main.go:141] libmachine: (newest-cni-803958)     </serial>
	I0920 18:37:57.766631   82261 main.go:141] libmachine: (newest-cni-803958)     <console type='pty'>
	I0920 18:37:57.766643   82261 main.go:141] libmachine: (newest-cni-803958)       <target type='serial' port='0'/>
	I0920 18:37:57.766664   82261 main.go:141] libmachine: (newest-cni-803958)     </console>
	I0920 18:37:57.766679   82261 main.go:141] libmachine: (newest-cni-803958)     <rng model='virtio'>
	I0920 18:37:57.766686   82261 main.go:141] libmachine: (newest-cni-803958)       <backend model='random'>/dev/random</backend>
	I0920 18:37:57.766692   82261 main.go:141] libmachine: (newest-cni-803958)     </rng>
	I0920 18:37:57.766709   82261 main.go:141] libmachine: (newest-cni-803958)     
	I0920 18:37:57.766717   82261 main.go:141] libmachine: (newest-cni-803958)     
	I0920 18:37:57.766727   82261 main.go:141] libmachine: (newest-cni-803958)   </devices>
	I0920 18:37:57.766736   82261 main.go:141] libmachine: (newest-cni-803958) </domain>
	I0920 18:37:57.766769   82261 main.go:141] libmachine: (newest-cni-803958) 
	I0920 18:37:57.770991   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined MAC address 52:54:00:57:1d:39 in network default
	I0920 18:37:57.771627   82261 main.go:141] libmachine: (newest-cni-803958) Ensuring networks are active...
	I0920 18:37:57.771647   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:37:57.772487   82261 main.go:141] libmachine: (newest-cni-803958) Ensuring network default is active
	I0920 18:37:57.772827   82261 main.go:141] libmachine: (newest-cni-803958) Ensuring network mk-newest-cni-803958 is active
	I0920 18:37:57.773596   82261 main.go:141] libmachine: (newest-cni-803958) Getting domain xml...
	I0920 18:37:57.774426   82261 main.go:141] libmachine: (newest-cni-803958) Creating domain...
	I0920 18:37:59.072376   82261 main.go:141] libmachine: (newest-cni-803958) Waiting to get IP...
	I0920 18:37:59.073132   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:37:59.073572   82261 main.go:141] libmachine: (newest-cni-803958) DBG | unable to find current IP address of domain newest-cni-803958 in network mk-newest-cni-803958
	I0920 18:37:59.073624   82261 main.go:141] libmachine: (newest-cni-803958) DBG | I0920 18:37:59.073567   82284 retry.go:31] will retry after 237.200058ms: waiting for machine to come up
	I0920 18:37:59.312251   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:37:59.312903   82261 main.go:141] libmachine: (newest-cni-803958) DBG | unable to find current IP address of domain newest-cni-803958 in network mk-newest-cni-803958
	I0920 18:37:59.312935   82261 main.go:141] libmachine: (newest-cni-803958) DBG | I0920 18:37:59.312858   82284 retry.go:31] will retry after 299.515801ms: waiting for machine to come up
	I0920 18:37:59.614747   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:37:59.615346   82261 main.go:141] libmachine: (newest-cni-803958) DBG | unable to find current IP address of domain newest-cni-803958 in network mk-newest-cni-803958
	I0920 18:37:59.615387   82261 main.go:141] libmachine: (newest-cni-803958) DBG | I0920 18:37:59.615271   82284 retry.go:31] will retry after 467.17509ms: waiting for machine to come up
	I0920 18:38:00.083674   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:38:00.084289   82261 main.go:141] libmachine: (newest-cni-803958) DBG | unable to find current IP address of domain newest-cni-803958 in network mk-newest-cni-803958
	I0920 18:38:00.084327   82261 main.go:141] libmachine: (newest-cni-803958) DBG | I0920 18:38:00.084193   82284 retry.go:31] will retry after 553.911509ms: waiting for machine to come up
	I0920 18:38:00.640192   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:38:00.640741   82261 main.go:141] libmachine: (newest-cni-803958) DBG | unable to find current IP address of domain newest-cni-803958 in network mk-newest-cni-803958
	I0920 18:38:00.640766   82261 main.go:141] libmachine: (newest-cni-803958) DBG | I0920 18:38:00.640660   82284 retry.go:31] will retry after 464.879742ms: waiting for machine to come up
	I0920 18:38:01.106961   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:38:01.107580   82261 main.go:141] libmachine: (newest-cni-803958) DBG | unable to find current IP address of domain newest-cni-803958 in network mk-newest-cni-803958
	I0920 18:38:01.107608   82261 main.go:141] libmachine: (newest-cni-803958) DBG | I0920 18:38:01.107506   82284 retry.go:31] will retry after 825.510996ms: waiting for machine to come up
	I0920 18:38:01.934403   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:38:01.934992   82261 main.go:141] libmachine: (newest-cni-803958) DBG | unable to find current IP address of domain newest-cni-803958 in network mk-newest-cni-803958
	I0920 18:38:01.935033   82261 main.go:141] libmachine: (newest-cni-803958) DBG | I0920 18:38:01.934952   82284 retry.go:31] will retry after 1.031655257s: waiting for machine to come up
	I0920 18:38:02.968058   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:38:02.968485   82261 main.go:141] libmachine: (newest-cni-803958) DBG | unable to find current IP address of domain newest-cni-803958 in network mk-newest-cni-803958
	I0920 18:38:02.968513   82261 main.go:141] libmachine: (newest-cni-803958) DBG | I0920 18:38:02.968432   82284 retry.go:31] will retry after 1.023055382s: waiting for machine to come up
	I0920 18:38:03.993778   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:38:03.994340   82261 main.go:141] libmachine: (newest-cni-803958) DBG | unable to find current IP address of domain newest-cni-803958 in network mk-newest-cni-803958
	I0920 18:38:03.994374   82261 main.go:141] libmachine: (newest-cni-803958) DBG | I0920 18:38:03.994281   82284 retry.go:31] will retry after 1.777461501s: waiting for machine to come up
	I0920 18:38:05.773880   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:38:05.774332   82261 main.go:141] libmachine: (newest-cni-803958) DBG | unable to find current IP address of domain newest-cni-803958 in network mk-newest-cni-803958
	I0920 18:38:05.774355   82261 main.go:141] libmachine: (newest-cni-803958) DBG | I0920 18:38:05.774304   82284 retry.go:31] will retry after 1.64509249s: waiting for machine to come up
	I0920 18:38:07.420629   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:38:07.421163   82261 main.go:141] libmachine: (newest-cni-803958) DBG | unable to find current IP address of domain newest-cni-803958 in network mk-newest-cni-803958
	I0920 18:38:07.421190   82261 main.go:141] libmachine: (newest-cni-803958) DBG | I0920 18:38:07.421109   82284 retry.go:31] will retry after 2.52757328s: waiting for machine to come up
	I0920 18:38:09.951030   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:38:09.951652   82261 main.go:141] libmachine: (newest-cni-803958) DBG | unable to find current IP address of domain newest-cni-803958 in network mk-newest-cni-803958
	I0920 18:38:09.951683   82261 main.go:141] libmachine: (newest-cni-803958) DBG | I0920 18:38:09.951578   82284 retry.go:31] will retry after 2.321470741s: waiting for machine to come up
	I0920 18:38:12.274645   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:38:12.275279   82261 main.go:141] libmachine: (newest-cni-803958) DBG | unable to find current IP address of domain newest-cni-803958 in network mk-newest-cni-803958
	I0920 18:38:12.275307   82261 main.go:141] libmachine: (newest-cni-803958) DBG | I0920 18:38:12.275215   82284 retry.go:31] will retry after 3.8979126s: waiting for machine to come up
	I0920 18:38:16.175587   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:38:16.175982   82261 main.go:141] libmachine: (newest-cni-803958) DBG | unable to find current IP address of domain newest-cni-803958 in network mk-newest-cni-803958
	I0920 18:38:16.176003   82261 main.go:141] libmachine: (newest-cni-803958) DBG | I0920 18:38:16.175948   82284 retry.go:31] will retry after 5.497884921s: waiting for machine to come up
	I0920 18:38:21.679259   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:38:21.679768   82261 main.go:141] libmachine: (newest-cni-803958) Found IP for machine: 192.168.39.85
	I0920 18:38:21.679820   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has current primary IP address 192.168.39.85 and MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:38:21.679827   82261 main.go:141] libmachine: (newest-cni-803958) Reserving static IP address...
	I0920 18:38:21.680243   82261 main.go:141] libmachine: (newest-cni-803958) DBG | unable to find host DHCP lease matching {name: "newest-cni-803958", mac: "52:54:00:7e:f0:0f", ip: "192.168.39.85"} in network mk-newest-cni-803958
	I0920 18:38:21.766201   82261 main.go:141] libmachine: (newest-cni-803958) Reserved static IP address: 192.168.39.85
	I0920 18:38:21.766264   82261 main.go:141] libmachine: (newest-cni-803958) Waiting for SSH to be available...
	I0920 18:38:21.766276   82261 main.go:141] libmachine: (newest-cni-803958) DBG | Getting to WaitForSSH function...
	I0920 18:38:21.768692   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:38:21.768958   82261 main.go:141] libmachine: (newest-cni-803958) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:7e:f0:0f", ip: ""} in network mk-newest-cni-803958
	I0920 18:38:21.768979   82261 main.go:141] libmachine: (newest-cni-803958) DBG | unable to find defined IP address of network mk-newest-cni-803958 interface with MAC address 52:54:00:7e:f0:0f
	I0920 18:38:21.769168   82261 main.go:141] libmachine: (newest-cni-803958) DBG | Using SSH client type: external
	I0920 18:38:21.769192   82261 main.go:141] libmachine: (newest-cni-803958) DBG | Using SSH private key: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/newest-cni-803958/id_rsa (-rw-------)
	I0920 18:38:21.769225   82261 main.go:141] libmachine: (newest-cni-803958) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19672-8777/.minikube/machines/newest-cni-803958/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 18:38:21.769237   82261 main.go:141] libmachine: (newest-cni-803958) DBG | About to run SSH command:
	I0920 18:38:21.769246   82261 main.go:141] libmachine: (newest-cni-803958) DBG | exit 0
	I0920 18:38:21.773420   82261 main.go:141] libmachine: (newest-cni-803958) DBG | SSH cmd err, output: exit status 255: 
	I0920 18:38:21.773449   82261 main.go:141] libmachine: (newest-cni-803958) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0920 18:38:21.773476   82261 main.go:141] libmachine: (newest-cni-803958) DBG | command : exit 0
	I0920 18:38:21.773484   82261 main.go:141] libmachine: (newest-cni-803958) DBG | err     : exit status 255
	I0920 18:38:21.773520   82261 main.go:141] libmachine: (newest-cni-803958) DBG | output  : 
	I0920 18:38:24.776090   82261 main.go:141] libmachine: (newest-cni-803958) DBG | Getting to WaitForSSH function...
	I0920 18:38:24.778556   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:38:24.778989   82261 main.go:141] libmachine: (newest-cni-803958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:f0:0f", ip: ""} in network mk-newest-cni-803958: {Iface:virbr1 ExpiryTime:2024-09-20 19:38:12 +0000 UTC Type:0 Mac:52:54:00:7e:f0:0f Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:newest-cni-803958 Clientid:01:52:54:00:7e:f0:0f}
	I0920 18:38:24.779037   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined IP address 192.168.39.85 and MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:38:24.779118   82261 main.go:141] libmachine: (newest-cni-803958) DBG | Using SSH client type: external
	I0920 18:38:24.779142   82261 main.go:141] libmachine: (newest-cni-803958) DBG | Using SSH private key: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/newest-cni-803958/id_rsa (-rw-------)
	I0920 18:38:24.779173   82261 main.go:141] libmachine: (newest-cni-803958) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.85 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19672-8777/.minikube/machines/newest-cni-803958/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 18:38:24.779194   82261 main.go:141] libmachine: (newest-cni-803958) DBG | About to run SSH command:
	I0920 18:38:24.779208   82261 main.go:141] libmachine: (newest-cni-803958) DBG | exit 0
	I0920 18:38:24.910058   82261 main.go:141] libmachine: (newest-cni-803958) DBG | SSH cmd err, output: <nil>: 
	I0920 18:38:24.910350   82261 main.go:141] libmachine: (newest-cni-803958) KVM machine creation complete!
	I0920 18:38:24.910690   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetConfigRaw
	I0920 18:38:24.911287   82261 main.go:141] libmachine: (newest-cni-803958) Calling .DriverName
	I0920 18:38:24.911488   82261 main.go:141] libmachine: (newest-cni-803958) Calling .DriverName
	I0920 18:38:24.911635   82261 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0920 18:38:24.911655   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetState
	I0920 18:38:24.913258   82261 main.go:141] libmachine: Detecting operating system of created instance...
	I0920 18:38:24.913274   82261 main.go:141] libmachine: Waiting for SSH to be available...
	I0920 18:38:24.913287   82261 main.go:141] libmachine: Getting to WaitForSSH function...
	I0920 18:38:24.913293   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetSSHHostname
	I0920 18:38:24.916198   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:38:24.916667   82261 main.go:141] libmachine: (newest-cni-803958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:f0:0f", ip: ""} in network mk-newest-cni-803958: {Iface:virbr1 ExpiryTime:2024-09-20 19:38:12 +0000 UTC Type:0 Mac:52:54:00:7e:f0:0f Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:newest-cni-803958 Clientid:01:52:54:00:7e:f0:0f}
	I0920 18:38:24.916700   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined IP address 192.168.39.85 and MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:38:24.916885   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetSSHPort
	I0920 18:38:24.917077   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetSSHKeyPath
	I0920 18:38:24.917286   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetSSHKeyPath
	I0920 18:38:24.917424   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetSSHUsername
	I0920 18:38:24.917596   82261 main.go:141] libmachine: Using SSH client type: native
	I0920 18:38:24.917774   82261 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.85 22 <nil> <nil>}
	I0920 18:38:24.917789   82261 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0920 18:38:25.033274   82261 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 18:38:25.033302   82261 main.go:141] libmachine: Detecting the provisioner...
	I0920 18:38:25.033313   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetSSHHostname
	I0920 18:38:25.036007   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:38:25.036374   82261 main.go:141] libmachine: (newest-cni-803958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:f0:0f", ip: ""} in network mk-newest-cni-803958: {Iface:virbr1 ExpiryTime:2024-09-20 19:38:12 +0000 UTC Type:0 Mac:52:54:00:7e:f0:0f Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:newest-cni-803958 Clientid:01:52:54:00:7e:f0:0f}
	I0920 18:38:25.036420   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined IP address 192.168.39.85 and MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:38:25.036591   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetSSHPort
	I0920 18:38:25.036793   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetSSHKeyPath
	I0920 18:38:25.037002   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetSSHKeyPath
	I0920 18:38:25.037186   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetSSHUsername
	I0920 18:38:25.037355   82261 main.go:141] libmachine: Using SSH client type: native
	I0920 18:38:25.037555   82261 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.85 22 <nil> <nil>}
	I0920 18:38:25.037569   82261 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0920 18:38:25.150640   82261 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0920 18:38:25.150710   82261 main.go:141] libmachine: found compatible host: buildroot
	I0920 18:38:25.150720   82261 main.go:141] libmachine: Provisioning with buildroot...
	I0920 18:38:25.150731   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetMachineName
	I0920 18:38:25.150976   82261 buildroot.go:166] provisioning hostname "newest-cni-803958"
	I0920 18:38:25.150999   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetMachineName
	I0920 18:38:25.151167   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetSSHHostname
	I0920 18:38:25.153988   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:38:25.154385   82261 main.go:141] libmachine: (newest-cni-803958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:f0:0f", ip: ""} in network mk-newest-cni-803958: {Iface:virbr1 ExpiryTime:2024-09-20 19:38:12 +0000 UTC Type:0 Mac:52:54:00:7e:f0:0f Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:newest-cni-803958 Clientid:01:52:54:00:7e:f0:0f}
	I0920 18:38:25.154411   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined IP address 192.168.39.85 and MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:38:25.154515   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetSSHPort
	I0920 18:38:25.154699   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetSSHKeyPath
	I0920 18:38:25.154895   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetSSHKeyPath
	I0920 18:38:25.155025   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetSSHUsername
	I0920 18:38:25.155206   82261 main.go:141] libmachine: Using SSH client type: native
	I0920 18:38:25.155413   82261 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.85 22 <nil> <nil>}
	I0920 18:38:25.155428   82261 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-803958 && echo "newest-cni-803958" | sudo tee /etc/hostname
	I0920 18:38:25.286069   82261 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-803958
	
	I0920 18:38:25.286102   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetSSHHostname
	I0920 18:38:25.289052   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:38:25.289386   82261 main.go:141] libmachine: (newest-cni-803958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:f0:0f", ip: ""} in network mk-newest-cni-803958: {Iface:virbr1 ExpiryTime:2024-09-20 19:38:12 +0000 UTC Type:0 Mac:52:54:00:7e:f0:0f Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:newest-cni-803958 Clientid:01:52:54:00:7e:f0:0f}
	I0920 18:38:25.289408   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined IP address 192.168.39.85 and MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:38:25.289580   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetSSHPort
	I0920 18:38:25.289768   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetSSHKeyPath
	I0920 18:38:25.289969   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetSSHKeyPath
	I0920 18:38:25.290156   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetSSHUsername
	I0920 18:38:25.290301   82261 main.go:141] libmachine: Using SSH client type: native
	I0920 18:38:25.290482   82261 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.85 22 <nil> <nil>}
	I0920 18:38:25.290500   82261 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-803958' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-803958/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-803958' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 18:38:25.420728   82261 main.go:141] libmachine: SSH cmd err, output: <nil>: 
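The shell snippet above is the stock /etc/hosts fix-up pushed over SSH: it rewrites or appends the 127.0.1.1 entry only when no line already carries the new hostname. As a rough illustration (not minikube's own source), the same snippet can be templated from Go before handing it to an SSH runner; renderHostsFix below is a hypothetical helper with the profile name from this run used as an example input.

    package main

    import "fmt"

    // renderHostsFix templates the /etc/hosts snippet shown in the log for a
    // given hostname: it appends or rewrites the 127.0.1.1 entry only when no
    // line already ends in that hostname. Illustrative helper, not minikube code.
    func renderHostsFix(hostname string) string {
        return fmt.Sprintf(`
            if ! grep -xq '.*\s%[1]s' /etc/hosts; then
                if grep -xq '127.0.1.1\s.*' /etc/hosts; then
                    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
                else
                    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
                fi
            fi`, hostname)
    }

    func main() {
        fmt.Println(renderHostsFix("newest-cni-803958"))
    }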
	I0920 18:38:25.420761   82261 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19672-8777/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-8777/.minikube}
	I0920 18:38:25.420829   82261 buildroot.go:174] setting up certificates
	I0920 18:38:25.420843   82261 provision.go:84] configureAuth start
	I0920 18:38:25.420869   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetMachineName
	I0920 18:38:25.421184   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetIP
	I0920 18:38:25.424498   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:38:25.424910   82261 main.go:141] libmachine: (newest-cni-803958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:f0:0f", ip: ""} in network mk-newest-cni-803958: {Iface:virbr1 ExpiryTime:2024-09-20 19:38:12 +0000 UTC Type:0 Mac:52:54:00:7e:f0:0f Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:newest-cni-803958 Clientid:01:52:54:00:7e:f0:0f}
	I0920 18:38:25.424938   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined IP address 192.168.39.85 and MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:38:25.425122   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetSSHHostname
	I0920 18:38:25.427889   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:38:25.428336   82261 main.go:141] libmachine: (newest-cni-803958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:f0:0f", ip: ""} in network mk-newest-cni-803958: {Iface:virbr1 ExpiryTime:2024-09-20 19:38:12 +0000 UTC Type:0 Mac:52:54:00:7e:f0:0f Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:newest-cni-803958 Clientid:01:52:54:00:7e:f0:0f}
	I0920 18:38:25.428364   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined IP address 192.168.39.85 and MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:38:25.428587   82261 provision.go:143] copyHostCerts
	I0920 18:38:25.428641   82261 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem, removing ...
	I0920 18:38:25.428664   82261 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem
	I0920 18:38:25.428747   82261 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem (1082 bytes)
	I0920 18:38:25.428878   82261 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem, removing ...
	I0920 18:38:25.428890   82261 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem
	I0920 18:38:25.428931   82261 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem (1123 bytes)
	I0920 18:38:25.429022   82261 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem, removing ...
	I0920 18:38:25.429033   82261 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem
	I0920 18:38:25.429065   82261 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem (1675 bytes)
	I0920 18:38:25.429172   82261 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem org=jenkins.newest-cni-803958 san=[127.0.0.1 192.168.39.85 localhost minikube newest-cni-803958]
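The server certificate above is issued against the shared minikube CA with the VM IP and the usual host names as SANs. The sketch below shows one way such a CA-signed certificate can be produced with Go's crypto/x509; it is an illustration only, not minikube's implementation, and the throwaway CA in main exists just to keep the example self-contained (minikube reuses ca.pem and ca-key.pem from the .minikube directory).

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "fmt"
        "math/big"
        "net"
        "os"
        "time"
    )

    // issueServerCert signs a server certificate with the given CA, using the
    // same SAN set the log reports for this profile (two IPs plus three names).
    func issueServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-803958"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.85")},
            DNSNames:     []string{"localhost", "minikube", "newest-cni-803958"},
        }
        return x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
    }

    func main() {
        // Throwaway CA, only so the example is self-contained.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        serverDER, err := issueServerCert(caCert, caKey)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: serverDER})
    }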
	I0920 18:38:25.592648   82261 provision.go:177] copyRemoteCerts
	I0920 18:38:25.592719   82261 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 18:38:25.592749   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetSSHHostname
	I0920 18:38:25.595867   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:38:25.596183   82261 main.go:141] libmachine: (newest-cni-803958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:f0:0f", ip: ""} in network mk-newest-cni-803958: {Iface:virbr1 ExpiryTime:2024-09-20 19:38:12 +0000 UTC Type:0 Mac:52:54:00:7e:f0:0f Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:newest-cni-803958 Clientid:01:52:54:00:7e:f0:0f}
	I0920 18:38:25.596206   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined IP address 192.168.39.85 and MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:38:25.596530   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetSSHPort
	I0920 18:38:25.596762   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetSSHKeyPath
	I0920 18:38:25.596921   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetSSHUsername
	I0920 18:38:25.597069   82261 sshutil.go:53] new ssh client: &{IP:192.168.39.85 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/newest-cni-803958/id_rsa Username:docker}
	I0920 18:38:25.685188   82261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0920 18:38:25.712846   82261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0920 18:38:25.737263   82261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0920 18:38:25.763101   82261 provision.go:87] duration metric: took 342.230185ms to configureAuth
	I0920 18:38:25.763141   82261 buildroot.go:189] setting minikube options for container-runtime
	I0920 18:38:25.763347   82261 config.go:182] Loaded profile config "newest-cni-803958": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:38:25.763471   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetSSHHostname
	I0920 18:38:25.766313   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:38:25.766586   82261 main.go:141] libmachine: (newest-cni-803958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:f0:0f", ip: ""} in network mk-newest-cni-803958: {Iface:virbr1 ExpiryTime:2024-09-20 19:38:12 +0000 UTC Type:0 Mac:52:54:00:7e:f0:0f Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:newest-cni-803958 Clientid:01:52:54:00:7e:f0:0f}
	I0920 18:38:25.766613   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined IP address 192.168.39.85 and MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:38:25.766801   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetSSHPort
	I0920 18:38:25.767019   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetSSHKeyPath
	I0920 18:38:25.767237   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetSSHKeyPath
	I0920 18:38:25.767425   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetSSHUsername
	I0920 18:38:25.767623   82261 main.go:141] libmachine: Using SSH client type: native
	I0920 18:38:25.767792   82261 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.85 22 <nil> <nil>}
	I0920 18:38:25.767809   82261 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 18:38:26.019706   82261 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 18:38:26.019734   82261 main.go:141] libmachine: Checking connection to Docker...
	I0920 18:38:26.019748   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetURL
	I0920 18:38:26.021030   82261 main.go:141] libmachine: (newest-cni-803958) DBG | Using libvirt version 6000000
	I0920 18:38:26.023358   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:38:26.023654   82261 main.go:141] libmachine: (newest-cni-803958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:f0:0f", ip: ""} in network mk-newest-cni-803958: {Iface:virbr1 ExpiryTime:2024-09-20 19:38:12 +0000 UTC Type:0 Mac:52:54:00:7e:f0:0f Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:newest-cni-803958 Clientid:01:52:54:00:7e:f0:0f}
	I0920 18:38:26.023684   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined IP address 192.168.39.85 and MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:38:26.023830   82261 main.go:141] libmachine: Docker is up and running!
	I0920 18:38:26.023853   82261 main.go:141] libmachine: Reticulating splines...
	I0920 18:38:26.023860   82261 client.go:171] duration metric: took 28.972508997s to LocalClient.Create
	I0920 18:38:26.023889   82261 start.go:167] duration metric: took 28.972590244s to libmachine.API.Create "newest-cni-803958"
	I0920 18:38:26.023902   82261 start.go:293] postStartSetup for "newest-cni-803958" (driver="kvm2")
	I0920 18:38:26.023920   82261 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 18:38:26.023945   82261 main.go:141] libmachine: (newest-cni-803958) Calling .DriverName
	I0920 18:38:26.024189   82261 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 18:38:26.024213   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetSSHHostname
	I0920 18:38:26.026338   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:38:26.026670   82261 main.go:141] libmachine: (newest-cni-803958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:f0:0f", ip: ""} in network mk-newest-cni-803958: {Iface:virbr1 ExpiryTime:2024-09-20 19:38:12 +0000 UTC Type:0 Mac:52:54:00:7e:f0:0f Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:newest-cni-803958 Clientid:01:52:54:00:7e:f0:0f}
	I0920 18:38:26.026697   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined IP address 192.168.39.85 and MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:38:26.026891   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetSSHPort
	I0920 18:38:26.027049   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetSSHKeyPath
	I0920 18:38:26.027154   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetSSHUsername
	I0920 18:38:26.027297   82261 sshutil.go:53] new ssh client: &{IP:192.168.39.85 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/newest-cni-803958/id_rsa Username:docker}
	I0920 18:38:26.121614   82261 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 18:38:26.126041   82261 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 18:38:26.126072   82261 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8777/.minikube/addons for local assets ...
	I0920 18:38:26.126153   82261 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8777/.minikube/files for local assets ...
	I0920 18:38:26.126265   82261 filesync.go:149] local asset: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem -> 159732.pem in /etc/ssl/certs
	I0920 18:38:26.126386   82261 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 18:38:26.137427   82261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem --> /etc/ssl/certs/159732.pem (1708 bytes)
	I0920 18:38:26.162221   82261 start.go:296] duration metric: took 138.302639ms for postStartSetup
	I0920 18:38:26.162282   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetConfigRaw
	I0920 18:38:26.163084   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetIP
	I0920 18:38:26.165826   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:38:26.166201   82261 main.go:141] libmachine: (newest-cni-803958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:f0:0f", ip: ""} in network mk-newest-cni-803958: {Iface:virbr1 ExpiryTime:2024-09-20 19:38:12 +0000 UTC Type:0 Mac:52:54:00:7e:f0:0f Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:newest-cni-803958 Clientid:01:52:54:00:7e:f0:0f}
	I0920 18:38:26.166236   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined IP address 192.168.39.85 and MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:38:26.166548   82261 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/newest-cni-803958/config.json ...
	I0920 18:38:26.166804   82261 start.go:128] duration metric: took 29.134455512s to createHost
	I0920 18:38:26.166836   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetSSHHostname
	I0920 18:38:26.169598   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:38:26.169980   82261 main.go:141] libmachine: (newest-cni-803958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:f0:0f", ip: ""} in network mk-newest-cni-803958: {Iface:virbr1 ExpiryTime:2024-09-20 19:38:12 +0000 UTC Type:0 Mac:52:54:00:7e:f0:0f Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:newest-cni-803958 Clientid:01:52:54:00:7e:f0:0f}
	I0920 18:38:26.170007   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined IP address 192.168.39.85 and MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:38:26.170159   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetSSHPort
	I0920 18:38:26.170329   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetSSHKeyPath
	I0920 18:38:26.170491   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetSSHKeyPath
	I0920 18:38:26.170647   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetSSHUsername
	I0920 18:38:26.170838   82261 main.go:141] libmachine: Using SSH client type: native
	I0920 18:38:26.171058   82261 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.85 22 <nil> <nil>}
	I0920 18:38:26.171074   82261 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 18:38:26.290856   82261 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726857506.270942557
	
	I0920 18:38:26.290885   82261 fix.go:216] guest clock: 1726857506.270942557
	I0920 18:38:26.290900   82261 fix.go:229] Guest: 2024-09-20 18:38:26.270942557 +0000 UTC Remote: 2024-09-20 18:38:26.166820782 +0000 UTC m=+29.255343882 (delta=104.121775ms)
	I0920 18:38:26.290956   82261 fix.go:200] guest clock delta is within tolerance: 104.121775ms
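The guest clock check above runs date +%s.%N inside the VM and compares it against the host time; here the drift is about 104 ms, within tolerance, so no clock sync is needed. A minimal sketch of that comparison, using the exact values from the log and an assumed 2 s threshold (the real tolerance value is not shown in this output), could look like this:

    package main

    import (
        "fmt"
        "math"
        "strconv"
        "strings"
        "time"
    )

    // guestClockDelta is an illustrative check (names are hypothetical): it
    // parses the guest's `date +%s.%N` output and reports how far it drifts
    // from the host clock, as the log does before deciding whether to resync.
    func guestClockDelta(dateOutput string, host time.Time) (time.Duration, error) {
        secs, err := strconv.ParseFloat(strings.TrimSpace(dateOutput), 64)
        if err != nil {
            return 0, err
        }
        sec := int64(secs)
        nsec := int64((secs - float64(sec)) * 1e9)
        guest := time.Unix(sec, nsec)
        return guest.Sub(host), nil
    }

    func main() {
        // Values taken from the log above.
        host := time.Date(2024, 9, 20, 18, 38, 26, 166820782, time.UTC)
        delta, _ := guestClockDelta("1726857506.270942557", host)
        const tolerance = 2 * time.Second // assumed threshold for illustration
        fmt.Printf("delta=%v within tolerance=%v: %v\n", delta, tolerance,
            math.Abs(float64(delta)) <= float64(tolerance))
    }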
	I0920 18:38:26.290965   82261 start.go:83] releasing machines lock for "newest-cni-803958", held for 29.258716585s
	I0920 18:38:26.290995   82261 main.go:141] libmachine: (newest-cni-803958) Calling .DriverName
	I0920 18:38:26.291288   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetIP
	I0920 18:38:26.293955   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:38:26.294300   82261 main.go:141] libmachine: (newest-cni-803958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:f0:0f", ip: ""} in network mk-newest-cni-803958: {Iface:virbr1 ExpiryTime:2024-09-20 19:38:12 +0000 UTC Type:0 Mac:52:54:00:7e:f0:0f Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:newest-cni-803958 Clientid:01:52:54:00:7e:f0:0f}
	I0920 18:38:26.294329   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined IP address 192.168.39.85 and MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:38:26.294495   82261 main.go:141] libmachine: (newest-cni-803958) Calling .DriverName
	I0920 18:38:26.294975   82261 main.go:141] libmachine: (newest-cni-803958) Calling .DriverName
	I0920 18:38:26.295158   82261 main.go:141] libmachine: (newest-cni-803958) Calling .DriverName
	I0920 18:38:26.295227   82261 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 18:38:26.295292   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetSSHHostname
	I0920 18:38:26.295400   82261 ssh_runner.go:195] Run: cat /version.json
	I0920 18:38:26.295425   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetSSHHostname
	I0920 18:38:26.298391   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:38:26.298419   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:38:26.298790   82261 main.go:141] libmachine: (newest-cni-803958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:f0:0f", ip: ""} in network mk-newest-cni-803958: {Iface:virbr1 ExpiryTime:2024-09-20 19:38:12 +0000 UTC Type:0 Mac:52:54:00:7e:f0:0f Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:newest-cni-803958 Clientid:01:52:54:00:7e:f0:0f}
	I0920 18:38:26.298814   82261 main.go:141] libmachine: (newest-cni-803958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:f0:0f", ip: ""} in network mk-newest-cni-803958: {Iface:virbr1 ExpiryTime:2024-09-20 19:38:12 +0000 UTC Type:0 Mac:52:54:00:7e:f0:0f Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:newest-cni-803958 Clientid:01:52:54:00:7e:f0:0f}
	I0920 18:38:26.298835   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined IP address 192.168.39.85 and MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:38:26.298851   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined IP address 192.168.39.85 and MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:38:26.299021   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetSSHPort
	I0920 18:38:26.299156   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetSSHPort
	I0920 18:38:26.299225   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetSSHKeyPath
	I0920 18:38:26.299303   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetSSHKeyPath
	I0920 18:38:26.299377   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetSSHUsername
	I0920 18:38:26.299456   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetSSHUsername
	I0920 18:38:26.299567   82261 sshutil.go:53] new ssh client: &{IP:192.168.39.85 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/newest-cni-803958/id_rsa Username:docker}
	I0920 18:38:26.299675   82261 sshutil.go:53] new ssh client: &{IP:192.168.39.85 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/newest-cni-803958/id_rsa Username:docker}
	I0920 18:38:26.425051   82261 ssh_runner.go:195] Run: systemctl --version
	I0920 18:38:26.431233   82261 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 18:38:26.591767   82261 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 18:38:26.598196   82261 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 18:38:26.598287   82261 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 18:38:26.615629   82261 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 18:38:26.615658   82261 start.go:495] detecting cgroup driver to use...
	I0920 18:38:26.615734   82261 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 18:38:26.634598   82261 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 18:38:26.649412   82261 docker.go:217] disabling cri-docker service (if available) ...
	I0920 18:38:26.649504   82261 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 18:38:26.665910   82261 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 18:38:26.682068   82261 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 18:38:26.800891   82261 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 18:38:26.946298   82261 docker.go:233] disabling docker service ...
	I0920 18:38:26.946365   82261 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 18:38:26.961974   82261 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 18:38:26.976551   82261 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 18:38:27.119201   82261 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 18:38:27.239780   82261 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 18:38:27.255749   82261 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 18:38:27.278193   82261 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 18:38:27.278289   82261 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:38:27.288996   82261 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 18:38:27.289074   82261 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:38:27.299340   82261 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:38:27.309820   82261 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:38:27.320755   82261 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 18:38:27.331653   82261 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:38:27.343288   82261 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:38:27.361186   82261 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:38:27.372287   82261 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 18:38:27.382571   82261 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 18:38:27.382633   82261 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 18:38:27.395555   82261 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 18:38:27.405520   82261 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:38:27.526941   82261 ssh_runner.go:195] Run: sudo systemctl restart crio
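The block above rewrites /etc/crio/crio.conf.d/02-crio.conf in place with sed (pause image, cgroupfs cgroup manager, conmon cgroup, unprivileged-port sysctl) and then restarts CRI-O. Purely as an illustration, the same edits can be driven from Go with os/exec; minikube actually issues them through its SSH runner, and the error handling here is deliberately minimal.

    package main

    import (
        "fmt"
        "os/exec"
    )

    // Re-issue the same sed edits the log shows, but locally via os/exec instead
    // of minikube's SSH runner. Paths and values mirror the commands above.
    func main() {
        conf := "/etc/crio/crio.conf.d/02-crio.conf"
        cmds := [][]string{
            {"sudo", "sed", "-i", `s|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|`, conf},
            {"sudo", "sed", "-i", `s|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|`, conf},
            {"sudo", "sed", "-i", `/conmon_cgroup = .*/d`, conf},
            {"sudo", "sed", "-i", `/cgroup_manager = .*/a conmon_cgroup = "pod"`, conf},
            {"sudo", "systemctl", "restart", "crio"},
        }
        for _, c := range cmds {
            if out, err := exec.Command(c[0], c[1:]...).CombinedOutput(); err != nil {
                fmt.Printf("%v failed: %v\n%s", c, err, out)
                return
            }
        }
    }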
	I0920 18:38:27.632461   82261 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 18:38:27.632559   82261 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 18:38:27.637880   82261 start.go:563] Will wait 60s for crictl version
	I0920 18:38:27.637944   82261 ssh_runner.go:195] Run: which crictl
	I0920 18:38:27.642074   82261 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 18:38:27.680753   82261 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 18:38:27.680834   82261 ssh_runner.go:195] Run: crio --version
	I0920 18:38:27.713153   82261 ssh_runner.go:195] Run: crio --version
	I0920 18:38:27.743920   82261 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 18:38:27.745056   82261 main.go:141] libmachine: (newest-cni-803958) Calling .GetIP
	I0920 18:38:27.748131   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:38:27.748539   82261 main.go:141] libmachine: (newest-cni-803958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:f0:0f", ip: ""} in network mk-newest-cni-803958: {Iface:virbr1 ExpiryTime:2024-09-20 19:38:12 +0000 UTC Type:0 Mac:52:54:00:7e:f0:0f Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:newest-cni-803958 Clientid:01:52:54:00:7e:f0:0f}
	I0920 18:38:27.748568   82261 main.go:141] libmachine: (newest-cni-803958) DBG | domain newest-cni-803958 has defined IP address 192.168.39.85 and MAC address 52:54:00:7e:f0:0f in network mk-newest-cni-803958
	I0920 18:38:27.748766   82261 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0920 18:38:27.753220   82261 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 18:38:27.768091   82261 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0920 18:38:27.769379   82261 kubeadm.go:883] updating cluster {Name:newest-cni-803958 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-803958 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.85 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 18:38:27.769531   82261 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 18:38:27.769590   82261 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:38:27.802246   82261 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0920 18:38:27.802320   82261 ssh_runner.go:195] Run: which lz4
	I0920 18:38:27.806895   82261 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 18:38:27.811200   82261 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 18:38:27.811241   82261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0920 18:38:29.229504   82261 crio.go:462] duration metric: took 1.422636323s to copy over tarball
	I0920 18:38:29.229588   82261 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0920 18:38:31.428260   82261 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.198641757s)
	I0920 18:38:31.428292   82261 crio.go:469] duration metric: took 2.198756082s to extract the tarball
	I0920 18:38:31.428302   82261 ssh_runner.go:146] rm: /preloaded.tar.lz4
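Because no preloaded images were found in the runtime, the roughly 388 MB preload tarball is copied to /preloaded.tar.lz4, unpacked into /var with lz4-compressed tar, and then removed. A hedged local sketch of the extract-and-time step (assuming the tarball is already in place; this is not minikube's runner) is:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // Unpack the preload tarball into /var with the same tar flags the log
    // shows, and time the step the way the "duration metric" lines do.
    func main() {
        start := time.Now()
        cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
        if out, err := cmd.CombinedOutput(); err != nil {
            fmt.Printf("extract failed: %v\n%s", err, out)
            return
        }
        fmt.Printf("duration metric: took %s to extract the tarball\n", time.Since(start))
    }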
	I0920 18:38:31.465017   82261 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:38:31.516128   82261 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 18:38:31.516154   82261 cache_images.go:84] Images are preloaded, skipping loading
	I0920 18:38:31.516166   82261 kubeadm.go:934] updating node { 192.168.39.85 8443 v1.31.1 crio true true} ...
	I0920 18:38:31.516311   82261 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-803958 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.85
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:newest-cni-803958 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 18:38:31.516409   82261 ssh_runner.go:195] Run: crio config
	I0920 18:38:31.568776   82261 cni.go:84] Creating CNI manager for ""
	I0920 18:38:31.568800   82261 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 18:38:31.568809   82261 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0920 18:38:31.568837   82261 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.39.85 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-803958 NodeName:newest-cni-803958 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.85"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.85 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 18:38:31.568964   82261 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.85
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-803958"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.85
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.85"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 18:38:31.569019   82261 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 18:38:31.580521   82261 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 18:38:31.580597   82261 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 18:38:31.590482   82261 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (353 bytes)
	I0920 18:38:31.606765   82261 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 18:38:31.625149   82261 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2282 bytes)
	I0920 18:38:31.641805   82261 ssh_runner.go:195] Run: grep 192.168.39.85	control-plane.minikube.internal$ /etc/hosts
	I0920 18:38:31.645653   82261 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.85	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 18:38:31.659393   82261 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:38:31.796650   82261 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:38:31.817136   82261 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/newest-cni-803958 for IP: 192.168.39.85
	I0920 18:38:31.817158   82261 certs.go:194] generating shared ca certs ...
	I0920 18:38:31.817172   82261 certs.go:226] acquiring lock for ca certs: {Name:mkc7ef6c737c6bdc3fdd9dcff8f57029c020d8f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:38:31.817356   82261 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key
	I0920 18:38:31.817428   82261 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key
	I0920 18:38:31.817443   82261 certs.go:256] generating profile certs ...
	I0920 18:38:31.817512   82261 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/newest-cni-803958/client.key
	I0920 18:38:31.817531   82261 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/newest-cni-803958/client.crt with IP's: []
	I0920 18:38:32.099303   82261 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/newest-cni-803958/client.crt ...
	I0920 18:38:32.099330   82261 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/newest-cni-803958/client.crt: {Name:mkc0509632bac01b37ddfb2e5cb4a2d46207f579 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:38:32.099524   82261 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/newest-cni-803958/client.key ...
	I0920 18:38:32.099538   82261 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/newest-cni-803958/client.key: {Name:mk277425cc0210ea6909cc503c589492fa38e42c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:38:32.099647   82261 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/newest-cni-803958/apiserver.key.9173e10f
	I0920 18:38:32.099663   82261 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/newest-cni-803958/apiserver.crt.9173e10f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.85]
	I0920 18:38:32.381287   82261 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/newest-cni-803958/apiserver.crt.9173e10f ...
	I0920 18:38:32.381326   82261 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/newest-cni-803958/apiserver.crt.9173e10f: {Name:mk95e305edd410ad9b1b1c6dd16892eb1c7adab8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:38:32.381546   82261 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/newest-cni-803958/apiserver.key.9173e10f ...
	I0920 18:38:32.381571   82261 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/newest-cni-803958/apiserver.key.9173e10f: {Name:mk0c0d6ab59ef72540509294642a86bb4920f48d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:38:32.381668   82261 certs.go:381] copying /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/newest-cni-803958/apiserver.crt.9173e10f -> /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/newest-cni-803958/apiserver.crt
	I0920 18:38:32.381780   82261 certs.go:385] copying /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/newest-cni-803958/apiserver.key.9173e10f -> /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/newest-cni-803958/apiserver.key
	I0920 18:38:32.381909   82261 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/newest-cni-803958/proxy-client.key
	I0920 18:38:32.381954   82261 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/newest-cni-803958/proxy-client.crt with IP's: []
	I0920 18:38:32.504014   82261 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/newest-cni-803958/proxy-client.crt ...
	I0920 18:38:32.504054   82261 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/newest-cni-803958/proxy-client.crt: {Name:mkb07a5275c60df8501e1c65b053e29cf1c51d9d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:38:32.504242   82261 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/newest-cni-803958/proxy-client.key ...
	I0920 18:38:32.504255   82261 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/newest-cni-803958/proxy-client.key: {Name:mkbca65bcec315b403427144037935f7416c9282 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:38:32.504440   82261 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973.pem (1338 bytes)
	W0920 18:38:32.504488   82261 certs.go:480] ignoring /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973_empty.pem, impossibly tiny 0 bytes
	I0920 18:38:32.504500   82261 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 18:38:32.504526   82261 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem (1082 bytes)
	I0920 18:38:32.504553   82261 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem (1123 bytes)
	I0920 18:38:32.504578   82261 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem (1675 bytes)
	I0920 18:38:32.504620   82261 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem (1708 bytes)
	I0920 18:38:32.505171   82261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 18:38:32.531564   82261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 18:38:32.555630   82261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 18:38:32.580896   82261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0920 18:38:32.609306   82261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/newest-cni-803958/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0920 18:38:32.652518   82261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/newest-cni-803958/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0920 18:38:32.679239   82261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/newest-cni-803958/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 18:38:32.705161   82261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/newest-cni-803958/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 18:38:32.730458   82261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973.pem --> /usr/share/ca-certificates/15973.pem (1338 bytes)
	I0920 18:38:32.756733   82261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem --> /usr/share/ca-certificates/159732.pem (1708 bytes)
	I0920 18:38:32.786377   82261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 18:38:32.813410   82261 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 18:38:32.832093   82261 ssh_runner.go:195] Run: openssl version
	I0920 18:38:32.838727   82261 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/159732.pem && ln -fs /usr/share/ca-certificates/159732.pem /etc/ssl/certs/159732.pem"
	I0920 18:38:32.852905   82261 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/159732.pem
	I0920 18:38:32.857533   82261 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 17:04 /usr/share/ca-certificates/159732.pem
	I0920 18:38:32.857602   82261 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/159732.pem
	I0920 18:38:32.864185   82261 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/159732.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 18:38:32.875812   82261 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 18:38:32.888703   82261 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:38:32.893860   82261 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:38:32.893928   82261 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:38:32.900285   82261 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 18:38:32.912099   82261 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15973.pem && ln -fs /usr/share/ca-certificates/15973.pem /etc/ssl/certs/15973.pem"
	I0920 18:38:32.923386   82261 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15973.pem
	I0920 18:38:32.928227   82261 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 17:04 /usr/share/ca-certificates/15973.pem
	I0920 18:38:32.928302   82261 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15973.pem
	I0920 18:38:32.934528   82261 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15973.pem /etc/ssl/certs/51391683.0"
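Each CA file above is hashed with openssl x509 -hash and exposed under /etc/ssl/certs/<hash>.0, which is the lookup name OpenSSL-based clients expect. The sketch below reproduces that naming by shelling out to openssl, as the log itself does; linkBySubjectHash is a hypothetical helper and needs root to write into /etc/ssl/certs.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkBySubjectHash creates /etc/ssl/certs/<hash>.0 where <hash> is the
    // OpenSSL subject hash of the certificate, mirroring the commands above.
    func linkBySubjectHash(certPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        // os.Symlink fails if the link already exists; the log's `ln -fs` would replace it.
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Println("link error:", err)
        }
    }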
	I0920 18:38:32.946210   82261 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 18:38:32.950649   82261 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0920 18:38:32.950720   82261 kubeadm.go:392] StartCluster: {Name:newest-cni-803958 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-803958 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.85 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:38:32.950797   82261 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 18:38:32.950846   82261 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 18:38:32.993421   82261 cri.go:89] found id: ""
	I0920 18:38:32.993501   82261 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 18:38:33.004345   82261 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 18:38:33.016576   82261 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 18:38:33.027101   82261 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 18:38:33.027126   82261 kubeadm.go:157] found existing configuration files:
	
	I0920 18:38:33.027173   82261 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 18:38:33.037237   82261 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 18:38:33.037308   82261 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 18:38:33.047186   82261 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 18:38:33.057573   82261 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 18:38:33.057644   82261 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 18:38:33.069042   82261 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 18:38:33.080112   82261 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 18:38:33.080179   82261 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 18:38:33.091475   82261 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 18:38:33.101722   82261 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 18:38:33.101798   82261 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
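The grep/rm sequence above is minikube probing each kubeadm-generated kubeconfig for the expected control-plane endpoint and deleting any file that does not reference it, so the kubeadm init that follows starts from a clean slate. A minimal Go sketch of that logic, assuming plain local file access rather than minikube's ssh_runner (the paths and endpoint are the ones in the log; the helper name is hypothetical):

package main

import (
	"fmt"
	"os"
	"strings"
)

// removeStaleKubeconfigs mirrors the grep/rm sequence in the log above: any
// kubeconfig that does not mention the expected API server endpoint is treated
// as stale and removed. Hypothetical helper; minikube itself runs the
// equivalent shell commands over SSH inside the guest VM.
func removeStaleKubeconfigs(endpoint string, paths []string) error {
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if os.IsNotExist(err) {
			continue // nothing to clean up, same as the "No such file" case above
		}
		if err != nil {
			return err
		}
		if !strings.Contains(string(data), endpoint) {
			if err := os.Remove(p); err != nil {
				return err
			}
			fmt.Printf("removed stale config %s\n", p)
		}
	}
	return nil
}

func main() {
	paths := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	if err := removeStaleKubeconfigs("https://control-plane.minikube.internal:8443", paths); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}

In this run none of the four files exist, so every grep exits with status 2 and the rm calls are effectively no-ops before kubeadm init is invoked below.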
	I0920 18:38:33.112565   82261 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 18:38:33.219047   82261 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0920 18:38:33.219171   82261 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 18:38:33.350514   82261 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 18:38:33.350622   82261 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 18:38:33.350758   82261 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0920 18:38:33.361616   82261 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	
	
	==> CRI-O <==
	Sep 20 18:38:35 no-preload-956403 crio[706]: time="2024-09-20 18:38:35.761547711Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e4cb7582-64a1-4eea-84aa-27de10bfd3d9 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:38:35 no-preload-956403 crio[706]: time="2024-09-20 18:38:35.762869866Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cd0c35e8-6932-4a92-91e0-ace72d1e1465 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:38:35 no-preload-956403 crio[706]: time="2024-09-20 18:38:35.763309286Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857515763277033,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cd0c35e8-6932-4a92-91e0-ace72d1e1465 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:38:35 no-preload-956403 crio[706]: time="2024-09-20 18:38:35.763852759Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a1d796bd-4e55-4060-858e-a291449c802a name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:38:35 no-preload-956403 crio[706]: time="2024-09-20 18:38:35.763975441Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a1d796bd-4e55-4060-858e-a291449c802a name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:38:35 no-preload-956403 crio[706]: time="2024-09-20 18:38:35.764199413Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:179e4a02f3459a613d3307806b477d54581555b18babca62d0c5e553b9562442,PodSandboxId:a4aa1bb68d3e80ac2c63b750b5f32d3cd913ebb3f52a99df9ad043df20b620b5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726856367655123101,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df661627-7d32-4467-9805-1ae65d4fa35c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e852082c0f944a19082360b499b94e415a260b9b2546225438d699561fa8f6ff,PodSandboxId:3e897f51edd9c5fbacee8fa85f2e3331df12d5c732ac33d4df5ec153f63f6d3b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1726856348135985961,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a15af88e-18f6-4284-b32a-0cb1b432b683,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35f0d8dd053d4c376ccf21215492d8509ff701e7d09b345fdbe06d505ba2d6b4,PodSandboxId:80d9fde65826dd3f259c46475b2d7bbd6f14bf221d19965c3e8f98d281dfd37b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726856344484544735,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-j2t5h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd2636f1-3200-4f22-957c-046277c9be8c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6df198ca54e8004a532de9b15cfe7a48a659ae06623c00a78cad528762944497,PodSandboxId:ff41f43ceaf02185b62083b69ed688d4c4802c48a9f8fbd467db6075f2b1e9ae,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726856336830203740,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sz4bm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 269600fb-ef65-4b17-8c
07-76c79e35f5a8,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3eb9abdf57de51979118cfa77b613c22e28122fcb1da9e72fbf6f3a946ed3531,PodSandboxId:a4aa1bb68d3e80ac2c63b750b5f32d3cd913ebb3f52a99df9ad043df20b620b5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726856336782619509,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df661627-7d32-4467-9805-1ae65d4fa3
5c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98aa96314cf8f863121b9953ad6335628481f536fcab49b7c00505a17eb35ff2,PodSandboxId:55694df2cb78079dd185683bf9c759122274ea694e0c70098abdd859cb24d952,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726856333198659894,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-956403,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b0ec02901547ea388858d214d5765f1,},Annotations:map[string]string{io.kuber
netes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8153479cebb05353bec31c41778746c485cb294bf41c3f98e30559ad7da64c64,PodSandboxId:893589c707d79ed25204db3db6c7982806bb309236c6a94bb75d20f43798f2fa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726856333130561988,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-956403,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9af5b077640c6f0cb975dac7a2663e89,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:334e4df5baa4f8f7d58af2308fab7f78de22f7e07d6dfbcbeaa0df2a52f6b207,PodSandboxId:7a3535de57f415c09f824af177beb4f8f14f1cfb8cb3e4d97f9a738bc0c86574,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726856333100497654,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-956403,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e09fb1ce62fc85d5c8feacad02192a8,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2
713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ebf4c520d684127cd0b6146d44e863090b9c2db3ac866e9ce3ef2d10d2d6da1,PodSandboxId:0ea2e7a0745f6fe89a09475a576924d725ebc018614a00232bfd424cff3b86e9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726856333020591560,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-956403,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c0ae9548860afe245bdc265d4d3d790,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a1d796bd-4e55-4060-858e-a291449c802a name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:38:35 no-preload-956403 crio[706]: time="2024-09-20 18:38:35.776571886Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=59a3cc8b-9303-4b03-89f1-828817281dd0 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 20 18:38:35 no-preload-956403 crio[706]: time="2024-09-20 18:38:35.776840772Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:3e897f51edd9c5fbacee8fa85f2e3331df12d5c732ac33d4df5ec153f63f6d3b,Metadata:&PodSandboxMetadata{Name:busybox,Uid:a15af88e-18f6-4284-b32a-0cb1b432b683,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726856344354260937,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a15af88e-18f6-4284-b32a-0cb1b432b683,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-20T18:18:56.368475401Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:80d9fde65826dd3f259c46475b2d7bbd6f14bf221d19965c3e8f98d281dfd37b,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-j2t5h,Uid:dd2636f1-3200-4f22-957c-046277c9be8c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:17268563442531800
27,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-j2t5h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd2636f1-3200-4f22-957c-046277c9be8c,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-20T18:18:56.368463187Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:08002f3adff535214dd2956e7a17360a3b4d06a56f3b4ff6fb800e50ea84cbd6,Metadata:&PodSandboxMetadata{Name:metrics-server-6867b74b74-tfsff,Uid:599ba06a-6d4d-483b-b390-a3595a814757,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726856342462882436,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-6867b74b74-tfsff,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 599ba06a-6d4d-483b-b390-a3595a814757,k8s-app: metrics-server,pod-template-hash: 6867b74b74,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-20T18:18:56.3
68457270Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ff41f43ceaf02185b62083b69ed688d4c4802c48a9f8fbd467db6075f2b1e9ae,Metadata:&PodSandboxMetadata{Name:kube-proxy-sz4bm,Uid:269600fb-ef65-4b17-8c07-76c79e35f5a8,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726856336686434471,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-sz4bm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 269600fb-ef65-4b17-8c07-76c79e35f5a8,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-20T18:18:56.368461838Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a4aa1bb68d3e80ac2c63b750b5f32d3cd913ebb3f52a99df9ad043df20b620b5,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:df661627-7d32-4467-9805-1ae65d4fa35c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726856336684979735,Labels:map[string]
string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df661627-7d32-4467-9805-1ae65d4fa35c,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io
/config.seen: 2024-09-20T18:18:56.368474264Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:55694df2cb78079dd185683bf9c759122274ea694e0c70098abdd859cb24d952,Metadata:&PodSandboxMetadata{Name:etcd-no-preload-956403,Uid:4b0ec02901547ea388858d214d5765f1,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726856332892561782,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-no-preload-956403,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b0ec02901547ea388858d214d5765f1,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.47:2379,kubernetes.io/config.hash: 4b0ec02901547ea388858d214d5765f1,kubernetes.io/config.seen: 2024-09-20T18:18:52.423613388Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:893589c707d79ed25204db3db6c7982806bb309236c6a94bb75d20f43798f2fa,Metadata:&PodSandboxMetadata{Name:kube-scheduler-no-preload-956403,U
id:9af5b077640c6f0cb975dac7a2663e89,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726856332882525774,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-no-preload-956403,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9af5b077640c6f0cb975dac7a2663e89,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 9af5b077640c6f0cb975dac7a2663e89,kubernetes.io/config.seen: 2024-09-20T18:18:52.373384317Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:0ea2e7a0745f6fe89a09475a576924d725ebc018614a00232bfd424cff3b86e9,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-no-preload-956403,Uid:5c0ae9548860afe245bdc265d4d3d790,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726856332866190077,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-no-preload-956403,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c0ae9548860afe245bdc265d4d3d790,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 5c0ae9548860afe245bdc265d4d3d790,kubernetes.io/config.seen: 2024-09-20T18:18:52.373383168Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:7a3535de57f415c09f824af177beb4f8f14f1cfb8cb3e4d97f9a738bc0c86574,Metadata:&PodSandboxMetadata{Name:kube-apiserver-no-preload-956403,Uid:2e09fb1ce62fc85d5c8feacad02192a8,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726856332861881650,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-no-preload-956403,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e09fb1ce62fc85d5c8feacad02192a8,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.47:8443,kubernetes.io/config.hash: 2e09fb1ce62fc85d5c8feacad02192a8,kube
rnetes.io/config.seen: 2024-09-20T18:18:52.373379022Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=59a3cc8b-9303-4b03-89f1-828817281dd0 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 20 18:38:35 no-preload-956403 crio[706]: time="2024-09-20 18:38:35.777643382Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=19bf5f67-ffeb-407a-841a-641adbed6666 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:38:35 no-preload-956403 crio[706]: time="2024-09-20 18:38:35.777721617Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=19bf5f67-ffeb-407a-841a-641adbed6666 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:38:35 no-preload-956403 crio[706]: time="2024-09-20 18:38:35.777979592Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:179e4a02f3459a613d3307806b477d54581555b18babca62d0c5e553b9562442,PodSandboxId:a4aa1bb68d3e80ac2c63b750b5f32d3cd913ebb3f52a99df9ad043df20b620b5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726856367655123101,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df661627-7d32-4467-9805-1ae65d4fa35c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e852082c0f944a19082360b499b94e415a260b9b2546225438d699561fa8f6ff,PodSandboxId:3e897f51edd9c5fbacee8fa85f2e3331df12d5c732ac33d4df5ec153f63f6d3b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1726856348135985961,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a15af88e-18f6-4284-b32a-0cb1b432b683,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35f0d8dd053d4c376ccf21215492d8509ff701e7d09b345fdbe06d505ba2d6b4,PodSandboxId:80d9fde65826dd3f259c46475b2d7bbd6f14bf221d19965c3e8f98d281dfd37b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726856344484544735,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-j2t5h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd2636f1-3200-4f22-957c-046277c9be8c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6df198ca54e8004a532de9b15cfe7a48a659ae06623c00a78cad528762944497,PodSandboxId:ff41f43ceaf02185b62083b69ed688d4c4802c48a9f8fbd467db6075f2b1e9ae,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726856336830203740,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sz4bm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 269600fb-ef65-4b17-8c
07-76c79e35f5a8,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3eb9abdf57de51979118cfa77b613c22e28122fcb1da9e72fbf6f3a946ed3531,PodSandboxId:a4aa1bb68d3e80ac2c63b750b5f32d3cd913ebb3f52a99df9ad043df20b620b5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726856336782619509,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df661627-7d32-4467-9805-1ae65d4fa3
5c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98aa96314cf8f863121b9953ad6335628481f536fcab49b7c00505a17eb35ff2,PodSandboxId:55694df2cb78079dd185683bf9c759122274ea694e0c70098abdd859cb24d952,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726856333198659894,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-956403,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b0ec02901547ea388858d214d5765f1,},Annotations:map[string]string{io.kuber
netes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8153479cebb05353bec31c41778746c485cb294bf41c3f98e30559ad7da64c64,PodSandboxId:893589c707d79ed25204db3db6c7982806bb309236c6a94bb75d20f43798f2fa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726856333130561988,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-956403,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9af5b077640c6f0cb975dac7a2663e89,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:334e4df5baa4f8f7d58af2308fab7f78de22f7e07d6dfbcbeaa0df2a52f6b207,PodSandboxId:7a3535de57f415c09f824af177beb4f8f14f1cfb8cb3e4d97f9a738bc0c86574,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726856333100497654,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-956403,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e09fb1ce62fc85d5c8feacad02192a8,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2
713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ebf4c520d684127cd0b6146d44e863090b9c2db3ac866e9ce3ef2d10d2d6da1,PodSandboxId:0ea2e7a0745f6fe89a09475a576924d725ebc018614a00232bfd424cff3b86e9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726856333020591560,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-956403,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c0ae9548860afe245bdc265d4d3d790,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=19bf5f67-ffeb-407a-841a-641adbed6666 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:38:35 no-preload-956403 crio[706]: time="2024-09-20 18:38:35.809295188Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2eed71dd-2b51-4dbd-b14c-9ce75075ccfb name=/runtime.v1.RuntimeService/Version
	Sep 20 18:38:35 no-preload-956403 crio[706]: time="2024-09-20 18:38:35.809433668Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2eed71dd-2b51-4dbd-b14c-9ce75075ccfb name=/runtime.v1.RuntimeService/Version
	Sep 20 18:38:35 no-preload-956403 crio[706]: time="2024-09-20 18:38:35.811139741Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0e6e4e88-f662-4de3-b9b0-dc3c60625100 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:38:35 no-preload-956403 crio[706]: time="2024-09-20 18:38:35.811696954Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857515811659823,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0e6e4e88-f662-4de3-b9b0-dc3c60625100 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:38:35 no-preload-956403 crio[706]: time="2024-09-20 18:38:35.812532823Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0e3a9e7c-7ff8-4ec9-9021-181e7c7b0979 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:38:35 no-preload-956403 crio[706]: time="2024-09-20 18:38:35.812625138Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0e3a9e7c-7ff8-4ec9-9021-181e7c7b0979 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:38:35 no-preload-956403 crio[706]: time="2024-09-20 18:38:35.812849737Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:179e4a02f3459a613d3307806b477d54581555b18babca62d0c5e553b9562442,PodSandboxId:a4aa1bb68d3e80ac2c63b750b5f32d3cd913ebb3f52a99df9ad043df20b620b5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726856367655123101,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df661627-7d32-4467-9805-1ae65d4fa35c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e852082c0f944a19082360b499b94e415a260b9b2546225438d699561fa8f6ff,PodSandboxId:3e897f51edd9c5fbacee8fa85f2e3331df12d5c732ac33d4df5ec153f63f6d3b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1726856348135985961,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a15af88e-18f6-4284-b32a-0cb1b432b683,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35f0d8dd053d4c376ccf21215492d8509ff701e7d09b345fdbe06d505ba2d6b4,PodSandboxId:80d9fde65826dd3f259c46475b2d7bbd6f14bf221d19965c3e8f98d281dfd37b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726856344484544735,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-j2t5h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd2636f1-3200-4f22-957c-046277c9be8c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6df198ca54e8004a532de9b15cfe7a48a659ae06623c00a78cad528762944497,PodSandboxId:ff41f43ceaf02185b62083b69ed688d4c4802c48a9f8fbd467db6075f2b1e9ae,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726856336830203740,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sz4bm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 269600fb-ef65-4b17-8c
07-76c79e35f5a8,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3eb9abdf57de51979118cfa77b613c22e28122fcb1da9e72fbf6f3a946ed3531,PodSandboxId:a4aa1bb68d3e80ac2c63b750b5f32d3cd913ebb3f52a99df9ad043df20b620b5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726856336782619509,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df661627-7d32-4467-9805-1ae65d4fa3
5c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98aa96314cf8f863121b9953ad6335628481f536fcab49b7c00505a17eb35ff2,PodSandboxId:55694df2cb78079dd185683bf9c759122274ea694e0c70098abdd859cb24d952,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726856333198659894,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-956403,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b0ec02901547ea388858d214d5765f1,},Annotations:map[string]string{io.kuber
netes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8153479cebb05353bec31c41778746c485cb294bf41c3f98e30559ad7da64c64,PodSandboxId:893589c707d79ed25204db3db6c7982806bb309236c6a94bb75d20f43798f2fa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726856333130561988,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-956403,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9af5b077640c6f0cb975dac7a2663e89,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:334e4df5baa4f8f7d58af2308fab7f78de22f7e07d6dfbcbeaa0df2a52f6b207,PodSandboxId:7a3535de57f415c09f824af177beb4f8f14f1cfb8cb3e4d97f9a738bc0c86574,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726856333100497654,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-956403,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e09fb1ce62fc85d5c8feacad02192a8,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2
713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ebf4c520d684127cd0b6146d44e863090b9c2db3ac866e9ce3ef2d10d2d6da1,PodSandboxId:0ea2e7a0745f6fe89a09475a576924d725ebc018614a00232bfd424cff3b86e9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726856333020591560,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-956403,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c0ae9548860afe245bdc265d4d3d790,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0e3a9e7c-7ff8-4ec9-9021-181e7c7b0979 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:38:35 no-preload-956403 crio[706]: time="2024-09-20 18:38:35.854786568Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7c0adc75-1b8a-404c-8b62-a3227f734c59 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:38:35 no-preload-956403 crio[706]: time="2024-09-20 18:38:35.854888450Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7c0adc75-1b8a-404c-8b62-a3227f734c59 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:38:35 no-preload-956403 crio[706]: time="2024-09-20 18:38:35.856256212Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4f7b3f52-e6df-49dd-95f5-fc4aff2417cf name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:38:35 no-preload-956403 crio[706]: time="2024-09-20 18:38:35.856634170Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857515856609159,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4f7b3f52-e6df-49dd-95f5-fc4aff2417cf name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:38:35 no-preload-956403 crio[706]: time="2024-09-20 18:38:35.857291086Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=380f9ba2-14d0-4853-bd6a-a413dffe0de7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:38:35 no-preload-956403 crio[706]: time="2024-09-20 18:38:35.857412856Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=380f9ba2-14d0-4853-bd6a-a413dffe0de7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:38:35 no-preload-956403 crio[706]: time="2024-09-20 18:38:35.857630887Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:179e4a02f3459a613d3307806b477d54581555b18babca62d0c5e553b9562442,PodSandboxId:a4aa1bb68d3e80ac2c63b750b5f32d3cd913ebb3f52a99df9ad043df20b620b5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726856367655123101,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df661627-7d32-4467-9805-1ae65d4fa35c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e852082c0f944a19082360b499b94e415a260b9b2546225438d699561fa8f6ff,PodSandboxId:3e897f51edd9c5fbacee8fa85f2e3331df12d5c732ac33d4df5ec153f63f6d3b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1726856348135985961,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a15af88e-18f6-4284-b32a-0cb1b432b683,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35f0d8dd053d4c376ccf21215492d8509ff701e7d09b345fdbe06d505ba2d6b4,PodSandboxId:80d9fde65826dd3f259c46475b2d7bbd6f14bf221d19965c3e8f98d281dfd37b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726856344484544735,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-j2t5h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd2636f1-3200-4f22-957c-046277c9be8c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6df198ca54e8004a532de9b15cfe7a48a659ae06623c00a78cad528762944497,PodSandboxId:ff41f43ceaf02185b62083b69ed688d4c4802c48a9f8fbd467db6075f2b1e9ae,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726856336830203740,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sz4bm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 269600fb-ef65-4b17-8c
07-76c79e35f5a8,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3eb9abdf57de51979118cfa77b613c22e28122fcb1da9e72fbf6f3a946ed3531,PodSandboxId:a4aa1bb68d3e80ac2c63b750b5f32d3cd913ebb3f52a99df9ad043df20b620b5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726856336782619509,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df661627-7d32-4467-9805-1ae65d4fa3
5c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98aa96314cf8f863121b9953ad6335628481f536fcab49b7c00505a17eb35ff2,PodSandboxId:55694df2cb78079dd185683bf9c759122274ea694e0c70098abdd859cb24d952,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726856333198659894,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-956403,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b0ec02901547ea388858d214d5765f1,},Annotations:map[string]string{io.kuber
netes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8153479cebb05353bec31c41778746c485cb294bf41c3f98e30559ad7da64c64,PodSandboxId:893589c707d79ed25204db3db6c7982806bb309236c6a94bb75d20f43798f2fa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726856333130561988,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-956403,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9af5b077640c6f0cb975dac7a2663e89,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:334e4df5baa4f8f7d58af2308fab7f78de22f7e07d6dfbcbeaa0df2a52f6b207,PodSandboxId:7a3535de57f415c09f824af177beb4f8f14f1cfb8cb3e4d97f9a738bc0c86574,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726856333100497654,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-956403,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e09fb1ce62fc85d5c8feacad02192a8,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2
713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ebf4c520d684127cd0b6146d44e863090b9c2db3ac866e9ce3ef2d10d2d6da1,PodSandboxId:0ea2e7a0745f6fe89a09475a576924d725ebc018614a00232bfd424cff3b86e9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726856333020591560,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-956403,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c0ae9548860afe245bdc265d4d3d790,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=380f9ba2-14d0-4853-bd6a-a413dffe0de7 name=/runtime.v1.RuntimeService/ListContainers
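The CRI-O entries above are the debug trace of the CRI API being polled (Version, ImageFsInfo, ListContainers) while these logs were collected; the same container list is what the "container status" section below shows. To reproduce that view directly on the node, a small sketch that shells out to crictl, assuming crictl is installed and pointed at the CRI-O socket (as it is inside the minikube guest):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// Print every CRI container, running or exited, the same way the
// "container status" table below was produced on the node.
func main() {
	out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
	if err != nil {
		fmt.Fprintf(os.Stderr, "crictl ps failed: %v\n%s", err, out)
		os.Exit(1)
	}
	fmt.Print(string(out))
}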
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	179e4a02f3459       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      19 minutes ago      Running             storage-provisioner       2                   a4aa1bb68d3e8       storage-provisioner
	e852082c0f944       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   19 minutes ago      Running             busybox                   1                   3e897f51edd9c       busybox
	35f0d8dd053d4       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      19 minutes ago      Running             coredns                   1                   80d9fde65826d       coredns-7c65d6cfc9-j2t5h
	6df198ca54e80       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      19 minutes ago      Running             kube-proxy                1                   ff41f43ceaf02       kube-proxy-sz4bm
	3eb9abdf57de5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      19 minutes ago      Exited              storage-provisioner       1                   a4aa1bb68d3e8       storage-provisioner
	98aa96314cf8f       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      19 minutes ago      Running             etcd                      1                   55694df2cb780       etcd-no-preload-956403
	8153479cebb05       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      19 minutes ago      Running             kube-scheduler            1                   893589c707d79       kube-scheduler-no-preload-956403
	334e4df5baa4f       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      19 minutes ago      Running             kube-apiserver            1                   7a3535de57f41       kube-apiserver-no-preload-956403
	3ebf4c520d684       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      19 minutes ago      Running             kube-controller-manager   1                   0ea2e7a0745f6       kube-controller-manager-no-preload-956403
	
	
	==> coredns [35f0d8dd053d4c376ccf21215492d8509ff701e7d09b345fdbe06d505ba2d6b4] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:38757 - 54136 "HINFO IN 5523722262679873145.418932425828733990. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.013530786s
	
	
	==> describe nodes <==
	Name:               no-preload-956403
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-956403
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0626f22cf0d915d75e291a5bce701f94395056e1
	                    minikube.k8s.io/name=no-preload-956403
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_20T18_09_38_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 18:09:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-956403
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 18:38:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 18:34:44 +0000   Fri, 20 Sep 2024 18:09:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 18:34:44 +0000   Fri, 20 Sep 2024 18:09:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 18:34:44 +0000   Fri, 20 Sep 2024 18:09:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 18:34:44 +0000   Fri, 20 Sep 2024 18:19:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.47
	  Hostname:    no-preload-956403
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c66b620f72724471b218dcc813962e67
	  System UUID:                c66b620f-7272-4471-b218-dcc813962e67
	  Boot ID:                    9eeb8437-d501-4de6-aecf-7cdd4dc11582
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-7c65d6cfc9-j2t5h                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 etcd-no-preload-956403                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kube-apiserver-no-preload-956403             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-no-preload-956403    200m (10%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-proxy-sz4bm                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-no-preload-956403             100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 metrics-server-6867b74b74-tfsff              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28m                kube-proxy       
	  Normal  Starting                 19m                kube-proxy       
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  29m (x8 over 29m)  kubelet          Node no-preload-956403 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m (x7 over 29m)  kubelet          Node no-preload-956403 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29m (x7 over 29m)  kubelet          Node no-preload-956403 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    28m                kubelet          Node no-preload-956403 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  28m                kubelet          Node no-preload-956403 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     28m                kubelet          Node no-preload-956403 status is now: NodeHasSufficientPID
	  Normal  NodeReady                28m                kubelet          Node no-preload-956403 status is now: NodeReady
	  Normal  Starting                 28m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           28m                node-controller  Node no-preload-956403 event: Registered Node no-preload-956403 in Controller
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  19m (x8 over 19m)  kubelet          Node no-preload-956403 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x8 over 19m)  kubelet          Node no-preload-956403 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x7 over 19m)  kubelet          Node no-preload-956403 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           19m                node-controller  Node no-preload-956403 event: Registered Node no-preload-956403 in Controller
	
	
	==> dmesg <==
	[Sep20 18:18] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.057298] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038706] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.380657] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.005074] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.628523] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.252715] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +0.067443] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.068029] systemd-fstab-generator[642]: Ignoring "noauto" option for root device
	[  +0.192455] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[  +0.150068] systemd-fstab-generator[668]: Ignoring "noauto" option for root device
	[  +0.301008] systemd-fstab-generator[698]: Ignoring "noauto" option for root device
	[ +15.196476] systemd-fstab-generator[1230]: Ignoring "noauto" option for root device
	[  +0.071531] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.701834] systemd-fstab-generator[1348]: Ignoring "noauto" option for root device
	[  +4.591288] kauditd_printk_skb: 97 callbacks suppressed
	[  +2.465848] systemd-fstab-generator[1979]: Ignoring "noauto" option for root device
	[Sep20 18:19] kauditd_printk_skb: 61 callbacks suppressed
	[  +5.691900] kauditd_printk_skb: 41 callbacks suppressed
	
	
	==> etcd [98aa96314cf8f863121b9953ad6335628481f536fcab49b7c00505a17eb35ff2] <==
	{"level":"info","ts":"2024-09-20T18:18:54.616318Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"63d12f7d015473f3 became candidate at term 3"}
	{"level":"info","ts":"2024-09-20T18:18:54.616343Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"63d12f7d015473f3 received MsgVoteResp from 63d12f7d015473f3 at term 3"}
	{"level":"info","ts":"2024-09-20T18:18:54.616371Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"63d12f7d015473f3 became leader at term 3"}
	{"level":"info","ts":"2024-09-20T18:18:54.616396Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 63d12f7d015473f3 elected leader 63d12f7d015473f3 at term 3"}
	{"level":"info","ts":"2024-09-20T18:18:54.658570Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T18:18:54.659769Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T18:18:54.661119Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.47:2379"}
	{"level":"info","ts":"2024-09-20T18:18:54.661643Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T18:18:54.662761Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T18:18:54.664223Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-20T18:18:54.658527Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"63d12f7d015473f3","local-member-attributes":"{Name:no-preload-956403 ClientURLs:[https://192.168.50.47:2379]}","request-path":"/0/members/63d12f7d015473f3/attributes","cluster-id":"a66a701203d69b1d","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-20T18:18:54.671015Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-20T18:18:54.671097Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-20T18:28:54.701082Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":859}
	{"level":"info","ts":"2024-09-20T18:28:54.712841Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":859,"took":"11.366825ms","hash":770300991,"current-db-size-bytes":2764800,"current-db-size":"2.8 MB","current-db-size-in-use-bytes":2764800,"current-db-size-in-use":"2.8 MB"}
	{"level":"info","ts":"2024-09-20T18:28:54.712954Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":770300991,"revision":859,"compact-revision":-1}
	{"level":"info","ts":"2024-09-20T18:33:54.708852Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1101}
	{"level":"info","ts":"2024-09-20T18:33:54.713866Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1101,"took":"4.204417ms","hash":1420206776,"current-db-size-bytes":2764800,"current-db-size":"2.8 MB","current-db-size-in-use-bytes":1630208,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-09-20T18:33:54.714071Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1420206776,"revision":1101,"compact-revision":859}
	{"level":"warn","ts":"2024-09-20T18:38:32.776373Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"145.436698ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8355182333947105237 > lease_revoke:<id:73f39210a8894f75>","response":"size:28"}
	{"level":"info","ts":"2024-09-20T18:38:33.324458Z","caller":"traceutil/trace.go:171","msg":"trace[640501304] linearizableReadLoop","detail":"{readStateIndex:1849; appliedIndex:1848; }","duration":"294.889893ms","start":"2024-09-20T18:38:33.029515Z","end":"2024-09-20T18:38:33.324405Z","steps":["trace[640501304] 'read index received'  (duration: 294.727544ms)","trace[640501304] 'applied index is now lower than readState.Index'  (duration: 161.527µs)"],"step_count":2}
	{"level":"info","ts":"2024-09-20T18:38:33.324695Z","caller":"traceutil/trace.go:171","msg":"trace[799178010] transaction","detail":"{read_only:false; response_revision:1570; number_of_response:1; }","duration":"300.372135ms","start":"2024-09-20T18:38:33.024304Z","end":"2024-09-20T18:38:33.324676Z","steps":["trace[799178010] 'process raft request'  (duration: 299.925434ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T18:38:33.324873Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"295.202674ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T18:38:33.324991Z","caller":"traceutil/trace.go:171","msg":"trace[1057045144] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1570; }","duration":"295.454543ms","start":"2024-09-20T18:38:33.029512Z","end":"2024-09-20T18:38:33.324966Z","steps":["trace[1057045144] 'agreement among raft nodes before linearized reading'  (duration: 295.155836ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T18:38:33.326890Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-20T18:38:33.024288Z","time spent":"300.463802ms","remote":"127.0.0.1:38106","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1568 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	
	
	==> kernel <==
	 18:38:36 up 20 min,  0 users,  load average: 0.04, 0.11, 0.08
	Linux no-preload-956403 5.10.207 #1 SMP Fri Sep 20 03:13:51 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [334e4df5baa4f8f7d58af2308fab7f78de22f7e07d6dfbcbeaa0df2a52f6b207] <==
	W0920 18:33:57.073592       1 handler_proxy.go:99] no RequestInfo found in the context
	E0920 18:33:57.073722       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0920 18:33:57.074677       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0920 18:33:57.074770       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0920 18:34:57.075210       1 handler_proxy.go:99] no RequestInfo found in the context
	E0920 18:34:57.075314       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0920 18:34:57.075260       1 handler_proxy.go:99] no RequestInfo found in the context
	E0920 18:34:57.075623       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0920 18:34:57.076511       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0920 18:34:57.077684       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0920 18:36:57.077357       1 handler_proxy.go:99] no RequestInfo found in the context
	E0920 18:36:57.077740       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0920 18:36:57.077841       1 handler_proxy.go:99] no RequestInfo found in the context
	E0920 18:36:57.077985       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0920 18:36:57.079409       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0920 18:36:57.079483       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [3ebf4c520d684127cd0b6146d44e863090b9c2db3ac866e9ce3ef2d10d2d6da1] <==
	E0920 18:33:29.674801       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 18:33:30.250818       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 18:33:59.681021       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 18:34:00.259050       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 18:34:29.687549       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 18:34:30.267564       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0920 18:34:44.797526       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-956403"
	E0920 18:34:59.701137       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 18:35:00.276650       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0920 18:35:07.460127       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="353.01µs"
	I0920 18:35:18.460658       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="61.177µs"
	E0920 18:35:29.708367       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 18:35:30.284443       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 18:35:59.715843       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 18:36:00.292469       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 18:36:29.722823       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 18:36:30.300374       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 18:36:59.733545       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 18:37:00.309687       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 18:37:29.741014       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 18:37:30.318495       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 18:37:59.749143       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 18:38:00.327580       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 18:38:29.757299       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 18:38:30.336378       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [6df198ca54e8004a532de9b15cfe7a48a659ae06623c00a78cad528762944497] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0920 18:18:57.114870       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0920 18:18:57.125507       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.47"]
	E0920 18:18:57.125590       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 18:18:57.167486       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0920 18:18:57.167533       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0920 18:18:57.167558       1 server_linux.go:169] "Using iptables Proxier"
	I0920 18:18:57.170114       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 18:18:57.170539       1 server.go:483] "Version info" version="v1.31.1"
	I0920 18:18:57.170588       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 18:18:57.172124       1 config.go:199] "Starting service config controller"
	I0920 18:18:57.172192       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 18:18:57.172235       1 config.go:105] "Starting endpoint slice config controller"
	I0920 18:18:57.172252       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 18:18:57.172782       1 config.go:328] "Starting node config controller"
	I0920 18:18:57.174613       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 18:18:57.272691       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0920 18:18:57.272774       1 shared_informer.go:320] Caches are synced for service config
	I0920 18:18:57.274746       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [8153479cebb05353bec31c41778746c485cb294bf41c3f98e30559ad7da64c64] <==
	I0920 18:18:54.351799       1 serving.go:386] Generated self-signed cert in-memory
	W0920 18:18:56.011382       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0920 18:18:56.011435       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0920 18:18:56.011449       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0920 18:18:56.011460       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0920 18:18:56.060855       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0920 18:18:56.063016       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 18:18:56.065953       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0920 18:18:56.066031       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0920 18:18:56.069155       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0920 18:18:56.066050       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0920 18:18:56.169291       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 20 18:37:32 no-preload-956403 kubelet[1355]: E0920 18:37:32.445597    1355 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-tfsff" podUID="599ba06a-6d4d-483b-b390-a3595a814757"
	Sep 20 18:37:32 no-preload-956403 kubelet[1355]: E0920 18:37:32.675306    1355 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857452674843076,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:37:32 no-preload-956403 kubelet[1355]: E0920 18:37:32.675395    1355 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857452674843076,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:37:42 no-preload-956403 kubelet[1355]: E0920 18:37:42.677969    1355 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857462677290876,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:37:42 no-preload-956403 kubelet[1355]: E0920 18:37:42.678053    1355 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857462677290876,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:37:43 no-preload-956403 kubelet[1355]: E0920 18:37:43.444119    1355 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-tfsff" podUID="599ba06a-6d4d-483b-b390-a3595a814757"
	Sep 20 18:37:52 no-preload-956403 kubelet[1355]: E0920 18:37:52.459550    1355 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 20 18:37:52 no-preload-956403 kubelet[1355]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 20 18:37:52 no-preload-956403 kubelet[1355]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 20 18:37:52 no-preload-956403 kubelet[1355]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 20 18:37:52 no-preload-956403 kubelet[1355]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 20 18:37:52 no-preload-956403 kubelet[1355]: E0920 18:37:52.680690    1355 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857472680216612,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:37:52 no-preload-956403 kubelet[1355]: E0920 18:37:52.680717    1355 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857472680216612,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:37:54 no-preload-956403 kubelet[1355]: E0920 18:37:54.446029    1355 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-tfsff" podUID="599ba06a-6d4d-483b-b390-a3595a814757"
	Sep 20 18:38:02 no-preload-956403 kubelet[1355]: E0920 18:38:02.684196    1355 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857482683359249,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:38:02 no-preload-956403 kubelet[1355]: E0920 18:38:02.684523    1355 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857482683359249,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:38:06 no-preload-956403 kubelet[1355]: E0920 18:38:06.443955    1355 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-tfsff" podUID="599ba06a-6d4d-483b-b390-a3595a814757"
	Sep 20 18:38:12 no-preload-956403 kubelet[1355]: E0920 18:38:12.686611    1355 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857492686129254,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:38:12 no-preload-956403 kubelet[1355]: E0920 18:38:12.686674    1355 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857492686129254,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:38:21 no-preload-956403 kubelet[1355]: E0920 18:38:21.445712    1355 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-tfsff" podUID="599ba06a-6d4d-483b-b390-a3595a814757"
	Sep 20 18:38:22 no-preload-956403 kubelet[1355]: E0920 18:38:22.688893    1355 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857502688466710,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:38:22 no-preload-956403 kubelet[1355]: E0920 18:38:22.689379    1355 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857502688466710,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:38:32 no-preload-956403 kubelet[1355]: E0920 18:38:32.445357    1355 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-tfsff" podUID="599ba06a-6d4d-483b-b390-a3595a814757"
	Sep 20 18:38:32 no-preload-956403 kubelet[1355]: E0920 18:38:32.691779    1355 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857512691265539,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:38:32 no-preload-956403 kubelet[1355]: E0920 18:38:32.691828    1355 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857512691265539,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [179e4a02f3459a613d3307806b477d54581555b18babca62d0c5e553b9562442] <==
	I0920 18:19:27.747786       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0920 18:19:27.766313       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0920 18:19:27.766389       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0920 18:19:45.172080       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0920 18:19:45.172463       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-956403_810e057e-8663-44ff-92d3-048d99113c97!
	I0920 18:19:45.173408       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3ef84c84-9c37-400b-af47-aa338eebb9db", APIVersion:"v1", ResourceVersion:"642", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-956403_810e057e-8663-44ff-92d3-048d99113c97 became leader
	I0920 18:19:45.289079       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-956403_810e057e-8663-44ff-92d3-048d99113c97!
	
	
	==> storage-provisioner [3eb9abdf57de51979118cfa77b613c22e28122fcb1da9e72fbf6f3a946ed3531] <==
	I0920 18:18:56.878791       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0920 18:19:26.881581       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-956403 -n no-preload-956403
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-956403 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-tfsff
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-956403 describe pod metrics-server-6867b74b74-tfsff
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-956403 describe pod metrics-server-6867b74b74-tfsff: exit status 1 (75.418752ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-tfsff" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-956403 describe pod metrics-server-6867b74b74-tfsff: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (371.98s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (146.68s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
	(warning above repeated 45 more times while the test polled for the dashboard pod; duplicates omitted)
E0920 18:36:39.932295   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/functional-945494/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
	(warning above repeated 3 more times; duplicates omitted)
E0920 18:36:44.154926   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/calico-833505/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
	(warning above repeated 39 more times; duplicates omitted)
E0920 18:37:23.518518   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/custom-flannel-833505/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
	(warning above repeated 18 more times; duplicates omitted)
E0920 18:37:43.196390   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.207:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.207:8443: connect: connection refused
	(warning above repeated 8 more times; duplicates omitted)
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-744025 -n old-k8s-version-744025
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-744025 -n old-k8s-version-744025: exit status 2 (239.81967ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-744025" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-744025 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-744025 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.919µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-744025 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
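For reference, the repeated "pod list ... connection refused" warnings above are label-selector pod lists against the apiserver at 192.168.39.207:8443. A minimal client-go sketch of that kind of query follows (an illustration only, not the test suite's actual helper code; the kubeconfig path is a placeholder):

	package main
	
	import (
		"context"
		"fmt"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		// Build a client from a kubeconfig (path assumed for illustration).
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
	
		// List dashboard pods by label, mirroring the query shown in the warnings above.
		pods, err := clientset.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
		if err != nil {
			// With the apiserver stopped, this fails with "connection refused", as logged above.
			fmt.Println("pod list failed:", err)
			return
		}
		for _, p := range pods.Items {
			fmt.Println(p.Name, p.Status.Phase)
		}
	}
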
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-744025 -n old-k8s-version-744025
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-744025 -n old-k8s-version-744025: exit status 2 (233.499044ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-744025 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-744025 logs -n 25: (1.750625726s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p flannel-833505 sudo cat                             | flannel-833505               | jenkins | v1.34.0 | 20 Sep 24 18:09 UTC | 20 Sep 24 18:09 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p flannel-833505 sudo                                 | flannel-833505               | jenkins | v1.34.0 | 20 Sep 24 18:09 UTC | 20 Sep 24 18:09 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p flannel-833505 sudo                                 | flannel-833505               | jenkins | v1.34.0 | 20 Sep 24 18:09 UTC | 20 Sep 24 18:09 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p flannel-833505 sudo                                 | flannel-833505               | jenkins | v1.34.0 | 20 Sep 24 18:09 UTC | 20 Sep 24 18:09 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p flannel-833505 sudo find                            | flannel-833505               | jenkins | v1.34.0 | 20 Sep 24 18:09 UTC | 20 Sep 24 18:09 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p flannel-833505 sudo crio                            | flannel-833505               | jenkins | v1.34.0 | 20 Sep 24 18:09 UTC | 20 Sep 24 18:09 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p flannel-833505                                      | flannel-833505               | jenkins | v1.34.0 | 20 Sep 24 18:09 UTC | 20 Sep 24 18:09 UTC |
	| delete  | -p                                                     | disable-driver-mounts-739804 | jenkins | v1.34.0 | 20 Sep 24 18:09 UTC | 20 Sep 24 18:09 UTC |
	|         | disable-driver-mounts-739804                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-553719 | jenkins | v1.34.0 | 20 Sep 24 18:09 UTC | 20 Sep 24 18:10 UTC |
	|         | default-k8s-diff-port-553719                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-956403             | no-preload-956403            | jenkins | v1.34.0 | 20 Sep 24 18:10 UTC | 20 Sep 24 18:10 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-956403                                   | no-preload-956403            | jenkins | v1.34.0 | 20 Sep 24 18:10 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-768431            | embed-certs-768431           | jenkins | v1.34.0 | 20 Sep 24 18:10 UTC | 20 Sep 24 18:10 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-768431                                  | embed-certs-768431           | jenkins | v1.34.0 | 20 Sep 24 18:10 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-553719  | default-k8s-diff-port-553719 | jenkins | v1.34.0 | 20 Sep 24 18:11 UTC | 20 Sep 24 18:11 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-553719 | jenkins | v1.34.0 | 20 Sep 24 18:11 UTC |                     |
	|         | default-k8s-diff-port-553719                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-956403                  | no-preload-956403            | jenkins | v1.34.0 | 20 Sep 24 18:12 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-744025        | old-k8s-version-744025       | jenkins | v1.34.0 | 20 Sep 24 18:12 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p no-preload-956403                                   | no-preload-956403            | jenkins | v1.34.0 | 20 Sep 24 18:12 UTC | 20 Sep 24 18:23 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-768431                 | embed-certs-768431           | jenkins | v1.34.0 | 20 Sep 24 18:13 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-768431                                  | embed-certs-768431           | jenkins | v1.34.0 | 20 Sep 24 18:13 UTC | 20 Sep 24 18:22 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-553719       | default-k8s-diff-port-553719 | jenkins | v1.34.0 | 20 Sep 24 18:13 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-553719 | jenkins | v1.34.0 | 20 Sep 24 18:13 UTC | 20 Sep 24 18:22 UTC |
	|         | default-k8s-diff-port-553719                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-744025                              | old-k8s-version-744025       | jenkins | v1.34.0 | 20 Sep 24 18:14 UTC | 20 Sep 24 18:14 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-744025             | old-k8s-version-744025       | jenkins | v1.34.0 | 20 Sep 24 18:14 UTC | 20 Sep 24 18:14 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-744025                              | old-k8s-version-744025       | jenkins | v1.34.0 | 20 Sep 24 18:14 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 18:14:12
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 18:14:12.735020   75577 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:14:12.735385   75577 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:14:12.735400   75577 out.go:358] Setting ErrFile to fd 2...
	I0920 18:14:12.735407   75577 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:14:12.735610   75577 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-8777/.minikube/bin
	I0920 18:14:12.736161   75577 out.go:352] Setting JSON to false
	I0920 18:14:12.737033   75577 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6996,"bootTime":1726849057,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 18:14:12.737131   75577 start.go:139] virtualization: kvm guest
	I0920 18:14:12.739127   75577 out.go:177] * [old-k8s-version-744025] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 18:14:12.740288   75577 notify.go:220] Checking for updates...
	I0920 18:14:12.740310   75577 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 18:14:12.741542   75577 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 18:14:12.742727   75577 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19672-8777/kubeconfig
	I0920 18:14:12.743806   75577 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-8777/.minikube
	I0920 18:14:12.744863   75577 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 18:14:12.745877   75577 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 18:14:12.747201   75577 config.go:182] Loaded profile config "old-k8s-version-744025": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0920 18:14:12.747636   75577 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:14:12.747678   75577 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:14:12.762352   75577 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42889
	I0920 18:14:12.762734   75577 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:14:12.763321   75577 main.go:141] libmachine: Using API Version  1
	I0920 18:14:12.763348   75577 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:14:12.763742   75577 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:14:12.763968   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .DriverName
	I0920 18:14:12.765767   75577 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0920 18:14:12.767036   75577 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 18:14:12.767377   75577 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:14:12.767449   75577 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:14:12.782473   75577 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40777
	I0920 18:14:12.782971   75577 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:14:12.783455   75577 main.go:141] libmachine: Using API Version  1
	I0920 18:14:12.783477   75577 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:14:12.783807   75577 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:14:12.784035   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .DriverName
	I0920 18:14:12.820265   75577 out.go:177] * Using the kvm2 driver based on existing profile
	I0920 18:14:12.821397   75577 start.go:297] selected driver: kvm2
	I0920 18:14:12.821409   75577 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-744025 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-744025 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.207 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:14:12.821519   75577 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 18:14:12.822217   75577 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 18:14:12.822309   75577 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19672-8777/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 18:14:12.837641   75577 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 18:14:12.838107   75577 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 18:14:12.838143   75577 cni.go:84] Creating CNI manager for ""
	I0920 18:14:12.838193   75577 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 18:14:12.838231   75577 start.go:340] cluster config:
	{Name:old-k8s-version-744025 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-744025 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.207 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:14:12.838329   75577 iso.go:125] acquiring lock: {Name:mkba95ef0488e46f622333e9f317f43def93040b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 18:14:12.840059   75577 out.go:177] * Starting "old-k8s-version-744025" primary control-plane node in "old-k8s-version-744025" cluster
	I0920 18:14:15.230099   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:14:12.841339   75577 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0920 18:14:12.841384   75577 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0920 18:14:12.841394   75577 cache.go:56] Caching tarball of preloaded images
	I0920 18:14:12.841473   75577 preload.go:172] Found /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 18:14:12.841482   75577 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0920 18:14:12.841594   75577 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/old-k8s-version-744025/config.json ...
	I0920 18:14:12.841781   75577 start.go:360] acquireMachinesLock for old-k8s-version-744025: {Name:mkfeedb385cf08b5d2aa00913e85815d02a180c2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
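	
	For reference, the preload step above only confirms that a cached image tarball is already on disk before skipping the download; a minimal manual equivalent (a sketch, assuming the same MINIKUBE_HOME layout shown in the log) is:
	
	    ls -lh /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	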
	I0920 18:14:18.302232   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:14:24.382087   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:14:27.454110   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:14:33.534076   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:14:36.606161   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:14:42.686057   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:14:45.758152   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:14:51.838087   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:14:54.910159   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:15:00.990183   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:15:04.062105   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:15:10.142153   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:15:13.218090   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:15:19.294132   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:15:22.366139   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:15:28.446103   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:15:31.518062   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:15:37.598126   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:15:40.670145   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:15:46.750116   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:15:49.822142   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:15:55.902120   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:15:58.974169   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:16:05.054139   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:16:08.126122   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:16:14.206089   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:16:17.278137   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:16:23.358156   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:16:26.430180   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:16:32.510115   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:16:35.582114   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:16:41.662126   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:16:44.734140   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:16:50.814123   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:16:53.886188   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:16:59.966139   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:17:03.038067   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:17:09.118169   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:17:12.190091   74753 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.47:22: connect: no route to host
	I0920 18:17:15.194165   75086 start.go:364] duration metric: took 4m1.091611985s to acquireMachinesLock for "embed-certs-768431"
	I0920 18:17:15.194223   75086 start.go:96] Skipping create...Using existing machine configuration
	I0920 18:17:15.194231   75086 fix.go:54] fixHost starting: 
	I0920 18:17:15.194737   75086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:17:15.194794   75086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:17:15.210558   75086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43191
	I0920 18:17:15.211084   75086 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:17:15.211527   75086 main.go:141] libmachine: Using API Version  1
	I0920 18:17:15.211550   75086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:17:15.211882   75086 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:17:15.212099   75086 main.go:141] libmachine: (embed-certs-768431) Calling .DriverName
	I0920 18:17:15.212217   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetState
	I0920 18:17:15.213955   75086 fix.go:112] recreateIfNeeded on embed-certs-768431: state=Stopped err=<nil>
	I0920 18:17:15.213995   75086 main.go:141] libmachine: (embed-certs-768431) Calling .DriverName
	W0920 18:17:15.214129   75086 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 18:17:15.215650   75086 out.go:177] * Restarting existing kvm2 VM for "embed-certs-768431" ...
	I0920 18:17:15.191760   74753 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 18:17:15.191794   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetMachineName
	I0920 18:17:15.192105   74753 buildroot.go:166] provisioning hostname "no-preload-956403"
	I0920 18:17:15.192133   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetMachineName
	I0920 18:17:15.192325   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHHostname
	I0920 18:17:15.194039   74753 machine.go:96] duration metric: took 4m37.421116123s to provisionDockerMachine
	I0920 18:17:15.194077   74753 fix.go:56] duration metric: took 4m37.444193238s for fixHost
	I0920 18:17:15.194087   74753 start.go:83] releasing machines lock for "no-preload-956403", held for 4m37.444234787s
	W0920 18:17:15.194109   74753 start.go:714] error starting host: provision: host is not running
	W0920 18:17:15.194214   74753 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0920 18:17:15.194222   74753 start.go:729] Will try again in 5 seconds ...
	I0920 18:17:15.216590   75086 main.go:141] libmachine: (embed-certs-768431) Calling .Start
	I0920 18:17:15.216773   75086 main.go:141] libmachine: (embed-certs-768431) Ensuring networks are active...
	I0920 18:17:15.217439   75086 main.go:141] libmachine: (embed-certs-768431) Ensuring network default is active
	I0920 18:17:15.217769   75086 main.go:141] libmachine: (embed-certs-768431) Ensuring network mk-embed-certs-768431 is active
	I0920 18:17:15.218058   75086 main.go:141] libmachine: (embed-certs-768431) Getting domain xml...
	I0920 18:17:15.218674   75086 main.go:141] libmachine: (embed-certs-768431) Creating domain...
	I0920 18:17:16.454922   75086 main.go:141] libmachine: (embed-certs-768431) Waiting to get IP...
	I0920 18:17:16.456011   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:16.456437   75086 main.go:141] libmachine: (embed-certs-768431) DBG | unable to find current IP address of domain embed-certs-768431 in network mk-embed-certs-768431
	I0920 18:17:16.456518   75086 main.go:141] libmachine: (embed-certs-768431) DBG | I0920 18:17:16.456416   76214 retry.go:31] will retry after 282.081004ms: waiting for machine to come up
	I0920 18:17:16.740062   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:16.740455   75086 main.go:141] libmachine: (embed-certs-768431) DBG | unable to find current IP address of domain embed-certs-768431 in network mk-embed-certs-768431
	I0920 18:17:16.740505   75086 main.go:141] libmachine: (embed-certs-768431) DBG | I0920 18:17:16.740420   76214 retry.go:31] will retry after 335.306847ms: waiting for machine to come up
	I0920 18:17:17.077169   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:17.077564   75086 main.go:141] libmachine: (embed-certs-768431) DBG | unable to find current IP address of domain embed-certs-768431 in network mk-embed-certs-768431
	I0920 18:17:17.077594   75086 main.go:141] libmachine: (embed-certs-768431) DBG | I0920 18:17:17.077513   76214 retry.go:31] will retry after 455.255315ms: waiting for machine to come up
	I0920 18:17:17.534166   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:17.534545   75086 main.go:141] libmachine: (embed-certs-768431) DBG | unable to find current IP address of domain embed-certs-768431 in network mk-embed-certs-768431
	I0920 18:17:17.534570   75086 main.go:141] libmachine: (embed-certs-768431) DBG | I0920 18:17:17.534511   76214 retry.go:31] will retry after 476.184378ms: waiting for machine to come up
	I0920 18:17:18.011940   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:18.012405   75086 main.go:141] libmachine: (embed-certs-768431) DBG | unable to find current IP address of domain embed-certs-768431 in network mk-embed-certs-768431
	I0920 18:17:18.012436   75086 main.go:141] libmachine: (embed-certs-768431) DBG | I0920 18:17:18.012359   76214 retry.go:31] will retry after 597.678215ms: waiting for machine to come up
	I0920 18:17:18.611179   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:18.611560   75086 main.go:141] libmachine: (embed-certs-768431) DBG | unable to find current IP address of domain embed-certs-768431 in network mk-embed-certs-768431
	I0920 18:17:18.611588   75086 main.go:141] libmachine: (embed-certs-768431) DBG | I0920 18:17:18.611521   76214 retry.go:31] will retry after 696.074491ms: waiting for machine to come up
	I0920 18:17:20.194460   74753 start.go:360] acquireMachinesLock for no-preload-956403: {Name:mkfeedb385cf08b5d2aa00913e85815d02a180c2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 18:17:19.309493   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:19.309940   75086 main.go:141] libmachine: (embed-certs-768431) DBG | unable to find current IP address of domain embed-certs-768431 in network mk-embed-certs-768431
	I0920 18:17:19.309958   75086 main.go:141] libmachine: (embed-certs-768431) DBG | I0920 18:17:19.309909   76214 retry.go:31] will retry after 825.638908ms: waiting for machine to come up
	I0920 18:17:20.137018   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:20.137380   75086 main.go:141] libmachine: (embed-certs-768431) DBG | unable to find current IP address of domain embed-certs-768431 in network mk-embed-certs-768431
	I0920 18:17:20.137407   75086 main.go:141] libmachine: (embed-certs-768431) DBG | I0920 18:17:20.137338   76214 retry.go:31] will retry after 997.909719ms: waiting for machine to come up
	I0920 18:17:21.137154   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:21.137608   75086 main.go:141] libmachine: (embed-certs-768431) DBG | unable to find current IP address of domain embed-certs-768431 in network mk-embed-certs-768431
	I0920 18:17:21.137630   75086 main.go:141] libmachine: (embed-certs-768431) DBG | I0920 18:17:21.137552   76214 retry.go:31] will retry after 1.368594293s: waiting for machine to come up
	I0920 18:17:22.507834   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:22.508202   75086 main.go:141] libmachine: (embed-certs-768431) DBG | unable to find current IP address of domain embed-certs-768431 in network mk-embed-certs-768431
	I0920 18:17:22.508228   75086 main.go:141] libmachine: (embed-certs-768431) DBG | I0920 18:17:22.508152   76214 retry.go:31] will retry after 1.922265011s: waiting for machine to come up
	I0920 18:17:24.431977   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:24.432422   75086 main.go:141] libmachine: (embed-certs-768431) DBG | unable to find current IP address of domain embed-certs-768431 in network mk-embed-certs-768431
	I0920 18:17:24.432452   75086 main.go:141] libmachine: (embed-certs-768431) DBG | I0920 18:17:24.432370   76214 retry.go:31] will retry after 2.875158038s: waiting for machine to come up
	I0920 18:17:27.309993   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:27.310512   75086 main.go:141] libmachine: (embed-certs-768431) DBG | unable to find current IP address of domain embed-certs-768431 in network mk-embed-certs-768431
	I0920 18:17:27.310539   75086 main.go:141] libmachine: (embed-certs-768431) DBG | I0920 18:17:27.310459   76214 retry.go:31] will retry after 3.089759463s: waiting for machine to come up
	I0920 18:17:30.402254   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:30.402718   75086 main.go:141] libmachine: (embed-certs-768431) DBG | unable to find current IP address of domain embed-certs-768431 in network mk-embed-certs-768431
	I0920 18:17:30.402757   75086 main.go:141] libmachine: (embed-certs-768431) DBG | I0920 18:17:30.402671   76214 retry.go:31] will retry after 3.42897838s: waiting for machine to come up
	I0920 18:17:33.835196   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:33.835576   75086 main.go:141] libmachine: (embed-certs-768431) Found IP for machine: 192.168.61.202
	I0920 18:17:33.835606   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has current primary IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:33.835615   75086 main.go:141] libmachine: (embed-certs-768431) Reserving static IP address...
	I0920 18:17:33.835974   75086 main.go:141] libmachine: (embed-certs-768431) Reserved static IP address: 192.168.61.202
	I0920 18:17:33.836010   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "embed-certs-768431", mac: "52:54:00:d2:f2:e2", ip: "192.168.61.202"} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:17:33.836024   75086 main.go:141] libmachine: (embed-certs-768431) Waiting for SSH to be available...
	I0920 18:17:33.836053   75086 main.go:141] libmachine: (embed-certs-768431) DBG | skip adding static IP to network mk-embed-certs-768431 - found existing host DHCP lease matching {name: "embed-certs-768431", mac: "52:54:00:d2:f2:e2", ip: "192.168.61.202"}
	I0920 18:17:33.836065   75086 main.go:141] libmachine: (embed-certs-768431) DBG | Getting to WaitForSSH function...
	I0920 18:17:33.838215   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:33.838561   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:17:33.838593   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:33.838735   75086 main.go:141] libmachine: (embed-certs-768431) DBG | Using SSH client type: external
	I0920 18:17:33.838768   75086 main.go:141] libmachine: (embed-certs-768431) DBG | Using SSH private key: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/embed-certs-768431/id_rsa (-rw-------)
	I0920 18:17:33.838809   75086 main.go:141] libmachine: (embed-certs-768431) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.202 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19672-8777/.minikube/machines/embed-certs-768431/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 18:17:33.838826   75086 main.go:141] libmachine: (embed-certs-768431) DBG | About to run SSH command:
	I0920 18:17:33.838838   75086 main.go:141] libmachine: (embed-certs-768431) DBG | exit 0
	I0920 18:17:33.962092   75086 main.go:141] libmachine: (embed-certs-768431) DBG | SSH cmd err, output: <nil>: 
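	
	For reference, the WaitForSSH probe logged above boils down to retrying a no-op command over SSH until the guest answers; a sketch of the equivalent one-liner, using the key path and address from the log, is:
	
	    ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ConnectTimeout=10 \
	        -i /home/jenkins/minikube-integration/19672-8777/.minikube/machines/embed-certs-768431/id_rsa \
	        docker@192.168.61.202 'exit 0' && echo "SSH is up"
	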
	I0920 18:17:33.962513   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetConfigRaw
	I0920 18:17:33.963115   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetIP
	I0920 18:17:33.965714   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:33.966036   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:17:33.966056   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:33.966391   75086 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/embed-certs-768431/config.json ...
	I0920 18:17:33.966621   75086 machine.go:93] provisionDockerMachine start ...
	I0920 18:17:33.966641   75086 main.go:141] libmachine: (embed-certs-768431) Calling .DriverName
	I0920 18:17:33.966848   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHHostname
	I0920 18:17:33.968954   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:33.969235   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:17:33.969266   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:33.969358   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHPort
	I0920 18:17:33.969542   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHKeyPath
	I0920 18:17:33.969694   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHKeyPath
	I0920 18:17:33.969854   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHUsername
	I0920 18:17:33.970001   75086 main.go:141] libmachine: Using SSH client type: native
	I0920 18:17:33.970194   75086 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.202 22 <nil> <nil>}
	I0920 18:17:33.970204   75086 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 18:17:35.178887   75264 start.go:364] duration metric: took 3m58.041934266s to acquireMachinesLock for "default-k8s-diff-port-553719"
	I0920 18:17:35.178955   75264 start.go:96] Skipping create...Using existing machine configuration
	I0920 18:17:35.178968   75264 fix.go:54] fixHost starting: 
	I0920 18:17:35.179541   75264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:17:35.179604   75264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:17:35.199776   75264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39643
	I0920 18:17:35.200255   75264 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:17:35.200832   75264 main.go:141] libmachine: Using API Version  1
	I0920 18:17:35.200858   75264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:17:35.201213   75264 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:17:35.201427   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .DriverName
	I0920 18:17:35.201575   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetState
	I0920 18:17:35.203239   75264 fix.go:112] recreateIfNeeded on default-k8s-diff-port-553719: state=Stopped err=<nil>
	I0920 18:17:35.203279   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .DriverName
	W0920 18:17:35.203415   75264 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 18:17:35.205529   75264 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-553719" ...
	I0920 18:17:34.078412   75086 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0920 18:17:34.078447   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetMachineName
	I0920 18:17:34.078648   75086 buildroot.go:166] provisioning hostname "embed-certs-768431"
	I0920 18:17:34.078676   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetMachineName
	I0920 18:17:34.078854   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHHostname
	I0920 18:17:34.081703   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:34.082042   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:17:34.082079   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:34.082175   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHPort
	I0920 18:17:34.082371   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHKeyPath
	I0920 18:17:34.082525   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHKeyPath
	I0920 18:17:34.082656   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHUsername
	I0920 18:17:34.082778   75086 main.go:141] libmachine: Using SSH client type: native
	I0920 18:17:34.082937   75086 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.202 22 <nil> <nil>}
	I0920 18:17:34.082949   75086 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-768431 && echo "embed-certs-768431" | sudo tee /etc/hostname
	I0920 18:17:34.199661   75086 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-768431
	
	I0920 18:17:34.199726   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHHostname
	I0920 18:17:34.202468   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:34.202875   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:17:34.202903   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:34.203084   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHPort
	I0920 18:17:34.203328   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHKeyPath
	I0920 18:17:34.203477   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHKeyPath
	I0920 18:17:34.203630   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHUsername
	I0920 18:17:34.203769   75086 main.go:141] libmachine: Using SSH client type: native
	I0920 18:17:34.203936   75086 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.202 22 <nil> <nil>}
	I0920 18:17:34.203952   75086 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-768431' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-768431/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-768431' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 18:17:34.314604   75086 main.go:141] libmachine: SSH cmd err, output: <nil>: 
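	
	The two SSH commands above set the guest hostname and pin it in /etc/hosts; a quick manual check on the VM (a sketch, assuming the same SSH access as above) would be:
	
	    cat /etc/hostname
	    grep embed-certs-768431 /etc/hosts
	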
	I0920 18:17:34.314632   75086 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19672-8777/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-8777/.minikube}
	I0920 18:17:34.314656   75086 buildroot.go:174] setting up certificates
	I0920 18:17:34.314666   75086 provision.go:84] configureAuth start
	I0920 18:17:34.314674   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetMachineName
	I0920 18:17:34.314940   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetIP
	I0920 18:17:34.317999   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:34.318306   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:17:34.318326   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:34.318538   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHHostname
	I0920 18:17:34.320715   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:34.321046   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:17:34.321073   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:34.321187   75086 provision.go:143] copyHostCerts
	I0920 18:17:34.321256   75086 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem, removing ...
	I0920 18:17:34.321276   75086 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem
	I0920 18:17:34.321359   75086 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem (1082 bytes)
	I0920 18:17:34.321452   75086 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem, removing ...
	I0920 18:17:34.321459   75086 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem
	I0920 18:17:34.321493   75086 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem (1123 bytes)
	I0920 18:17:34.321549   75086 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem, removing ...
	I0920 18:17:34.321558   75086 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem
	I0920 18:17:34.321580   75086 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem (1675 bytes)
	I0920 18:17:34.321692   75086 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem org=jenkins.embed-certs-768431 san=[127.0.0.1 192.168.61.202 embed-certs-768431 localhost minikube]
	I0920 18:17:34.547266   75086 provision.go:177] copyRemoteCerts
	I0920 18:17:34.547343   75086 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 18:17:34.547373   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHHostname
	I0920 18:17:34.550265   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:34.550648   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:17:34.550680   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:34.550895   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHPort
	I0920 18:17:34.551084   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHKeyPath
	I0920 18:17:34.551227   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHUsername
	I0920 18:17:34.551359   75086 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/embed-certs-768431/id_rsa Username:docker}
	I0920 18:17:34.632404   75086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0920 18:17:34.656468   75086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 18:17:34.681704   75086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0920 18:17:34.706486   75086 provision.go:87] duration metric: took 391.807931ms to configureAuth
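	
	The scp steps above install the freshly generated server certificate, key and CA on the guest; the SANs requested earlier in the log (127.0.0.1, 192.168.61.202, embed-certs-768431, localhost, minikube) can be double-checked with something like:
	
	    sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'
	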
	I0920 18:17:34.706514   75086 buildroot.go:189] setting minikube options for container-runtime
	I0920 18:17:34.706681   75086 config.go:182] Loaded profile config "embed-certs-768431": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:17:34.706750   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHHostname
	I0920 18:17:34.709500   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:34.709851   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:17:34.709915   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:34.709972   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHPort
	I0920 18:17:34.710199   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHKeyPath
	I0920 18:17:34.710337   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHKeyPath
	I0920 18:17:34.710511   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHUsername
	I0920 18:17:34.710669   75086 main.go:141] libmachine: Using SSH client type: native
	I0920 18:17:34.710854   75086 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.202 22 <nil> <nil>}
	I0920 18:17:34.710875   75086 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 18:17:34.941246   75086 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 18:17:34.941277   75086 machine.go:96] duration metric: took 974.641764ms to provisionDockerMachine
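	
	The provisioning command above writes an insecure-registry flag for the service CIDR into a sysconfig drop-in and restarts CRI-O; verifying that it took effect (a sketch) amounts to:
	
	    cat /etc/sysconfig/crio.minikube
	    sudo systemctl is-active crio
	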
	I0920 18:17:34.941293   75086 start.go:293] postStartSetup for "embed-certs-768431" (driver="kvm2")
	I0920 18:17:34.941341   75086 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 18:17:34.941370   75086 main.go:141] libmachine: (embed-certs-768431) Calling .DriverName
	I0920 18:17:34.941757   75086 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 18:17:34.941792   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHHostname
	I0920 18:17:34.944366   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:34.944712   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:17:34.944749   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:34.944861   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHPort
	I0920 18:17:34.945051   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHKeyPath
	I0920 18:17:34.945198   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHUsername
	I0920 18:17:34.945320   75086 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/embed-certs-768431/id_rsa Username:docker}
	I0920 18:17:35.028570   75086 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 18:17:35.032711   75086 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 18:17:35.032751   75086 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8777/.minikube/addons for local assets ...
	I0920 18:17:35.032859   75086 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8777/.minikube/files for local assets ...
	I0920 18:17:35.032973   75086 filesync.go:149] local asset: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem -> 159732.pem in /etc/ssl/certs
	I0920 18:17:35.033127   75086 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 18:17:35.042212   75086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem --> /etc/ssl/certs/159732.pem (1708 bytes)
	I0920 18:17:35.066379   75086 start.go:296] duration metric: took 125.0653ms for postStartSetup
	I0920 18:17:35.066429   75086 fix.go:56] duration metric: took 19.872197784s for fixHost
	I0920 18:17:35.066456   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHHostname
	I0920 18:17:35.069888   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:35.070261   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:17:35.070286   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:35.070472   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHPort
	I0920 18:17:35.070693   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHKeyPath
	I0920 18:17:35.070836   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHKeyPath
	I0920 18:17:35.070989   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHUsername
	I0920 18:17:35.071204   75086 main.go:141] libmachine: Using SSH client type: native
	I0920 18:17:35.071396   75086 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.202 22 <nil> <nil>}
	I0920 18:17:35.071407   75086 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 18:17:35.178674   75086 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726856255.154624802
	
	I0920 18:17:35.178704   75086 fix.go:216] guest clock: 1726856255.154624802
	I0920 18:17:35.178712   75086 fix.go:229] Guest: 2024-09-20 18:17:35.154624802 +0000 UTC Remote: 2024-09-20 18:17:35.066435161 +0000 UTC m=+261.108982198 (delta=88.189641ms)
	I0920 18:17:35.178734   75086 fix.go:200] guest clock delta is within tolerance: 88.189641ms
	I0920 18:17:35.178760   75086 start.go:83] releasing machines lock for "embed-certs-768431", held for 19.984535459s
	I0920 18:17:35.178801   75086 main.go:141] libmachine: (embed-certs-768431) Calling .DriverName
	I0920 18:17:35.179101   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetIP
	I0920 18:17:35.181807   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:35.182179   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:17:35.182208   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:35.182353   75086 main.go:141] libmachine: (embed-certs-768431) Calling .DriverName
	I0920 18:17:35.182850   75086 main.go:141] libmachine: (embed-certs-768431) Calling .DriverName
	I0920 18:17:35.182993   75086 main.go:141] libmachine: (embed-certs-768431) Calling .DriverName
	I0920 18:17:35.183097   75086 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 18:17:35.183171   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHHostname
	I0920 18:17:35.183171   75086 ssh_runner.go:195] Run: cat /version.json
	I0920 18:17:35.183230   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHHostname
	I0920 18:17:35.185905   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:35.185942   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:35.186249   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:17:35.186279   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:35.186317   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:17:35.186331   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:35.186476   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHPort
	I0920 18:17:35.186597   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHPort
	I0920 18:17:35.186687   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHKeyPath
	I0920 18:17:35.186791   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHKeyPath
	I0920 18:17:35.186816   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHUsername
	I0920 18:17:35.186974   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHUsername
	I0920 18:17:35.186992   75086 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/embed-certs-768431/id_rsa Username:docker}
	I0920 18:17:35.187127   75086 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/embed-certs-768431/id_rsa Username:docker}
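The two sshutil.go:53 lines above show minikube opening a separate SSH session to the guest for each command it is about to run (here, the registry probe and "cat /version.json"). Below is a minimal, illustrative sketch of that pattern using golang.org/x/crypto/ssh; the address, user, and key path are taken from the log, but the helper itself (name, error handling, host-key policy) is an assumption for illustration and is not minikube's actual sshutil implementation.

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

// runOverSSH opens one SSH session per command, mirroring how the log above
// creates a fresh client before each remote Run. Hypothetical helper, for
// illustration only.
func runOverSSH(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for throwaway test VMs, not production
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer session.Close()
	out, err := session.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runOverSSH("192.168.61.202:22", "docker",
		"/home/jenkins/minikube-integration/19672-8777/.minikube/machines/embed-certs-768431/id_rsa",
		"cat /version.json")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(out)
}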
	I0920 18:17:35.311021   75086 ssh_runner.go:195] Run: systemctl --version
	I0920 18:17:35.317587   75086 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 18:17:35.462634   75086 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 18:17:35.469331   75086 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 18:17:35.469444   75086 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 18:17:35.485752   75086 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 18:17:35.485780   75086 start.go:495] detecting cgroup driver to use...
	I0920 18:17:35.485903   75086 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 18:17:35.507004   75086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 18:17:35.525868   75086 docker.go:217] disabling cri-docker service (if available) ...
	I0920 18:17:35.525934   75086 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 18:17:35.542533   75086 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 18:17:35.557622   75086 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 18:17:35.669845   75086 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 18:17:35.827784   75086 docker.go:233] disabling docker service ...
	I0920 18:17:35.827855   75086 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 18:17:35.842877   75086 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 18:17:35.859468   75086 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 18:17:35.994094   75086 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 18:17:36.123353   75086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 18:17:36.137941   75086 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 18:17:36.157781   75086 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 18:17:36.157871   75086 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:17:36.168857   75086 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 18:17:36.168929   75086 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:17:36.179807   75086 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:17:36.192260   75086 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:17:36.203220   75086 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 18:17:36.214388   75086 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:17:36.224686   75086 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:17:36.242143   75086 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
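The run of sed -i commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: it pins the pause image, switches the cgroup manager to "cgroupfs", and injects net.ipv4.ip_unprivileged_port_start=0 into default_sysctls. The following is a minimal Go sketch of the same line-oriented replace approach, operating on a string for clarity; it is an illustration of the technique, not the code behind crio.go.

package main

import (
	"fmt"
	"regexp"
)

// patchCrioConf applies the same kind of whole-line replacements that the
// log's sed commands perform on 02-crio.conf (illustrative sketch only).
func patchCrioConf(conf, pauseImage, cgroupManager string) string {
	pauseRe := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	conf = pauseRe.ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", pauseImage))

	cgroupRe := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	conf = cgroupRe.ReplaceAllString(conf, fmt.Sprintf("cgroup_manager = %q", cgroupManager))
	return conf
}

func main() {
	in := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.9\"\n[crio.runtime]\ncgroup_manager = \"systemd\"\n"
	fmt.Print(patchCrioConf(in, "registry.k8s.io/pause:3.10", "cgroupfs"))
}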
	I0920 18:17:36.257047   75086 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 18:17:36.272014   75086 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 18:17:36.272130   75086 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 18:17:36.286083   75086 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 18:17:36.296624   75086 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:17:36.414196   75086 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 18:17:36.511654   75086 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 18:17:36.511720   75086 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 18:17:36.516593   75086 start.go:563] Will wait 60s for crictl version
	I0920 18:17:36.516640   75086 ssh_runner.go:195] Run: which crictl
	I0920 18:17:36.520360   75086 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 18:17:36.556846   75086 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 18:17:36.556949   75086 ssh_runner.go:195] Run: crio --version
	I0920 18:17:36.584699   75086 ssh_runner.go:195] Run: crio --version
	I0920 18:17:36.615179   75086 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 18:17:35.206839   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .Start
	I0920 18:17:35.207055   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Ensuring networks are active...
	I0920 18:17:35.207928   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Ensuring network default is active
	I0920 18:17:35.208260   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Ensuring network mk-default-k8s-diff-port-553719 is active
	I0920 18:17:35.208679   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Getting domain xml...
	I0920 18:17:35.209488   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Creating domain...
	I0920 18:17:36.500751   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting to get IP...
	I0920 18:17:36.501630   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:36.502033   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | unable to find current IP address of domain default-k8s-diff-port-553719 in network mk-default-k8s-diff-port-553719
	I0920 18:17:36.502137   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | I0920 18:17:36.502044   76346 retry.go:31] will retry after 216.050114ms: waiting for machine to come up
	I0920 18:17:36.719559   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:36.720002   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | unable to find current IP address of domain default-k8s-diff-port-553719 in network mk-default-k8s-diff-port-553719
	I0920 18:17:36.720026   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | I0920 18:17:36.719977   76346 retry.go:31] will retry after 303.832728ms: waiting for machine to come up
	I0920 18:17:37.025606   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:37.026096   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | unable to find current IP address of domain default-k8s-diff-port-553719 in network mk-default-k8s-diff-port-553719
	I0920 18:17:37.026137   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | I0920 18:17:37.026050   76346 retry.go:31] will retry after 316.827461ms: waiting for machine to come up
	I0920 18:17:36.616640   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetIP
	I0920 18:17:36.619927   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:36.620377   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:17:36.620407   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:17:36.620642   75086 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0920 18:17:36.627189   75086 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
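The grep/echo pipeline above refreshes the host.minikube.internal entry in the guest's /etc/hosts: it strips any existing line for that name, appends the new mapping, and copies the result back as root. A small Go sketch of the same filter-and-append idea follows; the IP and hostname come from the log, while the helper name and local file path are hypothetical.

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHostsEntry removes any existing line ending in "<TAB>host" and appends
// a fresh "ip<TAB>host" mapping, mirroring the grep -v / echo pipeline above.
func upsertHostsEntry(hostsPath, ip, host string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // drop the stale entry
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// In the log this runs over SSH with sudo; here we exercise a local copy.
	_ = os.WriteFile("/tmp/hosts.copy", []byte("127.0.0.1\tlocalhost\n"), 0644)
	if err := upsertHostsEntry("/tmp/hosts.copy", "192.168.61.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}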
	I0920 18:17:36.639842   75086 kubeadm.go:883] updating cluster {Name:embed-certs-768431 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-768431 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.202 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 18:17:36.639953   75086 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 18:17:36.640019   75086 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:17:36.676519   75086 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0920 18:17:36.676599   75086 ssh_runner.go:195] Run: which lz4
	I0920 18:17:36.680472   75086 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 18:17:36.684650   75086 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 18:17:36.684683   75086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0920 18:17:38.090996   75086 crio.go:462] duration metric: took 1.410558742s to copy over tarball
	I0920 18:17:38.091068   75086 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0920 18:17:37.344880   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:37.345393   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | unable to find current IP address of domain default-k8s-diff-port-553719 in network mk-default-k8s-diff-port-553719
	I0920 18:17:37.345419   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | I0920 18:17:37.345323   76346 retry.go:31] will retry after 456.966571ms: waiting for machine to come up
	I0920 18:17:37.804436   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:37.804919   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | unable to find current IP address of domain default-k8s-diff-port-553719 in network mk-default-k8s-diff-port-553719
	I0920 18:17:37.804954   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | I0920 18:17:37.804891   76346 retry.go:31] will retry after 582.103589ms: waiting for machine to come up
	I0920 18:17:38.388738   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:38.389266   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | unable to find current IP address of domain default-k8s-diff-port-553719 in network mk-default-k8s-diff-port-553719
	I0920 18:17:38.389297   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | I0920 18:17:38.389217   76346 retry.go:31] will retry after 884.882678ms: waiting for machine to come up
	I0920 18:17:39.276048   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:39.276478   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | unable to find current IP address of domain default-k8s-diff-port-553719 in network mk-default-k8s-diff-port-553719
	I0920 18:17:39.276504   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | I0920 18:17:39.276453   76346 retry.go:31] will retry after 807.001285ms: waiting for machine to come up
	I0920 18:17:40.085749   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:40.086228   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | unable to find current IP address of domain default-k8s-diff-port-553719 in network mk-default-k8s-diff-port-553719
	I0920 18:17:40.086256   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | I0920 18:17:40.086190   76346 retry.go:31] will retry after 1.283354255s: waiting for machine to come up
	I0920 18:17:41.370861   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:41.371314   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | unable to find current IP address of domain default-k8s-diff-port-553719 in network mk-default-k8s-diff-port-553719
	I0920 18:17:41.371343   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | I0920 18:17:41.371265   76346 retry.go:31] will retry after 1.756084886s: waiting for machine to come up
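The retry.go:31 lines interleaved above show the kvm2 driver polling libvirt for the new domain's DHCP lease, sleeping for a randomized, growing interval between attempts ("will retry after ...: waiting for machine to come up"). Below is a generic sketch of that wait-with-backoff loop; the lease lookup is stubbed out, the durations are illustrative, and nothing here is minikube's actual retry implementation.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup() until it returns a non-empty IP or the deadline
// passes, growing the sleep between attempts much like the retry lines above.
func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookup(); err == nil && ip != "" {
			return ip, nil
		}
		// Randomize a little so concurrent starts do not poll in lockstep.
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		if delay < 2*time.Second {
			delay *= 2
		}
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	attempts := 0
	ip, err := waitForIP(func() (string, error) {
		attempts++
		if attempts < 4 {
			return "", errors.New("no DHCP lease yet")
		}
		return "192.168.72.190", nil // value from the log, used here as dummy data
	}, 30*time.Second)
	fmt.Println(ip, err)
}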
	I0920 18:17:40.301535   75086 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.210437306s)
	I0920 18:17:40.301567   75086 crio.go:469] duration metric: took 2.210539553s to extract the tarball
	I0920 18:17:40.301578   75086 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0920 18:17:40.338638   75086 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:17:40.381753   75086 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 18:17:40.381780   75086 cache_images.go:84] Images are preloaded, skipping loading
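The crio.go:510/514 lines decide whether the preloaded image tarball has to be copied in by inspecting "sudo crictl images --output json". A hedged sketch of that check is below: it shells out to crictl and scans the JSON for a required repo tag. The struct fields reflect the crictl JSON shape as commonly observed ("images" / "repoTags"); treat the exact schema, and the helper name, as assumptions.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// crictlImages models only the fields we need from crictl's JSON output
// (schema assumed for illustration).
type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasImage returns true if `crictl images --output json` lists the given tag.
func hasImage(tag string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		return false, err
	}
	for _, img := range imgs.Images {
		for _, t := range img.RepoTags {
			if t == tag {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.31.1")
	fmt.Println(ok, err) // false maps to the log's "assuming images are not preloaded"
}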
	I0920 18:17:40.381788   75086 kubeadm.go:934] updating node { 192.168.61.202 8443 v1.31.1 crio true true} ...
	I0920 18:17:40.381914   75086 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-768431 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.202
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-768431 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 18:17:40.381998   75086 ssh_runner.go:195] Run: crio config
	I0920 18:17:40.428457   75086 cni.go:84] Creating CNI manager for ""
	I0920 18:17:40.428482   75086 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 18:17:40.428491   75086 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 18:17:40.428512   75086 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.202 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-768431 NodeName:embed-certs-768431 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.202"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.202 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 18:17:40.428681   75086 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.202
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-768431"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.202
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.202"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 18:17:40.428800   75086 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 18:17:40.439049   75086 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 18:17:40.439123   75086 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 18:17:40.449220   75086 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0920 18:17:40.466012   75086 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 18:17:40.482158   75086 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0920 18:17:40.499225   75086 ssh_runner.go:195] Run: grep 192.168.61.202	control-plane.minikube.internal$ /etc/hosts
	I0920 18:17:40.502817   75086 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.202	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 18:17:40.515241   75086 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:17:40.638615   75086 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:17:40.655790   75086 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/embed-certs-768431 for IP: 192.168.61.202
	I0920 18:17:40.655814   75086 certs.go:194] generating shared ca certs ...
	I0920 18:17:40.655834   75086 certs.go:226] acquiring lock for ca certs: {Name:mkc7ef6c737c6bdc3fdd9dcff8f57029c020d8f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:17:40.656006   75086 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key
	I0920 18:17:40.656070   75086 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key
	I0920 18:17:40.656084   75086 certs.go:256] generating profile certs ...
	I0920 18:17:40.656193   75086 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/embed-certs-768431/client.key
	I0920 18:17:40.656281   75086 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/embed-certs-768431/apiserver.key.a3dbf377
	I0920 18:17:40.656354   75086 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/embed-certs-768431/proxy-client.key
	I0920 18:17:40.656508   75086 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973.pem (1338 bytes)
	W0920 18:17:40.656560   75086 certs.go:480] ignoring /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973_empty.pem, impossibly tiny 0 bytes
	I0920 18:17:40.656573   75086 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 18:17:40.656620   75086 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem (1082 bytes)
	I0920 18:17:40.656654   75086 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem (1123 bytes)
	I0920 18:17:40.656679   75086 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem (1675 bytes)
	I0920 18:17:40.656733   75086 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem (1708 bytes)
	I0920 18:17:40.657567   75086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 18:17:40.693111   75086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 18:17:40.733664   75086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 18:17:40.774367   75086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0920 18:17:40.800930   75086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/embed-certs-768431/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0920 18:17:40.827793   75086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/embed-certs-768431/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 18:17:40.851189   75086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/embed-certs-768431/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 18:17:40.875092   75086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/embed-certs-768431/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0920 18:17:40.898639   75086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973.pem --> /usr/share/ca-certificates/15973.pem (1338 bytes)
	I0920 18:17:40.921569   75086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem --> /usr/share/ca-certificates/159732.pem (1708 bytes)
	I0920 18:17:40.945739   75086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 18:17:40.969942   75086 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 18:17:40.987563   75086 ssh_runner.go:195] Run: openssl version
	I0920 18:17:40.993363   75086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15973.pem && ln -fs /usr/share/ca-certificates/15973.pem /etc/ssl/certs/15973.pem"
	I0920 18:17:41.004673   75086 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15973.pem
	I0920 18:17:41.008985   75086 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 17:04 /usr/share/ca-certificates/15973.pem
	I0920 18:17:41.009032   75086 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15973.pem
	I0920 18:17:41.014950   75086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15973.pem /etc/ssl/certs/51391683.0"
	I0920 18:17:41.027944   75086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/159732.pem && ln -fs /usr/share/ca-certificates/159732.pem /etc/ssl/certs/159732.pem"
	I0920 18:17:41.040711   75086 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/159732.pem
	I0920 18:17:41.045304   75086 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 17:04 /usr/share/ca-certificates/159732.pem
	I0920 18:17:41.045361   75086 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/159732.pem
	I0920 18:17:41.050891   75086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/159732.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 18:17:41.061542   75086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 18:17:41.072622   75086 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:17:41.076916   75086 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:17:41.076965   75086 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:17:41.082644   75086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 18:17:41.093136   75086 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 18:17:41.097496   75086 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 18:17:41.103393   75086 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 18:17:41.109444   75086 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 18:17:41.115844   75086 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 18:17:41.121578   75086 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 18:17:41.127292   75086 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
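The "openssl x509 -checkend 86400" runs above verify that each existing control-plane certificate is still valid for at least another 24 hours before it is reused. The same check expressed in Go with crypto/x509 is sketched below; the file path is one of the certs named in the log, and the helper itself is illustrative rather than minikube's code.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// i.e. the condition under which `openssl x509 -checkend` exits non-zero.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, errors.New("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(soon, err)
}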
	I0920 18:17:41.133310   75086 kubeadm.go:392] StartCluster: {Name:embed-certs-768431 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-768431 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.202 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:17:41.133393   75086 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 18:17:41.133439   75086 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 18:17:41.177284   75086 cri.go:89] found id: ""
	I0920 18:17:41.177380   75086 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 18:17:41.187289   75086 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0920 18:17:41.187311   75086 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0920 18:17:41.187362   75086 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0920 18:17:41.196445   75086 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0920 18:17:41.197939   75086 kubeconfig.go:125] found "embed-certs-768431" server: "https://192.168.61.202:8443"
	I0920 18:17:41.201030   75086 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0920 18:17:41.210356   75086 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.202
	I0920 18:17:41.210394   75086 kubeadm.go:1160] stopping kube-system containers ...
	I0920 18:17:41.210422   75086 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0920 18:17:41.210499   75086 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 18:17:41.244659   75086 cri.go:89] found id: ""
	I0920 18:17:41.244721   75086 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0920 18:17:41.264341   75086 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 18:17:41.275229   75086 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 18:17:41.275252   75086 kubeadm.go:157] found existing configuration files:
	
	I0920 18:17:41.275314   75086 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 18:17:41.285420   75086 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 18:17:41.285502   75086 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 18:17:41.295902   75086 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 18:17:41.305951   75086 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 18:17:41.306015   75086 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 18:17:41.316567   75086 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 18:17:41.325623   75086 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 18:17:41.325691   75086 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 18:17:41.336405   75086 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 18:17:41.346501   75086 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 18:17:41.346574   75086 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 18:17:41.357340   75086 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 18:17:41.367956   75086 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:17:41.479022   75086 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:17:42.796122   75086 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.317061067s)
	I0920 18:17:42.796155   75086 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:17:43.053135   75086 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:17:43.139770   75086 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:17:43.244066   75086 api_server.go:52] waiting for apiserver process to appear ...
	I0920 18:17:43.244166   75086 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:17:43.744622   75086 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:17:43.129383   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:43.129827   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | unable to find current IP address of domain default-k8s-diff-port-553719 in network mk-default-k8s-diff-port-553719
	I0920 18:17:43.129864   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | I0920 18:17:43.129798   76346 retry.go:31] will retry after 2.259775905s: waiting for machine to come up
	I0920 18:17:45.391508   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:45.391915   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | unable to find current IP address of domain default-k8s-diff-port-553719 in network mk-default-k8s-diff-port-553719
	I0920 18:17:45.391941   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | I0920 18:17:45.391885   76346 retry.go:31] will retry after 1.767770692s: waiting for machine to come up
	I0920 18:17:44.244723   75086 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:17:44.265463   75086 api_server.go:72] duration metric: took 1.02139562s to wait for apiserver process to appear ...
	I0920 18:17:44.265500   75086 api_server.go:88] waiting for apiserver healthz status ...
	I0920 18:17:44.265528   75086 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0920 18:17:46.935510   75086 api_server.go:279] https://192.168.61.202:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 18:17:46.935571   75086 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 18:17:46.935589   75086 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0920 18:17:46.947553   75086 api_server.go:279] https://192.168.61.202:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 18:17:46.947586   75086 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 18:17:47.265919   75086 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0920 18:17:47.272986   75086 api_server.go:279] https://192.168.61.202:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 18:17:47.273020   75086 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 18:17:47.766662   75086 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0920 18:17:47.783700   75086 api_server.go:279] https://192.168.61.202:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 18:17:47.783749   75086 api_server.go:103] status: https://192.168.61.202:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 18:17:48.266012   75086 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0920 18:17:48.270573   75086 api_server.go:279] https://192.168.61.202:8443/healthz returned 200:
	ok
	I0920 18:17:48.277290   75086 api_server.go:141] control plane version: v1.31.1
	I0920 18:17:48.277331   75086 api_server.go:131] duration metric: took 4.011813655s to wait for apiserver health ...
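The api_server.go lines above poll https://192.168.61.202:8443/healthz roughly every 500ms, treating 403 and 500 responses as "not ready yet" and stopping once the endpoint answers 200 "ok". A stripped-down sketch of that wait loop is shown below; it skips TLS verification purely to stay short, whereas the real check authenticates against the cluster CA, and the function name and timeouts are illustrative.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it answers 200 or the timeout elapses,
// mirroring the healthz loop in the log (403/500 responses are retried).
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// InsecureSkipVerify keeps the sketch short; do not do this outside test tooling.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz: %s\n", body)
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %v", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.202:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}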
	I0920 18:17:48.277342   75086 cni.go:84] Creating CNI manager for ""
	I0920 18:17:48.277351   75086 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 18:17:48.278863   75086 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 18:17:48.279934   75086 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 18:17:48.304037   75086 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0920 18:17:48.337082   75086 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 18:17:48.349485   75086 system_pods.go:59] 8 kube-system pods found
	I0920 18:17:48.349541   75086 system_pods.go:61] "coredns-7c65d6cfc9-cskt4" [2ce6fa43-bdf9-4625-a198-8f59ee9fcf0b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0920 18:17:48.349554   75086 system_pods.go:61] "etcd-embed-certs-768431" [656af9fd-b380-4934-a0b5-5d39a755de44] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0920 18:17:48.349565   75086 system_pods.go:61] "kube-apiserver-embed-certs-768431" [e128f4ff-3432-4d27-83de-ed76b686403d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0920 18:17:48.349573   75086 system_pods.go:61] "kube-controller-manager-embed-certs-768431" [a2c7e6d7-e351-4e6a-ab95-638aecdaab28] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0920 18:17:48.349579   75086 system_pods.go:61] "kube-proxy-cshjm" [86a82f1b-d8d5-4ab7-89fb-84b836dc0470] Running
	I0920 18:17:48.349586   75086 system_pods.go:61] "kube-scheduler-embed-certs-768431" [64bc50a6-2e99-4992-b249-9ffdf463539a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0920 18:17:48.349601   75086 system_pods.go:61] "metrics-server-6867b74b74-dwnt6" [bb0a120f-ca9e-4b54-826b-55aa20b2f6a4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 18:17:48.349606   75086 system_pods.go:61] "storage-provisioner" [083c6964-a774-4b85-8e6d-acd04ededb3c] Running
	I0920 18:17:48.349617   75086 system_pods.go:74] duration metric: took 12.507925ms to wait for pod list to return data ...
	I0920 18:17:48.349630   75086 node_conditions.go:102] verifying NodePressure condition ...
	I0920 18:17:48.353604   75086 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 18:17:48.353635   75086 node_conditions.go:123] node cpu capacity is 2
	I0920 18:17:48.353648   75086 node_conditions.go:105] duration metric: took 4.012436ms to run NodePressure ...
	I0920 18:17:48.353668   75086 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:17:48.623666   75086 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0920 18:17:48.634113   75086 kubeadm.go:739] kubelet initialised
	I0920 18:17:48.634184   75086 kubeadm.go:740] duration metric: took 10.458173ms waiting for restarted kubelet to initialise ...
	I0920 18:17:48.634205   75086 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:17:48.642334   75086 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-cskt4" in "kube-system" namespace to be "Ready" ...
	I0920 18:17:47.161670   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:47.162064   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | unable to find current IP address of domain default-k8s-diff-port-553719 in network mk-default-k8s-diff-port-553719
	I0920 18:17:47.162094   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | I0920 18:17:47.162020   76346 retry.go:31] will retry after 2.299554771s: waiting for machine to come up
	I0920 18:17:49.463022   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:49.463483   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | unable to find current IP address of domain default-k8s-diff-port-553719 in network mk-default-k8s-diff-port-553719
	I0920 18:17:49.463515   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | I0920 18:17:49.463442   76346 retry.go:31] will retry after 4.372569793s: waiting for machine to come up
	I0920 18:17:50.650781   75086 pod_ready.go:103] pod "coredns-7c65d6cfc9-cskt4" in "kube-system" namespace has status "Ready":"False"
	I0920 18:17:53.149156   75086 pod_ready.go:103] pod "coredns-7c65d6cfc9-cskt4" in "kube-system" namespace has status "Ready":"False"
	I0920 18:17:55.087078   75577 start.go:364] duration metric: took 3m42.245259516s to acquireMachinesLock for "old-k8s-version-744025"
	I0920 18:17:55.087155   75577 start.go:96] Skipping create...Using existing machine configuration
	I0920 18:17:55.087166   75577 fix.go:54] fixHost starting: 
	I0920 18:17:55.087618   75577 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:17:55.087671   75577 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:17:55.107336   75577 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34989
	I0920 18:17:55.107839   75577 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:17:55.108462   75577 main.go:141] libmachine: Using API Version  1
	I0920 18:17:55.108496   75577 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:17:55.108855   75577 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:17:55.109072   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .DriverName
	I0920 18:17:55.109222   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetState
	I0920 18:17:55.110831   75577 fix.go:112] recreateIfNeeded on old-k8s-version-744025: state=Stopped err=<nil>
	I0920 18:17:55.110857   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .DriverName
	W0920 18:17:55.110973   75577 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 18:17:55.113408   75577 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-744025" ...
	I0920 18:17:53.840215   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:53.840817   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has current primary IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:53.840844   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Found IP for machine: 192.168.72.190
	I0920 18:17:53.840859   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Reserving static IP address...
	I0920 18:17:53.841438   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Reserved static IP address: 192.168.72.190
	I0920 18:17:53.841460   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Waiting for SSH to be available...
	I0920 18:17:53.841475   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-553719", mac: "52:54:00:dd:93:60", ip: "192.168.72.190"} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:17:53.841503   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | skip adding static IP to network mk-default-k8s-diff-port-553719 - found existing host DHCP lease matching {name: "default-k8s-diff-port-553719", mac: "52:54:00:dd:93:60", ip: "192.168.72.190"}
	I0920 18:17:53.841583   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | Getting to WaitForSSH function...
	I0920 18:17:53.843772   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:53.844082   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:17:53.844115   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:53.844201   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | Using SSH client type: external
	I0920 18:17:53.844238   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | Using SSH private key: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/default-k8s-diff-port-553719/id_rsa (-rw-------)
	I0920 18:17:53.844282   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.190 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19672-8777/.minikube/machines/default-k8s-diff-port-553719/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 18:17:53.844296   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | About to run SSH command:
	I0920 18:17:53.844309   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | exit 0
	I0920 18:17:53.965938   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | SSH cmd err, output: <nil>: 
	I0920 18:17:53.966354   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetConfigRaw
	I0920 18:17:53.967105   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetIP
	I0920 18:17:53.969715   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:53.970112   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:17:53.970140   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:53.970429   75264 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/default-k8s-diff-port-553719/config.json ...
	I0920 18:17:53.970686   75264 machine.go:93] provisionDockerMachine start ...
	I0920 18:17:53.970707   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .DriverName
	I0920 18:17:53.970924   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHHostname
	I0920 18:17:53.973207   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:53.973541   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:17:53.973571   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:53.973716   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHPort
	I0920 18:17:53.973901   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHKeyPath
	I0920 18:17:53.974041   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHKeyPath
	I0920 18:17:53.974155   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHUsername
	I0920 18:17:53.974298   75264 main.go:141] libmachine: Using SSH client type: native
	I0920 18:17:53.974518   75264 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.190 22 <nil> <nil>}
	I0920 18:17:53.974533   75264 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 18:17:54.078095   75264 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0920 18:17:54.078121   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetMachineName
	I0920 18:17:54.078377   75264 buildroot.go:166] provisioning hostname "default-k8s-diff-port-553719"
	I0920 18:17:54.078397   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetMachineName
	I0920 18:17:54.078540   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHHostname
	I0920 18:17:54.081589   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:54.081998   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:17:54.082032   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:54.082179   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHPort
	I0920 18:17:54.082375   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHKeyPath
	I0920 18:17:54.082539   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHKeyPath
	I0920 18:17:54.082743   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHUsername
	I0920 18:17:54.082949   75264 main.go:141] libmachine: Using SSH client type: native
	I0920 18:17:54.083153   75264 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.190 22 <nil> <nil>}
	I0920 18:17:54.083173   75264 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-553719 && echo "default-k8s-diff-port-553719" | sudo tee /etc/hostname
	I0920 18:17:54.199155   75264 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-553719
	
	I0920 18:17:54.199192   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHHostname
	I0920 18:17:54.202231   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:54.202650   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:17:54.202685   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:54.202835   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHPort
	I0920 18:17:54.203112   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHKeyPath
	I0920 18:17:54.203289   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHKeyPath
	I0920 18:17:54.203458   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHUsername
	I0920 18:17:54.203696   75264 main.go:141] libmachine: Using SSH client type: native
	I0920 18:17:54.203944   75264 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.190 22 <nil> <nil>}
	I0920 18:17:54.203969   75264 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-553719' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-553719/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-553719' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 18:17:54.312356   75264 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 18:17:54.312386   75264 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19672-8777/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-8777/.minikube}
	I0920 18:17:54.312432   75264 buildroot.go:174] setting up certificates
	I0920 18:17:54.312450   75264 provision.go:84] configureAuth start
	I0920 18:17:54.312464   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetMachineName
	I0920 18:17:54.312758   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetIP
	I0920 18:17:54.315327   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:54.315697   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:17:54.315725   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:54.315870   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHHostname
	I0920 18:17:54.318203   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:54.318653   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:17:54.318679   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:54.318824   75264 provision.go:143] copyHostCerts
	I0920 18:17:54.318885   75264 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem, removing ...
	I0920 18:17:54.318906   75264 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem
	I0920 18:17:54.318978   75264 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem (1675 bytes)
	I0920 18:17:54.319096   75264 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem, removing ...
	I0920 18:17:54.319107   75264 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem
	I0920 18:17:54.319141   75264 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem (1082 bytes)
	I0920 18:17:54.319384   75264 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem, removing ...
	I0920 18:17:54.319401   75264 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem
	I0920 18:17:54.319465   75264 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem (1123 bytes)
	I0920 18:17:54.319551   75264 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-553719 san=[127.0.0.1 192.168.72.190 default-k8s-diff-port-553719 localhost minikube]
	I0920 18:17:54.453062   75264 provision.go:177] copyRemoteCerts
	I0920 18:17:54.453131   75264 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 18:17:54.453160   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHHostname
	I0920 18:17:54.456047   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:54.456402   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:17:54.456431   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:54.456598   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHPort
	I0920 18:17:54.456796   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHKeyPath
	I0920 18:17:54.456970   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHUsername
	I0920 18:17:54.457092   75264 sshutil.go:53] new ssh client: &{IP:192.168.72.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/default-k8s-diff-port-553719/id_rsa Username:docker}
	I0920 18:17:54.544567   75264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0920 18:17:54.570009   75264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0920 18:17:54.594249   75264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0920 18:17:54.617652   75264 provision.go:87] duration metric: took 305.186554ms to configureAuth
	I0920 18:17:54.617686   75264 buildroot.go:189] setting minikube options for container-runtime
	I0920 18:17:54.617971   75264 config.go:182] Loaded profile config "default-k8s-diff-port-553719": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:17:54.618064   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHHostname
	I0920 18:17:54.621408   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:54.621882   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:17:54.621915   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:54.622140   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHPort
	I0920 18:17:54.622368   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHKeyPath
	I0920 18:17:54.622592   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHKeyPath
	I0920 18:17:54.622833   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHUsername
	I0920 18:17:54.623047   75264 main.go:141] libmachine: Using SSH client type: native
	I0920 18:17:54.623287   75264 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.190 22 <nil> <nil>}
	I0920 18:17:54.623305   75264 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 18:17:54.859786   75264 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 18:17:54.859810   75264 machine.go:96] duration metric: took 889.110491ms to provisionDockerMachine
	I0920 18:17:54.859820   75264 start.go:293] postStartSetup for "default-k8s-diff-port-553719" (driver="kvm2")
	I0920 18:17:54.859831   75264 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 18:17:54.859850   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .DriverName
	I0920 18:17:54.860209   75264 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 18:17:54.860258   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHHostname
	I0920 18:17:54.863576   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:54.863933   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:17:54.863966   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:54.864168   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHPort
	I0920 18:17:54.864345   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHKeyPath
	I0920 18:17:54.864494   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHUsername
	I0920 18:17:54.864640   75264 sshutil.go:53] new ssh client: &{IP:192.168.72.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/default-k8s-diff-port-553719/id_rsa Username:docker}
	I0920 18:17:54.944717   75264 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 18:17:54.948836   75264 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 18:17:54.948870   75264 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8777/.minikube/addons for local assets ...
	I0920 18:17:54.948937   75264 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8777/.minikube/files for local assets ...
	I0920 18:17:54.949014   75264 filesync.go:149] local asset: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem -> 159732.pem in /etc/ssl/certs
	I0920 18:17:54.949116   75264 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 18:17:54.958726   75264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem --> /etc/ssl/certs/159732.pem (1708 bytes)
	I0920 18:17:54.982792   75264 start.go:296] duration metric: took 122.958621ms for postStartSetup
	I0920 18:17:54.982831   75264 fix.go:56] duration metric: took 19.803863913s for fixHost
	I0920 18:17:54.982856   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHHostname
	I0920 18:17:54.985588   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:54.985924   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:17:54.985966   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:54.986145   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHPort
	I0920 18:17:54.986407   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHKeyPath
	I0920 18:17:54.986586   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHKeyPath
	I0920 18:17:54.986784   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHUsername
	I0920 18:17:54.986982   75264 main.go:141] libmachine: Using SSH client type: native
	I0920 18:17:54.987233   75264 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.190 22 <nil> <nil>}
	I0920 18:17:54.987248   75264 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 18:17:55.086859   75264 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726856275.038902431
	
	I0920 18:17:55.086888   75264 fix.go:216] guest clock: 1726856275.038902431
	I0920 18:17:55.086900   75264 fix.go:229] Guest: 2024-09-20 18:17:55.038902431 +0000 UTC Remote: 2024-09-20 18:17:54.98283641 +0000 UTC m=+257.985357778 (delta=56.066021ms)
	I0920 18:17:55.086959   75264 fix.go:200] guest clock delta is within tolerance: 56.066021ms
	I0920 18:17:55.086968   75264 start.go:83] releasing machines lock for "default-k8s-diff-port-553719", held for 19.908037967s
	I0920 18:17:55.087009   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .DriverName
	I0920 18:17:55.087303   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetIP
	I0920 18:17:55.090396   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:55.090777   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:17:55.090805   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:55.090973   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .DriverName
	I0920 18:17:55.091481   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .DriverName
	I0920 18:17:55.091684   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .DriverName
	I0920 18:17:55.091772   75264 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 18:17:55.091827   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHHostname
	I0920 18:17:55.091951   75264 ssh_runner.go:195] Run: cat /version.json
	I0920 18:17:55.091976   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHHostname
	I0920 18:17:55.094742   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:55.094878   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:55.095102   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:17:55.095133   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:55.095265   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHPort
	I0920 18:17:55.095362   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:17:55.095400   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:55.095443   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHKeyPath
	I0920 18:17:55.095550   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHPort
	I0920 18:17:55.095619   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHUsername
	I0920 18:17:55.095689   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHKeyPath
	I0920 18:17:55.095763   75264 sshutil.go:53] new ssh client: &{IP:192.168.72.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/default-k8s-diff-port-553719/id_rsa Username:docker}
	I0920 18:17:55.095795   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHUsername
	I0920 18:17:55.095950   75264 sshutil.go:53] new ssh client: &{IP:192.168.72.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/default-k8s-diff-port-553719/id_rsa Username:docker}
	I0920 18:17:55.213223   75264 ssh_runner.go:195] Run: systemctl --version
	I0920 18:17:55.220952   75264 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 18:17:55.370747   75264 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 18:17:55.377509   75264 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 18:17:55.377595   75264 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 18:17:55.395830   75264 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 18:17:55.395854   75264 start.go:495] detecting cgroup driver to use...
	I0920 18:17:55.395920   75264 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 18:17:55.412885   75264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 18:17:55.428380   75264 docker.go:217] disabling cri-docker service (if available) ...
	I0920 18:17:55.428433   75264 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 18:17:55.444371   75264 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 18:17:55.459485   75264 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 18:17:55.583649   75264 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 18:17:55.768003   75264 docker.go:233] disabling docker service ...
	I0920 18:17:55.768065   75264 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 18:17:55.787062   75264 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 18:17:55.802662   75264 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 18:17:55.967892   75264 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 18:17:56.105744   75264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 18:17:56.120499   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 18:17:56.140527   75264 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 18:17:56.140613   75264 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:17:56.151282   75264 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 18:17:56.151355   75264 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:17:56.164680   75264 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:17:56.176142   75264 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:17:56.187120   75264 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 18:17:56.198384   75264 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:17:56.209298   75264 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:17:56.226714   75264 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:17:56.237886   75264 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 18:17:56.253664   75264 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 18:17:56.253778   75264 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 18:17:56.267429   75264 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 18:17:56.279118   75264 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:17:56.412692   75264 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 18:17:56.513349   75264 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 18:17:56.513438   75264 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 18:17:56.517974   75264 start.go:563] Will wait 60s for crictl version
	I0920 18:17:56.518042   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:17:56.521966   75264 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 18:17:56.561446   75264 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 18:17:56.561520   75264 ssh_runner.go:195] Run: crio --version
	I0920 18:17:56.592009   75264 ssh_runner.go:195] Run: crio --version
	I0920 18:17:56.626555   75264 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 18:17:56.627668   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetIP
	I0920 18:17:56.630995   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:56.631450   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:17:56.631480   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:17:56.631751   75264 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0920 18:17:56.636473   75264 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 18:17:56.648824   75264 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-553719 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-553719 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.190 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 18:17:56.648937   75264 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 18:17:56.648980   75264 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:17:56.691029   75264 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0920 18:17:56.691104   75264 ssh_runner.go:195] Run: which lz4
	I0920 18:17:56.695538   75264 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 18:17:56.699703   75264 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 18:17:56.699735   75264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0920 18:17:55.114625   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .Start
	I0920 18:17:55.114814   75577 main.go:141] libmachine: (old-k8s-version-744025) Ensuring networks are active...
	I0920 18:17:55.115808   75577 main.go:141] libmachine: (old-k8s-version-744025) Ensuring network default is active
	I0920 18:17:55.116209   75577 main.go:141] libmachine: (old-k8s-version-744025) Ensuring network mk-old-k8s-version-744025 is active
	I0920 18:17:55.116615   75577 main.go:141] libmachine: (old-k8s-version-744025) Getting domain xml...
	I0920 18:17:55.117403   75577 main.go:141] libmachine: (old-k8s-version-744025) Creating domain...
	I0920 18:17:56.411359   75577 main.go:141] libmachine: (old-k8s-version-744025) Waiting to get IP...
	I0920 18:17:56.412507   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:17:56.413010   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:17:56.413094   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:17:56.412996   76990 retry.go:31] will retry after 300.110477ms: waiting for machine to come up
	I0920 18:17:56.714648   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:17:56.715163   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:17:56.715183   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:17:56.715114   76990 retry.go:31] will retry after 352.948637ms: waiting for machine to come up
	I0920 18:17:57.069760   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:17:57.070449   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:17:57.070474   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:17:57.070404   76990 retry.go:31] will retry after 335.58281ms: waiting for machine to come up
	I0920 18:17:57.408023   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:17:57.408521   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:17:57.408546   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:17:57.408469   76990 retry.go:31] will retry after 498.040148ms: waiting for machine to come up
	I0920 18:17:54.653346   75086 pod_ready.go:93] pod "coredns-7c65d6cfc9-cskt4" in "kube-system" namespace has status "Ready":"True"
	I0920 18:17:54.653373   75086 pod_ready.go:82] duration metric: took 6.011015799s for pod "coredns-7c65d6cfc9-cskt4" in "kube-system" namespace to be "Ready" ...
	I0920 18:17:54.653383   75086 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-768431" in "kube-system" namespace to be "Ready" ...
	I0920 18:17:56.660606   75086 pod_ready.go:103] pod "etcd-embed-certs-768431" in "kube-system" namespace has status "Ready":"False"
	I0920 18:17:58.162253   75086 pod_ready.go:93] pod "etcd-embed-certs-768431" in "kube-system" namespace has status "Ready":"True"
	I0920 18:17:58.162282   75086 pod_ready.go:82] duration metric: took 3.508893207s for pod "etcd-embed-certs-768431" in "kube-system" namespace to be "Ready" ...
	I0920 18:17:58.162292   75086 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-768431" in "kube-system" namespace to be "Ready" ...
	I0920 18:17:58.109419   75264 crio.go:462] duration metric: took 1.413913626s to copy over tarball
	I0920 18:17:58.109504   75264 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0920 18:18:00.360623   75264 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.25108327s)
	I0920 18:18:00.360655   75264 crio.go:469] duration metric: took 2.251201466s to extract the tarball
	I0920 18:18:00.360703   75264 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0920 18:18:00.400822   75264 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:18:00.451366   75264 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 18:18:00.451402   75264 cache_images.go:84] Images are preloaded, skipping loading
	I0920 18:18:00.451413   75264 kubeadm.go:934] updating node { 192.168.72.190 8444 v1.31.1 crio true true} ...
	I0920 18:18:00.451590   75264 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-553719 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.190
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-553719 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 18:18:00.451688   75264 ssh_runner.go:195] Run: crio config
	I0920 18:18:00.502703   75264 cni.go:84] Creating CNI manager for ""
	I0920 18:18:00.502729   75264 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 18:18:00.502740   75264 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 18:18:00.502778   75264 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.190 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-553719 NodeName:default-k8s-diff-port-553719 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.190"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.190 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 18:18:00.502927   75264 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.190
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-553719"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.190
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.190"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 18:18:00.502996   75264 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 18:18:00.513768   75264 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 18:18:00.513870   75264 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 18:18:00.523631   75264 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0920 18:18:00.540428   75264 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 18:18:00.556858   75264 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0920 18:18:00.574954   75264 ssh_runner.go:195] Run: grep 192.168.72.190	control-plane.minikube.internal$ /etc/hosts
	I0920 18:18:00.578951   75264 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.190	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 18:18:00.592592   75264 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:18:00.712035   75264 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:18:00.728657   75264 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/default-k8s-diff-port-553719 for IP: 192.168.72.190
	I0920 18:18:00.728684   75264 certs.go:194] generating shared ca certs ...
	I0920 18:18:00.728706   75264 certs.go:226] acquiring lock for ca certs: {Name:mkc7ef6c737c6bdc3fdd9dcff8f57029c020d8f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:18:00.728877   75264 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key
	I0920 18:18:00.728937   75264 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key
	I0920 18:18:00.728948   75264 certs.go:256] generating profile certs ...
	I0920 18:18:00.729055   75264 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/default-k8s-diff-port-553719/client.key
	I0920 18:18:00.729151   75264 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/default-k8s-diff-port-553719/apiserver.key.cc1f66e7
	I0920 18:18:00.729205   75264 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/default-k8s-diff-port-553719/proxy-client.key
	I0920 18:18:00.729368   75264 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973.pem (1338 bytes)
	W0920 18:18:00.729415   75264 certs.go:480] ignoring /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973_empty.pem, impossibly tiny 0 bytes
	I0920 18:18:00.729425   75264 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 18:18:00.729453   75264 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem (1082 bytes)
	I0920 18:18:00.729474   75264 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem (1123 bytes)
	I0920 18:18:00.729501   75264 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem (1675 bytes)
	I0920 18:18:00.729538   75264 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem (1708 bytes)
	I0920 18:18:00.730192   75264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 18:18:00.775634   75264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 18:18:00.812006   75264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 18:18:00.846904   75264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0920 18:18:00.877908   75264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/default-k8s-diff-port-553719/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0920 18:18:00.910057   75264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/default-k8s-diff-port-553719/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0920 18:18:00.935377   75264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/default-k8s-diff-port-553719/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 18:18:00.960143   75264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/default-k8s-diff-port-553719/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0920 18:18:00.983906   75264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 18:18:01.007105   75264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973.pem --> /usr/share/ca-certificates/15973.pem (1338 bytes)
	I0920 18:18:01.030331   75264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem --> /usr/share/ca-certificates/159732.pem (1708 bytes)
	I0920 18:18:01.055976   75264 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 18:18:01.073892   75264 ssh_runner.go:195] Run: openssl version
	I0920 18:18:01.081246   75264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 18:18:01.092174   75264 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:18:01.096541   75264 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:18:01.096595   75264 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:18:01.103564   75264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 18:18:01.117697   75264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15973.pem && ln -fs /usr/share/ca-certificates/15973.pem /etc/ssl/certs/15973.pem"
	I0920 18:18:01.130349   75264 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15973.pem
	I0920 18:18:01.135397   75264 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 17:04 /usr/share/ca-certificates/15973.pem
	I0920 18:18:01.135471   75264 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15973.pem
	I0920 18:18:01.141399   75264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15973.pem /etc/ssl/certs/51391683.0"
	I0920 18:18:01.153123   75264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/159732.pem && ln -fs /usr/share/ca-certificates/159732.pem /etc/ssl/certs/159732.pem"
	I0920 18:18:01.165157   75264 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/159732.pem
	I0920 18:18:01.170596   75264 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 17:04 /usr/share/ca-certificates/159732.pem
	I0920 18:18:01.170678   75264 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/159732.pem
	I0920 18:18:01.176534   75264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/159732.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 18:18:01.188576   75264 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 18:18:01.193401   75264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 18:18:01.199557   75264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 18:18:01.205410   75264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 18:18:01.211296   75264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 18:18:01.216953   75264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 18:18:01.223417   75264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0920 18:18:01.229172   75264 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-553719 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-553719 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.190 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:18:01.229428   75264 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 18:18:01.229488   75264 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 18:18:01.270537   75264 cri.go:89] found id: ""
	I0920 18:18:01.270610   75264 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 18:18:01.284566   75264 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0920 18:18:01.284588   75264 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0920 18:18:01.284638   75264 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0920 18:18:01.297884   75264 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0920 18:18:01.298879   75264 kubeconfig.go:125] found "default-k8s-diff-port-553719" server: "https://192.168.72.190:8444"
	I0920 18:18:01.300996   75264 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0920 18:18:01.312183   75264 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.190
	I0920 18:18:01.312251   75264 kubeadm.go:1160] stopping kube-system containers ...
	I0920 18:18:01.312266   75264 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0920 18:18:01.312324   75264 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 18:18:01.358873   75264 cri.go:89] found id: ""
	I0920 18:18:01.358959   75264 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0920 18:18:01.377027   75264 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 18:18:01.386872   75264 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 18:18:01.386890   75264 kubeadm.go:157] found existing configuration files:
	
	I0920 18:18:01.386931   75264 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0920 18:18:01.396044   75264 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 18:18:01.396118   75264 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 18:18:01.405783   75264 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0920 18:18:01.415296   75264 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 18:18:01.415367   75264 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 18:18:01.427782   75264 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0920 18:18:01.438838   75264 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 18:18:01.438897   75264 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 18:18:01.450255   75264 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0920 18:18:01.461237   75264 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 18:18:01.461313   75264 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 18:18:01.472543   75264 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 18:18:01.483456   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:18:01.607155   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:17:57.908137   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:17:57.908561   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:17:57.908589   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:17:57.908522   76990 retry.go:31] will retry after 749.044696ms: waiting for machine to come up
	I0920 18:17:58.658869   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:17:58.659393   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:17:58.659424   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:17:58.659342   76990 retry.go:31] will retry after 949.936088ms: waiting for machine to come up
	I0920 18:17:59.610936   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:17:59.611467   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:17:59.611513   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:17:59.611430   76990 retry.go:31] will retry after 762.437104ms: waiting for machine to come up
	I0920 18:18:00.375768   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:00.376207   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:18:00.376235   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:18:00.376152   76990 retry.go:31] will retry after 1.228102027s: waiting for machine to come up
	I0920 18:18:01.606490   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:01.606958   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:18:01.606982   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:18:01.606907   76990 retry.go:31] will retry after 1.383186524s: waiting for machine to come up
	I0920 18:18:00.169862   75086 pod_ready.go:103] pod "kube-apiserver-embed-certs-768431" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:01.669351   75086 pod_ready.go:93] pod "kube-apiserver-embed-certs-768431" in "kube-system" namespace has status "Ready":"True"
	I0920 18:18:01.669378   75086 pod_ready.go:82] duration metric: took 3.507078668s for pod "kube-apiserver-embed-certs-768431" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:01.669392   75086 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-768431" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:01.674981   75086 pod_ready.go:93] pod "kube-controller-manager-embed-certs-768431" in "kube-system" namespace has status "Ready":"True"
	I0920 18:18:01.675006   75086 pod_ready.go:82] duration metric: took 5.604682ms for pod "kube-controller-manager-embed-certs-768431" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:01.675018   75086 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-cshjm" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:01.680522   75086 pod_ready.go:93] pod "kube-proxy-cshjm" in "kube-system" namespace has status "Ready":"True"
	I0920 18:18:01.680547   75086 pod_ready.go:82] duration metric: took 5.521295ms for pod "kube-proxy-cshjm" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:01.680559   75086 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-768431" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:01.685097   75086 pod_ready.go:93] pod "kube-scheduler-embed-certs-768431" in "kube-system" namespace has status "Ready":"True"
	I0920 18:18:01.685120   75086 pod_ready.go:82] duration metric: took 4.551724ms for pod "kube-scheduler-embed-certs-768431" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:01.685131   75086 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:03.693234   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:02.268120   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:18:02.471363   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:18:02.530979   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:18:02.580339   75264 api_server.go:52] waiting for apiserver process to appear ...
	I0920 18:18:02.580455   75264 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:03.081156   75264 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:03.580564   75264 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:04.080991   75264 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:04.095108   75264 api_server.go:72] duration metric: took 1.514770034s to wait for apiserver process to appear ...
	I0920 18:18:04.095139   75264 api_server.go:88] waiting for apiserver healthz status ...
	I0920 18:18:04.095164   75264 api_server.go:253] Checking apiserver healthz at https://192.168.72.190:8444/healthz ...
	I0920 18:18:04.095704   75264 api_server.go:269] stopped: https://192.168.72.190:8444/healthz: Get "https://192.168.72.190:8444/healthz": dial tcp 192.168.72.190:8444: connect: connection refused
	I0920 18:18:04.595270   75264 api_server.go:253] Checking apiserver healthz at https://192.168.72.190:8444/healthz ...
	I0920 18:18:06.378496   75264 api_server.go:279] https://192.168.72.190:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 18:18:06.378532   75264 api_server.go:103] status: https://192.168.72.190:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 18:18:06.378550   75264 api_server.go:253] Checking apiserver healthz at https://192.168.72.190:8444/healthz ...
	I0920 18:18:06.425353   75264 api_server.go:279] https://192.168.72.190:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 18:18:06.425386   75264 api_server.go:103] status: https://192.168.72.190:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 18:18:06.595786   75264 api_server.go:253] Checking apiserver healthz at https://192.168.72.190:8444/healthz ...
	I0920 18:18:06.602114   75264 api_server.go:279] https://192.168.72.190:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 18:18:06.602148   75264 api_server.go:103] status: https://192.168.72.190:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 18:18:07.096095   75264 api_server.go:253] Checking apiserver healthz at https://192.168.72.190:8444/healthz ...
	I0920 18:18:07.102320   75264 api_server.go:279] https://192.168.72.190:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 18:18:07.102348   75264 api_server.go:103] status: https://192.168.72.190:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 18:18:07.595804   75264 api_server.go:253] Checking apiserver healthz at https://192.168.72.190:8444/healthz ...
	I0920 18:18:07.600025   75264 api_server.go:279] https://192.168.72.190:8444/healthz returned 200:
	ok
	I0920 18:18:07.606598   75264 api_server.go:141] control plane version: v1.31.1
	I0920 18:18:07.606626   75264 api_server.go:131] duration metric: took 3.511479158s to wait for apiserver health ...
	I0920 18:18:07.606637   75264 cni.go:84] Creating CNI manager for ""
	I0920 18:18:07.606645   75264 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 18:18:07.608423   75264 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 18:18:02.992412   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:02.992887   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:18:02.992919   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:18:02.992822   76990 retry.go:31] will retry after 2.100326569s: waiting for machine to come up
	I0920 18:18:05.095088   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:05.095546   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:18:05.095569   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:18:05.095507   76990 retry.go:31] will retry after 1.758181729s: waiting for machine to come up
	I0920 18:18:06.855172   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:06.855654   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:18:06.855678   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:18:06.855606   76990 retry.go:31] will retry after 2.478116743s: waiting for machine to come up
	I0920 18:18:05.694350   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:08.191644   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:07.609444   75264 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 18:18:07.620420   75264 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0920 18:18:07.637796   75264 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 18:18:07.652676   75264 system_pods.go:59] 8 kube-system pods found
	I0920 18:18:07.652710   75264 system_pods.go:61] "coredns-7c65d6cfc9-dmdfb" [8e4ffdb3-37e0-4498-bdf5-d6e7dbcad020] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0920 18:18:07.652721   75264 system_pods.go:61] "etcd-default-k8s-diff-port-553719" [e8de0e96-5f3a-4f3f-ae69-c5da7d3c2eb7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0920 18:18:07.652727   75264 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-553719" [5164e26c-7fcd-41dc-865f-429046a9ad61] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0920 18:18:07.652735   75264 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-553719" [b2c00a76-9645-4c63-9812-43e6ed9f1d4f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0920 18:18:07.652741   75264 system_pods.go:61] "kube-proxy-p9crq" [83e0f53d-6960-42c4-904d-ea85ba9160f4] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0920 18:18:07.652746   75264 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-553719" [42155f7b-95bb-467b-acf5-89eb4f80cf76] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0920 18:18:07.652751   75264 system_pods.go:61] "metrics-server-6867b74b74-vtl79" [29e0b6eb-22a9-4e37-97f9-83b48cc38193] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 18:18:07.652760   75264 system_pods.go:61] "storage-provisioner" [6fad2d07-f99e-45ac-9657-bce6d73d7fce] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0920 18:18:07.652769   75264 system_pods.go:74] duration metric: took 14.950839ms to wait for pod list to return data ...
	I0920 18:18:07.652780   75264 node_conditions.go:102] verifying NodePressure condition ...
	I0920 18:18:07.658890   75264 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 18:18:07.658927   75264 node_conditions.go:123] node cpu capacity is 2
	I0920 18:18:07.658941   75264 node_conditions.go:105] duration metric: took 6.15496ms to run NodePressure ...
	I0920 18:18:07.658964   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:18:07.932231   75264 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0920 18:18:07.936474   75264 kubeadm.go:739] kubelet initialised
	I0920 18:18:07.936500   75264 kubeadm.go:740] duration metric: took 4.236071ms waiting for restarted kubelet to initialise ...
	I0920 18:18:07.936507   75264 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:18:07.941918   75264 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-dmdfb" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:07.947609   75264 pod_ready.go:98] node "default-k8s-diff-port-553719" hosting pod "coredns-7c65d6cfc9-dmdfb" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-553719" has status "Ready":"False"
	I0920 18:18:07.947642   75264 pod_ready.go:82] duration metric: took 5.693968ms for pod "coredns-7c65d6cfc9-dmdfb" in "kube-system" namespace to be "Ready" ...
	E0920 18:18:07.947653   75264 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-553719" hosting pod "coredns-7c65d6cfc9-dmdfb" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-553719" has status "Ready":"False"
	I0920 18:18:07.947660   75264 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-553719" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:07.954317   75264 pod_ready.go:98] node "default-k8s-diff-port-553719" hosting pod "etcd-default-k8s-diff-port-553719" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-553719" has status "Ready":"False"
	I0920 18:18:07.954358   75264 pod_ready.go:82] duration metric: took 6.689603ms for pod "etcd-default-k8s-diff-port-553719" in "kube-system" namespace to be "Ready" ...
	E0920 18:18:07.954373   75264 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-553719" hosting pod "etcd-default-k8s-diff-port-553719" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-553719" has status "Ready":"False"
	I0920 18:18:07.954382   75264 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-553719" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:07.966099   75264 pod_ready.go:98] node "default-k8s-diff-port-553719" hosting pod "kube-apiserver-default-k8s-diff-port-553719" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-553719" has status "Ready":"False"
	I0920 18:18:07.966126   75264 pod_ready.go:82] duration metric: took 11.735727ms for pod "kube-apiserver-default-k8s-diff-port-553719" in "kube-system" namespace to be "Ready" ...
	E0920 18:18:07.966141   75264 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-553719" hosting pod "kube-apiserver-default-k8s-diff-port-553719" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-553719" has status "Ready":"False"
	I0920 18:18:07.966154   75264 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-553719" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:08.041768   75264 pod_ready.go:98] node "default-k8s-diff-port-553719" hosting pod "kube-controller-manager-default-k8s-diff-port-553719" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-553719" has status "Ready":"False"
	I0920 18:18:08.041794   75264 pod_ready.go:82] duration metric: took 75.630916ms for pod "kube-controller-manager-default-k8s-diff-port-553719" in "kube-system" namespace to be "Ready" ...
	E0920 18:18:08.041805   75264 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-553719" hosting pod "kube-controller-manager-default-k8s-diff-port-553719" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-553719" has status "Ready":"False"
	I0920 18:18:08.041816   75264 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-p9crq" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:08.440423   75264 pod_ready.go:98] node "default-k8s-diff-port-553719" hosting pod "kube-proxy-p9crq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-553719" has status "Ready":"False"
	I0920 18:18:08.440459   75264 pod_ready.go:82] duration metric: took 398.635469ms for pod "kube-proxy-p9crq" in "kube-system" namespace to be "Ready" ...
	E0920 18:18:08.440471   75264 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-553719" hosting pod "kube-proxy-p9crq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-553719" has status "Ready":"False"
	I0920 18:18:08.440480   75264 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-553719" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:08.841917   75264 pod_ready.go:98] node "default-k8s-diff-port-553719" hosting pod "kube-scheduler-default-k8s-diff-port-553719" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-553719" has status "Ready":"False"
	I0920 18:18:08.841953   75264 pod_ready.go:82] duration metric: took 401.459059ms for pod "kube-scheduler-default-k8s-diff-port-553719" in "kube-system" namespace to be "Ready" ...
	E0920 18:18:08.841968   75264 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-553719" hosting pod "kube-scheduler-default-k8s-diff-port-553719" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-553719" has status "Ready":"False"
	I0920 18:18:08.841977   75264 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:09.241198   75264 pod_ready.go:98] node "default-k8s-diff-port-553719" hosting pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-553719" has status "Ready":"False"
	I0920 18:18:09.241225   75264 pod_ready.go:82] duration metric: took 399.238784ms for pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace to be "Ready" ...
	E0920 18:18:09.241237   75264 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-553719" hosting pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-553719" has status "Ready":"False"
	I0920 18:18:09.241245   75264 pod_ready.go:39] duration metric: took 1.304729447s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:18:09.241259   75264 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 18:18:09.253519   75264 ops.go:34] apiserver oom_adj: -16
	I0920 18:18:09.253546   75264 kubeadm.go:597] duration metric: took 7.96895197s to restartPrimaryControlPlane
	I0920 18:18:09.253558   75264 kubeadm.go:394] duration metric: took 8.024395552s to StartCluster
	I0920 18:18:09.253586   75264 settings.go:142] acquiring lock: {Name:mk2a13c58cdc0faf8cddca5d6716175d45db9bfd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:18:09.253682   75264 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19672-8777/kubeconfig
	I0920 18:18:09.255907   75264 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/kubeconfig: {Name:mkf32a4c736808e023459b2f0e40188618a38db1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:18:09.256208   75264 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.190 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 18:18:09.256322   75264 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0920 18:18:09.256394   75264 config.go:182] Loaded profile config "default-k8s-diff-port-553719": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:18:09.256420   75264 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-553719"
	I0920 18:18:09.256428   75264 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-553719"
	I0920 18:18:09.256440   75264 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-553719"
	I0920 18:18:09.256448   75264 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-553719"
	I0920 18:18:09.256457   75264 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-553719"
	W0920 18:18:09.256468   75264 addons.go:243] addon metrics-server should already be in state true
	I0920 18:18:09.256496   75264 host.go:66] Checking if "default-k8s-diff-port-553719" exists ...
	I0920 18:18:09.256441   75264 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-553719"
	W0920 18:18:09.256569   75264 addons.go:243] addon storage-provisioner should already be in state true
	I0920 18:18:09.256602   75264 host.go:66] Checking if "default-k8s-diff-port-553719" exists ...
	I0920 18:18:09.256814   75264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:18:09.256844   75264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:18:09.256882   75264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:18:09.256926   75264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:18:09.256930   75264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:18:09.256957   75264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:18:09.258738   75264 out.go:177] * Verifying Kubernetes components...
	I0920 18:18:09.259893   75264 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:18:09.272294   75264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45635
	I0920 18:18:09.272357   75264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39993
	I0920 18:18:09.272304   75264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32923
	I0920 18:18:09.272766   75264 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:18:09.272889   75264 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:18:09.272918   75264 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:18:09.273279   75264 main.go:141] libmachine: Using API Version  1
	I0920 18:18:09.273294   75264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:18:09.273416   75264 main.go:141] libmachine: Using API Version  1
	I0920 18:18:09.273438   75264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:18:09.273457   75264 main.go:141] libmachine: Using API Version  1
	I0920 18:18:09.273478   75264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:18:09.273657   75264 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:18:09.273850   75264 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:18:09.273922   75264 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:18:09.274017   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetState
	I0920 18:18:09.274244   75264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:18:09.274287   75264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:18:09.274399   75264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:18:09.274430   75264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:18:09.277535   75264 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-553719"
	W0920 18:18:09.277556   75264 addons.go:243] addon default-storageclass should already be in state true
	I0920 18:18:09.277586   75264 host.go:66] Checking if "default-k8s-diff-port-553719" exists ...
	I0920 18:18:09.277990   75264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:18:09.278041   75264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:18:09.290955   75264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39845
	I0920 18:18:09.291487   75264 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:18:09.292058   75264 main.go:141] libmachine: Using API Version  1
	I0920 18:18:09.292087   75264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:18:09.292409   75264 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:18:09.292607   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetState
	I0920 18:18:09.293351   75264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44575
	I0920 18:18:09.293790   75264 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:18:09.294056   75264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44517
	I0920 18:18:09.294412   75264 main.go:141] libmachine: Using API Version  1
	I0920 18:18:09.294438   75264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:18:09.294540   75264 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:18:09.294854   75264 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:18:09.294902   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .DriverName
	I0920 18:18:09.295362   75264 main.go:141] libmachine: Using API Version  1
	I0920 18:18:09.295387   75264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:18:09.295521   75264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:18:09.295569   75264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:18:09.295783   75264 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:18:09.295993   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetState
	I0920 18:18:09.297231   75264 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0920 18:18:09.298214   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .DriverName
	I0920 18:18:09.298825   75264 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0920 18:18:09.298849   75264 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0920 18:18:09.298870   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHHostname
	I0920 18:18:09.299849   75264 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:18:09.301018   75264 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 18:18:09.301034   75264 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 18:18:09.301048   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHHostname
	I0920 18:18:09.302841   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:18:09.303141   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:18:09.303165   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:18:09.303335   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHPort
	I0920 18:18:09.303491   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHKeyPath
	I0920 18:18:09.303627   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHUsername
	I0920 18:18:09.303772   75264 sshutil.go:53] new ssh client: &{IP:192.168.72.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/default-k8s-diff-port-553719/id_rsa Username:docker}
	I0920 18:18:09.304104   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:18:09.304507   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:18:09.304552   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:18:09.304623   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHPort
	I0920 18:18:09.304786   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHKeyPath
	I0920 18:18:09.304946   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHUsername
	I0920 18:18:09.305085   75264 sshutil.go:53] new ssh client: &{IP:192.168.72.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/default-k8s-diff-port-553719/id_rsa Username:docker}
	I0920 18:18:09.312912   75264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35085
	I0920 18:18:09.313277   75264 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:18:09.313780   75264 main.go:141] libmachine: Using API Version  1
	I0920 18:18:09.313801   75264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:18:09.314355   75264 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:18:09.314525   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetState
	I0920 18:18:09.316088   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .DriverName
	I0920 18:18:09.316415   75264 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 18:18:09.316429   75264 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 18:18:09.316448   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHHostname
	I0920 18:18:09.319116   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:18:09.319484   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:93:60", ip: ""} in network mk-default-k8s-diff-port-553719: {Iface:virbr4 ExpiryTime:2024-09-20 19:17:47 +0000 UTC Type:0 Mac:52:54:00:dd:93:60 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:default-k8s-diff-port-553719 Clientid:01:52:54:00:dd:93:60}
	I0920 18:18:09.319512   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | domain default-k8s-diff-port-553719 has defined IP address 192.168.72.190 and MAC address 52:54:00:dd:93:60 in network mk-default-k8s-diff-port-553719
	I0920 18:18:09.319664   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHPort
	I0920 18:18:09.319832   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHKeyPath
	I0920 18:18:09.319984   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .GetSSHUsername
	I0920 18:18:09.320098   75264 sshutil.go:53] new ssh client: &{IP:192.168.72.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/default-k8s-diff-port-553719/id_rsa Username:docker}
	I0920 18:18:09.457570   75264 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:18:09.476315   75264 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-553719" to be "Ready" ...
	I0920 18:18:09.599420   75264 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0920 18:18:09.599442   75264 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0920 18:18:09.600777   75264 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 18:18:09.621891   75264 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 18:18:09.626217   75264 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0920 18:18:09.626251   75264 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0920 18:18:09.675537   75264 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 18:18:09.675569   75264 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0920 18:18:09.723167   75264 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
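(The lines above show the metrics-server addon being staged: the manifests are scp'd into /etc/kubernetes/addons inside the VM and then applied with the bundled kubectl against the local kubeconfig. A minimal sketch of how the same addon could be checked, or enabled, by hand from the host; the profile/context name is taken from this log, and `minikube addons enable` is the user-facing equivalent of the start-time enable seen here.)

    # Hedged sketch: inspecting the metrics-server addon on this profile by hand.
    minikube -p default-k8s-diff-port-553719 addons list | grep metrics-server
    # or enable it explicitly (equivalent to the start-time enable in this run):
    minikube -p default-k8s-diff-port-553719 addons enable metrics-server
    # then confirm the Deployment created by metrics-server-deployment.yaml rolls out:
    kubectl --context default-k8s-diff-port-553719 -n kube-system rollout status deployment/metrics-server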
	I0920 18:18:10.813006   75264 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.191083617s)
	I0920 18:18:10.813080   75264 main.go:141] libmachine: Making call to close driver server
	I0920 18:18:10.813095   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .Close
	I0920 18:18:10.813403   75264 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:18:10.813452   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | Closing plugin on server side
	I0920 18:18:10.813455   75264 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:18:10.813497   75264 main.go:141] libmachine: Making call to close driver server
	I0920 18:18:10.813518   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .Close
	I0920 18:18:10.813748   75264 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:18:10.813764   75264 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:18:10.813783   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | Closing plugin on server side
	I0920 18:18:10.813975   75264 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.213167075s)
	I0920 18:18:10.814018   75264 main.go:141] libmachine: Making call to close driver server
	I0920 18:18:10.814034   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .Close
	I0920 18:18:10.814323   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | Closing plugin on server side
	I0920 18:18:10.814325   75264 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:18:10.814345   75264 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:18:10.814358   75264 main.go:141] libmachine: Making call to close driver server
	I0920 18:18:10.814368   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .Close
	I0920 18:18:10.814653   75264 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:18:10.814673   75264 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:18:10.821768   75264 main.go:141] libmachine: Making call to close driver server
	I0920 18:18:10.821785   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .Close
	I0920 18:18:10.822016   75264 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:18:10.822034   75264 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:18:10.846062   75264 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.122849108s)
	I0920 18:18:10.846122   75264 main.go:141] libmachine: Making call to close driver server
	I0920 18:18:10.846139   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .Close
	I0920 18:18:10.846441   75264 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:18:10.846469   75264 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:18:10.846469   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | Closing plugin on server side
	I0920 18:18:10.846479   75264 main.go:141] libmachine: Making call to close driver server
	I0920 18:18:10.846488   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) Calling .Close
	I0920 18:18:10.846715   75264 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:18:10.846737   75264 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:18:10.846748   75264 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-553719"
	I0920 18:18:10.846749   75264 main.go:141] libmachine: (default-k8s-diff-port-553719) DBG | Closing plugin on server side
	I0920 18:18:10.848702   75264 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0920 18:18:10.849990   75264 addons.go:510] duration metric: took 1.593667614s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0920 18:18:11.480490   75264 node_ready.go:53] node "default-k8s-diff-port-553719" has status "Ready":"False"
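(At this point node_ready.go is polling the node object until it reports Ready; it only flips once the kubelet and CNI are up, which happens a few seconds later at 18:18:16 below. A hedged sketch of watching the same condition from the host, assuming the usual minikube context name for this profile.)

    # Hedged sketch: observing the same node readiness condition from the host.
    kubectl --context default-k8s-diff-port-553719 get node default-k8s-diff-port-553719 -w
    # or block until Ready, mirroring the 6m timeout used by node_ready.go:
    kubectl --context default-k8s-diff-port-553719 wait --for=condition=Ready \
      node/default-k8s-diff-port-553719 --timeout=6m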
	I0920 18:18:09.334928   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:09.335367   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | unable to find current IP address of domain old-k8s-version-744025 in network mk-old-k8s-version-744025
	I0920 18:18:09.335402   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | I0920 18:18:09.335314   76990 retry.go:31] will retry after 4.194120768s: waiting for machine to come up
	I0920 18:18:10.192078   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:12.192186   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:14.778926   74753 start.go:364] duration metric: took 54.584395971s to acquireMachinesLock for "no-preload-956403"
	I0920 18:18:14.778979   74753 start.go:96] Skipping create...Using existing machine configuration
	I0920 18:18:14.778987   74753 fix.go:54] fixHost starting: 
	I0920 18:18:14.779392   74753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:18:14.779430   74753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:18:14.796004   74753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45761
	I0920 18:18:14.796509   74753 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:18:14.797006   74753 main.go:141] libmachine: Using API Version  1
	I0920 18:18:14.797028   74753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:18:14.797330   74753 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:18:14.797499   74753 main.go:141] libmachine: (no-preload-956403) Calling .DriverName
	I0920 18:18:14.797650   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetState
	I0920 18:18:14.799376   74753 fix.go:112] recreateIfNeeded on no-preload-956403: state=Stopped err=<nil>
	I0920 18:18:14.799400   74753 main.go:141] libmachine: (no-preload-956403) Calling .DriverName
	W0920 18:18:14.799564   74753 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 18:18:14.802451   74753 out.go:177] * Restarting existing kvm2 VM for "no-preload-956403" ...
	I0920 18:18:13.533702   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:13.534281   75577 main.go:141] libmachine: (old-k8s-version-744025) Found IP for machine: 192.168.39.207
	I0920 18:18:13.534309   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has current primary IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:13.534314   75577 main.go:141] libmachine: (old-k8s-version-744025) Reserving static IP address...
	I0920 18:18:13.534725   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "old-k8s-version-744025", mac: "52:54:00:e5:57:41", ip: "192.168.39.207"} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:13.534761   75577 main.go:141] libmachine: (old-k8s-version-744025) Reserved static IP address: 192.168.39.207
	I0920 18:18:13.534783   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | skip adding static IP to network mk-old-k8s-version-744025 - found existing host DHCP lease matching {name: "old-k8s-version-744025", mac: "52:54:00:e5:57:41", ip: "192.168.39.207"}
	I0920 18:18:13.534800   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | Getting to WaitForSSH function...
	I0920 18:18:13.534816   75577 main.go:141] libmachine: (old-k8s-version-744025) Waiting for SSH to be available...
	I0920 18:18:13.536879   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:13.537271   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:13.537301   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:13.537391   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | Using SSH client type: external
	I0920 18:18:13.537439   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | Using SSH private key: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/old-k8s-version-744025/id_rsa (-rw-------)
	I0920 18:18:13.537476   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.207 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19672-8777/.minikube/machines/old-k8s-version-744025/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 18:18:13.537486   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | About to run SSH command:
	I0920 18:18:13.537498   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | exit 0
	I0920 18:18:13.662214   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | SSH cmd err, output: <nil>: 
	I0920 18:18:13.662565   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetConfigRaw
	I0920 18:18:13.663269   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetIP
	I0920 18:18:13.666068   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:13.666530   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:13.666561   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:13.666899   75577 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/old-k8s-version-744025/config.json ...
	I0920 18:18:13.667111   75577 machine.go:93] provisionDockerMachine start ...
	I0920 18:18:13.667129   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .DriverName
	I0920 18:18:13.667331   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHHostname
	I0920 18:18:13.669347   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:13.669717   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:13.669743   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:13.669908   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHPort
	I0920 18:18:13.670167   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:18:13.670334   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:18:13.670453   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHUsername
	I0920 18:18:13.670583   75577 main.go:141] libmachine: Using SSH client type: native
	I0920 18:18:13.670759   75577 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.207 22 <nil> <nil>}
	I0920 18:18:13.670770   75577 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 18:18:13.774059   75577 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0920 18:18:13.774093   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetMachineName
	I0920 18:18:13.774406   75577 buildroot.go:166] provisioning hostname "old-k8s-version-744025"
	I0920 18:18:13.774434   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetMachineName
	I0920 18:18:13.774618   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHHostname
	I0920 18:18:13.777175   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:13.777482   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:13.777513   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:13.777633   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHPort
	I0920 18:18:13.777803   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:18:13.777966   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:18:13.778082   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHUsername
	I0920 18:18:13.778235   75577 main.go:141] libmachine: Using SSH client type: native
	I0920 18:18:13.778404   75577 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.207 22 <nil> <nil>}
	I0920 18:18:13.778417   75577 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-744025 && echo "old-k8s-version-744025" | sudo tee /etc/hostname
	I0920 18:18:13.900180   75577 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-744025
	
	I0920 18:18:13.900224   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHHostname
	I0920 18:18:13.902958   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:13.903309   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:13.903340   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:13.903543   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHPort
	I0920 18:18:13.903762   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:18:13.903931   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:18:13.904051   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHUsername
	I0920 18:18:13.904214   75577 main.go:141] libmachine: Using SSH client type: native
	I0920 18:18:13.904393   75577 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.207 22 <nil> <nil>}
	I0920 18:18:13.904409   75577 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-744025' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-744025/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-744025' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 18:18:14.023748   75577 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 18:18:14.023781   75577 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19672-8777/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-8777/.minikube}
	I0920 18:18:14.023834   75577 buildroot.go:174] setting up certificates
	I0920 18:18:14.023851   75577 provision.go:84] configureAuth start
	I0920 18:18:14.023866   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetMachineName
	I0920 18:18:14.024154   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetIP
	I0920 18:18:14.027240   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.027640   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:14.027778   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.027867   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHHostname
	I0920 18:18:14.030383   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.030741   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:14.030765   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.030915   75577 provision.go:143] copyHostCerts
	I0920 18:18:14.030979   75577 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem, removing ...
	I0920 18:18:14.030999   75577 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem
	I0920 18:18:14.031072   75577 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem (1082 bytes)
	I0920 18:18:14.031188   75577 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem, removing ...
	I0920 18:18:14.031205   75577 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem
	I0920 18:18:14.031240   75577 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem (1123 bytes)
	I0920 18:18:14.031340   75577 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem, removing ...
	I0920 18:18:14.031351   75577 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem
	I0920 18:18:14.031378   75577 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem (1675 bytes)
	I0920 18:18:14.031455   75577 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-744025 san=[127.0.0.1 192.168.39.207 localhost minikube old-k8s-version-744025]
	I0920 18:18:14.140775   75577 provision.go:177] copyRemoteCerts
	I0920 18:18:14.140847   75577 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 18:18:14.140883   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHHostname
	I0920 18:18:14.143599   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.144016   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:14.144062   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.144199   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHPort
	I0920 18:18:14.144377   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:18:14.144526   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHUsername
	I0920 18:18:14.144656   75577 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/old-k8s-version-744025/id_rsa Username:docker}
	I0920 18:18:14.228286   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 18:18:14.257293   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0920 18:18:14.281335   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
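(copyRemoteCerts above pushes the freshly generated server certificate, the CA, and the server key into /etc/docker on the guest; the SANs baked into the server cert are the ones listed in the provision.go line. A hedged sketch for inspecting what was installed, run from the host via `minikube ssh`.)

    # Hedged sketch: inspecting the certs copied to /etc/docker inside the guest.
    minikube -p old-k8s-version-744025 ssh -- sudo openssl x509 -in /etc/docker/server.pem -noout -subject -dates
    minikube -p old-k8s-version-744025 ssh -- sudo openssl x509 -in /etc/docker/ca.pem -noout -issuer -dates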
	I0920 18:18:14.305736   75577 provision.go:87] duration metric: took 281.853458ms to configureAuth
	I0920 18:18:14.305762   75577 buildroot.go:189] setting minikube options for container-runtime
	I0920 18:18:14.305983   75577 config.go:182] Loaded profile config "old-k8s-version-744025": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0920 18:18:14.306076   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHHostname
	I0920 18:18:14.308833   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.309220   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:14.309244   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.309535   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHPort
	I0920 18:18:14.309779   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:18:14.309974   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:18:14.310124   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHUsername
	I0920 18:18:14.310316   75577 main.go:141] libmachine: Using SSH client type: native
	I0920 18:18:14.310535   75577 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.207 22 <nil> <nil>}
	I0920 18:18:14.310551   75577 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 18:18:14.536416   75577 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 18:18:14.536442   75577 machine.go:96] duration metric: took 869.318772ms to provisionDockerMachine
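(Provisioning finishes by writing an insecure-registry override for CRI-O into /etc/sysconfig/crio.minikube and restarting the service, which is why the SSH output above echoes the CRIO_MINIKUBE_OPTIONS line back. A hedged sketch for confirming the drop-in and the service state inside the guest.)

    # Hedged sketch: verifying the CRI-O override written by the provisioning step.
    sudo cat /etc/sysconfig/crio.minikube   # should contain --insecure-registry 10.96.0.0/12
    systemctl is-active crio                # CRI-O was restarted by the same SSH command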
	I0920 18:18:14.536456   75577 start.go:293] postStartSetup for "old-k8s-version-744025" (driver="kvm2")
	I0920 18:18:14.536468   75577 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 18:18:14.536510   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .DriverName
	I0920 18:18:14.536803   75577 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 18:18:14.536831   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHHostname
	I0920 18:18:14.539503   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.539923   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:14.539951   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.540126   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHPort
	I0920 18:18:14.540303   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:18:14.540463   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHUsername
	I0920 18:18:14.540649   75577 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/old-k8s-version-744025/id_rsa Username:docker}
	I0920 18:18:14.625866   75577 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 18:18:14.630527   75577 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 18:18:14.630551   75577 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8777/.minikube/addons for local assets ...
	I0920 18:18:14.630637   75577 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8777/.minikube/files for local assets ...
	I0920 18:18:14.630727   75577 filesync.go:149] local asset: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem -> 159732.pem in /etc/ssl/certs
	I0920 18:18:14.630840   75577 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 18:18:14.640965   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem --> /etc/ssl/certs/159732.pem (1708 bytes)
	I0920 18:18:14.668227   75577 start.go:296] duration metric: took 131.756559ms for postStartSetup
	I0920 18:18:14.668268   75577 fix.go:56] duration metric: took 19.581104117s for fixHost
	I0920 18:18:14.668295   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHHostname
	I0920 18:18:14.671138   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.671520   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:14.671549   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.671777   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHPort
	I0920 18:18:14.671981   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:18:14.672141   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:18:14.672280   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHUsername
	I0920 18:18:14.672436   75577 main.go:141] libmachine: Using SSH client type: native
	I0920 18:18:14.672606   75577 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.207 22 <nil> <nil>}
	I0920 18:18:14.672616   75577 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 18:18:14.778752   75577 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726856294.753230182
	
	I0920 18:18:14.778785   75577 fix.go:216] guest clock: 1726856294.753230182
	I0920 18:18:14.778796   75577 fix.go:229] Guest: 2024-09-20 18:18:14.753230182 +0000 UTC Remote: 2024-09-20 18:18:14.668273351 +0000 UTC m=+241.967312055 (delta=84.956831ms)
	I0920 18:18:14.778827   75577 fix.go:200] guest clock delta is within tolerance: 84.956831ms
	I0920 18:18:14.778836   75577 start.go:83] releasing machines lock for "old-k8s-version-744025", held for 19.691716285s
	I0920 18:18:14.778874   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .DriverName
	I0920 18:18:14.779135   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetIP
	I0920 18:18:14.781932   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.782357   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:14.782386   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.782572   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .DriverName
	I0920 18:18:14.783124   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .DriverName
	I0920 18:18:14.783328   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .DriverName
	I0920 18:18:14.783401   75577 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 18:18:14.783465   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHHostname
	I0920 18:18:14.783519   75577 ssh_runner.go:195] Run: cat /version.json
	I0920 18:18:14.783552   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHHostname
	I0920 18:18:14.786645   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.786817   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.786994   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:14.787023   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.787207   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:14.787259   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:14.787264   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHPort
	I0920 18:18:14.787404   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHPort
	I0920 18:18:14.787469   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:18:14.787561   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHKeyPath
	I0920 18:18:14.787612   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHUsername
	I0920 18:18:14.787711   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetSSHUsername
	I0920 18:18:14.787845   75577 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/old-k8s-version-744025/id_rsa Username:docker}
	I0920 18:18:14.787845   75577 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/old-k8s-version-744025/id_rsa Username:docker}
	I0920 18:18:14.872183   75577 ssh_runner.go:195] Run: systemctl --version
	I0920 18:18:14.913275   75577 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 18:18:15.071299   75577 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 18:18:15.078725   75577 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 18:18:15.078806   75577 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 18:18:15.101497   75577 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 18:18:15.101522   75577 start.go:495] detecting cgroup driver to use...
	I0920 18:18:15.101579   75577 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 18:18:15.120626   75577 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 18:18:15.137297   75577 docker.go:217] disabling cri-docker service (if available) ...
	I0920 18:18:15.137401   75577 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 18:18:15.152152   75577 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 18:18:15.167359   75577 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 18:18:15.288763   75577 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 18:18:15.475429   75577 docker.go:233] disabling docker service ...
	I0920 18:18:15.475512   75577 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 18:18:15.493331   75577 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 18:18:15.510326   75577 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 18:18:15.629119   75577 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 18:18:15.758923   75577 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 18:18:15.776778   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 18:18:15.798980   75577 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0920 18:18:15.799042   75577 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:18:15.810264   75577 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 18:18:15.810322   75577 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:18:15.821060   75577 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:18:15.832685   75577 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
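(The tee and sed commands above point crictl at the CRI-O socket and pin the pause image and cgroup driver in the CRI-O drop-in config. A sketch of what the relevant files should contain afterwards, reconstructed from the values in this log rather than a verbatim dump of the files.)

    # /etc/crictl.yaml (written by the tee above)
    runtime-endpoint: unix:///var/run/crio/crio.sock

    # keys edited in /etc/crio/crio.conf.d/02-crio.conf by the sed commands above
    pause_image = "registry.k8s.io/pause:3.2"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"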
	I0920 18:18:15.843026   75577 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 18:18:15.853996   75577 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 18:18:15.864126   75577 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 18:18:15.864183   75577 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 18:18:15.878216   75577 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
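(The sysctl probe for net.bridge.bridge-nf-call-iptables fails until br_netfilter is loaded, so the code loads the module and then enables IPv4 forwarding; both are needed by kube-proxy and the bridge CNI. A hedged sketch for checking the end state inside the guest.)

    # Hedged sketch: confirming the kernel settings adjusted above.
    lsmod | grep br_netfilter
    sysctl net.ipv4.ip_forward                      # should report 1 after the echo above
    sysctl net.bridge.bridge-nf-call-iptables       # becomes available once br_netfilter is loaded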
	I0920 18:18:15.889820   75577 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:18:16.015555   75577 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 18:18:16.118797   75577 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 18:18:16.118880   75577 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 18:18:16.124430   75577 start.go:563] Will wait 60s for crictl version
	I0920 18:18:16.124485   75577 ssh_runner.go:195] Run: which crictl
	I0920 18:18:16.128632   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 18:18:16.165596   75577 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 18:18:16.165707   75577 ssh_runner.go:195] Run: crio --version
	I0920 18:18:16.198772   75577 ssh_runner.go:195] Run: crio --version
	I0920 18:18:16.238511   75577 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0920 18:18:13.980331   75264 node_ready.go:53] node "default-k8s-diff-port-553719" has status "Ready":"False"
	I0920 18:18:15.981435   75264 node_ready.go:53] node "default-k8s-diff-port-553719" has status "Ready":"False"
	I0920 18:18:16.982412   75264 node_ready.go:49] node "default-k8s-diff-port-553719" has status "Ready":"True"
	I0920 18:18:16.982441   75264 node_ready.go:38] duration metric: took 7.506086526s for node "default-k8s-diff-port-553719" to be "Ready" ...
	I0920 18:18:16.982457   75264 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:18:16.989870   75264 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-dmdfb" in "kube-system" namespace to be "Ready" ...
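(With the node Ready, start switches to waiting for the system-critical pods, beginning with the CoreDNS pod named above. A hedged sketch of the equivalent check done from the host.)

    # Hedged sketch: the same readiness check done by hand for the CoreDNS pod named above.
    kubectl --context default-k8s-diff-port-553719 -n kube-system \
      wait --for=condition=Ready pod/coredns-7c65d6cfc9-dmdfb --timeout=6m
    # or across all kube-dns pods:
    kubectl --context default-k8s-diff-port-553719 -n kube-system \
      wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m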
	I0920 18:18:14.803646   74753 main.go:141] libmachine: (no-preload-956403) Calling .Start
	I0920 18:18:14.803856   74753 main.go:141] libmachine: (no-preload-956403) Ensuring networks are active...
	I0920 18:18:14.804594   74753 main.go:141] libmachine: (no-preload-956403) Ensuring network default is active
	I0920 18:18:14.804920   74753 main.go:141] libmachine: (no-preload-956403) Ensuring network mk-no-preload-956403 is active
	I0920 18:18:14.805341   74753 main.go:141] libmachine: (no-preload-956403) Getting domain xml...
	I0920 18:18:14.806068   74753 main.go:141] libmachine: (no-preload-956403) Creating domain...
	I0920 18:18:16.196663   74753 main.go:141] libmachine: (no-preload-956403) Waiting to get IP...
	I0920 18:18:16.197381   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:16.197762   74753 main.go:141] libmachine: (no-preload-956403) DBG | unable to find current IP address of domain no-preload-956403 in network mk-no-preload-956403
	I0920 18:18:16.197885   74753 main.go:141] libmachine: (no-preload-956403) DBG | I0920 18:18:16.197764   77223 retry.go:31] will retry after 214.087819ms: waiting for machine to come up
	I0920 18:18:16.413365   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:16.413951   74753 main.go:141] libmachine: (no-preload-956403) DBG | unable to find current IP address of domain no-preload-956403 in network mk-no-preload-956403
	I0920 18:18:16.413977   74753 main.go:141] libmachine: (no-preload-956403) DBG | I0920 18:18:16.413916   77223 retry.go:31] will retry after 249.35647ms: waiting for machine to come up
	I0920 18:18:16.665587   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:16.666168   74753 main.go:141] libmachine: (no-preload-956403) DBG | unable to find current IP address of domain no-preload-956403 in network mk-no-preload-956403
	I0920 18:18:16.666203   74753 main.go:141] libmachine: (no-preload-956403) DBG | I0920 18:18:16.666132   77223 retry.go:31] will retry after 374.598012ms: waiting for machine to come up
	I0920 18:18:17.042911   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:17.043594   74753 main.go:141] libmachine: (no-preload-956403) DBG | unable to find current IP address of domain no-preload-956403 in network mk-no-preload-956403
	I0920 18:18:17.043618   74753 main.go:141] libmachine: (no-preload-956403) DBG | I0920 18:18:17.043550   77223 retry.go:31] will retry after 536.252353ms: waiting for machine to come up
	I0920 18:18:17.581141   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:17.581582   74753 main.go:141] libmachine: (no-preload-956403) DBG | unable to find current IP address of domain no-preload-956403 in network mk-no-preload-956403
	I0920 18:18:17.581616   74753 main.go:141] libmachine: (no-preload-956403) DBG | I0920 18:18:17.581528   77223 retry.go:31] will retry after 459.241867ms: waiting for machine to come up
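(The restart path boots the existing libvirt domain and then polls for a DHCP lease on the mk-no-preload-956403 network with a growing backoff, which is what the "waiting for machine to come up" retries above show. A hedged sketch of watching the same thing directly with virsh on the host; the connection URI is the KVMQemuURI value that appears later in this log.)

    # Hedged sketch: watching the KVM guest come up outside of minikube.
    virsh -c qemu:///system domstate no-preload-956403
    virsh -c qemu:///system domifaddr no-preload-956403            # empty until a lease exists
    virsh -c qemu:///system net-dhcp-leases mk-no-preload-956403   # shows the lease once DHCP answers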
	I0920 18:18:16.239946   75577 main.go:141] libmachine: (old-k8s-version-744025) Calling .GetIP
	I0920 18:18:16.242727   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:16.243147   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:57:41", ip: ""} in network mk-old-k8s-version-744025: {Iface:virbr1 ExpiryTime:2024-09-20 19:18:06 +0000 UTC Type:0 Mac:52:54:00:e5:57:41 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:old-k8s-version-744025 Clientid:01:52:54:00:e5:57:41}
	I0920 18:18:16.243184   75577 main.go:141] libmachine: (old-k8s-version-744025) DBG | domain old-k8s-version-744025 has defined IP address 192.168.39.207 and MAC address 52:54:00:e5:57:41 in network mk-old-k8s-version-744025
	I0920 18:18:16.243561   75577 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0920 18:18:16.247928   75577 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 18:18:16.262165   75577 kubeadm.go:883] updating cluster {Name:old-k8s-version-744025 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-744025 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.207 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 18:18:16.262310   75577 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0920 18:18:16.262358   75577 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:18:16.313771   75577 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0920 18:18:16.313854   75577 ssh_runner.go:195] Run: which lz4
	I0920 18:18:16.318361   75577 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 18:18:16.322529   75577 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 18:18:16.322570   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
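(Because /preloaded.tar.lz4 is absent in the guest, the ~473 MB preload tarball for v1.20.0 on cri-o is copied over from the host-side cache before being extracted into /var further down. A hedged sketch for inspecting that cache entry on the Jenkins host; the path is taken verbatim from the log.)

    # Hedged sketch: inspecting the host-side preload cache entry being scp'd above.
    ls -lh /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
    lz4 -t /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4   # integrity check only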
	I0920 18:18:14.192319   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:16.194161   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:18.699498   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:18.999985   75264 pod_ready.go:103] pod "coredns-7c65d6cfc9-dmdfb" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:21.497825   75264 pod_ready.go:103] pod "coredns-7c65d6cfc9-dmdfb" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:18.042075   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:18.042566   74753 main.go:141] libmachine: (no-preload-956403) DBG | unable to find current IP address of domain no-preload-956403 in network mk-no-preload-956403
	I0920 18:18:18.042603   74753 main.go:141] libmachine: (no-preload-956403) DBG | I0920 18:18:18.042533   77223 retry.go:31] will retry after 833.585895ms: waiting for machine to come up
	I0920 18:18:18.877534   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:18.878023   74753 main.go:141] libmachine: (no-preload-956403) DBG | unable to find current IP address of domain no-preload-956403 in network mk-no-preload-956403
	I0920 18:18:18.878047   74753 main.go:141] libmachine: (no-preload-956403) DBG | I0920 18:18:18.877988   77223 retry.go:31] will retry after 1.035805905s: waiting for machine to come up
	I0920 18:18:19.915316   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:19.915735   74753 main.go:141] libmachine: (no-preload-956403) DBG | unable to find current IP address of domain no-preload-956403 in network mk-no-preload-956403
	I0920 18:18:19.915859   74753 main.go:141] libmachine: (no-preload-956403) DBG | I0920 18:18:19.915787   77223 retry.go:31] will retry after 978.827371ms: waiting for machine to come up
	I0920 18:18:20.896532   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:20.897185   74753 main.go:141] libmachine: (no-preload-956403) DBG | unable to find current IP address of domain no-preload-956403 in network mk-no-preload-956403
	I0920 18:18:20.897216   74753 main.go:141] libmachine: (no-preload-956403) DBG | I0920 18:18:20.897122   77223 retry.go:31] will retry after 1.808853939s: waiting for machine to come up
	I0920 18:18:17.953626   75577 crio.go:462] duration metric: took 1.635321078s to copy over tarball
	I0920 18:18:17.953717   75577 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0920 18:18:21.049561   75577 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.095809102s)
	I0920 18:18:21.049588   75577 crio.go:469] duration metric: took 3.095926273s to extract the tarball
	I0920 18:18:21.049596   75577 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0920 18:18:21.093521   75577 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:18:21.132318   75577 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0920 18:18:21.132361   75577 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0920 18:18:21.132426   75577 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:18:21.132449   75577 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0920 18:18:21.132432   75577 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0920 18:18:21.132551   75577 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 18:18:21.132587   75577 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0920 18:18:21.132708   75577 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0920 18:18:21.132819   75577 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0920 18:18:21.132557   75577 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0920 18:18:21.134217   75577 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0920 18:18:21.134256   75577 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0920 18:18:21.134277   75577 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0920 18:18:21.134313   75577 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0920 18:18:21.134327   75577 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 18:18:21.134341   75577 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0920 18:18:21.134266   75577 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0920 18:18:21.134356   75577 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:18:21.421546   75577 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0920 18:18:21.467311   75577 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0920 18:18:21.467349   75577 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0920 18:18:21.467400   75577 ssh_runner.go:195] Run: which crictl
	I0920 18:18:21.471158   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0920 18:18:21.474543   75577 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0920 18:18:21.480501   75577 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 18:18:21.499826   75577 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0920 18:18:21.499835   75577 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0920 18:18:21.505712   75577 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0920 18:18:21.514377   75577 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0920 18:18:21.514897   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0920 18:18:21.601531   75577 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0920 18:18:21.601582   75577 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0920 18:18:21.601635   75577 ssh_runner.go:195] Run: which crictl
	I0920 18:18:21.660417   75577 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0920 18:18:21.660457   75577 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 18:18:21.660504   75577 ssh_runner.go:195] Run: which crictl
	I0920 18:18:21.660528   75577 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0920 18:18:21.660571   75577 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0920 18:18:21.660625   75577 ssh_runner.go:195] Run: which crictl
	I0920 18:18:21.667538   75577 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0920 18:18:21.667558   75577 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0920 18:18:21.667583   75577 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0920 18:18:21.667595   75577 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0920 18:18:21.667633   75577 ssh_runner.go:195] Run: which crictl
	I0920 18:18:21.667652   75577 ssh_runner.go:195] Run: which crictl
	I0920 18:18:21.676529   75577 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0920 18:18:21.676580   75577 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0920 18:18:21.676602   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0920 18:18:21.676633   75577 ssh_runner.go:195] Run: which crictl
	I0920 18:18:21.676643   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0920 18:18:21.680041   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 18:18:21.681078   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0920 18:18:21.681180   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0920 18:18:21.683409   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0920 18:18:21.804439   75577 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0920 18:18:21.804492   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0920 18:18:21.804530   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 18:18:21.804585   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0920 18:18:21.823434   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0920 18:18:21.823497   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0920 18:18:21.823539   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0920 18:18:21.932568   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 18:18:21.932619   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0920 18:18:21.932639   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0920 18:18:21.958001   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0920 18:18:21.971089   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0920 18:18:21.975643   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0920 18:18:22.100118   75577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0920 18:18:22.100173   75577 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0920 18:18:22.100197   75577 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0920 18:18:22.100276   75577 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0920 18:18:22.100333   75577 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0920 18:18:22.109895   75577 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0920 18:18:22.137798   75577 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0920 18:18:22.331884   75577 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:18:22.476470   75577 cache_images.go:92] duration metric: took 1.344086626s to LoadCachedImages
	W0920 18:18:22.476575   75577 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0920 18:18:22.476595   75577 kubeadm.go:934] updating node { 192.168.39.207 8443 v1.20.0 crio true true} ...
	I0920 18:18:22.476742   75577 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-744025 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.207
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-744025 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 18:18:22.476833   75577 ssh_runner.go:195] Run: crio config
	I0920 18:18:22.528964   75577 cni.go:84] Creating CNI manager for ""
	I0920 18:18:22.528990   75577 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 18:18:22.528999   75577 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 18:18:22.529016   75577 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.207 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-744025 NodeName:old-k8s-version-744025 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.207"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.207 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0920 18:18:22.529173   75577 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.207
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-744025"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.207
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.207"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 18:18:22.529233   75577 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0920 18:18:22.540849   75577 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 18:18:22.540935   75577 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 18:18:22.551148   75577 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0920 18:18:22.569199   75577 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 18:18:22.585909   75577 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0920 18:18:22.604987   75577 ssh_runner.go:195] Run: grep 192.168.39.207	control-plane.minikube.internal$ /etc/hosts
	I0920 18:18:22.609152   75577 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.207	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 18:18:22.628042   75577 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:18:21.191366   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:23.193152   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:23.914443   75264 pod_ready.go:103] pod "coredns-7c65d6cfc9-dmdfb" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:25.002152   75264 pod_ready.go:93] pod "coredns-7c65d6cfc9-dmdfb" in "kube-system" namespace has status "Ready":"True"
	I0920 18:18:25.002185   75264 pod_ready.go:82] duration metric: took 8.012280973s for pod "coredns-7c65d6cfc9-dmdfb" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:25.002198   75264 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-553719" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:25.010541   75264 pod_ready.go:93] pod "etcd-default-k8s-diff-port-553719" in "kube-system" namespace has status "Ready":"True"
	I0920 18:18:25.010570   75264 pod_ready.go:82] duration metric: took 8.362504ms for pod "etcd-default-k8s-diff-port-553719" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:25.010591   75264 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-553719" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:25.023701   75264 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-553719" in "kube-system" namespace has status "Ready":"True"
	I0920 18:18:25.023741   75264 pod_ready.go:82] duration metric: took 13.139423ms for pod "kube-apiserver-default-k8s-diff-port-553719" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:25.023758   75264 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-553719" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:25.031247   75264 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-553719" in "kube-system" namespace has status "Ready":"True"
	I0920 18:18:25.031282   75264 pod_ready.go:82] duration metric: took 7.515474ms for pod "kube-controller-manager-default-k8s-diff-port-553719" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:25.031307   75264 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-p9crq" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:25.119242   75264 pod_ready.go:93] pod "kube-proxy-p9crq" in "kube-system" namespace has status "Ready":"True"
	I0920 18:18:25.119267   75264 pod_ready.go:82] duration metric: took 87.951791ms for pod "kube-proxy-p9crq" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:25.119280   75264 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-553719" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:25.518243   75264 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-553719" in "kube-system" namespace has status "Ready":"True"
	I0920 18:18:25.518267   75264 pod_ready.go:82] duration metric: took 398.979314ms for pod "kube-scheduler-default-k8s-diff-port-553719" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:25.518281   75264 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:22.707778   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:22.708271   74753 main.go:141] libmachine: (no-preload-956403) DBG | unable to find current IP address of domain no-preload-956403 in network mk-no-preload-956403
	I0920 18:18:22.708333   74753 main.go:141] libmachine: (no-preload-956403) DBG | I0920 18:18:22.708161   77223 retry.go:31] will retry after 1.516042611s: waiting for machine to come up
	I0920 18:18:24.225560   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:24.225987   74753 main.go:141] libmachine: (no-preload-956403) DBG | unable to find current IP address of domain no-preload-956403 in network mk-no-preload-956403
	I0920 18:18:24.226041   74753 main.go:141] libmachine: (no-preload-956403) DBG | I0920 18:18:24.225968   77223 retry.go:31] will retry after 2.135874415s: waiting for machine to come up
	I0920 18:18:26.363371   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:26.363861   74753 main.go:141] libmachine: (no-preload-956403) DBG | unable to find current IP address of domain no-preload-956403 in network mk-no-preload-956403
	I0920 18:18:26.363895   74753 main.go:141] libmachine: (no-preload-956403) DBG | I0920 18:18:26.363777   77223 retry.go:31] will retry after 3.29383193s: waiting for machine to come up
	I0920 18:18:22.796651   75577 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:18:22.813789   75577 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/old-k8s-version-744025 for IP: 192.168.39.207
	I0920 18:18:22.813817   75577 certs.go:194] generating shared ca certs ...
	I0920 18:18:22.813848   75577 certs.go:226] acquiring lock for ca certs: {Name:mkc7ef6c737c6bdc3fdd9dcff8f57029c020d8f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:18:22.814045   75577 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key
	I0920 18:18:22.814101   75577 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key
	I0920 18:18:22.814118   75577 certs.go:256] generating profile certs ...
	I0920 18:18:22.814290   75577 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/old-k8s-version-744025/client.key
	I0920 18:18:22.814383   75577 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/old-k8s-version-744025/apiserver.key.3105b99d
	I0920 18:18:22.814445   75577 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/old-k8s-version-744025/proxy-client.key
	I0920 18:18:22.814626   75577 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973.pem (1338 bytes)
	W0920 18:18:22.814660   75577 certs.go:480] ignoring /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973_empty.pem, impossibly tiny 0 bytes
	I0920 18:18:22.814666   75577 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 18:18:22.814691   75577 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem (1082 bytes)
	I0920 18:18:22.814717   75577 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem (1123 bytes)
	I0920 18:18:22.814749   75577 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem (1675 bytes)
	I0920 18:18:22.814813   75577 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem (1708 bytes)
	I0920 18:18:22.815736   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 18:18:22.863832   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 18:18:22.921747   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 18:18:22.959556   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0920 18:18:22.992097   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/old-k8s-version-744025/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0920 18:18:23.027565   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/old-k8s-version-744025/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0920 18:18:23.057374   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/old-k8s-version-744025/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 18:18:23.094290   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/old-k8s-version-744025/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 18:18:23.120095   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973.pem --> /usr/share/ca-certificates/15973.pem (1338 bytes)
	I0920 18:18:23.144374   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem --> /usr/share/ca-certificates/159732.pem (1708 bytes)
	I0920 18:18:23.169431   75577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 18:18:23.195779   75577 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 18:18:23.212820   75577 ssh_runner.go:195] Run: openssl version
	I0920 18:18:23.218876   75577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 18:18:23.229684   75577 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:18:23.234533   75577 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:18:23.234603   75577 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:18:23.240460   75577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 18:18:23.251940   75577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15973.pem && ln -fs /usr/share/ca-certificates/15973.pem /etc/ssl/certs/15973.pem"
	I0920 18:18:23.263308   75577 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15973.pem
	I0920 18:18:23.268059   75577 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 17:04 /usr/share/ca-certificates/15973.pem
	I0920 18:18:23.268128   75577 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15973.pem
	I0920 18:18:23.274199   75577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15973.pem /etc/ssl/certs/51391683.0"
	I0920 18:18:23.286362   75577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/159732.pem && ln -fs /usr/share/ca-certificates/159732.pem /etc/ssl/certs/159732.pem"
	I0920 18:18:23.303962   75577 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/159732.pem
	I0920 18:18:23.310907   75577 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 17:04 /usr/share/ca-certificates/159732.pem
	I0920 18:18:23.310981   75577 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/159732.pem
	I0920 18:18:23.317881   75577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/159732.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 18:18:23.329247   75577 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 18:18:23.334223   75577 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 18:18:23.340565   75577 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 18:18:23.346929   75577 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 18:18:23.353681   75577 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 18:18:23.359699   75577 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 18:18:23.365749   75577 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0920 18:18:23.371899   75577 kubeadm.go:392] StartCluster: {Name:old-k8s-version-744025 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-744025 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.207 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:18:23.371981   75577 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 18:18:23.372027   75577 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 18:18:23.419904   75577 cri.go:89] found id: ""
	I0920 18:18:23.419982   75577 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 18:18:23.431761   75577 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0920 18:18:23.431782   75577 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0920 18:18:23.431833   75577 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0920 18:18:23.444358   75577 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0920 18:18:23.445545   75577 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-744025" does not appear in /home/jenkins/minikube-integration/19672-8777/kubeconfig
	I0920 18:18:23.446531   75577 kubeconfig.go:62] /home/jenkins/minikube-integration/19672-8777/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-744025" cluster setting kubeconfig missing "old-k8s-version-744025" context setting]
	I0920 18:18:23.447808   75577 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/kubeconfig: {Name:mkf32a4c736808e023459b2f0e40188618a38db1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:18:23.568927   75577 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0920 18:18:23.579991   75577 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.207
	I0920 18:18:23.580025   75577 kubeadm.go:1160] stopping kube-system containers ...
	I0920 18:18:23.580038   75577 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0920 18:18:23.580097   75577 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 18:18:23.625568   75577 cri.go:89] found id: ""
	I0920 18:18:23.625648   75577 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0920 18:18:23.643938   75577 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 18:18:23.654375   75577 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 18:18:23.654398   75577 kubeadm.go:157] found existing configuration files:
	
	I0920 18:18:23.654453   75577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 18:18:23.664335   75577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 18:18:23.664409   75577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 18:18:23.674996   75577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 18:18:23.685310   75577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 18:18:23.685401   75577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 18:18:23.696241   75577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 18:18:23.706386   75577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 18:18:23.706465   75577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 18:18:23.716491   75577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 18:18:23.726566   75577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 18:18:23.726626   75577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 18:18:23.738576   75577 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 18:18:23.749510   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:18:23.877503   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:18:24.789322   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:18:25.054969   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:18:25.152117   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:18:25.249140   75577 api_server.go:52] waiting for apiserver process to appear ...
	I0920 18:18:25.249245   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:25.749427   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:26.249360   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:26.749895   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:27.249636   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:25.194678   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:27.693680   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:27.524378   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:29.524998   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:32.025310   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:29.661278   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:29.661743   74753 main.go:141] libmachine: (no-preload-956403) DBG | unable to find current IP address of domain no-preload-956403 in network mk-no-preload-956403
	I0920 18:18:29.661762   74753 main.go:141] libmachine: (no-preload-956403) DBG | I0920 18:18:29.661714   77223 retry.go:31] will retry after 3.154777794s: waiting for machine to come up
	I0920 18:18:27.749629   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:28.250139   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:28.749691   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:29.249709   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:29.749550   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:30.250186   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:30.749562   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:31.249622   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:31.749925   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:32.250324   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:30.191419   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:32.191873   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:32.820331   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:32.820870   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has current primary IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:32.820895   74753 main.go:141] libmachine: (no-preload-956403) Found IP for machine: 192.168.50.47
	I0920 18:18:32.820908   74753 main.go:141] libmachine: (no-preload-956403) Reserving static IP address...
	I0920 18:18:32.821313   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "no-preload-956403", mac: "52:54:00:b6:13:30", ip: "192.168.50.47"} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:32.821339   74753 main.go:141] libmachine: (no-preload-956403) Reserved static IP address: 192.168.50.47
	I0920 18:18:32.821355   74753 main.go:141] libmachine: (no-preload-956403) DBG | skip adding static IP to network mk-no-preload-956403 - found existing host DHCP lease matching {name: "no-preload-956403", mac: "52:54:00:b6:13:30", ip: "192.168.50.47"}
	I0920 18:18:32.821372   74753 main.go:141] libmachine: (no-preload-956403) DBG | Getting to WaitForSSH function...
	I0920 18:18:32.821389   74753 main.go:141] libmachine: (no-preload-956403) Waiting for SSH to be available...
	I0920 18:18:32.823590   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:32.823894   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:32.823928   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:32.824019   74753 main.go:141] libmachine: (no-preload-956403) DBG | Using SSH client type: external
	I0920 18:18:32.824046   74753 main.go:141] libmachine: (no-preload-956403) DBG | Using SSH private key: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/no-preload-956403/id_rsa (-rw-------)
	I0920 18:18:32.824077   74753 main.go:141] libmachine: (no-preload-956403) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.47 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19672-8777/.minikube/machines/no-preload-956403/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 18:18:32.824090   74753 main.go:141] libmachine: (no-preload-956403) DBG | About to run SSH command:
	I0920 18:18:32.824102   74753 main.go:141] libmachine: (no-preload-956403) DBG | exit 0
	I0920 18:18:32.950122   74753 main.go:141] libmachine: (no-preload-956403) DBG | SSH cmd err, output: <nil>: 
	I0920 18:18:32.950537   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetConfigRaw
	I0920 18:18:32.951187   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetIP
	I0920 18:18:32.953731   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:32.954073   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:32.954102   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:32.954365   74753 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/no-preload-956403/config.json ...
	I0920 18:18:32.954603   74753 machine.go:93] provisionDockerMachine start ...
	I0920 18:18:32.954621   74753 main.go:141] libmachine: (no-preload-956403) Calling .DriverName
	I0920 18:18:32.954814   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHHostname
	I0920 18:18:32.957049   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:32.957379   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:32.957425   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:32.957538   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHPort
	I0920 18:18:32.957717   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHKeyPath
	I0920 18:18:32.957920   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHKeyPath
	I0920 18:18:32.958094   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHUsername
	I0920 18:18:32.958282   74753 main.go:141] libmachine: Using SSH client type: native
	I0920 18:18:32.958494   74753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.47 22 <nil> <nil>}
	I0920 18:18:32.958506   74753 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 18:18:33.058187   74753 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0920 18:18:33.058222   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetMachineName
	I0920 18:18:33.058477   74753 buildroot.go:166] provisioning hostname "no-preload-956403"
	I0920 18:18:33.058508   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetMachineName
	I0920 18:18:33.058668   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHHostname
	I0920 18:18:33.061310   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.061616   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:33.061649   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.061783   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHPort
	I0920 18:18:33.061961   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHKeyPath
	I0920 18:18:33.062089   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHKeyPath
	I0920 18:18:33.062197   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHUsername
	I0920 18:18:33.062340   74753 main.go:141] libmachine: Using SSH client type: native
	I0920 18:18:33.062536   74753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.47 22 <nil> <nil>}
	I0920 18:18:33.062553   74753 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-956403 && echo "no-preload-956403" | sudo tee /etc/hostname
	I0920 18:18:33.175825   74753 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-956403
	
	I0920 18:18:33.175860   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHHostname
	I0920 18:18:33.178483   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.178749   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:33.178769   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.178904   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHPort
	I0920 18:18:33.179077   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHKeyPath
	I0920 18:18:33.179226   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHKeyPath
	I0920 18:18:33.179366   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHUsername
	I0920 18:18:33.179545   74753 main.go:141] libmachine: Using SSH client type: native
	I0920 18:18:33.179710   74753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.47 22 <nil> <nil>}
	I0920 18:18:33.179726   74753 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-956403' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-956403/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-956403' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 18:18:33.290675   74753 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 18:18:33.290701   74753 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19672-8777/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-8777/.minikube}
	I0920 18:18:33.290722   74753 buildroot.go:174] setting up certificates
	I0920 18:18:33.290735   74753 provision.go:84] configureAuth start
	I0920 18:18:33.290747   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetMachineName
	I0920 18:18:33.291015   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetIP
	I0920 18:18:33.293810   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.294244   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:33.294282   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.294337   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHHostname
	I0920 18:18:33.296376   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.296749   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:33.296776   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.296930   74753 provision.go:143] copyHostCerts
	I0920 18:18:33.296985   74753 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem, removing ...
	I0920 18:18:33.296995   74753 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem
	I0920 18:18:33.297048   74753 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem (1082 bytes)
	I0920 18:18:33.297166   74753 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem, removing ...
	I0920 18:18:33.297177   74753 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem
	I0920 18:18:33.297211   74753 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem (1123 bytes)
	I0920 18:18:33.297287   74753 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem, removing ...
	I0920 18:18:33.297296   74753 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem
	I0920 18:18:33.297320   74753 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem (1675 bytes)
	I0920 18:18:33.297387   74753 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem org=jenkins.no-preload-956403 san=[127.0.0.1 192.168.50.47 localhost minikube no-preload-956403]
	I0920 18:18:33.366768   74753 provision.go:177] copyRemoteCerts
	I0920 18:18:33.366830   74753 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 18:18:33.366852   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHHostname
	I0920 18:18:33.369441   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.369755   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:33.369787   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.369958   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHPort
	I0920 18:18:33.370127   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHKeyPath
	I0920 18:18:33.370293   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHUsername
	I0920 18:18:33.370490   74753 sshutil.go:53] new ssh client: &{IP:192.168.50.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/no-preload-956403/id_rsa Username:docker}
	I0920 18:18:33.452070   74753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0920 18:18:33.477564   74753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0920 18:18:33.501002   74753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 18:18:33.526725   74753 provision.go:87] duration metric: took 235.975552ms to configureAuth
	I0920 18:18:33.526755   74753 buildroot.go:189] setting minikube options for container-runtime
	I0920 18:18:33.526943   74753 config.go:182] Loaded profile config "no-preload-956403": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:18:33.527011   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHHostname
	I0920 18:18:33.529870   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.530338   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:33.530371   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.530485   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHPort
	I0920 18:18:33.530708   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHKeyPath
	I0920 18:18:33.530897   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHKeyPath
	I0920 18:18:33.531057   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHUsername
	I0920 18:18:33.531276   74753 main.go:141] libmachine: Using SSH client type: native
	I0920 18:18:33.531497   74753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.47 22 <nil> <nil>}
	I0920 18:18:33.531519   74753 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 18:18:33.758711   74753 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 18:18:33.758739   74753 machine.go:96] duration metric: took 804.122904ms to provisionDockerMachine
	I0920 18:18:33.758753   74753 start.go:293] postStartSetup for "no-preload-956403" (driver="kvm2")
	I0920 18:18:33.758771   74753 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 18:18:33.758796   74753 main.go:141] libmachine: (no-preload-956403) Calling .DriverName
	I0920 18:18:33.759207   74753 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 18:18:33.759262   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHHostname
	I0920 18:18:33.762524   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.762991   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:33.763027   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.763207   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHPort
	I0920 18:18:33.763471   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHKeyPath
	I0920 18:18:33.763643   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHUsername
	I0920 18:18:33.763861   74753 sshutil.go:53] new ssh client: &{IP:192.168.50.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/no-preload-956403/id_rsa Username:docker}
	I0920 18:18:33.844910   74753 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 18:18:33.849111   74753 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 18:18:33.849137   74753 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8777/.minikube/addons for local assets ...
	I0920 18:18:33.849205   74753 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8777/.minikube/files for local assets ...
	I0920 18:18:33.849280   74753 filesync.go:149] local asset: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem -> 159732.pem in /etc/ssl/certs
	I0920 18:18:33.849367   74753 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 18:18:33.858757   74753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem --> /etc/ssl/certs/159732.pem (1708 bytes)
	I0920 18:18:33.883430   74753 start.go:296] duration metric: took 124.663757ms for postStartSetup
	I0920 18:18:33.883468   74753 fix.go:56] duration metric: took 19.104481875s for fixHost
	I0920 18:18:33.883488   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHHostname
	I0920 18:18:33.886703   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.887047   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:33.887076   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.887276   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHPort
	I0920 18:18:33.887502   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHKeyPath
	I0920 18:18:33.887685   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHKeyPath
	I0920 18:18:33.887816   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHUsername
	I0920 18:18:33.887962   74753 main.go:141] libmachine: Using SSH client type: native
	I0920 18:18:33.888137   74753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.47 22 <nil> <nil>}
	I0920 18:18:33.888148   74753 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 18:18:33.994635   74753 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726856313.964367484
	
	I0920 18:18:33.994660   74753 fix.go:216] guest clock: 1726856313.964367484
	I0920 18:18:33.994670   74753 fix.go:229] Guest: 2024-09-20 18:18:33.964367484 +0000 UTC Remote: 2024-09-20 18:18:33.883472234 +0000 UTC m=+356.278282007 (delta=80.89525ms)
	I0920 18:18:33.994695   74753 fix.go:200] guest clock delta is within tolerance: 80.89525ms
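The delta reported above is simply the guest clock minus the host-side timestamp taken around the SSH call: 1726856313.964367484 (guest `date +%s.%N`) - 1726856313.883472234 (host) ≈ 0.0809 s, i.e. the 80.89525ms in the log, which is inside minikube's skew tolerance, so the guest clock is left untouched.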
	I0920 18:18:33.994701   74753 start.go:83] releasing machines lock for "no-preload-956403", held for 19.215743841s
	I0920 18:18:33.994726   74753 main.go:141] libmachine: (no-preload-956403) Calling .DriverName
	I0920 18:18:33.994976   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetIP
	I0920 18:18:33.997685   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.998089   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:33.998114   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:33.998291   74753 main.go:141] libmachine: (no-preload-956403) Calling .DriverName
	I0920 18:18:33.998765   74753 main.go:141] libmachine: (no-preload-956403) Calling .DriverName
	I0920 18:18:33.998923   74753 main.go:141] libmachine: (no-preload-956403) Calling .DriverName
	I0920 18:18:33.999007   74753 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 18:18:33.999054   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHHostname
	I0920 18:18:33.999142   74753 ssh_runner.go:195] Run: cat /version.json
	I0920 18:18:33.999168   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHHostname
	I0920 18:18:34.001725   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:34.001891   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:34.002197   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:34.002225   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:34.002369   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHPort
	I0920 18:18:34.002424   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:34.002452   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:34.002533   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHKeyPath
	I0920 18:18:34.002625   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHPort
	I0920 18:18:34.002695   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHUsername
	I0920 18:18:34.002744   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHKeyPath
	I0920 18:18:34.002813   74753 sshutil.go:53] new ssh client: &{IP:192.168.50.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/no-preload-956403/id_rsa Username:docker}
	I0920 18:18:34.002888   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHUsername
	I0920 18:18:34.003032   74753 sshutil.go:53] new ssh client: &{IP:192.168.50.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/no-preload-956403/id_rsa Username:docker}
	I0920 18:18:34.079092   74753 ssh_runner.go:195] Run: systemctl --version
	I0920 18:18:34.112066   74753 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 18:18:34.257205   74753 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 18:18:34.265774   74753 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 18:18:34.265871   74753 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 18:18:34.285222   74753 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 18:18:34.285247   74753 start.go:495] detecting cgroup driver to use...
	I0920 18:18:34.285320   74753 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 18:18:34.302192   74753 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 18:18:34.316624   74753 docker.go:217] disabling cri-docker service (if available) ...
	I0920 18:18:34.316695   74753 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 18:18:34.331098   74753 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 18:18:34.345433   74753 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 18:18:34.481886   74753 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 18:18:34.644442   74753 docker.go:233] disabling docker service ...
	I0920 18:18:34.644530   74753 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 18:18:34.658714   74753 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 18:18:34.671506   74753 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 18:18:34.809548   74753 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 18:18:34.957438   74753 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 18:18:34.972102   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 18:18:34.993129   74753 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 18:18:34.993199   74753 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:18:35.004196   74753 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 18:18:35.004273   74753 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:18:35.015258   74753 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:18:35.026240   74753 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:18:35.037658   74753 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 18:18:35.048637   74753 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:18:35.059494   74753 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:18:35.077264   74753 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:18:35.087777   74753 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 18:18:35.097812   74753 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 18:18:35.097947   74753 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 18:18:35.112200   74753 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 18:18:35.123381   74753 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:18:35.262386   74753 ssh_runner.go:195] Run: sudo systemctl restart crio
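Taken together, the runtime reconfiguration in the preceding lines (pause image, cgroupfs cgroup manager, conmon cgroup, unprivileged-port sysctl, bridge netfilter and IP forwarding) condenses to the following shell sketch; the paths and sed expressions are the ones the log runs against the CRI-O drop-in:

	# condensed from the ssh_runner commands above
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf
	# br_netfilter is loaded because the sysctl probe failed; IP forwarding is enabled explicitly
	sudo modprobe br_netfilter
	sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	sudo systemctl daemon-reload && sudo systemctl restart crio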
	I0920 18:18:35.361152   74753 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 18:18:35.361257   74753 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 18:18:35.366321   74753 start.go:563] Will wait 60s for crictl version
	I0920 18:18:35.366390   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:18:35.370379   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 18:18:35.415080   74753 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 18:18:35.415182   74753 ssh_runner.go:195] Run: crio --version
	I0920 18:18:35.446934   74753 ssh_runner.go:195] Run: crio --version
	I0920 18:18:35.478823   74753 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 18:18:34.026138   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:36.525133   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:35.479956   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetIP
	I0920 18:18:35.483428   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:35.483820   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:35.483848   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:35.484079   74753 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0920 18:18:35.488686   74753 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 18:18:35.501184   74753 kubeadm.go:883] updating cluster {Name:no-preload-956403 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.1 ClusterName:no-preload-956403 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.47 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 18:18:35.501348   74753 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 18:18:35.501391   74753 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:18:35.535377   74753 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0920 18:18:35.535400   74753 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0920 18:18:35.535466   74753 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 18:18:35.535499   74753 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0920 18:18:35.535510   74753 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0920 18:18:35.535534   74753 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0920 18:18:35.535538   74753 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0920 18:18:35.535513   74753 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0920 18:18:35.535625   74753 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0920 18:18:35.535483   74753 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:18:35.537100   74753 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 18:18:35.537104   74753 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0920 18:18:35.537126   74753 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0920 18:18:35.537091   74753 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0920 18:18:35.537164   74753 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0920 18:18:35.537180   74753 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0920 18:18:35.537191   74753 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0920 18:18:35.537105   74753 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:18:35.770046   74753 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0920 18:18:35.796364   74753 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I0920 18:18:35.800776   74753 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I0920 18:18:35.802291   74753 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I0920 18:18:35.845969   74753 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0920 18:18:35.860263   74753 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I0920 18:18:35.892349   74753 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 18:18:35.919269   74753 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I0920 18:18:35.919323   74753 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I0920 18:18:35.919375   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:18:35.929045   74753 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I0920 18:18:35.929096   74753 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I0920 18:18:35.929143   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:18:35.929181   74753 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I0920 18:18:35.929235   74753 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I0920 18:18:35.929299   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:18:35.968513   74753 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0920 18:18:35.968557   74753 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0920 18:18:35.968553   74753 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I0920 18:18:35.968589   74753 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I0920 18:18:35.968612   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:18:35.968636   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:18:35.984335   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0920 18:18:35.984366   74753 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I0920 18:18:35.984379   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0920 18:18:35.984403   74753 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 18:18:35.984433   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0920 18:18:35.984450   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:18:35.984474   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0920 18:18:35.984505   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0920 18:18:36.106947   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0920 18:18:36.106964   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 18:18:36.106954   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0920 18:18:36.107025   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0920 18:18:36.107112   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0920 18:18:36.107142   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0920 18:18:36.231892   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0920 18:18:36.231989   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0920 18:18:36.232026   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0920 18:18:36.232143   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0920 18:18:36.232193   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0920 18:18:36.232281   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 18:18:36.355534   74753 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I0920 18:18:36.355632   74753 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0920 18:18:36.355650   74753 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I0920 18:18:36.355730   74753 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I0920 18:18:36.358584   74753 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I0920 18:18:36.358653   74753 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I0920 18:18:36.358678   74753 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0920 18:18:36.358677   74753 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0920 18:18:36.358723   74753 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0920 18:18:36.358751   74753 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0920 18:18:36.367463   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 18:18:36.372389   74753 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I0920 18:18:36.372412   74753 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I0920 18:18:36.372457   74753 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I0920 18:18:36.372580   74753 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I0920 18:18:36.374496   74753 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0920 18:18:36.374538   74753 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I0920 18:18:36.374638   74753 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I0920 18:18:36.410860   74753 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0920 18:18:36.410961   74753 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0920 18:18:36.738003   74753 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:18:32.749877   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:33.249391   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:33.749510   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:34.250003   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:34.749967   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:35.249953   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:35.749992   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:36.250161   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:36.750339   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:37.249600   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:34.192782   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:36.692485   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:38.692626   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:39.024337   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:41.025364   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:38.480788   74753 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.108302901s)
	I0920 18:18:38.480819   74753 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I0920 18:18:38.480839   74753 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I0920 18:18:38.480869   74753 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1: (2.069867717s)
	I0920 18:18:38.480893   74753 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I0920 18:18:38.480896   74753 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I0920 18:18:38.480922   74753 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.742864605s)
	I0920 18:18:38.480970   74753 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0920 18:18:38.480993   74753 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:18:38.481031   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:18:40.340748   74753 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (1.85982838s)
	I0920 18:18:40.340782   74753 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I0920 18:18:40.340790   74753 ssh_runner.go:235] Completed: which crictl: (1.859743083s)
	I0920 18:18:40.340812   74753 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0920 18:18:40.340850   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:18:40.340869   74753 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0920 18:18:37.750269   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:38.249855   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:38.749845   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:39.249342   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:39.750306   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:40.250103   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:40.750346   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:41.250134   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:41.750136   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:42.249945   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:41.191300   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:43.193536   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:43.025860   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:45.527119   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:44.427309   74753 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (4.086436233s)
	I0920 18:18:44.427365   74753 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (4.086473186s)
	I0920 18:18:44.427395   74753 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0920 18:18:44.427400   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:18:44.427430   74753 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0920 18:18:44.427510   74753 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0920 18:18:46.393094   74753 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (1.965557326s)
	I0920 18:18:46.393133   74753 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I0920 18:18:46.393163   74753 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0920 18:18:46.393170   74753 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.965750119s)
	I0920 18:18:46.393216   74753 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0920 18:18:46.393241   74753 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:18:42.749980   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:43.249416   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:43.750087   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:44.249972   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:44.749649   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:45.250128   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:45.750346   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:46.249611   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:46.749566   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:47.249814   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:45.193713   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:47.691882   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:48.027047   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:50.527076   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:47.771559   74753 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (1.37829872s)
	I0920 18:18:47.771595   74753 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I0920 18:18:47.771607   74753 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.378351302s)
	I0920 18:18:47.771618   74753 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0920 18:18:47.771645   74753 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0920 18:18:47.771668   74753 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0920 18:18:47.771726   74753 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0920 18:18:49.544117   74753 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.772423068s)
	I0920 18:18:49.544147   74753 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I0920 18:18:49.544159   74753 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.772401848s)
	I0920 18:18:49.544187   74753 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0920 18:18:49.544199   74753 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0920 18:18:49.544275   74753 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0920 18:18:50.198691   74753 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19672-8777/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0920 18:18:50.198744   74753 cache_images.go:123] Successfully loaded all cached images
	I0920 18:18:50.198752   74753 cache_images.go:92] duration metric: took 14.66333409s to LoadCachedImages
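Each image that the hash check above reports missing from the runtime goes through the same cycle: remove the stale tag with crictl rmi, reuse (or copy) the cached tarball under /var/lib/minikube/images, and podman-load it into CRI-O's storage. A sketch of that loop for the images loaded here:

	# per-image flow behind LoadCachedImages, as recorded in the log above
	for img in coredns_v1.11.3 kube-proxy_v1.31.1 etcd_3.5.15-0 \
	           kube-apiserver_v1.31.1 kube-scheduler_v1.31.1 \
	           kube-controller-manager_v1.31.1 storage-provisioner_v5; do
	  # the stale tag was already removed with "crictl rmi"; the tarball already
	  # existed on the VM, so the scp step was skipped ("copy: skipping ... (exists)")
	  sudo podman load -i "/var/lib/minikube/images/${img}"
	done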
	I0920 18:18:50.198766   74753 kubeadm.go:934] updating node { 192.168.50.47 8443 v1.31.1 crio true true} ...
	I0920 18:18:50.198900   74753 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-956403 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.47
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-956403 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 18:18:50.198988   74753 ssh_runner.go:195] Run: crio config
	I0920 18:18:50.249876   74753 cni.go:84] Creating CNI manager for ""
	I0920 18:18:50.249901   74753 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 18:18:50.249915   74753 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 18:18:50.249942   74753 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.47 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-956403 NodeName:no-preload-956403 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.47"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.47 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 18:18:50.250150   74753 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.47
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-956403"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.47
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.47"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 18:18:50.250264   74753 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 18:18:50.262805   74753 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 18:18:50.262886   74753 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 18:18:50.272958   74753 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0920 18:18:50.290269   74753 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 18:18:50.306981   74753 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0920 18:18:50.324360   74753 ssh_runner.go:195] Run: grep 192.168.50.47	control-plane.minikube.internal$ /etc/hosts
	I0920 18:18:50.328382   74753 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.47	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 18:18:50.341021   74753 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:18:50.462105   74753 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:18:50.478780   74753 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/no-preload-956403 for IP: 192.168.50.47
	I0920 18:18:50.478804   74753 certs.go:194] generating shared ca certs ...
	I0920 18:18:50.478824   74753 certs.go:226] acquiring lock for ca certs: {Name:mkc7ef6c737c6bdc3fdd9dcff8f57029c020d8f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:18:50.479010   74753 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key
	I0920 18:18:50.479069   74753 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key
	I0920 18:18:50.479082   74753 certs.go:256] generating profile certs ...
	I0920 18:18:50.479188   74753 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/no-preload-956403/client.key
	I0920 18:18:50.479270   74753 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/no-preload-956403/apiserver.key.859cf278
	I0920 18:18:50.479335   74753 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/no-preload-956403/proxy-client.key
	I0920 18:18:50.479491   74753 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973.pem (1338 bytes)
	W0920 18:18:50.479534   74753 certs.go:480] ignoring /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973_empty.pem, impossibly tiny 0 bytes
	I0920 18:18:50.479549   74753 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 18:18:50.479596   74753 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem (1082 bytes)
	I0920 18:18:50.479633   74753 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem (1123 bytes)
	I0920 18:18:50.479668   74753 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem (1675 bytes)
	I0920 18:18:50.479771   74753 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem (1708 bytes)
	I0920 18:18:50.480696   74753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 18:18:50.514559   74753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 18:18:50.548790   74753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 18:18:50.581384   74753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0920 18:18:50.609138   74753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/no-preload-956403/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0920 18:18:50.641098   74753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/no-preload-956403/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 18:18:50.680479   74753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/no-preload-956403/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 18:18:50.705168   74753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/no-preload-956403/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0920 18:18:50.727603   74753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/ssl/certs/159732.pem --> /usr/share/ca-certificates/159732.pem (1708 bytes)
	I0920 18:18:50.750272   74753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 18:18:50.776117   74753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/certs/15973.pem --> /usr/share/ca-certificates/15973.pem (1338 bytes)
	I0920 18:18:50.799799   74753 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 18:18:50.816250   74753 ssh_runner.go:195] Run: openssl version
	I0920 18:18:50.821680   74753 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/159732.pem && ln -fs /usr/share/ca-certificates/159732.pem /etc/ssl/certs/159732.pem"
	I0920 18:18:50.832295   74753 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/159732.pem
	I0920 18:18:50.836833   74753 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 17:04 /usr/share/ca-certificates/159732.pem
	I0920 18:18:50.836896   74753 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/159732.pem
	I0920 18:18:50.842626   74753 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/159732.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 18:18:50.853297   74753 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 18:18:50.864400   74753 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:18:50.868951   74753 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:18:50.869011   74753 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:18:50.874547   74753 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 18:18:50.885615   74753 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15973.pem && ln -fs /usr/share/ca-certificates/15973.pem /etc/ssl/certs/15973.pem"
	I0920 18:18:50.896960   74753 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15973.pem
	I0920 18:18:50.901798   74753 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 17:04 /usr/share/ca-certificates/15973.pem
	I0920 18:18:50.901879   74753 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15973.pem
	I0920 18:18:50.907920   74753 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15973.pem /etc/ssl/certs/51391683.0"
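The three symlink steps above follow the standard OpenSSL CA layout: each CA certificate is copied into /usr/share/ca-certificates and then linked into /etc/ssl/certs under its subject hash (for example b5213941.0 for minikubeCA.pem), which is how TLS clients on the guest locate it. A minimal Go sketch of that hash-and-link step, shelling out to the same openssl invocation shown in the log (the helper name linkCACert is hypothetical, not minikube's actual code):

package main

import (
    "fmt"
    "os"
    "os/exec"
    "path/filepath"
    "strings"
)

// linkCACert computes the OpenSSL subject hash of a CA certificate and
// symlinks it into /etc/ssl/certs/<hash>.0, mirroring the commands in the log.
func linkCACert(certPath string) error {
    out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    if err != nil {
        return err
    }
    hash := strings.TrimSpace(string(out))
    link := filepath.Join("/etc/ssl/certs", hash+".0")
    // Equivalent of `ln -fs`: drop any stale link, then create a fresh one.
    _ = os.Remove(link)
    return os.Symlink(certPath, link)
}

func main() {
    if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
        fmt.Fprintln(os.Stderr, err)
    }
}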
	I0920 18:18:50.919345   74753 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 18:18:50.923671   74753 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 18:18:50.929701   74753 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 18:18:50.935649   74753 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 18:18:50.942162   74753 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 18:18:50.948012   74753 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 18:18:50.954097   74753 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
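Each openssl x509 -checkend 86400 call above asks whether the certificate will expire within the next 86400 seconds (24 hours); only if a check failed would the certificate be regenerated before the restart continues. A minimal Go equivalent using crypto/x509 instead of the openssl binary (certExpiresWithin is a hypothetical helper; the path is one of the files checked in the log):

package main

import (
    "crypto/x509"
    "encoding/pem"
    "fmt"
    "os"
    "time"
)

// certExpiresWithin reports whether the PEM-encoded certificate at path
// expires within the given duration, mirroring `openssl x509 -checkend`.
func certExpiresWithin(path string, d time.Duration) (bool, error) {
    data, err := os.ReadFile(path)
    if err != nil {
        return false, err
    }
    block, _ := pem.Decode(data)
    if block == nil {
        return false, fmt.Errorf("no PEM data in %s", path)
    }
    cert, err := x509.ParseCertificate(block.Bytes)
    if err != nil {
        return false, err
    }
    return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
    // Same 24h window as the -checkend 86400 calls in the log.
    expiring, err := certExpiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
    fmt.Println("expires within 24h:", expiring)
}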
	I0920 18:18:50.960442   74753 kubeadm.go:392] StartCluster: {Name:no-preload-956403 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-956403 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.47 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:18:50.960535   74753 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 18:18:50.960600   74753 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 18:18:50.998528   74753 cri.go:89] found id: ""
	I0920 18:18:50.998608   74753 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 18:18:51.008964   74753 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0920 18:18:51.008985   74753 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0920 18:18:51.009043   74753 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0920 18:18:51.018457   74753 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0920 18:18:51.019553   74753 kubeconfig.go:125] found "no-preload-956403" server: "https://192.168.50.47:8443"
	I0920 18:18:51.021712   74753 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0920 18:18:51.033439   74753 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.47
	I0920 18:18:51.033470   74753 kubeadm.go:1160] stopping kube-system containers ...
	I0920 18:18:51.033481   74753 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0920 18:18:51.033538   74753 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 18:18:51.070049   74753 cri.go:89] found id: ""
	I0920 18:18:51.070137   74753 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0920 18:18:51.087472   74753 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 18:18:51.098582   74753 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 18:18:51.098606   74753 kubeadm.go:157] found existing configuration files:
	
	I0920 18:18:51.098654   74753 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 18:18:51.107201   74753 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 18:18:51.107276   74753 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 18:18:51.116058   74753 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 18:18:51.124563   74753 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 18:18:51.124630   74753 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 18:18:51.134174   74753 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 18:18:51.142880   74753 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 18:18:51.142944   74753 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 18:18:51.152181   74753 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 18:18:51.161942   74753 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 18:18:51.162012   74753 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 18:18:51.171615   74753 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 18:18:51.180728   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:18:51.292140   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:18:52.019018   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:18:52.239327   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:18:52.317900   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:18:52.433910   74753 api_server.go:52] waiting for apiserver process to appear ...
	I0920 18:18:52.433991   74753 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:47.749487   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:48.249612   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:48.750324   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:49.250006   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:49.749667   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:50.249802   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:50.749597   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:51.250236   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:51.750203   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:52.250132   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:49.693075   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:52.192547   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:53.024979   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:55.025225   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:52.934956   74753 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:53.434681   74753 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:53.451797   74753 api_server.go:72] duration metric: took 1.017867426s to wait for apiserver process to appear ...
	I0920 18:18:53.451828   74753 api_server.go:88] waiting for apiserver healthz status ...
	I0920 18:18:53.451851   74753 api_server.go:253] Checking apiserver healthz at https://192.168.50.47:8443/healthz ...
	I0920 18:18:53.452366   74753 api_server.go:269] stopped: https://192.168.50.47:8443/healthz: Get "https://192.168.50.47:8443/healthz": dial tcp 192.168.50.47:8443: connect: connection refused
	I0920 18:18:53.952175   74753 api_server.go:253] Checking apiserver healthz at https://192.168.50.47:8443/healthz ...
	I0920 18:18:55.972801   74753 api_server.go:279] https://192.168.50.47:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 18:18:55.972835   74753 api_server.go:103] status: https://192.168.50.47:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 18:18:55.972854   74753 api_server.go:253] Checking apiserver healthz at https://192.168.50.47:8443/healthz ...
	I0920 18:18:56.018532   74753 api_server.go:279] https://192.168.50.47:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 18:18:56.018563   74753 api_server.go:103] status: https://192.168.50.47:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 18:18:56.452127   74753 api_server.go:253] Checking apiserver healthz at https://192.168.50.47:8443/healthz ...
	I0920 18:18:56.459752   74753 api_server.go:279] https://192.168.50.47:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 18:18:56.459796   74753 api_server.go:103] status: https://192.168.50.47:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 18:18:56.952284   74753 api_server.go:253] Checking apiserver healthz at https://192.168.50.47:8443/healthz ...
	I0920 18:18:56.959496   74753 api_server.go:279] https://192.168.50.47:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 18:18:56.959553   74753 api_server.go:103] status: https://192.168.50.47:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 18:18:57.452049   74753 api_server.go:253] Checking apiserver healthz at https://192.168.50.47:8443/healthz ...
	I0920 18:18:57.456586   74753 api_server.go:279] https://192.168.50.47:8443/healthz returned 200:
	ok
	I0920 18:18:57.463754   74753 api_server.go:141] control plane version: v1.31.1
	I0920 18:18:57.463785   74753 api_server.go:131] duration metric: took 4.011948835s to wait for apiserver health ...
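The healthz wait above tolerates a normal restart progression: connection refused while the apiserver process comes up, 403 because the anonymous probe is rejected until the RBAC bootstrap roles exist, 500 while post-start hooks such as rbac/bootstrap-roles are still completing, and finally 200 "ok". A rough sketch of that polling loop (waitForHealthz is a hypothetical helper, not minikube's actual api_server.go):

package main

import (
    "crypto/tls"
    "fmt"
    "net/http"
    "time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200
// or the deadline passes. Certificate verification is skipped here only
// because this anonymous probe cares about reachability, not identity.
func waitForHealthz(url string, timeout time.Duration) error {
    client := &http.Client{
        Timeout: 2 * time.Second,
        Transport: &http.Transport{
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
        },
    }
    deadline := time.Now().Add(timeout)
    for time.Now().Before(deadline) {
        resp, err := client.Get(url)
        if err == nil {
            resp.Body.Close()
            if resp.StatusCode == http.StatusOK {
                return nil // healthz returned 200 "ok"
            }
            // 403 or 500 here just means the apiserver is still bootstrapping.
        }
        time.Sleep(500 * time.Millisecond)
    }
    return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
}

func main() {
    if err := waitForHealthz("https://192.168.50.47:8443/healthz", 4*time.Minute); err != nil {
        fmt.Println(err)
    }
}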
	I0920 18:18:57.463795   74753 cni.go:84] Creating CNI manager for ""
	I0920 18:18:57.463804   74753 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 18:18:57.465916   74753 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 18:18:57.467416   74753 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 18:18:57.485487   74753 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0920 18:18:57.529426   74753 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 18:18:57.540046   74753 system_pods.go:59] 8 kube-system pods found
	I0920 18:18:57.540100   74753 system_pods.go:61] "coredns-7c65d6cfc9-j2t5h" [dd2636f1-3200-4f22-957c-046277c9be8c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0920 18:18:57.540113   74753 system_pods.go:61] "etcd-no-preload-956403" [daf84be0-e673-4685-a998-47ec64664484] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0920 18:18:57.540127   74753 system_pods.go:61] "kube-apiserver-no-preload-956403" [416217e6-3c91-49ec-bd69-812a49b4afd9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0920 18:18:57.540139   74753 system_pods.go:61] "kube-controller-manager-no-preload-956403" [30fae740-b073-4619-bca5-010ca90b2667] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0920 18:18:57.540148   74753 system_pods.go:61] "kube-proxy-sz4bm" [269600fb-ef65-4b17-8c07-76c79e35f5a8] Running
	I0920 18:18:57.540160   74753 system_pods.go:61] "kube-scheduler-no-preload-956403" [46432927-e506-4665-8825-c5f6a2ed0458] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0920 18:18:57.540174   74753 system_pods.go:61] "metrics-server-6867b74b74-tfsff" [599ba06a-6d4d-483b-b390-a3595a814757] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 18:18:57.540184   74753 system_pods.go:61] "storage-provisioner" [df661627-7d32-4467-9805-1ae65d4fa35c] Running
	I0920 18:18:57.540196   74753 system_pods.go:74] duration metric: took 10.749595ms to wait for pod list to return data ...
	I0920 18:18:57.540208   74753 node_conditions.go:102] verifying NodePressure condition ...
	I0920 18:18:57.544401   74753 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 18:18:57.544438   74753 node_conditions.go:123] node cpu capacity is 2
	I0920 18:18:57.544451   74753 node_conditions.go:105] duration metric: took 4.237618ms to run NodePressure ...
	I0920 18:18:57.544471   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:18:52.749468   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:53.249519   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:53.750056   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:54.250286   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:54.750240   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:55.249980   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:55.750192   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:56.249940   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:56.749289   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:57.249615   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:54.193519   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:56.693152   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:57.818736   74753 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0920 18:18:57.823058   74753 kubeadm.go:739] kubelet initialised
	I0920 18:18:57.823082   74753 kubeadm.go:740] duration metric: took 4.316691ms waiting for restarted kubelet to initialise ...
	I0920 18:18:57.823091   74753 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:18:57.827594   74753 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-j2t5h" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:57.833207   74753 pod_ready.go:98] node "no-preload-956403" hosting pod "coredns-7c65d6cfc9-j2t5h" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956403" has status "Ready":"False"
	I0920 18:18:57.833240   74753 pod_ready.go:82] duration metric: took 5.620335ms for pod "coredns-7c65d6cfc9-j2t5h" in "kube-system" namespace to be "Ready" ...
	E0920 18:18:57.833252   74753 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-956403" hosting pod "coredns-7c65d6cfc9-j2t5h" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956403" has status "Ready":"False"
	I0920 18:18:57.833269   74753 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-956403" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:57.838687   74753 pod_ready.go:98] node "no-preload-956403" hosting pod "etcd-no-preload-956403" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956403" has status "Ready":"False"
	I0920 18:18:57.838723   74753 pod_ready.go:82] duration metric: took 5.441795ms for pod "etcd-no-preload-956403" in "kube-system" namespace to be "Ready" ...
	E0920 18:18:57.838737   74753 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-956403" hosting pod "etcd-no-preload-956403" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956403" has status "Ready":"False"
	I0920 18:18:57.838747   74753 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-956403" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:57.848481   74753 pod_ready.go:98] node "no-preload-956403" hosting pod "kube-apiserver-no-preload-956403" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956403" has status "Ready":"False"
	I0920 18:18:57.848510   74753 pod_ready.go:82] duration metric: took 9.754053ms for pod "kube-apiserver-no-preload-956403" in "kube-system" namespace to be "Ready" ...
	E0920 18:18:57.848520   74753 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-956403" hosting pod "kube-apiserver-no-preload-956403" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956403" has status "Ready":"False"
	I0920 18:18:57.848530   74753 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-956403" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:57.934356   74753 pod_ready.go:98] node "no-preload-956403" hosting pod "kube-controller-manager-no-preload-956403" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956403" has status "Ready":"False"
	I0920 18:18:57.934383   74753 pod_ready.go:82] duration metric: took 85.842265ms for pod "kube-controller-manager-no-preload-956403" in "kube-system" namespace to be "Ready" ...
	E0920 18:18:57.934392   74753 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-956403" hosting pod "kube-controller-manager-no-preload-956403" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956403" has status "Ready":"False"
	I0920 18:18:57.934397   74753 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-sz4bm" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:58.332467   74753 pod_ready.go:98] node "no-preload-956403" hosting pod "kube-proxy-sz4bm" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956403" has status "Ready":"False"
	I0920 18:18:58.332495   74753 pod_ready.go:82] duration metric: took 398.088821ms for pod "kube-proxy-sz4bm" in "kube-system" namespace to be "Ready" ...
	E0920 18:18:58.332504   74753 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-956403" hosting pod "kube-proxy-sz4bm" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956403" has status "Ready":"False"
	I0920 18:18:58.332510   74753 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-956403" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:58.732836   74753 pod_ready.go:98] node "no-preload-956403" hosting pod "kube-scheduler-no-preload-956403" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956403" has status "Ready":"False"
	I0920 18:18:58.732859   74753 pod_ready.go:82] duration metric: took 400.343274ms for pod "kube-scheduler-no-preload-956403" in "kube-system" namespace to be "Ready" ...
	E0920 18:18:58.732868   74753 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-956403" hosting pod "kube-scheduler-no-preload-956403" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956403" has status "Ready":"False"
	I0920 18:18:58.732874   74753 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace to be "Ready" ...
	I0920 18:18:59.132465   74753 pod_ready.go:98] node "no-preload-956403" hosting pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956403" has status "Ready":"False"
	I0920 18:18:59.132489   74753 pod_ready.go:82] duration metric: took 399.606625ms for pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace to be "Ready" ...
	E0920 18:18:59.132503   74753 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-956403" hosting pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956403" has status "Ready":"False"
	I0920 18:18:59.132510   74753 pod_ready.go:39] duration metric: took 1.309409155s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
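The pod_ready waits above resolve each system pod in kube-system and check its Ready condition, short-circuiting with the "(skipping!)" errors while the hosting node itself still reports Ready:"False". A rough client-go sketch of that per-pod check (podIsReady is a hypothetical helper, not the harness's pod_ready.go; the kubeconfig path and pod name are taken from the log):

package main

import (
    "context"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

// podIsReady fetches a pod and reports whether its Ready condition is True.
func podIsReady(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
    pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    if err != nil {
        return false, err
    }
    for _, cond := range pod.Status.Conditions {
        if cond.Type == corev1.PodReady {
            return cond.Status == corev1.ConditionTrue, nil
        }
    }
    return false, nil
}

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19672-8777/kubeconfig")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    ready, err := podIsReady(context.Background(), cs, "kube-system", "coredns-7c65d6cfc9-j2t5h")
    fmt.Println(ready, err)
}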
	I0920 18:18:59.132526   74753 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 18:18:59.147163   74753 ops.go:34] apiserver oom_adj: -16
	I0920 18:18:59.147190   74753 kubeadm.go:597] duration metric: took 8.138198351s to restartPrimaryControlPlane
	I0920 18:18:59.147200   74753 kubeadm.go:394] duration metric: took 8.186768244s to StartCluster
	I0920 18:18:59.147214   74753 settings.go:142] acquiring lock: {Name:mk2a13c58cdc0faf8cddca5d6716175d45db9bfd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:18:59.147287   74753 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19672-8777/kubeconfig
	I0920 18:18:59.149036   74753 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/kubeconfig: {Name:mkf32a4c736808e023459b2f0e40188618a38db1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:18:59.149303   74753 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.47 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 18:18:59.149381   74753 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0920 18:18:59.149479   74753 addons.go:69] Setting storage-provisioner=true in profile "no-preload-956403"
	I0920 18:18:59.149503   74753 addons.go:234] Setting addon storage-provisioner=true in "no-preload-956403"
	I0920 18:18:59.149501   74753 addons.go:69] Setting default-storageclass=true in profile "no-preload-956403"
	W0920 18:18:59.149515   74753 addons.go:243] addon storage-provisioner should already be in state true
	I0920 18:18:59.149526   74753 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-956403"
	I0920 18:18:59.149520   74753 addons.go:69] Setting metrics-server=true in profile "no-preload-956403"
	I0920 18:18:59.149546   74753 host.go:66] Checking if "no-preload-956403" exists ...
	I0920 18:18:59.149558   74753 addons.go:234] Setting addon metrics-server=true in "no-preload-956403"
	W0920 18:18:59.149575   74753 addons.go:243] addon metrics-server should already be in state true
	I0920 18:18:59.149588   74753 config.go:182] Loaded profile config "no-preload-956403": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:18:59.149610   74753 host.go:66] Checking if "no-preload-956403" exists ...
	I0920 18:18:59.149847   74753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:18:59.149889   74753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:18:59.149944   74753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:18:59.149954   74753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:18:59.150003   74753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:18:59.150075   74753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:18:59.151276   74753 out.go:177] * Verifying Kubernetes components...
	I0920 18:18:59.152577   74753 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:18:59.178294   74753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42565
	I0920 18:18:59.178917   74753 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:18:59.179414   74753 main.go:141] libmachine: Using API Version  1
	I0920 18:18:59.179435   74753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:18:59.179869   74753 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:18:59.180059   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetState
	I0920 18:18:59.183556   74753 addons.go:234] Setting addon default-storageclass=true in "no-preload-956403"
	W0920 18:18:59.183575   74753 addons.go:243] addon default-storageclass should already be in state true
	I0920 18:18:59.183606   74753 host.go:66] Checking if "no-preload-956403" exists ...
	I0920 18:18:59.183970   74753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:18:59.184011   74753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:18:59.195338   74753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43759
	I0920 18:18:59.195739   74753 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:18:59.196290   74753 main.go:141] libmachine: Using API Version  1
	I0920 18:18:59.196316   74753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:18:59.196742   74753 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:18:59.197211   74753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33531
	I0920 18:18:59.197327   74753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:18:59.197375   74753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:18:59.197552   74753 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:18:59.198055   74753 main.go:141] libmachine: Using API Version  1
	I0920 18:18:59.198080   74753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:18:59.198420   74753 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:18:59.198997   74753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:18:59.199037   74753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:18:59.203164   74753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40055
	I0920 18:18:59.203700   74753 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:18:59.204320   74753 main.go:141] libmachine: Using API Version  1
	I0920 18:18:59.204341   74753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:18:59.204745   74753 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:18:59.205399   74753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:18:59.205440   74753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:18:59.215457   74753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37991
	I0920 18:18:59.215953   74753 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:18:59.216312   74753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35461
	I0920 18:18:59.216521   74753 main.go:141] libmachine: Using API Version  1
	I0920 18:18:59.216723   74753 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:18:59.216826   74753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:18:59.217221   74753 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:18:59.217403   74753 main.go:141] libmachine: Using API Version  1
	I0920 18:18:59.217414   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetState
	I0920 18:18:59.217452   74753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:18:59.217942   74753 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:18:59.218403   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetState
	I0920 18:18:59.219340   74753 main.go:141] libmachine: (no-preload-956403) Calling .DriverName
	I0920 18:18:59.220536   74753 main.go:141] libmachine: (no-preload-956403) Calling .DriverName
	I0920 18:18:59.221934   74753 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0920 18:18:59.222788   74753 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:18:59.223744   74753 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0920 18:18:59.223766   74753 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0920 18:18:59.223788   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHHostname
	I0920 18:18:59.224567   74753 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 18:18:59.224588   74753 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 18:18:59.224607   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHHostname
	I0920 18:18:59.225333   74753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45207
	I0920 18:18:59.225992   74753 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:18:59.226975   74753 main.go:141] libmachine: Using API Version  1
	I0920 18:18:59.226991   74753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:18:59.227472   74753 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:18:59.227788   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetState
	I0920 18:18:59.227818   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:59.228234   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:59.228282   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:59.228441   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHPort
	I0920 18:18:59.228636   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHKeyPath
	I0920 18:18:59.228671   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:59.228821   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHUsername
	I0920 18:18:59.228960   74753 sshutil.go:53] new ssh client: &{IP:192.168.50.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/no-preload-956403/id_rsa Username:docker}
	I0920 18:18:59.229280   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:59.229297   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:59.229478   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHPort
	I0920 18:18:59.229651   74753 main.go:141] libmachine: (no-preload-956403) Calling .DriverName
	I0920 18:18:59.229807   74753 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 18:18:59.229817   74753 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 18:18:59.229843   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHHostname
	I0920 18:18:59.230195   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHKeyPath
	I0920 18:18:59.230729   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHUsername
	I0920 18:18:59.230899   74753 sshutil.go:53] new ssh client: &{IP:192.168.50.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/no-preload-956403/id_rsa Username:docker}
	I0920 18:18:59.232419   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:59.232844   74753 main.go:141] libmachine: (no-preload-956403) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:13:30", ip: ""} in network mk-no-preload-956403: {Iface:virbr2 ExpiryTime:2024-09-20 19:18:27 +0000 UTC Type:0 Mac:52:54:00:b6:13:30 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:no-preload-956403 Clientid:01:52:54:00:b6:13:30}
	I0920 18:18:59.232877   74753 main.go:141] libmachine: (no-preload-956403) DBG | domain no-preload-956403 has defined IP address 192.168.50.47 and MAC address 52:54:00:b6:13:30 in network mk-no-preload-956403
	I0920 18:18:59.233020   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHPort
	I0920 18:18:59.233201   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHKeyPath
	I0920 18:18:59.233311   74753 main.go:141] libmachine: (no-preload-956403) Calling .GetSSHUsername
	I0920 18:18:59.233424   74753 sshutil.go:53] new ssh client: &{IP:192.168.50.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/no-preload-956403/id_rsa Username:docker}
	I0920 18:18:59.373284   74753 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:18:59.391114   74753 node_ready.go:35] waiting up to 6m0s for node "no-preload-956403" to be "Ready" ...
	I0920 18:18:59.475607   74753 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 18:18:59.530301   74753 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0920 18:18:59.530320   74753 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0920 18:18:59.530804   74753 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 18:18:59.607246   74753 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0920 18:18:59.607272   74753 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0920 18:18:59.685234   74753 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 18:18:59.685273   74753 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0920 18:18:59.753902   74753 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 18:19:00.029605   74753 main.go:141] libmachine: Making call to close driver server
	I0920 18:19:00.029630   74753 main.go:141] libmachine: (no-preload-956403) Calling .Close
	I0920 18:19:00.029968   74753 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:19:00.030016   74753 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:19:00.030032   74753 main.go:141] libmachine: Making call to close driver server
	I0920 18:19:00.030039   74753 main.go:141] libmachine: (no-preload-956403) Calling .Close
	I0920 18:19:00.030285   74753 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:19:00.030299   74753 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:19:00.049472   74753 main.go:141] libmachine: Making call to close driver server
	I0920 18:19:00.049500   74753 main.go:141] libmachine: (no-preload-956403) Calling .Close
	I0920 18:19:00.049765   74753 main.go:141] libmachine: (no-preload-956403) DBG | Closing plugin on server side
	I0920 18:19:00.049781   74753 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:19:00.049797   74753 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:19:00.790635   74753 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.259791892s)
	I0920 18:19:00.790694   74753 main.go:141] libmachine: Making call to close driver server
	I0920 18:19:00.790703   74753 main.go:141] libmachine: (no-preload-956403) Calling .Close
	I0920 18:19:00.791004   74753 main.go:141] libmachine: (no-preload-956403) DBG | Closing plugin on server side
	I0920 18:19:00.791007   74753 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:19:00.791075   74753 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:19:00.791100   74753 main.go:141] libmachine: Making call to close driver server
	I0920 18:19:00.791110   74753 main.go:141] libmachine: (no-preload-956403) Calling .Close
	I0920 18:19:00.791339   74753 main.go:141] libmachine: (no-preload-956403) DBG | Closing plugin on server side
	I0920 18:19:00.791347   74753 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:19:00.791358   74753 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:19:00.829250   74753 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.075301587s)
	I0920 18:19:00.829297   74753 main.go:141] libmachine: Making call to close driver server
	I0920 18:19:00.829311   74753 main.go:141] libmachine: (no-preload-956403) Calling .Close
	I0920 18:19:00.829607   74753 main.go:141] libmachine: (no-preload-956403) DBG | Closing plugin on server side
	I0920 18:19:00.829612   74753 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:19:00.829632   74753 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:19:00.829641   74753 main.go:141] libmachine: Making call to close driver server
	I0920 18:19:00.829647   74753 main.go:141] libmachine: (no-preload-956403) Calling .Close
	I0920 18:19:00.829905   74753 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:19:00.829927   74753 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:19:00.829926   74753 main.go:141] libmachine: (no-preload-956403) DBG | Closing plugin on server side
	I0920 18:19:00.829938   74753 addons.go:475] Verifying addon metrics-server=true in "no-preload-956403"
	I0920 18:19:00.831897   74753 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0920 18:18:57.524979   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:18:59.526500   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:02.024579   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:00.833093   74753 addons.go:510] duration metric: took 1.683722999s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0920 18:19:01.394507   74753 node_ready.go:53] node "no-preload-956403" has status "Ready":"False"
	I0920 18:18:57.750113   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:58.250100   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:58.750133   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:59.249427   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:59.749350   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:00.250267   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:00.749723   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:01.249549   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:01.749698   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:02.250043   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:59.195097   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:01.692391   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:03.692510   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:04.024713   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:06.525591   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:03.395035   74753 node_ready.go:53] node "no-preload-956403" has status "Ready":"False"
	I0920 18:19:05.895477   74753 node_ready.go:53] node "no-preload-956403" has status "Ready":"False"
	I0920 18:19:06.395177   74753 node_ready.go:49] node "no-preload-956403" has status "Ready":"True"
	I0920 18:19:06.395211   74753 node_ready.go:38] duration metric: took 7.0040677s for node "no-preload-956403" to be "Ready" ...
	I0920 18:19:06.395224   74753 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:19:06.400929   74753 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-j2t5h" in "kube-system" namespace to be "Ready" ...
	I0920 18:19:06.406052   74753 pod_ready.go:93] pod "coredns-7c65d6cfc9-j2t5h" in "kube-system" namespace has status "Ready":"True"
	I0920 18:19:06.406076   74753 pod_ready.go:82] duration metric: took 5.118178ms for pod "coredns-7c65d6cfc9-j2t5h" in "kube-system" namespace to be "Ready" ...
	I0920 18:19:06.406088   74753 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-956403" in "kube-system" namespace to be "Ready" ...
	I0920 18:19:07.412930   74753 pod_ready.go:93] pod "etcd-no-preload-956403" in "kube-system" namespace has status "Ready":"True"
	I0920 18:19:07.412946   74753 pod_ready.go:82] duration metric: took 1.006851075s for pod "etcd-no-preload-956403" in "kube-system" namespace to be "Ready" ...
	I0920 18:19:07.412955   74753 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-956403" in "kube-system" namespace to be "Ready" ...
	I0920 18:19:07.420312   74753 pod_ready.go:93] pod "kube-apiserver-no-preload-956403" in "kube-system" namespace has status "Ready":"True"
	I0920 18:19:07.420342   74753 pod_ready.go:82] duration metric: took 7.380815ms for pod "kube-apiserver-no-preload-956403" in "kube-system" namespace to be "Ready" ...
	I0920 18:19:07.420354   74753 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-956403" in "kube-system" namespace to be "Ready" ...
	I0920 18:19:02.750082   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:03.249861   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:03.749400   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:04.250213   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:04.749806   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:05.249569   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:05.750115   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:06.250182   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:06.750188   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:07.250041   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:06.191328   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:08.192222   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:09.024510   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:11.025023   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:08.426149   74753 pod_ready.go:93] pod "kube-controller-manager-no-preload-956403" in "kube-system" namespace has status "Ready":"True"
	I0920 18:19:08.426173   74753 pod_ready.go:82] duration metric: took 1.005811855s for pod "kube-controller-manager-no-preload-956403" in "kube-system" namespace to be "Ready" ...
	I0920 18:19:08.426182   74753 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-sz4bm" in "kube-system" namespace to be "Ready" ...
	I0920 18:19:08.431446   74753 pod_ready.go:93] pod "kube-proxy-sz4bm" in "kube-system" namespace has status "Ready":"True"
	I0920 18:19:08.431474   74753 pod_ready.go:82] duration metric: took 5.284488ms for pod "kube-proxy-sz4bm" in "kube-system" namespace to be "Ready" ...
	I0920 18:19:08.431486   74753 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-956403" in "kube-system" namespace to be "Ready" ...
	I0920 18:19:08.794973   74753 pod_ready.go:93] pod "kube-scheduler-no-preload-956403" in "kube-system" namespace has status "Ready":"True"
	I0920 18:19:08.794997   74753 pod_ready.go:82] duration metric: took 363.504269ms for pod "kube-scheduler-no-preload-956403" in "kube-system" namespace to be "Ready" ...
	I0920 18:19:08.795005   74753 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace to be "Ready" ...
	I0920 18:19:10.802181   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:07.749479   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:08.249641   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:08.749637   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:09.249803   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:09.749534   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:10.249365   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:10.749366   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:11.250045   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:11.750298   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:12.249766   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:10.192631   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:12.692470   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:13.525217   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:16.025516   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:13.300755   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:15.301345   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:12.749447   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:13.249356   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:13.749514   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:14.249942   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:14.749988   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:15.249405   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:15.749620   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:16.249962   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:16.750325   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:17.250198   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:15.191379   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:17.692235   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:18.525176   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:21.023910   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:17.801315   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:20.302369   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:22.302578   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:17.749509   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:18.250076   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:18.749518   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:19.249440   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:19.750156   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:20.249686   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:20.750315   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:21.249755   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:21.749370   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:22.249767   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:20.192027   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:22.192925   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:23.024677   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:25.025216   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:24.801558   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:26.802796   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:22.749289   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:23.249386   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:23.749870   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:24.249424   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:24.750349   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:25.249582   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:19:25.249657   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:19:25.287604   75577 cri.go:89] found id: ""
	I0920 18:19:25.287633   75577 logs.go:276] 0 containers: []
	W0920 18:19:25.287641   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:19:25.287647   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:19:25.287692   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:19:25.322093   75577 cri.go:89] found id: ""
	I0920 18:19:25.322127   75577 logs.go:276] 0 containers: []
	W0920 18:19:25.322137   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:19:25.322144   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:19:25.322194   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:19:25.354812   75577 cri.go:89] found id: ""
	I0920 18:19:25.354840   75577 logs.go:276] 0 containers: []
	W0920 18:19:25.354851   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:19:25.354863   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:19:25.354928   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:19:25.387777   75577 cri.go:89] found id: ""
	I0920 18:19:25.387804   75577 logs.go:276] 0 containers: []
	W0920 18:19:25.387813   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:19:25.387819   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:19:25.387867   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:19:25.420325   75577 cri.go:89] found id: ""
	I0920 18:19:25.420354   75577 logs.go:276] 0 containers: []
	W0920 18:19:25.420364   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:19:25.420372   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:19:25.420437   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:19:25.454824   75577 cri.go:89] found id: ""
	I0920 18:19:25.454853   75577 logs.go:276] 0 containers: []
	W0920 18:19:25.454864   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:19:25.454871   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:19:25.454933   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:19:25.493520   75577 cri.go:89] found id: ""
	I0920 18:19:25.493548   75577 logs.go:276] 0 containers: []
	W0920 18:19:25.493559   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:19:25.493566   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:19:25.493639   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:19:25.528795   75577 cri.go:89] found id: ""
	I0920 18:19:25.528820   75577 logs.go:276] 0 containers: []
	W0920 18:19:25.528828   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:19:25.528836   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:19:25.528847   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:19:25.571411   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:19:25.571442   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:19:25.625648   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:19:25.625693   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:19:25.639891   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:19:25.639920   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:19:25.769263   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:19:25.769289   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:19:25.769303   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:19:24.691757   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:26.695197   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:27.524327   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:29.525192   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:32.024450   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:29.301400   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:31.301988   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:28.346894   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:28.359891   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:19:28.359967   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:19:28.396117   75577 cri.go:89] found id: ""
	I0920 18:19:28.396144   75577 logs.go:276] 0 containers: []
	W0920 18:19:28.396152   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:19:28.396158   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:19:28.396206   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:19:28.447888   75577 cri.go:89] found id: ""
	I0920 18:19:28.447923   75577 logs.go:276] 0 containers: []
	W0920 18:19:28.447933   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:19:28.447941   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:19:28.447998   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:19:28.485018   75577 cri.go:89] found id: ""
	I0920 18:19:28.485046   75577 logs.go:276] 0 containers: []
	W0920 18:19:28.485054   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:19:28.485060   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:19:28.485108   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:19:28.520904   75577 cri.go:89] found id: ""
	I0920 18:19:28.520934   75577 logs.go:276] 0 containers: []
	W0920 18:19:28.520942   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:19:28.520948   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:19:28.520996   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:19:28.561375   75577 cri.go:89] found id: ""
	I0920 18:19:28.561407   75577 logs.go:276] 0 containers: []
	W0920 18:19:28.561415   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:19:28.561421   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:19:28.561467   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:19:28.597643   75577 cri.go:89] found id: ""
	I0920 18:19:28.597671   75577 logs.go:276] 0 containers: []
	W0920 18:19:28.597679   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:19:28.597685   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:19:28.597747   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:19:28.632885   75577 cri.go:89] found id: ""
	I0920 18:19:28.632912   75577 logs.go:276] 0 containers: []
	W0920 18:19:28.632921   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:19:28.632926   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:19:28.632973   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:19:28.670639   75577 cri.go:89] found id: ""
	I0920 18:19:28.670665   75577 logs.go:276] 0 containers: []
	W0920 18:19:28.670675   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:19:28.670699   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:19:28.670714   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:19:28.749623   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:19:28.749659   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:19:28.790808   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:19:28.790831   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:19:28.842027   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:19:28.842070   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:19:28.855927   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:19:28.855954   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:19:28.935807   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:19:31.436321   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:31.450159   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:19:31.450221   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:19:31.485444   75577 cri.go:89] found id: ""
	I0920 18:19:31.485483   75577 logs.go:276] 0 containers: []
	W0920 18:19:31.485494   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:19:31.485502   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:19:31.485562   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:19:31.520888   75577 cri.go:89] found id: ""
	I0920 18:19:31.520918   75577 logs.go:276] 0 containers: []
	W0920 18:19:31.520927   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:19:31.520941   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:19:31.521007   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:19:31.555001   75577 cri.go:89] found id: ""
	I0920 18:19:31.555029   75577 logs.go:276] 0 containers: []
	W0920 18:19:31.555040   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:19:31.555047   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:19:31.555111   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:19:31.588766   75577 cri.go:89] found id: ""
	I0920 18:19:31.588794   75577 logs.go:276] 0 containers: []
	W0920 18:19:31.588802   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:19:31.588808   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:19:31.588872   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:19:31.622006   75577 cri.go:89] found id: ""
	I0920 18:19:31.622037   75577 logs.go:276] 0 containers: []
	W0920 18:19:31.622048   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:19:31.622056   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:19:31.622110   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:19:31.659475   75577 cri.go:89] found id: ""
	I0920 18:19:31.659508   75577 logs.go:276] 0 containers: []
	W0920 18:19:31.659519   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:19:31.659527   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:19:31.659589   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:19:31.695380   75577 cri.go:89] found id: ""
	I0920 18:19:31.695415   75577 logs.go:276] 0 containers: []
	W0920 18:19:31.695426   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:19:31.695436   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:19:31.695521   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:19:31.729507   75577 cri.go:89] found id: ""
	I0920 18:19:31.729540   75577 logs.go:276] 0 containers: []
	W0920 18:19:31.729550   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:19:31.729561   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:19:31.729574   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:19:31.781857   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:19:31.781908   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:19:31.795325   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:19:31.795356   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:19:31.868684   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:19:31.868708   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:19:31.868722   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:19:31.945334   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:19:31.945371   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:19:29.191780   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:31.192712   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:33.693996   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:34.024591   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:36.024999   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:33.801443   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:36.300715   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:34.485723   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:34.499039   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:19:34.499106   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:19:34.534664   75577 cri.go:89] found id: ""
	I0920 18:19:34.534697   75577 logs.go:276] 0 containers: []
	W0920 18:19:34.534709   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:19:34.534717   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:19:34.534777   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:19:34.567515   75577 cri.go:89] found id: ""
	I0920 18:19:34.567545   75577 logs.go:276] 0 containers: []
	W0920 18:19:34.567556   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:19:34.567564   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:19:34.567624   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:19:34.601653   75577 cri.go:89] found id: ""
	I0920 18:19:34.601693   75577 logs.go:276] 0 containers: []
	W0920 18:19:34.601704   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:19:34.601712   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:19:34.601775   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:19:34.635238   75577 cri.go:89] found id: ""
	I0920 18:19:34.635271   75577 logs.go:276] 0 containers: []
	W0920 18:19:34.635282   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:19:34.635291   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:19:34.635361   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:19:34.669665   75577 cri.go:89] found id: ""
	I0920 18:19:34.669689   75577 logs.go:276] 0 containers: []
	W0920 18:19:34.669697   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:19:34.669703   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:19:34.669751   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:19:34.705984   75577 cri.go:89] found id: ""
	I0920 18:19:34.706012   75577 logs.go:276] 0 containers: []
	W0920 18:19:34.706022   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:19:34.706032   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:19:34.706110   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:19:34.740052   75577 cri.go:89] found id: ""
	I0920 18:19:34.740079   75577 logs.go:276] 0 containers: []
	W0920 18:19:34.740087   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:19:34.740092   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:19:34.740139   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:19:34.774930   75577 cri.go:89] found id: ""
	I0920 18:19:34.774962   75577 logs.go:276] 0 containers: []
	W0920 18:19:34.774973   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:19:34.774984   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:19:34.775081   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:19:34.791674   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:19:34.791714   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:19:34.894988   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:19:34.895019   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:19:34.895035   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:19:34.973785   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:19:34.973823   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:19:35.010928   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:19:35.010964   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:19:37.561543   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:37.574865   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:19:37.574925   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:19:37.609145   75577 cri.go:89] found id: ""
	I0920 18:19:37.609170   75577 logs.go:276] 0 containers: []
	W0920 18:19:37.609178   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:19:37.609183   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:19:37.609247   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:19:37.645080   75577 cri.go:89] found id: ""
	I0920 18:19:37.645107   75577 logs.go:276] 0 containers: []
	W0920 18:19:37.645115   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:19:37.645121   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:19:37.645168   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:19:37.680060   75577 cri.go:89] found id: ""
	I0920 18:19:37.680094   75577 logs.go:276] 0 containers: []
	W0920 18:19:37.680105   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:19:37.680113   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:19:37.680173   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:19:37.713586   75577 cri.go:89] found id: ""
	I0920 18:19:37.713618   75577 logs.go:276] 0 containers: []
	W0920 18:19:37.713629   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:19:37.713636   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:19:37.713694   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:19:36.192483   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:38.692017   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:38.025974   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:40.524671   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:38.302403   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:40.802748   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:37.750249   75577 cri.go:89] found id: ""
	I0920 18:19:37.750274   75577 logs.go:276] 0 containers: []
	W0920 18:19:37.750282   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:19:37.750289   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:19:37.750351   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:19:37.785613   75577 cri.go:89] found id: ""
	I0920 18:19:37.785642   75577 logs.go:276] 0 containers: []
	W0920 18:19:37.785650   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:19:37.785656   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:19:37.785705   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:19:37.824240   75577 cri.go:89] found id: ""
	I0920 18:19:37.824267   75577 logs.go:276] 0 containers: []
	W0920 18:19:37.824278   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:19:37.824286   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:19:37.824348   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:19:37.858522   75577 cri.go:89] found id: ""
	I0920 18:19:37.858547   75577 logs.go:276] 0 containers: []
	W0920 18:19:37.858556   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:19:37.858564   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:19:37.858575   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:19:37.939852   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:19:37.939891   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:19:37.981029   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:19:37.981062   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:19:38.030606   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:19:38.030633   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:19:38.043914   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:19:38.043953   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:19:38.124846   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:19:40.625196   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:40.640638   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:19:40.640708   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:19:40.687107   75577 cri.go:89] found id: ""
	I0920 18:19:40.687131   75577 logs.go:276] 0 containers: []
	W0920 18:19:40.687140   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:19:40.687148   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:19:40.687206   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:19:40.725727   75577 cri.go:89] found id: ""
	I0920 18:19:40.725858   75577 logs.go:276] 0 containers: []
	W0920 18:19:40.725875   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:19:40.725889   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:19:40.725967   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:19:40.762965   75577 cri.go:89] found id: ""
	I0920 18:19:40.762988   75577 logs.go:276] 0 containers: []
	W0920 18:19:40.762996   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:19:40.763002   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:19:40.763049   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:19:40.797387   75577 cri.go:89] found id: ""
	I0920 18:19:40.797415   75577 logs.go:276] 0 containers: []
	W0920 18:19:40.797427   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:19:40.797439   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:19:40.797498   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:19:40.834482   75577 cri.go:89] found id: ""
	I0920 18:19:40.834513   75577 logs.go:276] 0 containers: []
	W0920 18:19:40.834521   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:19:40.834528   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:19:40.834578   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:19:40.870062   75577 cri.go:89] found id: ""
	I0920 18:19:40.870090   75577 logs.go:276] 0 containers: []
	W0920 18:19:40.870099   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:19:40.870106   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:19:40.870164   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:19:40.906613   75577 cri.go:89] found id: ""
	I0920 18:19:40.906642   75577 logs.go:276] 0 containers: []
	W0920 18:19:40.906653   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:19:40.906662   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:19:40.906721   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:19:40.953033   75577 cri.go:89] found id: ""
	I0920 18:19:40.953056   75577 logs.go:276] 0 containers: []
	W0920 18:19:40.953065   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:19:40.953073   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:19:40.953083   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:19:40.998490   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:19:40.998523   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:19:41.051488   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:19:41.051525   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:19:41.067908   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:19:41.067937   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:19:41.144854   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:19:41.144878   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:19:41.144890   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:19:40.692456   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:43.192532   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:43.024238   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:45.024998   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:47.025427   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:43.302334   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:45.801415   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:43.723613   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:43.736857   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:19:43.736924   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:19:43.772588   75577 cri.go:89] found id: ""
	I0920 18:19:43.772624   75577 logs.go:276] 0 containers: []
	W0920 18:19:43.772635   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:19:43.772643   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:19:43.772712   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:19:43.808580   75577 cri.go:89] found id: ""
	I0920 18:19:43.808611   75577 logs.go:276] 0 containers: []
	W0920 18:19:43.808622   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:19:43.808629   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:19:43.808695   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:19:43.847167   75577 cri.go:89] found id: ""
	I0920 18:19:43.847207   75577 logs.go:276] 0 containers: []
	W0920 18:19:43.847218   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:19:43.847243   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:19:43.847302   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:19:43.883614   75577 cri.go:89] found id: ""
	I0920 18:19:43.883646   75577 logs.go:276] 0 containers: []
	W0920 18:19:43.883658   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:19:43.883667   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:19:43.883738   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:19:43.917602   75577 cri.go:89] found id: ""
	I0920 18:19:43.917627   75577 logs.go:276] 0 containers: []
	W0920 18:19:43.917635   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:19:43.917641   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:19:43.917694   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:19:43.953232   75577 cri.go:89] found id: ""
	I0920 18:19:43.953259   75577 logs.go:276] 0 containers: []
	W0920 18:19:43.953268   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:19:43.953273   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:19:43.953325   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:19:43.991204   75577 cri.go:89] found id: ""
	I0920 18:19:43.991234   75577 logs.go:276] 0 containers: []
	W0920 18:19:43.991246   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:19:43.991253   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:19:43.991334   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:19:44.028117   75577 cri.go:89] found id: ""
	I0920 18:19:44.028140   75577 logs.go:276] 0 containers: []
	W0920 18:19:44.028147   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:19:44.028154   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:19:44.028164   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:19:44.068175   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:19:44.068203   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:19:44.119953   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:19:44.119993   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:19:44.134127   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:19:44.134154   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:19:44.211563   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:19:44.211592   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:19:44.211604   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:19:46.787328   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:46.803429   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:19:46.803516   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:19:46.844813   75577 cri.go:89] found id: ""
	I0920 18:19:46.844839   75577 logs.go:276] 0 containers: []
	W0920 18:19:46.844850   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:19:46.844856   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:19:46.844914   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:19:46.878441   75577 cri.go:89] found id: ""
	I0920 18:19:46.878483   75577 logs.go:276] 0 containers: []
	W0920 18:19:46.878497   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:19:46.878506   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:19:46.878580   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:19:46.913933   75577 cri.go:89] found id: ""
	I0920 18:19:46.913976   75577 logs.go:276] 0 containers: []
	W0920 18:19:46.913986   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:19:46.913993   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:19:46.914066   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:19:46.948577   75577 cri.go:89] found id: ""
	I0920 18:19:46.948609   75577 logs.go:276] 0 containers: []
	W0920 18:19:46.948618   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:19:46.948625   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:19:46.948689   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:19:46.982742   75577 cri.go:89] found id: ""
	I0920 18:19:46.982770   75577 logs.go:276] 0 containers: []
	W0920 18:19:46.982778   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:19:46.982785   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:19:46.982836   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:19:47.017075   75577 cri.go:89] found id: ""
	I0920 18:19:47.017107   75577 logs.go:276] 0 containers: []
	W0920 18:19:47.017120   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:19:47.017128   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:19:47.017190   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:19:47.055494   75577 cri.go:89] found id: ""
	I0920 18:19:47.055520   75577 logs.go:276] 0 containers: []
	W0920 18:19:47.055528   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:19:47.055534   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:19:47.055586   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:19:47.091966   75577 cri.go:89] found id: ""
	I0920 18:19:47.091998   75577 logs.go:276] 0 containers: []
	W0920 18:19:47.092006   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:19:47.092019   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:19:47.092035   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:19:47.142916   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:19:47.142955   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:19:47.158471   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:19:47.158502   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:19:47.241530   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:19:47.241553   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:19:47.241567   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:19:47.316958   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:19:47.316992   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:19:45.192724   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:47.692046   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:49.524915   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:52.026403   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:48.302289   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:50.303276   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:49.854403   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:49.867317   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:19:49.867431   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:19:49.903627   75577 cri.go:89] found id: ""
	I0920 18:19:49.903661   75577 logs.go:276] 0 containers: []
	W0920 18:19:49.903674   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:19:49.903682   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:19:49.903745   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:19:49.936857   75577 cri.go:89] found id: ""
	I0920 18:19:49.936892   75577 logs.go:276] 0 containers: []
	W0920 18:19:49.936902   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:19:49.936909   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:19:49.936975   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:19:49.974403   75577 cri.go:89] found id: ""
	I0920 18:19:49.974433   75577 logs.go:276] 0 containers: []
	W0920 18:19:49.974441   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:19:49.974447   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:19:49.974505   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:19:50.011810   75577 cri.go:89] found id: ""
	I0920 18:19:50.011839   75577 logs.go:276] 0 containers: []
	W0920 18:19:50.011850   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:19:50.011857   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:19:50.011921   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:19:50.050491   75577 cri.go:89] found id: ""
	I0920 18:19:50.050527   75577 logs.go:276] 0 containers: []
	W0920 18:19:50.050538   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:19:50.050546   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:19:50.050610   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:19:50.088270   75577 cri.go:89] found id: ""
	I0920 18:19:50.088299   75577 logs.go:276] 0 containers: []
	W0920 18:19:50.088308   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:19:50.088314   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:19:50.088375   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:19:50.126355   75577 cri.go:89] found id: ""
	I0920 18:19:50.126382   75577 logs.go:276] 0 containers: []
	W0920 18:19:50.126392   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:19:50.126399   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:19:50.126460   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:19:50.160770   75577 cri.go:89] found id: ""
	I0920 18:19:50.160798   75577 logs.go:276] 0 containers: []
	W0920 18:19:50.160808   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:19:50.160819   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:19:50.160834   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:19:50.212866   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:19:50.212914   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:19:50.227269   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:19:50.227295   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:19:50.301363   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:19:50.301392   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:19:50.301406   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:19:50.376293   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:19:50.376330   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:19:50.192002   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:52.192488   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:54.524436   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:56.526032   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:52.801396   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:54.802393   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:57.301066   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:52.916567   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:52.930445   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:19:52.930525   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:19:52.965360   75577 cri.go:89] found id: ""
	I0920 18:19:52.965392   75577 logs.go:276] 0 containers: []
	W0920 18:19:52.965403   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:19:52.965411   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:19:52.965468   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:19:53.000439   75577 cri.go:89] found id: ""
	I0920 18:19:53.000480   75577 logs.go:276] 0 containers: []
	W0920 18:19:53.000495   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:19:53.000503   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:19:53.000577   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:19:53.034598   75577 cri.go:89] found id: ""
	I0920 18:19:53.034630   75577 logs.go:276] 0 containers: []
	W0920 18:19:53.034640   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:19:53.034647   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:19:53.034744   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:19:53.068636   75577 cri.go:89] found id: ""
	I0920 18:19:53.068664   75577 logs.go:276] 0 containers: []
	W0920 18:19:53.068673   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:19:53.068678   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:19:53.068750   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:19:53.104381   75577 cri.go:89] found id: ""
	I0920 18:19:53.104408   75577 logs.go:276] 0 containers: []
	W0920 18:19:53.104416   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:19:53.104421   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:19:53.104474   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:19:53.137865   75577 cri.go:89] found id: ""
	I0920 18:19:53.137897   75577 logs.go:276] 0 containers: []
	W0920 18:19:53.137909   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:19:53.137922   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:19:53.137983   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:19:53.171825   75577 cri.go:89] found id: ""
	I0920 18:19:53.171861   75577 logs.go:276] 0 containers: []
	W0920 18:19:53.171874   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:19:53.171883   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:19:53.171952   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:19:53.210742   75577 cri.go:89] found id: ""
	I0920 18:19:53.210774   75577 logs.go:276] 0 containers: []
	W0920 18:19:53.210784   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:19:53.210796   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:19:53.210811   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:19:53.285597   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:19:53.285637   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:19:53.329821   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:19:53.329871   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:19:53.381362   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:19:53.381415   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:19:53.396044   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:19:53.396074   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:19:53.471582   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:19:55.972369   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:55.987205   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:19:55.987281   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:19:56.031322   75577 cri.go:89] found id: ""
	I0920 18:19:56.031355   75577 logs.go:276] 0 containers: []
	W0920 18:19:56.031363   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:19:56.031368   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:19:56.031420   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:19:56.066442   75577 cri.go:89] found id: ""
	I0920 18:19:56.066487   75577 logs.go:276] 0 containers: []
	W0920 18:19:56.066576   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:19:56.066617   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:19:56.066695   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:19:56.101818   75577 cri.go:89] found id: ""
	I0920 18:19:56.101864   75577 logs.go:276] 0 containers: []
	W0920 18:19:56.101876   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:19:56.101883   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:19:56.101947   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:19:56.139513   75577 cri.go:89] found id: ""
	I0920 18:19:56.139545   75577 logs.go:276] 0 containers: []
	W0920 18:19:56.139557   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:19:56.139565   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:19:56.139618   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:19:56.173638   75577 cri.go:89] found id: ""
	I0920 18:19:56.173669   75577 logs.go:276] 0 containers: []
	W0920 18:19:56.173680   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:19:56.173688   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:19:56.173752   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:19:56.210657   75577 cri.go:89] found id: ""
	I0920 18:19:56.210689   75577 logs.go:276] 0 containers: []
	W0920 18:19:56.210700   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:19:56.210709   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:19:56.210768   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:19:56.246035   75577 cri.go:89] found id: ""
	I0920 18:19:56.246063   75577 logs.go:276] 0 containers: []
	W0920 18:19:56.246071   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:19:56.246077   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:19:56.246123   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:19:56.280766   75577 cri.go:89] found id: ""
	I0920 18:19:56.280796   75577 logs.go:276] 0 containers: []
	W0920 18:19:56.280807   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:19:56.280818   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:19:56.280834   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:19:56.320511   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:19:56.320540   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:19:56.373746   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:19:56.373785   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:19:56.389294   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:19:56.389322   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:19:56.460079   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:19:56.460100   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:19:56.460112   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:19:54.692781   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:57.192706   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:59.025015   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:01.025196   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:59.302190   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:01.801923   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:19:59.044541   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:19:59.058395   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:19:59.058464   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:19:59.097442   75577 cri.go:89] found id: ""
	I0920 18:19:59.097482   75577 logs.go:276] 0 containers: []
	W0920 18:19:59.097495   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:19:59.097512   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:19:59.097593   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:19:59.133091   75577 cri.go:89] found id: ""
	I0920 18:19:59.133116   75577 logs.go:276] 0 containers: []
	W0920 18:19:59.133128   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:19:59.133135   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:19:59.133264   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:19:59.166902   75577 cri.go:89] found id: ""
	I0920 18:19:59.166927   75577 logs.go:276] 0 containers: []
	W0920 18:19:59.166938   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:19:59.166945   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:19:59.167008   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:19:59.202551   75577 cri.go:89] found id: ""
	I0920 18:19:59.202573   75577 logs.go:276] 0 containers: []
	W0920 18:19:59.202581   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:19:59.202586   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:19:59.202633   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:19:59.235197   75577 cri.go:89] found id: ""
	I0920 18:19:59.235222   75577 logs.go:276] 0 containers: []
	W0920 18:19:59.235230   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:19:59.235236   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:19:59.235286   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:19:59.269148   75577 cri.go:89] found id: ""
	I0920 18:19:59.269176   75577 logs.go:276] 0 containers: []
	W0920 18:19:59.269187   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:19:59.269196   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:19:59.269262   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:19:59.307670   75577 cri.go:89] found id: ""
	I0920 18:19:59.307692   75577 logs.go:276] 0 containers: []
	W0920 18:19:59.307699   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:19:59.307705   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:19:59.307753   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:19:59.340554   75577 cri.go:89] found id: ""
	I0920 18:19:59.340589   75577 logs.go:276] 0 containers: []
	W0920 18:19:59.340599   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:19:59.340610   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:19:59.340623   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:19:59.382339   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:19:59.382470   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:19:59.432176   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:19:59.432211   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:19:59.445889   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:19:59.445922   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:19:59.516564   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:19:59.516590   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:19:59.516609   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:02.101918   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:02.115064   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:02.115128   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:02.148718   75577 cri.go:89] found id: ""
	I0920 18:20:02.148749   75577 logs.go:276] 0 containers: []
	W0920 18:20:02.148757   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:02.148764   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:02.148822   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:02.182673   75577 cri.go:89] found id: ""
	I0920 18:20:02.182711   75577 logs.go:276] 0 containers: []
	W0920 18:20:02.182722   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:02.182729   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:02.182797   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:02.218734   75577 cri.go:89] found id: ""
	I0920 18:20:02.218771   75577 logs.go:276] 0 containers: []
	W0920 18:20:02.218782   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:02.218791   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:02.218856   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:02.258956   75577 cri.go:89] found id: ""
	I0920 18:20:02.258981   75577 logs.go:276] 0 containers: []
	W0920 18:20:02.258989   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:02.258995   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:02.259066   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:02.294879   75577 cri.go:89] found id: ""
	I0920 18:20:02.294910   75577 logs.go:276] 0 containers: []
	W0920 18:20:02.294919   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:02.294925   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:02.294971   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:02.331010   75577 cri.go:89] found id: ""
	I0920 18:20:02.331039   75577 logs.go:276] 0 containers: []
	W0920 18:20:02.331049   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:02.331057   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:02.331122   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:02.365023   75577 cri.go:89] found id: ""
	I0920 18:20:02.365056   75577 logs.go:276] 0 containers: []
	W0920 18:20:02.365066   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:02.365072   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:02.365122   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:02.400501   75577 cri.go:89] found id: ""
	I0920 18:20:02.400528   75577 logs.go:276] 0 containers: []
	W0920 18:20:02.400537   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:02.400545   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:02.400556   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:02.450597   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:02.450636   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:02.467637   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:02.467671   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:02.540668   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:02.540690   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:02.540706   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:02.629706   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:02.629752   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:19:59.192799   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:01.691537   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:03.692613   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:03.524517   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:05.525418   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:04.300845   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:06.301250   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:05.168119   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:05.182954   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:05.183043   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:05.221938   75577 cri.go:89] found id: ""
	I0920 18:20:05.221971   75577 logs.go:276] 0 containers: []
	W0920 18:20:05.221980   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:05.221990   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:05.222051   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:05.258454   75577 cri.go:89] found id: ""
	I0920 18:20:05.258479   75577 logs.go:276] 0 containers: []
	W0920 18:20:05.258487   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:05.258492   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:05.258540   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:05.293091   75577 cri.go:89] found id: ""
	I0920 18:20:05.293125   75577 logs.go:276] 0 containers: []
	W0920 18:20:05.293138   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:05.293146   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:05.293196   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:05.332992   75577 cri.go:89] found id: ""
	I0920 18:20:05.333025   75577 logs.go:276] 0 containers: []
	W0920 18:20:05.333034   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:05.333040   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:05.333088   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:05.368743   75577 cri.go:89] found id: ""
	I0920 18:20:05.368778   75577 logs.go:276] 0 containers: []
	W0920 18:20:05.368790   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:05.368798   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:05.368859   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:05.404913   75577 cri.go:89] found id: ""
	I0920 18:20:05.404941   75577 logs.go:276] 0 containers: []
	W0920 18:20:05.404948   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:05.404954   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:05.405003   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:05.442111   75577 cri.go:89] found id: ""
	I0920 18:20:05.442143   75577 logs.go:276] 0 containers: []
	W0920 18:20:05.442154   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:05.442163   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:05.442228   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:05.478808   75577 cri.go:89] found id: ""
	I0920 18:20:05.478842   75577 logs.go:276] 0 containers: []
	W0920 18:20:05.478853   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:05.478865   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:05.478879   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:05.531653   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:05.531691   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:05.545181   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:05.545210   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:05.615009   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:05.615041   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:05.615059   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:05.690842   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:05.690871   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:06.193177   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:08.691596   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:08.034009   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:10.524419   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:08.301457   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:10.801788   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:08.230851   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:08.244539   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:08.244609   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:08.281123   75577 cri.go:89] found id: ""
	I0920 18:20:08.281155   75577 logs.go:276] 0 containers: []
	W0920 18:20:08.281167   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:08.281174   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:08.281226   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:08.319704   75577 cri.go:89] found id: ""
	I0920 18:20:08.319740   75577 logs.go:276] 0 containers: []
	W0920 18:20:08.319754   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:08.319763   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:08.319828   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:08.354589   75577 cri.go:89] found id: ""
	I0920 18:20:08.354619   75577 logs.go:276] 0 containers: []
	W0920 18:20:08.354631   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:08.354638   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:08.354703   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:08.391580   75577 cri.go:89] found id: ""
	I0920 18:20:08.391603   75577 logs.go:276] 0 containers: []
	W0920 18:20:08.391612   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:08.391617   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:08.391666   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:08.425596   75577 cri.go:89] found id: ""
	I0920 18:20:08.425622   75577 logs.go:276] 0 containers: []
	W0920 18:20:08.425630   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:08.425638   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:08.425704   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:08.458720   75577 cri.go:89] found id: ""
	I0920 18:20:08.458747   75577 logs.go:276] 0 containers: []
	W0920 18:20:08.458758   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:08.458764   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:08.458812   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:08.496104   75577 cri.go:89] found id: ""
	I0920 18:20:08.496137   75577 logs.go:276] 0 containers: []
	W0920 18:20:08.496148   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:08.496155   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:08.496210   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:08.530961   75577 cri.go:89] found id: ""
	I0920 18:20:08.530989   75577 logs.go:276] 0 containers: []
	W0920 18:20:08.531000   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:08.531010   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:08.531023   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:08.568512   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:08.568541   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:08.619716   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:08.619754   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:08.634358   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:08.634390   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:08.721465   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:08.721488   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:08.721501   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:11.303942   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:11.316686   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:11.316759   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:11.353137   75577 cri.go:89] found id: ""
	I0920 18:20:11.353161   75577 logs.go:276] 0 containers: []
	W0920 18:20:11.353169   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:11.353176   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:11.353229   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:11.388271   75577 cri.go:89] found id: ""
	I0920 18:20:11.388298   75577 logs.go:276] 0 containers: []
	W0920 18:20:11.388315   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:11.388322   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:11.388388   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:11.422673   75577 cri.go:89] found id: ""
	I0920 18:20:11.422700   75577 logs.go:276] 0 containers: []
	W0920 18:20:11.422708   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:11.422714   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:11.422768   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:11.458869   75577 cri.go:89] found id: ""
	I0920 18:20:11.458900   75577 logs.go:276] 0 containers: []
	W0920 18:20:11.458910   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:11.458917   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:11.459068   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:11.494128   75577 cri.go:89] found id: ""
	I0920 18:20:11.494161   75577 logs.go:276] 0 containers: []
	W0920 18:20:11.494172   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:11.494180   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:11.494246   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:11.529111   75577 cri.go:89] found id: ""
	I0920 18:20:11.529135   75577 logs.go:276] 0 containers: []
	W0920 18:20:11.529150   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:11.529157   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:11.529223   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:11.562287   75577 cri.go:89] found id: ""
	I0920 18:20:11.562314   75577 logs.go:276] 0 containers: []
	W0920 18:20:11.562323   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:11.562329   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:11.562381   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:11.600066   75577 cri.go:89] found id: ""
	I0920 18:20:11.600106   75577 logs.go:276] 0 containers: []
	W0920 18:20:11.600117   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:11.600128   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:11.600143   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:11.681628   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:11.681665   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:11.722173   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:11.722205   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:11.773132   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:11.773171   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:11.787183   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:11.787215   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:11.856304   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:10.695174   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:13.191743   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:13.025069   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:15.525020   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:12.803677   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:15.301431   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:14.356881   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:14.371658   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:14.371729   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:14.406983   75577 cri.go:89] found id: ""
	I0920 18:20:14.407009   75577 logs.go:276] 0 containers: []
	W0920 18:20:14.407017   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:14.407022   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:14.407075   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:14.481048   75577 cri.go:89] found id: ""
	I0920 18:20:14.481075   75577 logs.go:276] 0 containers: []
	W0920 18:20:14.481086   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:14.481094   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:14.481156   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:14.513683   75577 cri.go:89] found id: ""
	I0920 18:20:14.513711   75577 logs.go:276] 0 containers: []
	W0920 18:20:14.513719   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:14.513725   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:14.513797   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:14.553330   75577 cri.go:89] found id: ""
	I0920 18:20:14.553363   75577 logs.go:276] 0 containers: []
	W0920 18:20:14.553375   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:14.553381   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:14.553446   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:14.590803   75577 cri.go:89] found id: ""
	I0920 18:20:14.590837   75577 logs.go:276] 0 containers: []
	W0920 18:20:14.590848   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:14.590856   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:14.590927   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:14.625100   75577 cri.go:89] found id: ""
	I0920 18:20:14.625130   75577 logs.go:276] 0 containers: []
	W0920 18:20:14.625141   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:14.625151   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:14.625219   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:14.659313   75577 cri.go:89] found id: ""
	I0920 18:20:14.659342   75577 logs.go:276] 0 containers: []
	W0920 18:20:14.659351   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:14.659357   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:14.659418   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:14.694901   75577 cri.go:89] found id: ""
	I0920 18:20:14.694931   75577 logs.go:276] 0 containers: []
	W0920 18:20:14.694939   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:14.694951   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:14.694966   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:14.708406   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:14.708437   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:14.785174   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:14.785200   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:14.785214   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:14.873622   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:14.873666   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:14.919130   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:14.919166   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:17.468595   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:17.483397   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:17.483473   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:17.522415   75577 cri.go:89] found id: ""
	I0920 18:20:17.522445   75577 logs.go:276] 0 containers: []
	W0920 18:20:17.522455   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:17.522463   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:17.522523   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:17.558957   75577 cri.go:89] found id: ""
	I0920 18:20:17.558991   75577 logs.go:276] 0 containers: []
	W0920 18:20:17.559002   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:17.559010   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:17.559174   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:17.595183   75577 cri.go:89] found id: ""
	I0920 18:20:17.595217   75577 logs.go:276] 0 containers: []
	W0920 18:20:17.595229   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:17.595237   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:17.595302   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:17.631727   75577 cri.go:89] found id: ""
	I0920 18:20:17.631757   75577 logs.go:276] 0 containers: []
	W0920 18:20:17.631768   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:17.631775   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:17.631894   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:17.666374   75577 cri.go:89] found id: ""
	I0920 18:20:17.666409   75577 logs.go:276] 0 containers: []
	W0920 18:20:17.666420   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:17.666427   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:17.666488   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:17.702256   75577 cri.go:89] found id: ""
	I0920 18:20:17.702281   75577 logs.go:276] 0 containers: []
	W0920 18:20:17.702291   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:17.702299   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:17.702370   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:15.191971   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:17.192990   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:17.525552   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:20.025229   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:17.802045   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:20.302538   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:17.742133   75577 cri.go:89] found id: ""
	I0920 18:20:17.742161   75577 logs.go:276] 0 containers: []
	W0920 18:20:17.742172   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:17.742179   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:17.742249   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:17.781269   75577 cri.go:89] found id: ""
	I0920 18:20:17.781300   75577 logs.go:276] 0 containers: []
	W0920 18:20:17.781311   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:17.781321   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:17.781336   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:17.835886   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:17.835923   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:17.851365   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:17.851396   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:17.932682   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:17.932711   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:17.932726   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:18.013680   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:18.013731   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:20.555074   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:20.568007   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:20.568071   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:20.602250   75577 cri.go:89] found id: ""
	I0920 18:20:20.602278   75577 logs.go:276] 0 containers: []
	W0920 18:20:20.602287   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:20.602293   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:20.602353   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:20.636711   75577 cri.go:89] found id: ""
	I0920 18:20:20.636739   75577 logs.go:276] 0 containers: []
	W0920 18:20:20.636748   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:20.636753   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:20.636807   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:20.669529   75577 cri.go:89] found id: ""
	I0920 18:20:20.669570   75577 logs.go:276] 0 containers: []
	W0920 18:20:20.669583   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:20.669593   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:20.669651   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:20.710330   75577 cri.go:89] found id: ""
	I0920 18:20:20.710364   75577 logs.go:276] 0 containers: []
	W0920 18:20:20.710372   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:20.710378   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:20.710427   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:20.747664   75577 cri.go:89] found id: ""
	I0920 18:20:20.747690   75577 logs.go:276] 0 containers: []
	W0920 18:20:20.747699   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:20.747705   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:20.747760   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:20.785680   75577 cri.go:89] found id: ""
	I0920 18:20:20.785717   75577 logs.go:276] 0 containers: []
	W0920 18:20:20.785726   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:20.785732   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:20.785794   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:20.822570   75577 cri.go:89] found id: ""
	I0920 18:20:20.822599   75577 logs.go:276] 0 containers: []
	W0920 18:20:20.822608   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:20.822613   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:20.822659   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:20.856739   75577 cri.go:89] found id: ""
	I0920 18:20:20.856772   75577 logs.go:276] 0 containers: []
	W0920 18:20:20.856786   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:20.856806   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:20.856822   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:20.896606   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:20.896643   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:20.948275   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:20.948313   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:20.962576   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:20.962606   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:21.033511   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:21.033534   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:21.033547   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:19.691723   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:21.694099   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:22.524982   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:24.527065   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:27.026374   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:22.803300   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:25.302416   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:23.615731   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:23.628619   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:23.628698   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:23.664344   75577 cri.go:89] found id: ""
	I0920 18:20:23.664367   75577 logs.go:276] 0 containers: []
	W0920 18:20:23.664375   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:23.664381   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:23.664431   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:23.698791   75577 cri.go:89] found id: ""
	I0920 18:20:23.698823   75577 logs.go:276] 0 containers: []
	W0920 18:20:23.698832   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:23.698839   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:23.698889   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:23.732552   75577 cri.go:89] found id: ""
	I0920 18:20:23.732590   75577 logs.go:276] 0 containers: []
	W0920 18:20:23.732602   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:23.732610   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:23.732676   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:23.769463   75577 cri.go:89] found id: ""
	I0920 18:20:23.769490   75577 logs.go:276] 0 containers: []
	W0920 18:20:23.769501   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:23.769508   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:23.769567   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:23.807329   75577 cri.go:89] found id: ""
	I0920 18:20:23.807361   75577 logs.go:276] 0 containers: []
	W0920 18:20:23.807374   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:23.807382   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:23.807442   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:23.847889   75577 cri.go:89] found id: ""
	I0920 18:20:23.847913   75577 logs.go:276] 0 containers: []
	W0920 18:20:23.847920   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:23.847927   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:23.847985   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:23.883287   75577 cri.go:89] found id: ""
	I0920 18:20:23.883314   75577 logs.go:276] 0 containers: []
	W0920 18:20:23.883322   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:23.883335   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:23.883389   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:23.920006   75577 cri.go:89] found id: ""
	I0920 18:20:23.920034   75577 logs.go:276] 0 containers: []
	W0920 18:20:23.920045   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:23.920057   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:23.920070   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:23.995572   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:23.995608   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:24.035953   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:24.035983   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:24.085803   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:24.085860   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:24.100226   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:24.100255   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:24.173555   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:26.673726   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:26.687372   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:26.687449   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:26.726544   75577 cri.go:89] found id: ""
	I0920 18:20:26.726574   75577 logs.go:276] 0 containers: []
	W0920 18:20:26.726583   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:26.726590   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:26.726651   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:26.761542   75577 cri.go:89] found id: ""
	I0920 18:20:26.761571   75577 logs.go:276] 0 containers: []
	W0920 18:20:26.761580   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:26.761587   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:26.761639   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:26.803869   75577 cri.go:89] found id: ""
	I0920 18:20:26.803896   75577 logs.go:276] 0 containers: []
	W0920 18:20:26.803904   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:26.803910   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:26.803970   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:26.854835   75577 cri.go:89] found id: ""
	I0920 18:20:26.854876   75577 logs.go:276] 0 containers: []
	W0920 18:20:26.854888   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:26.854896   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:26.854958   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:26.896113   75577 cri.go:89] found id: ""
	I0920 18:20:26.896151   75577 logs.go:276] 0 containers: []
	W0920 18:20:26.896162   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:26.896170   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:26.896255   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:26.942828   75577 cri.go:89] found id: ""
	I0920 18:20:26.942853   75577 logs.go:276] 0 containers: []
	W0920 18:20:26.942863   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:26.942930   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:26.943007   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:26.977246   75577 cri.go:89] found id: ""
	I0920 18:20:26.977278   75577 logs.go:276] 0 containers: []
	W0920 18:20:26.977289   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:26.977296   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:26.977367   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:27.012408   75577 cri.go:89] found id: ""
	I0920 18:20:27.012440   75577 logs.go:276] 0 containers: []
	W0920 18:20:27.012451   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:27.012462   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:27.012477   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:27.063970   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:27.064017   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:27.078082   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:27.078119   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:27.148050   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:27.148079   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:27.148094   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:27.230836   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:27.230880   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:24.192842   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:26.196350   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:28.692682   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:29.525508   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:32.025275   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:27.802268   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:30.301519   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:29.771845   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:29.785479   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:29.785553   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:29.823101   75577 cri.go:89] found id: ""
	I0920 18:20:29.823132   75577 logs.go:276] 0 containers: []
	W0920 18:20:29.823143   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:29.823150   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:29.823228   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:29.858679   75577 cri.go:89] found id: ""
	I0920 18:20:29.858713   75577 logs.go:276] 0 containers: []
	W0920 18:20:29.858724   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:29.858732   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:29.858796   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:29.901044   75577 cri.go:89] found id: ""
	I0920 18:20:29.901073   75577 logs.go:276] 0 containers: []
	W0920 18:20:29.901083   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:29.901091   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:29.901160   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:29.937743   75577 cri.go:89] found id: ""
	I0920 18:20:29.937775   75577 logs.go:276] 0 containers: []
	W0920 18:20:29.937792   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:29.937800   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:29.937884   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:29.972813   75577 cri.go:89] found id: ""
	I0920 18:20:29.972838   75577 logs.go:276] 0 containers: []
	W0920 18:20:29.972846   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:29.972852   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:29.972901   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:30.008024   75577 cri.go:89] found id: ""
	I0920 18:20:30.008053   75577 logs.go:276] 0 containers: []
	W0920 18:20:30.008062   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:30.008068   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:30.008117   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:30.048544   75577 cri.go:89] found id: ""
	I0920 18:20:30.048577   75577 logs.go:276] 0 containers: []
	W0920 18:20:30.048585   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:30.048591   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:30.048643   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:30.084501   75577 cri.go:89] found id: ""
	I0920 18:20:30.084525   75577 logs.go:276] 0 containers: []
	W0920 18:20:30.084534   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:30.084545   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:30.084559   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:30.136234   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:30.136279   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:30.149557   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:30.149591   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:30.229765   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:30.229792   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:30.229806   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:30.307786   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:30.307825   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:31.191475   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:33.192864   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:34.025515   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:36.027276   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:32.801952   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:34.802033   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:37.301059   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:32.845220   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:32.859734   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:32.859810   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:32.896390   75577 cri.go:89] found id: ""
	I0920 18:20:32.896434   75577 logs.go:276] 0 containers: []
	W0920 18:20:32.896446   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:32.896464   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:32.896538   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:32.934548   75577 cri.go:89] found id: ""
	I0920 18:20:32.934572   75577 logs.go:276] 0 containers: []
	W0920 18:20:32.934580   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:32.934587   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:32.934640   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:32.982987   75577 cri.go:89] found id: ""
	I0920 18:20:32.983013   75577 logs.go:276] 0 containers: []
	W0920 18:20:32.983020   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:32.983026   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:32.983079   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:33.019059   75577 cri.go:89] found id: ""
	I0920 18:20:33.019085   75577 logs.go:276] 0 containers: []
	W0920 18:20:33.019093   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:33.019099   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:33.019148   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:33.057704   75577 cri.go:89] found id: ""
	I0920 18:20:33.057738   75577 logs.go:276] 0 containers: []
	W0920 18:20:33.057750   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:33.057759   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:33.057821   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:33.093702   75577 cri.go:89] found id: ""
	I0920 18:20:33.093732   75577 logs.go:276] 0 containers: []
	W0920 18:20:33.093743   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:33.093751   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:33.093809   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:33.128476   75577 cri.go:89] found id: ""
	I0920 18:20:33.128504   75577 logs.go:276] 0 containers: []
	W0920 18:20:33.128516   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:33.128523   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:33.128591   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:33.165513   75577 cri.go:89] found id: ""
	I0920 18:20:33.165542   75577 logs.go:276] 0 containers: []
	W0920 18:20:33.165550   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:33.165559   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:33.165569   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:33.219613   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:33.219650   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:33.232291   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:33.232317   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:33.304172   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:33.304197   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:33.304212   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:33.380057   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:33.380094   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:35.920842   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:35.935500   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:35.935593   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:35.974445   75577 cri.go:89] found id: ""
	I0920 18:20:35.974471   75577 logs.go:276] 0 containers: []
	W0920 18:20:35.974479   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:35.974485   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:35.974548   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:36.009487   75577 cri.go:89] found id: ""
	I0920 18:20:36.009518   75577 logs.go:276] 0 containers: []
	W0920 18:20:36.009530   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:36.009538   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:36.009706   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:36.049428   75577 cri.go:89] found id: ""
	I0920 18:20:36.049451   75577 logs.go:276] 0 containers: []
	W0920 18:20:36.049463   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:36.049469   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:36.049518   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:36.083973   75577 cri.go:89] found id: ""
	I0920 18:20:36.084011   75577 logs.go:276] 0 containers: []
	W0920 18:20:36.084022   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:36.084030   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:36.084119   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:36.117952   75577 cri.go:89] found id: ""
	I0920 18:20:36.117985   75577 logs.go:276] 0 containers: []
	W0920 18:20:36.117993   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:36.117998   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:36.118052   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:36.151222   75577 cri.go:89] found id: ""
	I0920 18:20:36.151256   75577 logs.go:276] 0 containers: []
	W0920 18:20:36.151265   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:36.151271   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:36.151319   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:36.184665   75577 cri.go:89] found id: ""
	I0920 18:20:36.184697   75577 logs.go:276] 0 containers: []
	W0920 18:20:36.184708   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:36.184716   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:36.184771   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:36.222974   75577 cri.go:89] found id: ""
	I0920 18:20:36.223003   75577 logs.go:276] 0 containers: []
	W0920 18:20:36.223012   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:36.223021   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:36.223033   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:36.300403   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:36.300424   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:36.300437   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:36.381220   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:36.381260   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:36.419010   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:36.419042   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:36.471758   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:36.471799   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:35.192949   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:37.693384   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:38.526219   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:41.024309   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:39.301511   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:41.801558   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:38.985359   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:38.998817   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:38.998898   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:39.036705   75577 cri.go:89] found id: ""
	I0920 18:20:39.036737   75577 logs.go:276] 0 containers: []
	W0920 18:20:39.036747   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:39.036755   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:39.036815   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:39.074254   75577 cri.go:89] found id: ""
	I0920 18:20:39.074285   75577 logs.go:276] 0 containers: []
	W0920 18:20:39.074294   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:39.074300   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:39.074367   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:39.108385   75577 cri.go:89] found id: ""
	I0920 18:20:39.108420   75577 logs.go:276] 0 containers: []
	W0920 18:20:39.108493   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:39.108506   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:39.108561   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:39.142283   75577 cri.go:89] found id: ""
	I0920 18:20:39.142313   75577 logs.go:276] 0 containers: []
	W0920 18:20:39.142325   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:39.142332   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:39.142396   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:39.176881   75577 cri.go:89] found id: ""
	I0920 18:20:39.176918   75577 logs.go:276] 0 containers: []
	W0920 18:20:39.176933   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:39.176941   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:39.177002   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:39.217734   75577 cri.go:89] found id: ""
	I0920 18:20:39.217759   75577 logs.go:276] 0 containers: []
	W0920 18:20:39.217767   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:39.217773   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:39.217852   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:39.250823   75577 cri.go:89] found id: ""
	I0920 18:20:39.250852   75577 logs.go:276] 0 containers: []
	W0920 18:20:39.250860   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:39.250868   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:39.250936   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:39.287494   75577 cri.go:89] found id: ""
	I0920 18:20:39.287519   75577 logs.go:276] 0 containers: []
	W0920 18:20:39.287528   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:39.287540   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:39.287552   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:39.343091   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:39.343126   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:39.359028   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:39.359063   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:39.425293   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:39.425321   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:39.425336   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:39.503303   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:39.503350   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:42.044784   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:42.057764   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:42.057876   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:42.091798   75577 cri.go:89] found id: ""
	I0920 18:20:42.091825   75577 logs.go:276] 0 containers: []
	W0920 18:20:42.091833   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:42.091839   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:42.091888   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:42.125325   75577 cri.go:89] found id: ""
	I0920 18:20:42.125352   75577 logs.go:276] 0 containers: []
	W0920 18:20:42.125362   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:42.125370   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:42.125423   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:42.158641   75577 cri.go:89] found id: ""
	I0920 18:20:42.158680   75577 logs.go:276] 0 containers: []
	W0920 18:20:42.158692   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:42.158701   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:42.158771   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:42.194065   75577 cri.go:89] found id: ""
	I0920 18:20:42.194090   75577 logs.go:276] 0 containers: []
	W0920 18:20:42.194101   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:42.194109   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:42.194174   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:42.227237   75577 cri.go:89] found id: ""
	I0920 18:20:42.227262   75577 logs.go:276] 0 containers: []
	W0920 18:20:42.227270   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:42.227275   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:42.227324   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:42.260205   75577 cri.go:89] found id: ""
	I0920 18:20:42.260240   75577 logs.go:276] 0 containers: []
	W0920 18:20:42.260251   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:42.260258   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:42.260325   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:42.300124   75577 cri.go:89] found id: ""
	I0920 18:20:42.300159   75577 logs.go:276] 0 containers: []
	W0920 18:20:42.300169   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:42.300175   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:42.300273   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:42.335638   75577 cri.go:89] found id: ""
	I0920 18:20:42.335674   75577 logs.go:276] 0 containers: []
	W0920 18:20:42.335685   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:42.335695   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:42.335710   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:42.386176   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:42.386214   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:42.400113   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:42.400147   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:42.479877   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:42.479900   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:42.479912   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:42.554654   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:42.554701   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:40.191251   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:42.192910   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:43.026185   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:45.525362   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:43.801927   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:45.802309   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:45.093962   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:45.107747   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:45.107811   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:45.147091   75577 cri.go:89] found id: ""
	I0920 18:20:45.147120   75577 logs.go:276] 0 containers: []
	W0920 18:20:45.147132   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:45.147140   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:45.147205   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:45.185332   75577 cri.go:89] found id: ""
	I0920 18:20:45.185396   75577 logs.go:276] 0 containers: []
	W0920 18:20:45.185431   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:45.185446   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:45.185523   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:45.224496   75577 cri.go:89] found id: ""
	I0920 18:20:45.224525   75577 logs.go:276] 0 containers: []
	W0920 18:20:45.224535   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:45.224542   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:45.224612   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:45.260686   75577 cri.go:89] found id: ""
	I0920 18:20:45.260718   75577 logs.go:276] 0 containers: []
	W0920 18:20:45.260729   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:45.260737   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:45.260801   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:45.301231   75577 cri.go:89] found id: ""
	I0920 18:20:45.301259   75577 logs.go:276] 0 containers: []
	W0920 18:20:45.301269   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:45.301277   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:45.301343   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:45.346441   75577 cri.go:89] found id: ""
	I0920 18:20:45.346468   75577 logs.go:276] 0 containers: []
	W0920 18:20:45.346476   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:45.346482   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:45.346537   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:45.385042   75577 cri.go:89] found id: ""
	I0920 18:20:45.385071   75577 logs.go:276] 0 containers: []
	W0920 18:20:45.385082   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:45.385090   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:45.385149   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:45.422208   75577 cri.go:89] found id: ""
	I0920 18:20:45.422238   75577 logs.go:276] 0 containers: []
	W0920 18:20:45.422249   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:45.422260   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:45.422275   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:45.472692   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:45.472742   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:45.487981   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:45.488011   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:45.563012   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:45.563035   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:45.563051   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:45.640750   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:45.640786   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:44.691791   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:46.692359   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:48.693165   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:48.024959   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:50.025944   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:48.302699   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:50.801740   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:48.181093   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:48.194433   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:48.194516   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:48.234349   75577 cri.go:89] found id: ""
	I0920 18:20:48.234383   75577 logs.go:276] 0 containers: []
	W0920 18:20:48.234394   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:48.234403   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:48.234467   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:48.269424   75577 cri.go:89] found id: ""
	I0920 18:20:48.269450   75577 logs.go:276] 0 containers: []
	W0920 18:20:48.269457   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:48.269462   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:48.269514   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:48.303968   75577 cri.go:89] found id: ""
	I0920 18:20:48.303990   75577 logs.go:276] 0 containers: []
	W0920 18:20:48.303997   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:48.304002   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:48.304061   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:48.339149   75577 cri.go:89] found id: ""
	I0920 18:20:48.339180   75577 logs.go:276] 0 containers: []
	W0920 18:20:48.339191   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:48.339198   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:48.339259   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:48.374529   75577 cri.go:89] found id: ""
	I0920 18:20:48.374559   75577 logs.go:276] 0 containers: []
	W0920 18:20:48.374571   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:48.374578   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:48.374644   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:48.409170   75577 cri.go:89] found id: ""
	I0920 18:20:48.409203   75577 logs.go:276] 0 containers: []
	W0920 18:20:48.409211   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:48.409217   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:48.409292   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:48.443960   75577 cri.go:89] found id: ""
	I0920 18:20:48.443990   75577 logs.go:276] 0 containers: []
	W0920 18:20:48.444009   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:48.444017   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:48.444074   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:48.480934   75577 cri.go:89] found id: ""
	I0920 18:20:48.480965   75577 logs.go:276] 0 containers: []
	W0920 18:20:48.480978   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:48.480990   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:48.481006   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:48.533261   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:48.533295   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:48.546460   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:48.546488   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:48.615183   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:48.615206   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:48.615219   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:48.695299   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:48.695337   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:51.240343   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:51.255327   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:51.255411   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:51.294611   75577 cri.go:89] found id: ""
	I0920 18:20:51.294642   75577 logs.go:276] 0 containers: []
	W0920 18:20:51.294650   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:51.294656   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:51.294710   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:51.328569   75577 cri.go:89] found id: ""
	I0920 18:20:51.328598   75577 logs.go:276] 0 containers: []
	W0920 18:20:51.328613   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:51.328621   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:51.328675   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:51.364255   75577 cri.go:89] found id: ""
	I0920 18:20:51.364283   75577 logs.go:276] 0 containers: []
	W0920 18:20:51.364291   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:51.364297   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:51.364347   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:51.406178   75577 cri.go:89] found id: ""
	I0920 18:20:51.406204   75577 logs.go:276] 0 containers: []
	W0920 18:20:51.406215   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:51.406223   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:51.406284   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:51.449491   75577 cri.go:89] found id: ""
	I0920 18:20:51.449519   75577 logs.go:276] 0 containers: []
	W0920 18:20:51.449529   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:51.449536   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:51.449600   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:51.483243   75577 cri.go:89] found id: ""
	I0920 18:20:51.483269   75577 logs.go:276] 0 containers: []
	W0920 18:20:51.483278   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:51.483284   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:51.483334   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:51.517280   75577 cri.go:89] found id: ""
	I0920 18:20:51.517304   75577 logs.go:276] 0 containers: []
	W0920 18:20:51.517311   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:51.517316   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:51.517378   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:51.553517   75577 cri.go:89] found id: ""
	I0920 18:20:51.553545   75577 logs.go:276] 0 containers: []
	W0920 18:20:51.553556   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:51.553565   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:51.553576   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:51.607330   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:51.607369   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:51.620628   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:51.620662   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:51.687586   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:51.687618   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:51.687638   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:51.764882   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:51.764924   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:51.191732   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:53.193078   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:52.524529   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:54.526138   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:57.025365   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:52.802328   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:54.804955   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:57.301987   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:54.307276   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:54.321777   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:54.321860   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:54.355565   75577 cri.go:89] found id: ""
	I0920 18:20:54.355598   75577 logs.go:276] 0 containers: []
	W0920 18:20:54.355609   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:54.355618   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:54.355680   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:54.391184   75577 cri.go:89] found id: ""
	I0920 18:20:54.391216   75577 logs.go:276] 0 containers: []
	W0920 18:20:54.391227   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:54.391236   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:54.391301   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:54.425680   75577 cri.go:89] found id: ""
	I0920 18:20:54.425709   75577 logs.go:276] 0 containers: []
	W0920 18:20:54.425718   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:54.425725   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:54.425775   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:54.461514   75577 cri.go:89] found id: ""
	I0920 18:20:54.461541   75577 logs.go:276] 0 containers: []
	W0920 18:20:54.461552   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:54.461559   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:54.461625   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:54.495290   75577 cri.go:89] found id: ""
	I0920 18:20:54.495319   75577 logs.go:276] 0 containers: []
	W0920 18:20:54.495327   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:54.495333   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:54.495384   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:54.530014   75577 cri.go:89] found id: ""
	I0920 18:20:54.530038   75577 logs.go:276] 0 containers: []
	W0920 18:20:54.530046   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:54.530052   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:54.530103   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:54.562559   75577 cri.go:89] found id: ""
	I0920 18:20:54.562597   75577 logs.go:276] 0 containers: []
	W0920 18:20:54.562611   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:54.562621   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:54.562694   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:54.599894   75577 cri.go:89] found id: ""
	I0920 18:20:54.599920   75577 logs.go:276] 0 containers: []
	W0920 18:20:54.599928   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:54.599946   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:54.599982   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:54.636853   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:54.636880   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:54.687932   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:54.687978   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:54.701642   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:54.701673   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:54.769649   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:54.769678   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:54.769695   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:20:57.356015   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:20:57.368860   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:20:57.368923   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:20:57.409339   75577 cri.go:89] found id: ""
	I0920 18:20:57.409365   75577 logs.go:276] 0 containers: []
	W0920 18:20:57.409375   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:20:57.409382   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:20:57.409444   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:20:57.443055   75577 cri.go:89] found id: ""
	I0920 18:20:57.443085   75577 logs.go:276] 0 containers: []
	W0920 18:20:57.443095   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:20:57.443102   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:20:57.443158   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:20:57.481816   75577 cri.go:89] found id: ""
	I0920 18:20:57.481859   75577 logs.go:276] 0 containers: []
	W0920 18:20:57.481871   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:20:57.481879   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:20:57.481942   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:20:57.517327   75577 cri.go:89] found id: ""
	I0920 18:20:57.517361   75577 logs.go:276] 0 containers: []
	W0920 18:20:57.517372   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:20:57.517379   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:20:57.517442   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:20:57.555121   75577 cri.go:89] found id: ""
	I0920 18:20:57.555151   75577 logs.go:276] 0 containers: []
	W0920 18:20:57.555159   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:20:57.555164   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:20:57.555222   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:20:57.592632   75577 cri.go:89] found id: ""
	I0920 18:20:57.592666   75577 logs.go:276] 0 containers: []
	W0920 18:20:57.592679   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:20:57.592685   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:20:57.592734   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:20:57.627519   75577 cri.go:89] found id: ""
	I0920 18:20:57.627556   75577 logs.go:276] 0 containers: []
	W0920 18:20:57.627567   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:20:57.627574   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:20:57.627636   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:20:57.663817   75577 cri.go:89] found id: ""
	I0920 18:20:57.663844   75577 logs.go:276] 0 containers: []
	W0920 18:20:57.663853   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:20:57.663862   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:20:57.663877   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:20:57.704896   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:20:57.704924   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:20:55.692746   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:58.191973   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:59.524732   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:01.525346   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:59.801537   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:01.802088   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:57.757933   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:20:57.757972   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:20:57.772646   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:20:57.772673   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:20:57.850634   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:20:57.850665   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:20:57.850681   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:00.431504   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:00.445175   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:00.445241   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:00.482050   75577 cri.go:89] found id: ""
	I0920 18:21:00.482076   75577 logs.go:276] 0 containers: []
	W0920 18:21:00.482088   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:00.482095   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:00.482150   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:00.515793   75577 cri.go:89] found id: ""
	I0920 18:21:00.515822   75577 logs.go:276] 0 containers: []
	W0920 18:21:00.515833   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:00.515841   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:00.515903   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:00.550250   75577 cri.go:89] found id: ""
	I0920 18:21:00.550278   75577 logs.go:276] 0 containers: []
	W0920 18:21:00.550288   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:00.550296   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:00.550374   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:00.585977   75577 cri.go:89] found id: ""
	I0920 18:21:00.586011   75577 logs.go:276] 0 containers: []
	W0920 18:21:00.586034   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:00.586051   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:00.586118   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:00.621828   75577 cri.go:89] found id: ""
	I0920 18:21:00.621869   75577 logs.go:276] 0 containers: []
	W0920 18:21:00.621879   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:00.621886   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:00.621937   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:00.655492   75577 cri.go:89] found id: ""
	I0920 18:21:00.655528   75577 logs.go:276] 0 containers: []
	W0920 18:21:00.655540   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:00.655600   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:00.655673   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:00.709445   75577 cri.go:89] found id: ""
	I0920 18:21:00.709476   75577 logs.go:276] 0 containers: []
	W0920 18:21:00.709488   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:00.709496   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:00.709566   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:00.744098   75577 cri.go:89] found id: ""
	I0920 18:21:00.744123   75577 logs.go:276] 0 containers: []
	W0920 18:21:00.744134   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:00.744144   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:00.744164   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:00.792576   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:00.792612   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:00.807766   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:00.807794   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:00.883626   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:00.883649   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:00.883663   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:00.966233   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:00.966269   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:00.692181   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:03.192227   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:04.024582   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:06.029376   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:04.301078   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:06.301727   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:03.505502   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:03.520407   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:03.520534   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:03.557891   75577 cri.go:89] found id: ""
	I0920 18:21:03.557926   75577 logs.go:276] 0 containers: []
	W0920 18:21:03.557936   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:03.557944   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:03.558006   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:03.593869   75577 cri.go:89] found id: ""
	I0920 18:21:03.593898   75577 logs.go:276] 0 containers: []
	W0920 18:21:03.593908   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:03.593916   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:03.593982   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:03.629270   75577 cri.go:89] found id: ""
	I0920 18:21:03.629302   75577 logs.go:276] 0 containers: []
	W0920 18:21:03.629311   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:03.629317   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:03.629366   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:03.664658   75577 cri.go:89] found id: ""
	I0920 18:21:03.664688   75577 logs.go:276] 0 containers: []
	W0920 18:21:03.664699   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:03.664706   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:03.664769   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:03.700846   75577 cri.go:89] found id: ""
	I0920 18:21:03.700868   75577 logs.go:276] 0 containers: []
	W0920 18:21:03.700875   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:03.700882   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:03.700941   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:03.736311   75577 cri.go:89] found id: ""
	I0920 18:21:03.736345   75577 logs.go:276] 0 containers: []
	W0920 18:21:03.736355   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:03.736363   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:03.736421   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:03.770760   75577 cri.go:89] found id: ""
	I0920 18:21:03.770788   75577 logs.go:276] 0 containers: []
	W0920 18:21:03.770800   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:03.770808   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:03.770868   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:03.808724   75577 cri.go:89] found id: ""
	I0920 18:21:03.808749   75577 logs.go:276] 0 containers: []
	W0920 18:21:03.808756   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:03.808764   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:03.808775   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:03.851231   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:03.851265   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:03.899607   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:03.899641   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:03.915051   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:03.915079   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:03.984016   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:03.984038   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:03.984053   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:06.564776   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:06.578524   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:06.578604   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:06.614305   75577 cri.go:89] found id: ""
	I0920 18:21:06.614340   75577 logs.go:276] 0 containers: []
	W0920 18:21:06.614351   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:06.614365   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:06.614427   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:06.648929   75577 cri.go:89] found id: ""
	I0920 18:21:06.648958   75577 logs.go:276] 0 containers: []
	W0920 18:21:06.648968   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:06.648976   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:06.649036   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:06.689975   75577 cri.go:89] found id: ""
	I0920 18:21:06.690016   75577 logs.go:276] 0 containers: []
	W0920 18:21:06.690027   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:06.690034   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:06.690092   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:06.729721   75577 cri.go:89] found id: ""
	I0920 18:21:06.729747   75577 logs.go:276] 0 containers: []
	W0920 18:21:06.729755   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:06.729762   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:06.729808   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:06.766310   75577 cri.go:89] found id: ""
	I0920 18:21:06.766343   75577 logs.go:276] 0 containers: []
	W0920 18:21:06.766354   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:06.766361   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:06.766437   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:06.800801   75577 cri.go:89] found id: ""
	I0920 18:21:06.800829   75577 logs.go:276] 0 containers: []
	W0920 18:21:06.800839   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:06.800847   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:06.800909   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:06.836391   75577 cri.go:89] found id: ""
	I0920 18:21:06.836429   75577 logs.go:276] 0 containers: []
	W0920 18:21:06.836447   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:06.836455   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:06.836521   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:06.873014   75577 cri.go:89] found id: ""
	I0920 18:21:06.873041   75577 logs.go:276] 0 containers: []
	W0920 18:21:06.873049   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:06.873057   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:06.873070   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:06.953084   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:06.953110   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:06.953122   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:07.032398   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:07.032434   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:07.082196   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:07.082224   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:07.159604   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:07.159640   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:05.691350   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:07.692127   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:08.526757   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:10.526990   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:08.801736   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:11.301001   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:09.675022   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:09.687924   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:09.687999   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:09.722963   75577 cri.go:89] found id: ""
	I0920 18:21:09.722994   75577 logs.go:276] 0 containers: []
	W0920 18:21:09.723005   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:09.723017   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:09.723084   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:09.756433   75577 cri.go:89] found id: ""
	I0920 18:21:09.756458   75577 logs.go:276] 0 containers: []
	W0920 18:21:09.756472   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:09.756486   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:09.756549   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:09.792200   75577 cri.go:89] found id: ""
	I0920 18:21:09.792232   75577 logs.go:276] 0 containers: []
	W0920 18:21:09.792248   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:09.792256   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:09.792323   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:09.829058   75577 cri.go:89] found id: ""
	I0920 18:21:09.829081   75577 logs.go:276] 0 containers: []
	W0920 18:21:09.829098   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:09.829104   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:09.829150   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:09.863225   75577 cri.go:89] found id: ""
	I0920 18:21:09.863252   75577 logs.go:276] 0 containers: []
	W0920 18:21:09.863259   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:09.863265   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:09.863312   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:09.897682   75577 cri.go:89] found id: ""
	I0920 18:21:09.897708   75577 logs.go:276] 0 containers: []
	W0920 18:21:09.897725   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:09.897731   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:09.897789   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:09.936799   75577 cri.go:89] found id: ""
	I0920 18:21:09.936826   75577 logs.go:276] 0 containers: []
	W0920 18:21:09.936836   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:09.936843   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:09.936904   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:09.970502   75577 cri.go:89] found id: ""
	I0920 18:21:09.970539   75577 logs.go:276] 0 containers: []
	W0920 18:21:09.970550   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:09.970560   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:09.970574   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:09.983676   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:09.983703   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:10.058844   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:10.058865   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:10.058874   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:10.136998   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:10.137040   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:10.178902   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:10.178933   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:12.729619   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:09.692361   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:12.192257   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:13.025403   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:15.025667   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:13.302087   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:15.802019   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:12.743816   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:12.743876   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:12.779407   75577 cri.go:89] found id: ""
	I0920 18:21:12.779431   75577 logs.go:276] 0 containers: []
	W0920 18:21:12.779439   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:12.779446   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:12.779503   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:12.820700   75577 cri.go:89] found id: ""
	I0920 18:21:12.820730   75577 logs.go:276] 0 containers: []
	W0920 18:21:12.820740   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:12.820748   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:12.820812   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:12.862179   75577 cri.go:89] found id: ""
	I0920 18:21:12.862210   75577 logs.go:276] 0 containers: []
	W0920 18:21:12.862221   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:12.862228   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:12.862284   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:12.908976   75577 cri.go:89] found id: ""
	I0920 18:21:12.908999   75577 logs.go:276] 0 containers: []
	W0920 18:21:12.909007   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:12.909013   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:12.909072   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:12.953653   75577 cri.go:89] found id: ""
	I0920 18:21:12.953688   75577 logs.go:276] 0 containers: []
	W0920 18:21:12.953696   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:12.953702   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:12.953757   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:12.998519   75577 cri.go:89] found id: ""
	I0920 18:21:12.998546   75577 logs.go:276] 0 containers: []
	W0920 18:21:12.998557   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:12.998564   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:12.998640   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:13.035518   75577 cri.go:89] found id: ""
	I0920 18:21:13.035541   75577 logs.go:276] 0 containers: []
	W0920 18:21:13.035549   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:13.035555   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:13.035609   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:13.073551   75577 cri.go:89] found id: ""
	I0920 18:21:13.073591   75577 logs.go:276] 0 containers: []
	W0920 18:21:13.073605   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:13.073617   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:13.073638   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:13.125118   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:13.125155   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:13.139384   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:13.139415   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:13.204980   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:13.205006   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:13.205021   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:13.289400   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:13.289446   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:15.829405   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:15.842291   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:15.842354   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:15.875886   75577 cri.go:89] found id: ""
	I0920 18:21:15.875921   75577 logs.go:276] 0 containers: []
	W0920 18:21:15.875930   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:15.875937   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:15.875990   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:15.911706   75577 cri.go:89] found id: ""
	I0920 18:21:15.911746   75577 logs.go:276] 0 containers: []
	W0920 18:21:15.911760   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:15.911768   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:15.911831   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:15.950117   75577 cri.go:89] found id: ""
	I0920 18:21:15.950150   75577 logs.go:276] 0 containers: []
	W0920 18:21:15.950161   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:15.950170   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:15.950243   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:15.986029   75577 cri.go:89] found id: ""
	I0920 18:21:15.986066   75577 logs.go:276] 0 containers: []
	W0920 18:21:15.986083   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:15.986091   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:15.986159   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:16.020307   75577 cri.go:89] found id: ""
	I0920 18:21:16.020335   75577 logs.go:276] 0 containers: []
	W0920 18:21:16.020346   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:16.020354   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:16.020412   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:16.054744   75577 cri.go:89] found id: ""
	I0920 18:21:16.054769   75577 logs.go:276] 0 containers: []
	W0920 18:21:16.054777   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:16.054782   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:16.054831   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:16.091756   75577 cri.go:89] found id: ""
	I0920 18:21:16.091789   75577 logs.go:276] 0 containers: []
	W0920 18:21:16.091800   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:16.091807   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:16.091868   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:16.127818   75577 cri.go:89] found id: ""
	I0920 18:21:16.127843   75577 logs.go:276] 0 containers: []
	W0920 18:21:16.127851   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:16.127861   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:16.127877   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:16.200114   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:16.200138   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:16.200149   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:16.279473   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:16.279508   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:16.319139   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:16.319171   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:16.370721   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:16.370769   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:14.192782   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:16.192866   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:18.691599   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:17.524026   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:19.524385   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:21.525244   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:18.301696   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:20.301993   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:18.884966   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:18.900270   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:18.900355   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:18.938541   75577 cri.go:89] found id: ""
	I0920 18:21:18.938582   75577 logs.go:276] 0 containers: []
	W0920 18:21:18.938594   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:18.938602   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:18.938673   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:18.972343   75577 cri.go:89] found id: ""
	I0920 18:21:18.972380   75577 logs.go:276] 0 containers: []
	W0920 18:21:18.972391   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:18.972400   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:18.972458   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:19.011996   75577 cri.go:89] found id: ""
	I0920 18:21:19.012037   75577 logs.go:276] 0 containers: []
	W0920 18:21:19.012048   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:19.012054   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:19.012105   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:19.053789   75577 cri.go:89] found id: ""
	I0920 18:21:19.053818   75577 logs.go:276] 0 containers: []
	W0920 18:21:19.053839   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:19.053849   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:19.053907   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:19.094830   75577 cri.go:89] found id: ""
	I0920 18:21:19.094862   75577 logs.go:276] 0 containers: []
	W0920 18:21:19.094872   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:19.094881   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:19.094941   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:19.133894   75577 cri.go:89] found id: ""
	I0920 18:21:19.133923   75577 logs.go:276] 0 containers: []
	W0920 18:21:19.133934   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:19.133943   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:19.134001   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:19.171640   75577 cri.go:89] found id: ""
	I0920 18:21:19.171662   75577 logs.go:276] 0 containers: []
	W0920 18:21:19.171670   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:19.171676   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:19.171730   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:19.207821   75577 cri.go:89] found id: ""
	I0920 18:21:19.207852   75577 logs.go:276] 0 containers: []
	W0920 18:21:19.207861   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:19.207869   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:19.207880   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:19.292486   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:19.292530   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:19.332664   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:19.332695   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:19.383793   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:19.383828   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:19.399234   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:19.399269   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:19.470636   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:21.971772   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:21.985558   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:21.985630   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:22.018745   75577 cri.go:89] found id: ""
	I0920 18:21:22.018775   75577 logs.go:276] 0 containers: []
	W0920 18:21:22.018785   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:22.018795   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:22.018854   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:22.053586   75577 cri.go:89] found id: ""
	I0920 18:21:22.053617   75577 logs.go:276] 0 containers: []
	W0920 18:21:22.053627   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:22.053635   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:22.053697   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:22.088290   75577 cri.go:89] found id: ""
	I0920 18:21:22.088320   75577 logs.go:276] 0 containers: []
	W0920 18:21:22.088337   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:22.088344   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:22.088394   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:22.121026   75577 cri.go:89] found id: ""
	I0920 18:21:22.121050   75577 logs.go:276] 0 containers: []
	W0920 18:21:22.121057   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:22.121063   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:22.121117   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:22.153669   75577 cri.go:89] found id: ""
	I0920 18:21:22.153702   75577 logs.go:276] 0 containers: []
	W0920 18:21:22.153715   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:22.153725   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:22.153793   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:22.192179   75577 cri.go:89] found id: ""
	I0920 18:21:22.192208   75577 logs.go:276] 0 containers: []
	W0920 18:21:22.192218   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:22.192226   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:22.192294   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:22.226115   75577 cri.go:89] found id: ""
	I0920 18:21:22.226142   75577 logs.go:276] 0 containers: []
	W0920 18:21:22.226153   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:22.226161   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:22.226231   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:22.259032   75577 cri.go:89] found id: ""
	I0920 18:21:22.259059   75577 logs.go:276] 0 containers: []
	W0920 18:21:22.259070   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:22.259080   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:22.259094   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:22.308989   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:22.309020   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:22.322084   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:22.322113   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:22.397567   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:22.397587   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:22.397598   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:22.476551   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:22.476596   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:20.691837   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:23.191939   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:24.024785   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:26.024824   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:22.801102   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:24.801697   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:26.801789   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:25.017659   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:25.032637   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:25.032698   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:25.067671   75577 cri.go:89] found id: ""
	I0920 18:21:25.067701   75577 logs.go:276] 0 containers: []
	W0920 18:21:25.067711   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:25.067718   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:25.067774   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:25.109354   75577 cri.go:89] found id: ""
	I0920 18:21:25.109385   75577 logs.go:276] 0 containers: []
	W0920 18:21:25.109396   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:25.109403   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:25.109463   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:25.142888   75577 cri.go:89] found id: ""
	I0920 18:21:25.142924   75577 logs.go:276] 0 containers: []
	W0920 18:21:25.142935   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:25.142942   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:25.143004   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:25.179005   75577 cri.go:89] found id: ""
	I0920 18:21:25.179032   75577 logs.go:276] 0 containers: []
	W0920 18:21:25.179043   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:25.179050   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:25.179114   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:25.211639   75577 cri.go:89] found id: ""
	I0920 18:21:25.211662   75577 logs.go:276] 0 containers: []
	W0920 18:21:25.211669   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:25.211676   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:25.211729   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:25.246678   75577 cri.go:89] found id: ""
	I0920 18:21:25.246709   75577 logs.go:276] 0 containers: []
	W0920 18:21:25.246718   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:25.246724   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:25.246780   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:25.284187   75577 cri.go:89] found id: ""
	I0920 18:21:25.284216   75577 logs.go:276] 0 containers: []
	W0920 18:21:25.284240   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:25.284247   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:25.284309   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:25.318882   75577 cri.go:89] found id: ""
	I0920 18:21:25.318917   75577 logs.go:276] 0 containers: []
	W0920 18:21:25.318929   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:25.318940   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:25.318954   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:25.357017   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:25.357046   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:25.407584   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:25.407627   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:25.421405   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:25.421437   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:25.492849   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:25.492870   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:25.492881   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:25.192551   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:27.691730   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:28.025042   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:30.025806   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:29.301637   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:31.302080   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:28.076101   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:28.088741   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:28.088805   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:28.124889   75577 cri.go:89] found id: ""
	I0920 18:21:28.124925   75577 logs.go:276] 0 containers: []
	W0920 18:21:28.124944   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:28.124952   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:28.125013   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:28.158590   75577 cri.go:89] found id: ""
	I0920 18:21:28.158615   75577 logs.go:276] 0 containers: []
	W0920 18:21:28.158623   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:28.158629   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:28.158677   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:28.195636   75577 cri.go:89] found id: ""
	I0920 18:21:28.195670   75577 logs.go:276] 0 containers: []
	W0920 18:21:28.195683   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:28.195692   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:28.195762   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:28.228714   75577 cri.go:89] found id: ""
	I0920 18:21:28.228759   75577 logs.go:276] 0 containers: []
	W0920 18:21:28.228771   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:28.228780   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:28.228840   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:28.261488   75577 cri.go:89] found id: ""
	I0920 18:21:28.261519   75577 logs.go:276] 0 containers: []
	W0920 18:21:28.261531   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:28.261538   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:28.261606   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:28.293534   75577 cri.go:89] found id: ""
	I0920 18:21:28.293560   75577 logs.go:276] 0 containers: []
	W0920 18:21:28.293568   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:28.293573   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:28.293629   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:28.330095   75577 cri.go:89] found id: ""
	I0920 18:21:28.330120   75577 logs.go:276] 0 containers: []
	W0920 18:21:28.330128   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:28.330134   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:28.330191   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:28.365643   75577 cri.go:89] found id: ""
	I0920 18:21:28.365675   75577 logs.go:276] 0 containers: []
	W0920 18:21:28.365684   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:28.365693   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:28.365712   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:28.379982   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:28.380007   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:28.456355   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:28.456386   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:28.456402   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:28.538725   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:28.538759   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:28.576067   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:28.576104   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:31.129705   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:31.145814   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:31.145919   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:31.181469   75577 cri.go:89] found id: ""
	I0920 18:21:31.181497   75577 logs.go:276] 0 containers: []
	W0920 18:21:31.181507   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:31.181514   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:31.181562   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:31.216391   75577 cri.go:89] found id: ""
	I0920 18:21:31.216416   75577 logs.go:276] 0 containers: []
	W0920 18:21:31.216426   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:31.216433   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:31.216492   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:31.250236   75577 cri.go:89] found id: ""
	I0920 18:21:31.250266   75577 logs.go:276] 0 containers: []
	W0920 18:21:31.250277   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:31.250285   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:31.250357   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:31.283406   75577 cri.go:89] found id: ""
	I0920 18:21:31.283430   75577 logs.go:276] 0 containers: []
	W0920 18:21:31.283439   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:31.283446   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:31.283519   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:31.317276   75577 cri.go:89] found id: ""
	I0920 18:21:31.317308   75577 logs.go:276] 0 containers: []
	W0920 18:21:31.317327   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:31.317335   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:31.317400   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:31.351563   75577 cri.go:89] found id: ""
	I0920 18:21:31.351588   75577 logs.go:276] 0 containers: []
	W0920 18:21:31.351595   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:31.351600   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:31.351652   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:31.385932   75577 cri.go:89] found id: ""
	I0920 18:21:31.385972   75577 logs.go:276] 0 containers: []
	W0920 18:21:31.385984   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:31.385992   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:31.386056   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:31.422679   75577 cri.go:89] found id: ""
	I0920 18:21:31.422718   75577 logs.go:276] 0 containers: []
	W0920 18:21:31.422729   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:31.422740   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:31.422755   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:31.475827   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:31.475867   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:31.489324   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:31.489354   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:31.560357   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:31.560377   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:31.560390   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:31.642137   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:31.642171   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:30.191820   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:32.692178   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:32.524953   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:35.025736   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:33.801586   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:36.301145   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:34.179063   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:34.193077   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:34.193157   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:34.227451   75577 cri.go:89] found id: ""
	I0920 18:21:34.227484   75577 logs.go:276] 0 containers: []
	W0920 18:21:34.227495   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:34.227503   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:34.227571   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:34.263740   75577 cri.go:89] found id: ""
	I0920 18:21:34.263766   75577 logs.go:276] 0 containers: []
	W0920 18:21:34.263774   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:34.263779   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:34.263838   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:34.298094   75577 cri.go:89] found id: ""
	I0920 18:21:34.298121   75577 logs.go:276] 0 containers: []
	W0920 18:21:34.298132   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:34.298139   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:34.298202   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:34.334253   75577 cri.go:89] found id: ""
	I0920 18:21:34.334281   75577 logs.go:276] 0 containers: []
	W0920 18:21:34.334290   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:34.334297   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:34.334361   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:34.370656   75577 cri.go:89] found id: ""
	I0920 18:21:34.370688   75577 logs.go:276] 0 containers: []
	W0920 18:21:34.370699   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:34.370706   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:34.370759   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:34.405658   75577 cri.go:89] found id: ""
	I0920 18:21:34.405681   75577 logs.go:276] 0 containers: []
	W0920 18:21:34.405689   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:34.405695   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:34.405748   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:34.442284   75577 cri.go:89] found id: ""
	I0920 18:21:34.442311   75577 logs.go:276] 0 containers: []
	W0920 18:21:34.442319   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:34.442325   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:34.442394   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:34.477895   75577 cri.go:89] found id: ""
	I0920 18:21:34.477927   75577 logs.go:276] 0 containers: []
	W0920 18:21:34.477939   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:34.477950   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:34.477964   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:34.531225   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:34.531256   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:34.544325   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:34.544359   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:34.615914   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:34.615933   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:34.615948   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:34.692119   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:34.692160   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:37.228930   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:37.242070   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:37.242151   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:37.276489   75577 cri.go:89] found id: ""
	I0920 18:21:37.276531   75577 logs.go:276] 0 containers: []
	W0920 18:21:37.276543   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:37.276551   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:37.276617   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:37.311106   75577 cri.go:89] found id: ""
	I0920 18:21:37.311140   75577 logs.go:276] 0 containers: []
	W0920 18:21:37.311148   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:37.311156   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:37.311217   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:37.343147   75577 cri.go:89] found id: ""
	I0920 18:21:37.343173   75577 logs.go:276] 0 containers: []
	W0920 18:21:37.343181   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:37.343188   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:37.343237   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:37.378858   75577 cri.go:89] found id: ""
	I0920 18:21:37.378934   75577 logs.go:276] 0 containers: []
	W0920 18:21:37.378947   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:37.378955   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:37.379005   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:37.412321   75577 cri.go:89] found id: ""
	I0920 18:21:37.412355   75577 logs.go:276] 0 containers: []
	W0920 18:21:37.412366   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:37.412374   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:37.412443   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:37.447478   75577 cri.go:89] found id: ""
	I0920 18:21:37.447510   75577 logs.go:276] 0 containers: []
	W0920 18:21:37.447520   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:37.447526   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:37.447580   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:37.481172   75577 cri.go:89] found id: ""
	I0920 18:21:37.481201   75577 logs.go:276] 0 containers: []
	W0920 18:21:37.481209   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:37.481216   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:37.481269   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:37.518001   75577 cri.go:89] found id: ""
	I0920 18:21:37.518032   75577 logs.go:276] 0 containers: []
	W0920 18:21:37.518041   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:37.518050   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:37.518062   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:37.567675   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:37.567707   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:37.582279   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:37.582308   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:37.654514   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:37.654546   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:37.654563   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:37.735895   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:37.735929   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:35.192406   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:37.692460   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:37.524744   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:39.526068   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:42.024627   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:38.301432   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:40.302489   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:40.276416   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:40.291651   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:40.291713   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:40.328309   75577 cri.go:89] found id: ""
	I0920 18:21:40.328346   75577 logs.go:276] 0 containers: []
	W0920 18:21:40.328359   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:40.328378   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:40.328441   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:40.368126   75577 cri.go:89] found id: ""
	I0920 18:21:40.368154   75577 logs.go:276] 0 containers: []
	W0920 18:21:40.368162   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:40.368167   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:40.368217   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:40.408324   75577 cri.go:89] found id: ""
	I0920 18:21:40.408359   75577 logs.go:276] 0 containers: []
	W0920 18:21:40.408371   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:40.408380   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:40.408448   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:40.453867   75577 cri.go:89] found id: ""
	I0920 18:21:40.453892   75577 logs.go:276] 0 containers: []
	W0920 18:21:40.453900   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:40.453906   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:40.453969   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:40.500625   75577 cri.go:89] found id: ""
	I0920 18:21:40.500660   75577 logs.go:276] 0 containers: []
	W0920 18:21:40.500670   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:40.500678   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:40.500750   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:40.533998   75577 cri.go:89] found id: ""
	I0920 18:21:40.534028   75577 logs.go:276] 0 containers: []
	W0920 18:21:40.534039   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:40.534048   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:40.534111   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:40.566279   75577 cri.go:89] found id: ""
	I0920 18:21:40.566308   75577 logs.go:276] 0 containers: []
	W0920 18:21:40.566319   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:40.566326   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:40.566392   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:40.599154   75577 cri.go:89] found id: ""
	I0920 18:21:40.599179   75577 logs.go:276] 0 containers: []
	W0920 18:21:40.599186   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:40.599194   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:40.599210   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:40.668568   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:40.668596   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:40.668608   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:40.747895   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:40.747969   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:40.789568   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:40.789604   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:40.839816   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:40.839852   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:39.693537   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:42.192617   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:44.025059   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:46.524700   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:42.801135   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:44.802083   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:46.802537   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:43.354847   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:43.369077   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:43.369167   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:43.404667   75577 cri.go:89] found id: ""
	I0920 18:21:43.404698   75577 logs.go:276] 0 containers: []
	W0920 18:21:43.404710   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:43.404718   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:43.404778   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:43.440968   75577 cri.go:89] found id: ""
	I0920 18:21:43.441001   75577 logs.go:276] 0 containers: []
	W0920 18:21:43.441012   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:43.441021   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:43.441088   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:43.474481   75577 cri.go:89] found id: ""
	I0920 18:21:43.474511   75577 logs.go:276] 0 containers: []
	W0920 18:21:43.474520   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:43.474529   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:43.474592   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:43.508196   75577 cri.go:89] found id: ""
	I0920 18:21:43.508228   75577 logs.go:276] 0 containers: []
	W0920 18:21:43.508241   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:43.508248   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:43.508307   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:43.543679   75577 cri.go:89] found id: ""
	I0920 18:21:43.543710   75577 logs.go:276] 0 containers: []
	W0920 18:21:43.543721   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:43.543728   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:43.543788   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:43.577115   75577 cri.go:89] found id: ""
	I0920 18:21:43.577138   75577 logs.go:276] 0 containers: []
	W0920 18:21:43.577145   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:43.577152   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:43.577198   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:43.611552   75577 cri.go:89] found id: ""
	I0920 18:21:43.611588   75577 logs.go:276] 0 containers: []
	W0920 18:21:43.611602   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:43.611616   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:43.611685   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:43.647913   75577 cri.go:89] found id: ""
	I0920 18:21:43.647946   75577 logs.go:276] 0 containers: []
	W0920 18:21:43.647957   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:43.647969   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:43.647983   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:43.661014   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:43.661043   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:43.736584   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:43.736607   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:43.736620   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:43.814340   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:43.814380   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:43.855968   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:43.855996   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:46.410577   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:46.423733   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:46.423800   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:46.456819   75577 cri.go:89] found id: ""
	I0920 18:21:46.456847   75577 logs.go:276] 0 containers: []
	W0920 18:21:46.456855   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:46.456861   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:46.456927   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:46.493259   75577 cri.go:89] found id: ""
	I0920 18:21:46.493291   75577 logs.go:276] 0 containers: []
	W0920 18:21:46.493301   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:46.493307   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:46.493372   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:46.531200   75577 cri.go:89] found id: ""
	I0920 18:21:46.531233   75577 logs.go:276] 0 containers: []
	W0920 18:21:46.531241   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:46.531247   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:46.531294   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:46.567508   75577 cri.go:89] found id: ""
	I0920 18:21:46.567530   75577 logs.go:276] 0 containers: []
	W0920 18:21:46.567538   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:46.567544   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:46.567601   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:46.600257   75577 cri.go:89] found id: ""
	I0920 18:21:46.600290   75577 logs.go:276] 0 containers: []
	W0920 18:21:46.600303   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:46.600311   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:46.600375   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:46.635568   75577 cri.go:89] found id: ""
	I0920 18:21:46.635598   75577 logs.go:276] 0 containers: []
	W0920 18:21:46.635606   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:46.635612   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:46.635668   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:46.668257   75577 cri.go:89] found id: ""
	I0920 18:21:46.668304   75577 logs.go:276] 0 containers: []
	W0920 18:21:46.668316   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:46.668324   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:46.668392   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:46.703631   75577 cri.go:89] found id: ""
	I0920 18:21:46.703654   75577 logs.go:276] 0 containers: []
	W0920 18:21:46.703662   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:46.703671   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:46.703680   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:46.780232   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:46.780295   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:46.816623   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:46.816647   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:46.868911   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:46.868954   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:46.882716   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:46.882743   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:46.965166   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:44.692655   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:47.192969   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:48.524828   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:50.525045   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:49.301263   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:51.802343   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:49.466238   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:49.480167   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:49.480228   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:49.513507   75577 cri.go:89] found id: ""
	I0920 18:21:49.513537   75577 logs.go:276] 0 containers: []
	W0920 18:21:49.513549   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:49.513558   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:49.513620   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:49.548708   75577 cri.go:89] found id: ""
	I0920 18:21:49.548739   75577 logs.go:276] 0 containers: []
	W0920 18:21:49.548750   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:49.548758   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:49.548810   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:49.583586   75577 cri.go:89] found id: ""
	I0920 18:21:49.583614   75577 logs.go:276] 0 containers: []
	W0920 18:21:49.583625   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:49.583633   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:49.583695   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:49.617574   75577 cri.go:89] found id: ""
	I0920 18:21:49.617605   75577 logs.go:276] 0 containers: []
	W0920 18:21:49.617616   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:49.617623   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:49.617684   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:49.656517   75577 cri.go:89] found id: ""
	I0920 18:21:49.656552   75577 logs.go:276] 0 containers: []
	W0920 18:21:49.656563   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:49.656571   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:49.656634   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:49.692848   75577 cri.go:89] found id: ""
	I0920 18:21:49.692870   75577 logs.go:276] 0 containers: []
	W0920 18:21:49.692877   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:49.692883   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:49.692950   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:49.728593   75577 cri.go:89] found id: ""
	I0920 18:21:49.728620   75577 logs.go:276] 0 containers: []
	W0920 18:21:49.728630   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:49.728643   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:49.728705   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:49.764187   75577 cri.go:89] found id: ""
	I0920 18:21:49.764218   75577 logs.go:276] 0 containers: []
	W0920 18:21:49.764229   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:49.764242   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:49.764260   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:49.837741   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:49.837764   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:49.837777   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:49.914941   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:49.914978   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:49.957609   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:49.957639   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:50.012075   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:50.012115   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:52.527722   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:52.542125   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:52.542206   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:52.579884   75577 cri.go:89] found id: ""
	I0920 18:21:52.579910   75577 logs.go:276] 0 containers: []
	W0920 18:21:52.579919   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:52.579924   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:52.579982   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:52.619138   75577 cri.go:89] found id: ""
	I0920 18:21:52.619167   75577 logs.go:276] 0 containers: []
	W0920 18:21:52.619180   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:52.619188   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:52.619246   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:52.654460   75577 cri.go:89] found id: ""
	I0920 18:21:52.654498   75577 logs.go:276] 0 containers: []
	W0920 18:21:52.654508   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:52.654515   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:52.654578   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:52.708015   75577 cri.go:89] found id: ""
	I0920 18:21:52.708042   75577 logs.go:276] 0 containers: []
	W0920 18:21:52.708051   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:52.708057   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:52.708127   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:49.692309   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:52.192466   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:53.025700   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:55.026088   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:53.802620   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:55.802856   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:52.743917   75577 cri.go:89] found id: ""
	I0920 18:21:52.743952   75577 logs.go:276] 0 containers: []
	W0920 18:21:52.743964   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:52.743972   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:52.744025   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:52.783458   75577 cri.go:89] found id: ""
	I0920 18:21:52.783481   75577 logs.go:276] 0 containers: []
	W0920 18:21:52.783488   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:52.783495   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:52.783552   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:52.817716   75577 cri.go:89] found id: ""
	I0920 18:21:52.817749   75577 logs.go:276] 0 containers: []
	W0920 18:21:52.817762   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:52.817771   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:52.817882   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:52.857141   75577 cri.go:89] found id: ""
	I0920 18:21:52.857169   75577 logs.go:276] 0 containers: []
	W0920 18:21:52.857180   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:52.857190   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:52.857204   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:52.910555   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:52.910597   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:52.923843   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:52.923873   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:52.994263   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:52.994296   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:52.994313   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:53.079782   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:53.079829   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:55.619418   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:55.633854   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:55.633922   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:55.669778   75577 cri.go:89] found id: ""
	I0920 18:21:55.669813   75577 logs.go:276] 0 containers: []
	W0920 18:21:55.669824   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:55.669853   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:55.669920   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:55.705687   75577 cri.go:89] found id: ""
	I0920 18:21:55.705724   75577 logs.go:276] 0 containers: []
	W0920 18:21:55.705736   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:55.705746   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:55.705811   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:55.739249   75577 cri.go:89] found id: ""
	I0920 18:21:55.739288   75577 logs.go:276] 0 containers: []
	W0920 18:21:55.739296   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:55.739302   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:55.739438   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:55.775077   75577 cri.go:89] found id: ""
	I0920 18:21:55.775109   75577 logs.go:276] 0 containers: []
	W0920 18:21:55.775121   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:55.775130   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:55.775181   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:55.810291   75577 cri.go:89] found id: ""
	I0920 18:21:55.810329   75577 logs.go:276] 0 containers: []
	W0920 18:21:55.810340   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:55.810349   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:55.810415   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:55.844440   75577 cri.go:89] found id: ""
	I0920 18:21:55.844468   75577 logs.go:276] 0 containers: []
	W0920 18:21:55.844476   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:55.844482   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:55.844551   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:55.878611   75577 cri.go:89] found id: ""
	I0920 18:21:55.878647   75577 logs.go:276] 0 containers: []
	W0920 18:21:55.878659   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:55.878667   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:55.878733   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:55.922064   75577 cri.go:89] found id: ""
	I0920 18:21:55.922101   75577 logs.go:276] 0 containers: []
	W0920 18:21:55.922114   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:55.922127   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:55.922147   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:55.979406   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:55.979445   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:55.993082   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:55.993111   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:56.065650   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:21:56.065701   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:56.065717   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:56.142640   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:56.142684   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:54.691482   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:56.691912   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:57.525280   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:00.025224   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:02.025722   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:58.300359   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:00.301252   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:02.302209   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:58.683524   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:58.698775   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:58.698833   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:58.737369   75577 cri.go:89] found id: ""
	I0920 18:21:58.737398   75577 logs.go:276] 0 containers: []
	W0920 18:21:58.737409   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:21:58.737417   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:58.737476   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:58.779504   75577 cri.go:89] found id: ""
	I0920 18:21:58.779533   75577 logs.go:276] 0 containers: []
	W0920 18:21:58.779544   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:21:58.779552   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:58.779610   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:58.814363   75577 cri.go:89] found id: ""
	I0920 18:21:58.814393   75577 logs.go:276] 0 containers: []
	W0920 18:21:58.814401   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:21:58.814407   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:58.814454   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:58.846219   75577 cri.go:89] found id: ""
	I0920 18:21:58.846242   75577 logs.go:276] 0 containers: []
	W0920 18:21:58.846251   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:21:58.846256   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:58.846307   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:58.882387   75577 cri.go:89] found id: ""
	I0920 18:21:58.882423   75577 logs.go:276] 0 containers: []
	W0920 18:21:58.882431   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:21:58.882437   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:58.882497   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:58.918632   75577 cri.go:89] found id: ""
	I0920 18:21:58.918668   75577 logs.go:276] 0 containers: []
	W0920 18:21:58.918679   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:21:58.918686   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:58.918751   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:58.953521   75577 cri.go:89] found id: ""
	I0920 18:21:58.953547   75577 logs.go:276] 0 containers: []
	W0920 18:21:58.953557   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:21:58.953564   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:21:58.953624   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:21:58.987412   75577 cri.go:89] found id: ""
	I0920 18:21:58.987439   75577 logs.go:276] 0 containers: []
	W0920 18:21:58.987447   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:21:58.987457   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:58.987471   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:59.077108   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:21:59.077169   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:59.141723   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:59.141758   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:59.208035   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:59.208081   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:59.221380   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:59.221404   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:21:59.297151   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:22:01.797684   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:22:01.812275   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:22:01.812338   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:22:01.846431   75577 cri.go:89] found id: ""
	I0920 18:22:01.846459   75577 logs.go:276] 0 containers: []
	W0920 18:22:01.846477   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:22:01.846483   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:22:01.846535   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:22:01.889016   75577 cri.go:89] found id: ""
	I0920 18:22:01.889049   75577 logs.go:276] 0 containers: []
	W0920 18:22:01.889061   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:22:01.889069   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:22:01.889144   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:22:01.926048   75577 cri.go:89] found id: ""
	I0920 18:22:01.926078   75577 logs.go:276] 0 containers: []
	W0920 18:22:01.926090   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:22:01.926108   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:22:01.926185   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:22:01.964510   75577 cri.go:89] found id: ""
	I0920 18:22:01.964536   75577 logs.go:276] 0 containers: []
	W0920 18:22:01.964547   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:22:01.964555   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:22:01.964624   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:22:02.005549   75577 cri.go:89] found id: ""
	I0920 18:22:02.005575   75577 logs.go:276] 0 containers: []
	W0920 18:22:02.005583   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:22:02.005588   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:22:02.005642   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:22:02.047359   75577 cri.go:89] found id: ""
	I0920 18:22:02.047385   75577 logs.go:276] 0 containers: []
	W0920 18:22:02.047393   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:22:02.047399   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:22:02.047455   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:22:02.082961   75577 cri.go:89] found id: ""
	I0920 18:22:02.082999   75577 logs.go:276] 0 containers: []
	W0920 18:22:02.083008   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:22:02.083015   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:22:02.083062   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:22:02.117696   75577 cri.go:89] found id: ""
	I0920 18:22:02.117732   75577 logs.go:276] 0 containers: []
	W0920 18:22:02.117741   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:22:02.117753   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:22:02.117769   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:22:02.131282   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:22:02.131311   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:22:02.200738   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:22:02.200760   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:22:02.200776   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:22:02.281162   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:22:02.281200   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:22:02.321854   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:22:02.321888   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:59.191434   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:01.192475   75086 pod_ready.go:103] pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:01.685900   75086 pod_ready.go:82] duration metric: took 4m0.000752648s for pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace to be "Ready" ...
	E0920 18:22:01.685945   75086 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-dwnt6" in "kube-system" namespace to be "Ready" (will not retry!)
	I0920 18:22:01.685972   75086 pod_ready.go:39] duration metric: took 4m13.051752581s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:22:01.686033   75086 kubeadm.go:597] duration metric: took 4m20.498687791s to restartPrimaryControlPlane
	W0920 18:22:01.686114   75086 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0920 18:22:01.686150   75086 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0920 18:22:04.026729   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:06.524404   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:04.803249   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:07.301696   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:04.874353   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:22:04.889710   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:22:04.889773   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:22:04.945719   75577 cri.go:89] found id: ""
	I0920 18:22:04.945745   75577 logs.go:276] 0 containers: []
	W0920 18:22:04.945753   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:22:04.945759   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:22:04.945802   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:22:04.995492   75577 cri.go:89] found id: ""
	I0920 18:22:04.995535   75577 logs.go:276] 0 containers: []
	W0920 18:22:04.995547   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:22:04.995555   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:22:04.995615   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:22:05.037827   75577 cri.go:89] found id: ""
	I0920 18:22:05.037871   75577 logs.go:276] 0 containers: []
	W0920 18:22:05.037882   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:22:05.037888   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:22:05.037935   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:22:05.077652   75577 cri.go:89] found id: ""
	I0920 18:22:05.077691   75577 logs.go:276] 0 containers: []
	W0920 18:22:05.077704   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:22:05.077712   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:22:05.077772   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:22:05.114472   75577 cri.go:89] found id: ""
	I0920 18:22:05.114509   75577 logs.go:276] 0 containers: []
	W0920 18:22:05.114520   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:22:05.114527   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:22:05.114590   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:22:05.150809   75577 cri.go:89] found id: ""
	I0920 18:22:05.150841   75577 logs.go:276] 0 containers: []
	W0920 18:22:05.150853   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:22:05.150860   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:22:05.150908   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:22:05.188880   75577 cri.go:89] found id: ""
	I0920 18:22:05.188910   75577 logs.go:276] 0 containers: []
	W0920 18:22:05.188921   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:22:05.188929   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:22:05.188990   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:22:05.224990   75577 cri.go:89] found id: ""
	I0920 18:22:05.225015   75577 logs.go:276] 0 containers: []
	W0920 18:22:05.225023   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:22:05.225032   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:22:05.225043   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:22:05.298000   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:22:05.298023   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:22:05.298037   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:22:05.382969   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:22:05.383008   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:22:05.424911   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:22:05.424940   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:22:05.475988   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:22:05.476024   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:22:08.525098   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:10.525321   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:09.801936   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:12.301073   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:07.990660   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:22:08.005572   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:22:08.005637   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:22:08.044354   75577 cri.go:89] found id: ""
	I0920 18:22:08.044381   75577 logs.go:276] 0 containers: []
	W0920 18:22:08.044390   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:22:08.044401   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:22:08.044449   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:22:08.082899   75577 cri.go:89] found id: ""
	I0920 18:22:08.082928   75577 logs.go:276] 0 containers: []
	W0920 18:22:08.082939   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:22:08.082948   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:22:08.083009   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:22:08.129297   75577 cri.go:89] found id: ""
	I0920 18:22:08.129325   75577 logs.go:276] 0 containers: []
	W0920 18:22:08.129335   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:22:08.129343   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:22:08.129404   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:22:08.168744   75577 cri.go:89] found id: ""
	I0920 18:22:08.168774   75577 logs.go:276] 0 containers: []
	W0920 18:22:08.168787   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:22:08.168792   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:22:08.168849   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:22:08.203830   75577 cri.go:89] found id: ""
	I0920 18:22:08.203860   75577 logs.go:276] 0 containers: []
	W0920 18:22:08.203871   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:22:08.203879   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:22:08.203940   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:22:08.238226   75577 cri.go:89] found id: ""
	I0920 18:22:08.238250   75577 logs.go:276] 0 containers: []
	W0920 18:22:08.238258   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:22:08.238263   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:22:08.238331   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:22:08.271457   75577 cri.go:89] found id: ""
	I0920 18:22:08.271485   75577 logs.go:276] 0 containers: []
	W0920 18:22:08.271495   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:22:08.271502   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:22:08.271563   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:22:08.306661   75577 cri.go:89] found id: ""
	I0920 18:22:08.306692   75577 logs.go:276] 0 containers: []
	W0920 18:22:08.306703   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:22:08.306712   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:22:08.306730   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:22:08.357760   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:22:08.357793   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:22:08.372625   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:22:08.372660   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:22:08.442477   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:22:08.442501   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:22:08.442517   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:22:08.528381   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:22:08.528412   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:22:11.064842   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:22:11.078086   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:22:11.078158   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:22:11.113454   75577 cri.go:89] found id: ""
	I0920 18:22:11.113485   75577 logs.go:276] 0 containers: []
	W0920 18:22:11.113497   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:22:11.113511   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:22:11.113575   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:22:11.148039   75577 cri.go:89] found id: ""
	I0920 18:22:11.148064   75577 logs.go:276] 0 containers: []
	W0920 18:22:11.148072   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:22:11.148078   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:22:11.148127   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:22:11.182667   75577 cri.go:89] found id: ""
	I0920 18:22:11.182697   75577 logs.go:276] 0 containers: []
	W0920 18:22:11.182708   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:22:11.182715   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:22:11.182776   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:22:11.217022   75577 cri.go:89] found id: ""
	I0920 18:22:11.217055   75577 logs.go:276] 0 containers: []
	W0920 18:22:11.217067   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:22:11.217075   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:22:11.217141   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:22:11.253970   75577 cri.go:89] found id: ""
	I0920 18:22:11.254001   75577 logs.go:276] 0 containers: []
	W0920 18:22:11.254012   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:22:11.254019   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:22:11.254085   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:22:11.288070   75577 cri.go:89] found id: ""
	I0920 18:22:11.288103   75577 logs.go:276] 0 containers: []
	W0920 18:22:11.288114   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:22:11.288121   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:22:11.288189   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:22:11.324215   75577 cri.go:89] found id: ""
	I0920 18:22:11.324240   75577 logs.go:276] 0 containers: []
	W0920 18:22:11.324254   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:22:11.324261   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:22:11.324319   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:22:11.359377   75577 cri.go:89] found id: ""
	I0920 18:22:11.359406   75577 logs.go:276] 0 containers: []
	W0920 18:22:11.359414   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:22:11.359423   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:22:11.359433   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:22:11.416479   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:22:11.416520   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:22:11.430240   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:22:11.430269   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:22:11.501531   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:22:11.501553   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:22:11.501565   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:22:11.580748   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:22:11.580787   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:22:13.024330   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:15.025065   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:17.026642   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:14.301652   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:16.802017   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:14.119084   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:22:14.132428   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:22:14.132505   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:22:14.167674   75577 cri.go:89] found id: ""
	I0920 18:22:14.167699   75577 logs.go:276] 0 containers: []
	W0920 18:22:14.167710   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:22:14.167717   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:22:14.167772   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:22:14.204570   75577 cri.go:89] found id: ""
	I0920 18:22:14.204595   75577 logs.go:276] 0 containers: []
	W0920 18:22:14.204603   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:22:14.204608   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:22:14.204655   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:22:14.241687   75577 cri.go:89] found id: ""
	I0920 18:22:14.241716   75577 logs.go:276] 0 containers: []
	W0920 18:22:14.241724   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:22:14.241731   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:22:14.241789   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:22:14.275776   75577 cri.go:89] found id: ""
	I0920 18:22:14.275802   75577 logs.go:276] 0 containers: []
	W0920 18:22:14.275810   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:22:14.275816   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:22:14.275872   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:22:14.309565   75577 cri.go:89] found id: ""
	I0920 18:22:14.309589   75577 logs.go:276] 0 containers: []
	W0920 18:22:14.309596   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:22:14.309602   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:22:14.309662   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:22:14.341858   75577 cri.go:89] found id: ""
	I0920 18:22:14.341884   75577 logs.go:276] 0 containers: []
	W0920 18:22:14.341892   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:22:14.341898   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:22:14.341963   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:22:14.377863   75577 cri.go:89] found id: ""
	I0920 18:22:14.377896   75577 logs.go:276] 0 containers: []
	W0920 18:22:14.377906   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:22:14.377912   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:22:14.377988   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:22:14.413182   75577 cri.go:89] found id: ""
	I0920 18:22:14.413207   75577 logs.go:276] 0 containers: []
	W0920 18:22:14.413214   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:22:14.413236   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:22:14.413253   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:22:14.466782   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:22:14.466820   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:22:14.480627   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:22:14.480668   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:22:14.552071   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:22:14.552104   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:22:14.552121   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:22:14.626481   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:22:14.626518   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:22:17.163264   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:22:17.181609   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:22:17.181684   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:22:17.219871   75577 cri.go:89] found id: ""
	I0920 18:22:17.219899   75577 logs.go:276] 0 containers: []
	W0920 18:22:17.219923   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:22:17.219932   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:22:17.220013   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:22:17.256315   75577 cri.go:89] found id: ""
	I0920 18:22:17.256346   75577 logs.go:276] 0 containers: []
	W0920 18:22:17.256356   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:22:17.256364   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:22:17.256434   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:22:17.294326   75577 cri.go:89] found id: ""
	I0920 18:22:17.294352   75577 logs.go:276] 0 containers: []
	W0920 18:22:17.294360   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:22:17.294366   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:22:17.294421   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:22:17.336461   75577 cri.go:89] found id: ""
	I0920 18:22:17.336491   75577 logs.go:276] 0 containers: []
	W0920 18:22:17.336500   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:22:17.336506   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:22:17.336568   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:22:17.376242   75577 cri.go:89] found id: ""
	I0920 18:22:17.376283   75577 logs.go:276] 0 containers: []
	W0920 18:22:17.376295   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:22:17.376302   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:22:17.376363   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:22:17.409147   75577 cri.go:89] found id: ""
	I0920 18:22:17.409180   75577 logs.go:276] 0 containers: []
	W0920 18:22:17.409190   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:22:17.409198   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:22:17.409269   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:22:17.444698   75577 cri.go:89] found id: ""
	I0920 18:22:17.444725   75577 logs.go:276] 0 containers: []
	W0920 18:22:17.444734   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:22:17.444738   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:22:17.444791   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:22:17.485914   75577 cri.go:89] found id: ""
	I0920 18:22:17.485948   75577 logs.go:276] 0 containers: []
	W0920 18:22:17.485959   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:22:17.485970   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:22:17.485983   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:22:17.523567   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:22:17.523601   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:22:17.588864   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:22:17.588905   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:22:17.605052   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:22:17.605092   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:22:17.677012   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:22:17.677039   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:22:17.677056   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:22:19.525154   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:22.025422   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:18.802052   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:20.804024   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:20.252258   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:22:20.267607   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:22:20.267698   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:22:20.302260   75577 cri.go:89] found id: ""
	I0920 18:22:20.302291   75577 logs.go:276] 0 containers: []
	W0920 18:22:20.302301   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:22:20.302309   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:22:20.302373   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:22:20.335343   75577 cri.go:89] found id: ""
	I0920 18:22:20.335377   75577 logs.go:276] 0 containers: []
	W0920 18:22:20.335389   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:22:20.335397   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:22:20.335460   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:22:20.369612   75577 cri.go:89] found id: ""
	I0920 18:22:20.369641   75577 logs.go:276] 0 containers: []
	W0920 18:22:20.369649   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:22:20.369655   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:22:20.369703   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:22:20.403703   75577 cri.go:89] found id: ""
	I0920 18:22:20.403732   75577 logs.go:276] 0 containers: []
	W0920 18:22:20.403740   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:22:20.403746   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:22:20.403804   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:22:20.436288   75577 cri.go:89] found id: ""
	I0920 18:22:20.436316   75577 logs.go:276] 0 containers: []
	W0920 18:22:20.436328   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:22:20.436336   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:22:20.436399   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:22:20.471536   75577 cri.go:89] found id: ""
	I0920 18:22:20.471572   75577 logs.go:276] 0 containers: []
	W0920 18:22:20.471584   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:22:20.471593   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:22:20.471657   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:22:20.506012   75577 cri.go:89] found id: ""
	I0920 18:22:20.506107   75577 logs.go:276] 0 containers: []
	W0920 18:22:20.506134   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:22:20.506143   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:22:20.506199   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:22:20.542614   75577 cri.go:89] found id: ""
	I0920 18:22:20.542650   75577 logs.go:276] 0 containers: []
	W0920 18:22:20.542660   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:22:20.542671   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:22:20.542687   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:22:20.596316   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:22:20.596357   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:22:20.610438   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:22:20.610465   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:22:20.688061   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:22:20.688079   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:22:20.688091   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:22:20.768249   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:22:20.768296   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:22:24.525259   75264 pod_ready.go:103] pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:25.524857   75264 pod_ready.go:82] duration metric: took 4m0.006563805s for pod "metrics-server-6867b74b74-vtl79" in "kube-system" namespace to be "Ready" ...
	E0920 18:22:25.524883   75264 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0920 18:22:25.524891   75264 pod_ready.go:39] duration metric: took 4m8.542422056s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:22:25.524906   75264 api_server.go:52] waiting for apiserver process to appear ...
	I0920 18:22:25.524977   75264 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:22:25.525029   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:22:25.579894   75264 cri.go:89] found id: "0ea0cfbd9902ae8e3797c77592bfe05a5f8162f75a391f5c2dc992eb5154c4ad"
	I0920 18:22:25.579914   75264 cri.go:89] found id: ""
	I0920 18:22:25.579923   75264 logs.go:276] 1 containers: [0ea0cfbd9902ae8e3797c77592bfe05a5f8162f75a391f5c2dc992eb5154c4ad]
	I0920 18:22:25.579979   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:25.585095   75264 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:22:25.585176   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:22:25.621769   75264 cri.go:89] found id: "65da0bae1c849a395cc241962bb6cbdf9f3edf11cdc95f4495ac2e427f9602d4"
	I0920 18:22:25.621796   75264 cri.go:89] found id: ""
	I0920 18:22:25.621805   75264 logs.go:276] 1 containers: [65da0bae1c849a395cc241962bb6cbdf9f3edf11cdc95f4495ac2e427f9602d4]
	I0920 18:22:25.621881   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:25.626318   75264 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:22:25.626400   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:22:25.662732   75264 cri.go:89] found id: "606f7c8a9a095f499ae422de76f76105607db72e57b9b0a6db3e3c5a81656c5c"
	I0920 18:22:25.662753   75264 cri.go:89] found id: ""
	I0920 18:22:25.662760   75264 logs.go:276] 1 containers: [606f7c8a9a095f499ae422de76f76105607db72e57b9b0a6db3e3c5a81656c5c]
	I0920 18:22:25.662818   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:25.667212   75264 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:22:25.667299   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:22:25.703639   75264 cri.go:89] found id: "6ba313deffc6185e59dac50051e933f6b2ea70d630cf36b2b8c2c07042f793ba"
	I0920 18:22:25.703663   75264 cri.go:89] found id: ""
	I0920 18:22:25.703670   75264 logs.go:276] 1 containers: [6ba313deffc6185e59dac50051e933f6b2ea70d630cf36b2b8c2c07042f793ba]
	I0920 18:22:25.703721   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:25.708034   75264 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:22:25.708115   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:22:25.745358   75264 cri.go:89] found id: "702f7f440eb6016c2c758a6be4e1072274ab6dcd169d7a081e5464869a7c299c"
	I0920 18:22:25.745387   75264 cri.go:89] found id: ""
	I0920 18:22:25.745397   75264 logs.go:276] 1 containers: [702f7f440eb6016c2c758a6be4e1072274ab6dcd169d7a081e5464869a7c299c]
	I0920 18:22:25.745455   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:25.749702   75264 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:22:25.749787   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:22:25.790592   75264 cri.go:89] found id: "4ca303b795795bd920f261c88630f30fd51a91b9694a9a6e67af8164bab8242b"
	I0920 18:22:25.790615   75264 cri.go:89] found id: ""
	I0920 18:22:25.790623   75264 logs.go:276] 1 containers: [4ca303b795795bd920f261c88630f30fd51a91b9694a9a6e67af8164bab8242b]
	I0920 18:22:25.790686   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:25.794993   75264 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:22:25.795062   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:22:25.836621   75264 cri.go:89] found id: ""
	I0920 18:22:25.836646   75264 logs.go:276] 0 containers: []
	W0920 18:22:25.836654   75264 logs.go:278] No container was found matching "kindnet"
	I0920 18:22:25.836661   75264 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0920 18:22:25.836723   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0920 18:22:25.874191   75264 cri.go:89] found id: "001bdc98537f0917f021c14a5d0733bd31cf19be1b4862502458a7a90e4300da"
	I0920 18:22:25.874215   75264 cri.go:89] found id: "c42201b6e3d55b6fd8beaf04d416658b228bf627e519ddb022dad6d17219cc1c"
	I0920 18:22:25.874220   75264 cri.go:89] found id: ""
	I0920 18:22:25.874229   75264 logs.go:276] 2 containers: [001bdc98537f0917f021c14a5d0733bd31cf19be1b4862502458a7a90e4300da c42201b6e3d55b6fd8beaf04d416658b228bf627e519ddb022dad6d17219cc1c]
	I0920 18:22:25.874316   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:25.878788   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:25.882840   75264 logs.go:123] Gathering logs for kube-proxy [702f7f440eb6016c2c758a6be4e1072274ab6dcd169d7a081e5464869a7c299c] ...
	I0920 18:22:25.882869   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 702f7f440eb6016c2c758a6be4e1072274ab6dcd169d7a081e5464869a7c299c"
	I0920 18:22:25.921609   75264 logs.go:123] Gathering logs for kube-controller-manager [4ca303b795795bd920f261c88630f30fd51a91b9694a9a6e67af8164bab8242b] ...
	I0920 18:22:25.921640   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ca303b795795bd920f261c88630f30fd51a91b9694a9a6e67af8164bab8242b"
	I0920 18:22:25.987199   75264 logs.go:123] Gathering logs for kubelet ...
	I0920 18:22:25.987236   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:22:26.061857   75264 logs.go:123] Gathering logs for kube-apiserver [0ea0cfbd9902ae8e3797c77592bfe05a5f8162f75a391f5c2dc992eb5154c4ad] ...
	I0920 18:22:26.061897   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ea0cfbd9902ae8e3797c77592bfe05a5f8162f75a391f5c2dc992eb5154c4ad"
	I0920 18:22:26.104901   75264 logs.go:123] Gathering logs for etcd [65da0bae1c849a395cc241962bb6cbdf9f3edf11cdc95f4495ac2e427f9602d4] ...
	I0920 18:22:26.104935   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 65da0bae1c849a395cc241962bb6cbdf9f3edf11cdc95f4495ac2e427f9602d4"
	I0920 18:22:26.160795   75264 logs.go:123] Gathering logs for kube-scheduler [6ba313deffc6185e59dac50051e933f6b2ea70d630cf36b2b8c2c07042f793ba] ...
	I0920 18:22:26.160833   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6ba313deffc6185e59dac50051e933f6b2ea70d630cf36b2b8c2c07042f793ba"
	I0920 18:22:26.199286   75264 logs.go:123] Gathering logs for storage-provisioner [001bdc98537f0917f021c14a5d0733bd31cf19be1b4862502458a7a90e4300da] ...
	I0920 18:22:26.199316   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 001bdc98537f0917f021c14a5d0733bd31cf19be1b4862502458a7a90e4300da"
	I0920 18:22:26.239560   75264 logs.go:123] Gathering logs for storage-provisioner [c42201b6e3d55b6fd8beaf04d416658b228bf627e519ddb022dad6d17219cc1c] ...
	I0920 18:22:26.239591   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c42201b6e3d55b6fd8beaf04d416658b228bf627e519ddb022dad6d17219cc1c"
	I0920 18:22:26.275424   75264 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:22:26.275450   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:22:26.786849   75264 logs.go:123] Gathering logs for container status ...
	I0920 18:22:26.786895   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:22:26.848751   75264 logs.go:123] Gathering logs for dmesg ...
	I0920 18:22:26.848783   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:22:26.867329   75264 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:22:26.867375   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 18:22:27.017385   75264 logs.go:123] Gathering logs for coredns [606f7c8a9a095f499ae422de76f76105607db72e57b9b0a6db3e3c5a81656c5c] ...
	I0920 18:22:27.017419   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 606f7c8a9a095f499ae422de76f76105607db72e57b9b0a6db3e3c5a81656c5c"
	I0920 18:22:23.300658   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:25.302131   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:27.303929   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:23.307149   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:22:23.319889   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:22:23.319972   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:22:23.356613   75577 cri.go:89] found id: ""
	I0920 18:22:23.356645   75577 logs.go:276] 0 containers: []
	W0920 18:22:23.356656   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:22:23.356663   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:22:23.356728   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:22:23.391632   75577 cri.go:89] found id: ""
	I0920 18:22:23.391663   75577 logs.go:276] 0 containers: []
	W0920 18:22:23.391675   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:22:23.391683   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:22:23.391748   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:22:23.426908   75577 cri.go:89] found id: ""
	I0920 18:22:23.426936   75577 logs.go:276] 0 containers: []
	W0920 18:22:23.426946   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:22:23.426952   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:22:23.427013   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:22:23.461890   75577 cri.go:89] found id: ""
	I0920 18:22:23.461925   75577 logs.go:276] 0 containers: []
	W0920 18:22:23.461938   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:22:23.461947   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:22:23.462014   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:22:23.503517   75577 cri.go:89] found id: ""
	I0920 18:22:23.503549   75577 logs.go:276] 0 containers: []
	W0920 18:22:23.503560   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:22:23.503566   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:22:23.503615   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:22:23.540699   75577 cri.go:89] found id: ""
	I0920 18:22:23.540722   75577 logs.go:276] 0 containers: []
	W0920 18:22:23.540731   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:22:23.540736   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:22:23.540783   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:22:23.575485   75577 cri.go:89] found id: ""
	I0920 18:22:23.575509   75577 logs.go:276] 0 containers: []
	W0920 18:22:23.575517   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:22:23.575523   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:22:23.575576   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:22:23.610998   75577 cri.go:89] found id: ""
	I0920 18:22:23.611028   75577 logs.go:276] 0 containers: []
	W0920 18:22:23.611039   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:22:23.611051   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:22:23.611065   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:22:23.687072   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:22:23.687109   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:22:23.724790   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:22:23.724824   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:22:23.778905   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:22:23.778945   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:22:23.794298   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:22:23.794329   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:22:23.873341   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:22:26.374541   75577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:22:26.388151   75577 kubeadm.go:597] duration metric: took 4m2.956358154s to restartPrimaryControlPlane
	W0920 18:22:26.388229   75577 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0920 18:22:26.388260   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0920 18:22:27.454521   75577 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.066231187s)
	I0920 18:22:27.454602   75577 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:22:27.469409   75577 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 18:22:27.480314   75577 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 18:22:27.491389   75577 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 18:22:27.491416   75577 kubeadm.go:157] found existing configuration files:
	
	I0920 18:22:27.491468   75577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 18:22:27.501448   75577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 18:22:27.501534   75577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 18:22:27.511370   75577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 18:22:27.521767   75577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 18:22:27.521855   75577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 18:22:27.531717   75577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 18:22:27.540517   75577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 18:22:27.540575   75577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 18:22:27.549644   75577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 18:22:27.558412   75577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 18:22:27.558480   75577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 18:22:27.567702   75577 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 18:22:28.070939   75086 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.384759009s)
	I0920 18:22:28.071025   75086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:22:28.087712   75086 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 18:22:28.097709   75086 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 18:22:28.107217   75086 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 18:22:28.107243   75086 kubeadm.go:157] found existing configuration files:
	
	I0920 18:22:28.107297   75086 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 18:22:28.116947   75086 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 18:22:28.117013   75086 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 18:22:28.126559   75086 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 18:22:28.135261   75086 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 18:22:28.135338   75086 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 18:22:28.144456   75086 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 18:22:28.153412   75086 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 18:22:28.153465   75086 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 18:22:28.165825   75086 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 18:22:28.175003   75086 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 18:22:28.175068   75086 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 18:22:28.184917   75086 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 18:22:28.236825   75086 kubeadm.go:310] W0920 18:22:28.216092    2540 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 18:22:28.237654   75086 kubeadm.go:310] W0920 18:22:28.216986    2540 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 18:22:28.352695   75086 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 18:22:29.557496   75264 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:22:29.578981   75264 api_server.go:72] duration metric: took 4m20.322735753s to wait for apiserver process to appear ...
	I0920 18:22:29.579009   75264 api_server.go:88] waiting for apiserver healthz status ...
	I0920 18:22:29.579044   75264 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:22:29.579093   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:22:29.616252   75264 cri.go:89] found id: "0ea0cfbd9902ae8e3797c77592bfe05a5f8162f75a391f5c2dc992eb5154c4ad"
	I0920 18:22:29.616281   75264 cri.go:89] found id: ""
	I0920 18:22:29.616292   75264 logs.go:276] 1 containers: [0ea0cfbd9902ae8e3797c77592bfe05a5f8162f75a391f5c2dc992eb5154c4ad]
	I0920 18:22:29.616345   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:29.620605   75264 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:22:29.620678   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:22:29.658077   75264 cri.go:89] found id: "65da0bae1c849a395cc241962bb6cbdf9f3edf11cdc95f4495ac2e427f9602d4"
	I0920 18:22:29.658106   75264 cri.go:89] found id: ""
	I0920 18:22:29.658114   75264 logs.go:276] 1 containers: [65da0bae1c849a395cc241962bb6cbdf9f3edf11cdc95f4495ac2e427f9602d4]
	I0920 18:22:29.658170   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:29.662227   75264 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:22:29.662288   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:22:29.695906   75264 cri.go:89] found id: "606f7c8a9a095f499ae422de76f76105607db72e57b9b0a6db3e3c5a81656c5c"
	I0920 18:22:29.695927   75264 cri.go:89] found id: ""
	I0920 18:22:29.695934   75264 logs.go:276] 1 containers: [606f7c8a9a095f499ae422de76f76105607db72e57b9b0a6db3e3c5a81656c5c]
	I0920 18:22:29.695979   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:29.699986   75264 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:22:29.700083   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:22:29.736560   75264 cri.go:89] found id: "6ba313deffc6185e59dac50051e933f6b2ea70d630cf36b2b8c2c07042f793ba"
	I0920 18:22:29.736589   75264 cri.go:89] found id: ""
	I0920 18:22:29.736600   75264 logs.go:276] 1 containers: [6ba313deffc6185e59dac50051e933f6b2ea70d630cf36b2b8c2c07042f793ba]
	I0920 18:22:29.736660   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:29.740751   75264 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:22:29.740803   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:22:29.778930   75264 cri.go:89] found id: "702f7f440eb6016c2c758a6be4e1072274ab6dcd169d7a081e5464869a7c299c"
	I0920 18:22:29.778952   75264 cri.go:89] found id: ""
	I0920 18:22:29.778959   75264 logs.go:276] 1 containers: [702f7f440eb6016c2c758a6be4e1072274ab6dcd169d7a081e5464869a7c299c]
	I0920 18:22:29.779011   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:29.783022   75264 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:22:29.783092   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:22:29.829491   75264 cri.go:89] found id: "4ca303b795795bd920f261c88630f30fd51a91b9694a9a6e67af8164bab8242b"
	I0920 18:22:29.829521   75264 cri.go:89] found id: ""
	I0920 18:22:29.829531   75264 logs.go:276] 1 containers: [4ca303b795795bd920f261c88630f30fd51a91b9694a9a6e67af8164bab8242b]
	I0920 18:22:29.829597   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:29.833853   75264 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:22:29.833924   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:22:29.872376   75264 cri.go:89] found id: ""
	I0920 18:22:29.872404   75264 logs.go:276] 0 containers: []
	W0920 18:22:29.872412   75264 logs.go:278] No container was found matching "kindnet"
	I0920 18:22:29.872419   75264 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0920 18:22:29.872482   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0920 18:22:29.923096   75264 cri.go:89] found id: "001bdc98537f0917f021c14a5d0733bd31cf19be1b4862502458a7a90e4300da"
	I0920 18:22:29.923136   75264 cri.go:89] found id: "c42201b6e3d55b6fd8beaf04d416658b228bf627e519ddb022dad6d17219cc1c"
	I0920 18:22:29.923143   75264 cri.go:89] found id: ""
	I0920 18:22:29.923152   75264 logs.go:276] 2 containers: [001bdc98537f0917f021c14a5d0733bd31cf19be1b4862502458a7a90e4300da c42201b6e3d55b6fd8beaf04d416658b228bf627e519ddb022dad6d17219cc1c]
	I0920 18:22:29.923226   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:29.930023   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:29.935091   75264 logs.go:123] Gathering logs for coredns [606f7c8a9a095f499ae422de76f76105607db72e57b9b0a6db3e3c5a81656c5c] ...
	I0920 18:22:29.935119   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 606f7c8a9a095f499ae422de76f76105607db72e57b9b0a6db3e3c5a81656c5c"
	I0920 18:22:29.974064   75264 logs.go:123] Gathering logs for kube-scheduler [6ba313deffc6185e59dac50051e933f6b2ea70d630cf36b2b8c2c07042f793ba] ...
	I0920 18:22:29.974101   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6ba313deffc6185e59dac50051e933f6b2ea70d630cf36b2b8c2c07042f793ba"
	I0920 18:22:30.018770   75264 logs.go:123] Gathering logs for kube-controller-manager [4ca303b795795bd920f261c88630f30fd51a91b9694a9a6e67af8164bab8242b] ...
	I0920 18:22:30.018805   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ca303b795795bd920f261c88630f30fd51a91b9694a9a6e67af8164bab8242b"
	I0920 18:22:30.077578   75264 logs.go:123] Gathering logs for storage-provisioner [c42201b6e3d55b6fd8beaf04d416658b228bf627e519ddb022dad6d17219cc1c] ...
	I0920 18:22:30.077625   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c42201b6e3d55b6fd8beaf04d416658b228bf627e519ddb022dad6d17219cc1c"
	I0920 18:22:30.123874   75264 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:22:30.123910   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:22:30.553215   75264 logs.go:123] Gathering logs for kube-proxy [702f7f440eb6016c2c758a6be4e1072274ab6dcd169d7a081e5464869a7c299c] ...
	I0920 18:22:30.553261   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 702f7f440eb6016c2c758a6be4e1072274ab6dcd169d7a081e5464869a7c299c"
	I0920 18:22:30.597397   75264 logs.go:123] Gathering logs for storage-provisioner [001bdc98537f0917f021c14a5d0733bd31cf19be1b4862502458a7a90e4300da] ...
	I0920 18:22:30.597428   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 001bdc98537f0917f021c14a5d0733bd31cf19be1b4862502458a7a90e4300da"
	I0920 18:22:30.643777   75264 logs.go:123] Gathering logs for container status ...
	I0920 18:22:30.643807   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:22:30.688174   75264 logs.go:123] Gathering logs for kubelet ...
	I0920 18:22:30.688205   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:22:30.760547   75264 logs.go:123] Gathering logs for dmesg ...
	I0920 18:22:30.760591   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:22:30.776702   75264 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:22:30.776729   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 18:22:30.913200   75264 logs.go:123] Gathering logs for kube-apiserver [0ea0cfbd9902ae8e3797c77592bfe05a5f8162f75a391f5c2dc992eb5154c4ad] ...
	I0920 18:22:30.913240   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ea0cfbd9902ae8e3797c77592bfe05a5f8162f75a391f5c2dc992eb5154c4ad"
	I0920 18:22:30.957548   75264 logs.go:123] Gathering logs for etcd [65da0bae1c849a395cc241962bb6cbdf9f3edf11cdc95f4495ac2e427f9602d4] ...
	I0920 18:22:30.957589   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 65da0bae1c849a395cc241962bb6cbdf9f3edf11cdc95f4495ac2e427f9602d4"
	I0920 18:22:29.803036   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:32.301986   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:27.790386   75577 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 18:22:36.572379   75086 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0920 18:22:36.572452   75086 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 18:22:36.572558   75086 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 18:22:36.572702   75086 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 18:22:36.572826   75086 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0920 18:22:36.572926   75086 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 18:22:36.574903   75086 out.go:235]   - Generating certificates and keys ...
	I0920 18:22:36.575032   75086 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 18:22:36.575117   75086 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 18:22:36.575224   75086 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0920 18:22:36.575342   75086 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0920 18:22:36.575446   75086 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0920 18:22:36.575535   75086 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0920 18:22:36.575606   75086 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0920 18:22:36.575687   75086 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0920 18:22:36.575790   75086 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0920 18:22:36.575867   75086 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0920 18:22:36.575903   75086 kubeadm.go:310] [certs] Using the existing "sa" key
	I0920 18:22:36.575970   75086 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 18:22:36.576054   75086 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 18:22:36.576141   75086 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0920 18:22:36.576223   75086 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 18:22:36.576317   75086 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 18:22:36.576373   75086 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 18:22:36.576442   75086 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 18:22:36.576506   75086 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 18:22:36.577982   75086 out.go:235]   - Booting up control plane ...
	I0920 18:22:36.578071   75086 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 18:22:36.578147   75086 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 18:22:36.578204   75086 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 18:22:36.578312   75086 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 18:22:36.578400   75086 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 18:22:36.578438   75086 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 18:22:36.578568   75086 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0920 18:22:36.578660   75086 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0920 18:22:36.578719   75086 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.174126ms
	I0920 18:22:36.578788   75086 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0920 18:22:36.578839   75086 kubeadm.go:310] [api-check] The API server is healthy after 5.501599925s
	I0920 18:22:36.578933   75086 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0920 18:22:36.579037   75086 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0920 18:22:36.579090   75086 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0920 18:22:36.579268   75086 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-768431 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0920 18:22:36.579335   75086 kubeadm.go:310] [bootstrap-token] Using token: junoe1.zmowu95mn9yb1z1f
	I0920 18:22:36.580810   75086 out.go:235]   - Configuring RBAC rules ...
	I0920 18:22:36.580919   75086 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0920 18:22:36.580989   75086 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0920 18:22:36.581127   75086 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0920 18:22:36.581290   75086 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0920 18:22:36.581456   75086 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0920 18:22:36.581589   75086 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0920 18:22:36.581742   75086 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0920 18:22:36.581827   75086 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0920 18:22:36.581916   75086 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0920 18:22:36.581925   75086 kubeadm.go:310] 
	I0920 18:22:36.581980   75086 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0920 18:22:36.581986   75086 kubeadm.go:310] 
	I0920 18:22:36.582086   75086 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0920 18:22:36.582097   75086 kubeadm.go:310] 
	I0920 18:22:36.582137   75086 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0920 18:22:36.582204   75086 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0920 18:22:36.582287   75086 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0920 18:22:36.582297   75086 kubeadm.go:310] 
	I0920 18:22:36.582389   75086 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0920 18:22:36.582398   75086 kubeadm.go:310] 
	I0920 18:22:36.582479   75086 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0920 18:22:36.582495   75086 kubeadm.go:310] 
	I0920 18:22:36.582540   75086 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0920 18:22:36.582605   75086 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0920 18:22:36.582682   75086 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0920 18:22:36.582697   75086 kubeadm.go:310] 
	I0920 18:22:36.582796   75086 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0920 18:22:36.582892   75086 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0920 18:22:36.582901   75086 kubeadm.go:310] 
	I0920 18:22:36.583026   75086 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token junoe1.zmowu95mn9yb1z1f \
	I0920 18:22:36.583163   75086 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:736f3fb521855813decb4a7214f2b8beff5f81c3971b60decfa20a8807626a2d \
	I0920 18:22:36.583199   75086 kubeadm.go:310] 	--control-plane 
	I0920 18:22:36.583208   75086 kubeadm.go:310] 
	I0920 18:22:36.583316   75086 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0920 18:22:36.583335   75086 kubeadm.go:310] 
	I0920 18:22:36.583489   75086 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token junoe1.zmowu95mn9yb1z1f \
	I0920 18:22:36.583648   75086 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:736f3fb521855813decb4a7214f2b8beff5f81c3971b60decfa20a8807626a2d 
	I0920 18:22:36.583664   75086 cni.go:84] Creating CNI manager for ""
	I0920 18:22:36.583673   75086 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 18:22:36.585378   75086 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 18:22:33.523585   75264 api_server.go:253] Checking apiserver healthz at https://192.168.72.190:8444/healthz ...
	I0920 18:22:33.528658   75264 api_server.go:279] https://192.168.72.190:8444/healthz returned 200:
	ok
	I0920 18:22:33.529826   75264 api_server.go:141] control plane version: v1.31.1
	I0920 18:22:33.529861   75264 api_server.go:131] duration metric: took 3.95084435s to wait for apiserver health ...
	I0920 18:22:33.529870   75264 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 18:22:33.529894   75264 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:22:33.529938   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:22:33.571762   75264 cri.go:89] found id: "0ea0cfbd9902ae8e3797c77592bfe05a5f8162f75a391f5c2dc992eb5154c4ad"
	I0920 18:22:33.571783   75264 cri.go:89] found id: ""
	I0920 18:22:33.571789   75264 logs.go:276] 1 containers: [0ea0cfbd9902ae8e3797c77592bfe05a5f8162f75a391f5c2dc992eb5154c4ad]
	I0920 18:22:33.571840   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:33.576180   75264 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:22:33.576268   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:22:33.621887   75264 cri.go:89] found id: "65da0bae1c849a395cc241962bb6cbdf9f3edf11cdc95f4495ac2e427f9602d4"
	I0920 18:22:33.621915   75264 cri.go:89] found id: ""
	I0920 18:22:33.621926   75264 logs.go:276] 1 containers: [65da0bae1c849a395cc241962bb6cbdf9f3edf11cdc95f4495ac2e427f9602d4]
	I0920 18:22:33.622013   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:33.626552   75264 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:22:33.626609   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:22:33.664879   75264 cri.go:89] found id: "606f7c8a9a095f499ae422de76f76105607db72e57b9b0a6db3e3c5a81656c5c"
	I0920 18:22:33.664902   75264 cri.go:89] found id: ""
	I0920 18:22:33.664912   75264 logs.go:276] 1 containers: [606f7c8a9a095f499ae422de76f76105607db72e57b9b0a6db3e3c5a81656c5c]
	I0920 18:22:33.664980   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:33.669155   75264 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:22:33.669231   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:22:33.710930   75264 cri.go:89] found id: "6ba313deffc6185e59dac50051e933f6b2ea70d630cf36b2b8c2c07042f793ba"
	I0920 18:22:33.710957   75264 cri.go:89] found id: ""
	I0920 18:22:33.710968   75264 logs.go:276] 1 containers: [6ba313deffc6185e59dac50051e933f6b2ea70d630cf36b2b8c2c07042f793ba]
	I0920 18:22:33.711030   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:33.715618   75264 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:22:33.715689   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:22:33.751646   75264 cri.go:89] found id: "702f7f440eb6016c2c758a6be4e1072274ab6dcd169d7a081e5464869a7c299c"
	I0920 18:22:33.751672   75264 cri.go:89] found id: ""
	I0920 18:22:33.751690   75264 logs.go:276] 1 containers: [702f7f440eb6016c2c758a6be4e1072274ab6dcd169d7a081e5464869a7c299c]
	I0920 18:22:33.751758   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:33.756376   75264 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:22:33.756441   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:22:33.803246   75264 cri.go:89] found id: "4ca303b795795bd920f261c88630f30fd51a91b9694a9a6e67af8164bab8242b"
	I0920 18:22:33.803267   75264 cri.go:89] found id: ""
	I0920 18:22:33.803276   75264 logs.go:276] 1 containers: [4ca303b795795bd920f261c88630f30fd51a91b9694a9a6e67af8164bab8242b]
	I0920 18:22:33.803334   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:33.807825   75264 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:22:33.807892   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:22:33.849531   75264 cri.go:89] found id: ""
	I0920 18:22:33.849559   75264 logs.go:276] 0 containers: []
	W0920 18:22:33.849569   75264 logs.go:278] No container was found matching "kindnet"
	I0920 18:22:33.849583   75264 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0920 18:22:33.849645   75264 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0920 18:22:33.891058   75264 cri.go:89] found id: "001bdc98537f0917f021c14a5d0733bd31cf19be1b4862502458a7a90e4300da"
	I0920 18:22:33.891084   75264 cri.go:89] found id: "c42201b6e3d55b6fd8beaf04d416658b228bf627e519ddb022dad6d17219cc1c"
	I0920 18:22:33.891090   75264 cri.go:89] found id: ""
	I0920 18:22:33.891099   75264 logs.go:276] 2 containers: [001bdc98537f0917f021c14a5d0733bd31cf19be1b4862502458a7a90e4300da c42201b6e3d55b6fd8beaf04d416658b228bf627e519ddb022dad6d17219cc1c]
	I0920 18:22:33.891157   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:33.896028   75264 ssh_runner.go:195] Run: which crictl
	I0920 18:22:33.900174   75264 logs.go:123] Gathering logs for container status ...
	I0920 18:22:33.900209   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:22:33.944777   75264 logs.go:123] Gathering logs for dmesg ...
	I0920 18:22:33.944803   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:22:33.960062   75264 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:22:33.960094   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 18:22:34.087507   75264 logs.go:123] Gathering logs for etcd [65da0bae1c849a395cc241962bb6cbdf9f3edf11cdc95f4495ac2e427f9602d4] ...
	I0920 18:22:34.087556   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 65da0bae1c849a395cc241962bb6cbdf9f3edf11cdc95f4495ac2e427f9602d4"
	I0920 18:22:34.140051   75264 logs.go:123] Gathering logs for kube-proxy [702f7f440eb6016c2c758a6be4e1072274ab6dcd169d7a081e5464869a7c299c] ...
	I0920 18:22:34.140088   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 702f7f440eb6016c2c758a6be4e1072274ab6dcd169d7a081e5464869a7c299c"
	I0920 18:22:34.183540   75264 logs.go:123] Gathering logs for kube-controller-manager [4ca303b795795bd920f261c88630f30fd51a91b9694a9a6e67af8164bab8242b] ...
	I0920 18:22:34.183582   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ca303b795795bd920f261c88630f30fd51a91b9694a9a6e67af8164bab8242b"
	I0920 18:22:34.251978   75264 logs.go:123] Gathering logs for storage-provisioner [001bdc98537f0917f021c14a5d0733bd31cf19be1b4862502458a7a90e4300da] ...
	I0920 18:22:34.252025   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 001bdc98537f0917f021c14a5d0733bd31cf19be1b4862502458a7a90e4300da"
	I0920 18:22:34.296522   75264 logs.go:123] Gathering logs for storage-provisioner [c42201b6e3d55b6fd8beaf04d416658b228bf627e519ddb022dad6d17219cc1c] ...
	I0920 18:22:34.296562   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c42201b6e3d55b6fd8beaf04d416658b228bf627e519ddb022dad6d17219cc1c"
	I0920 18:22:34.338227   75264 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:22:34.338254   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:22:34.703986   75264 logs.go:123] Gathering logs for kubelet ...
	I0920 18:22:34.704037   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:22:34.788091   75264 logs.go:123] Gathering logs for kube-apiserver [0ea0cfbd9902ae8e3797c77592bfe05a5f8162f75a391f5c2dc992eb5154c4ad] ...
	I0920 18:22:34.788139   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ea0cfbd9902ae8e3797c77592bfe05a5f8162f75a391f5c2dc992eb5154c4ad"
	I0920 18:22:34.861403   75264 logs.go:123] Gathering logs for coredns [606f7c8a9a095f499ae422de76f76105607db72e57b9b0a6db3e3c5a81656c5c] ...
	I0920 18:22:34.861435   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 606f7c8a9a095f499ae422de76f76105607db72e57b9b0a6db3e3c5a81656c5c"
	I0920 18:22:34.906740   75264 logs.go:123] Gathering logs for kube-scheduler [6ba313deffc6185e59dac50051e933f6b2ea70d630cf36b2b8c2c07042f793ba] ...
	I0920 18:22:34.906773   75264 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6ba313deffc6185e59dac50051e933f6b2ea70d630cf36b2b8c2c07042f793ba"
	I0920 18:22:37.454205   75264 system_pods.go:59] 8 kube-system pods found
	I0920 18:22:37.454243   75264 system_pods.go:61] "coredns-7c65d6cfc9-dmdfb" [8e4ffdb3-37e0-4498-bdf5-d6e7dbcad020] Running
	I0920 18:22:37.454252   75264 system_pods.go:61] "etcd-default-k8s-diff-port-553719" [e8de0e96-5f3a-4f3f-ae69-c5da7d3c2eb7] Running
	I0920 18:22:37.454257   75264 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-553719" [5164e26c-7fcd-41dc-865f-429046a9ad61] Running
	I0920 18:22:37.454263   75264 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-553719" [b2c00a76-9645-4c63-9812-43e6ed9f1d4f] Running
	I0920 18:22:37.454268   75264 system_pods.go:61] "kube-proxy-p9crq" [83e0f53d-6960-42c4-904d-ea85ba9160f4] Running
	I0920 18:22:37.454273   75264 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-553719" [42155f7b-95bb-467b-acf5-89eb4f80cf76] Running
	I0920 18:22:37.454289   75264 system_pods.go:61] "metrics-server-6867b74b74-vtl79" [29e0b6eb-22a9-4e37-97f9-83b48cc38193] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 18:22:37.454296   75264 system_pods.go:61] "storage-provisioner" [6fad2d07-f99e-45ac-9657-bce6d73d7fce] Running
	I0920 18:22:37.454310   75264 system_pods.go:74] duration metric: took 3.924431145s to wait for pod list to return data ...
	I0920 18:22:37.454324   75264 default_sa.go:34] waiting for default service account to be created ...
	I0920 18:22:37.457599   75264 default_sa.go:45] found service account: "default"
	I0920 18:22:37.457619   75264 default_sa.go:55] duration metric: took 3.289936ms for default service account to be created ...
	I0920 18:22:37.457627   75264 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 18:22:37.461977   75264 system_pods.go:86] 8 kube-system pods found
	I0920 18:22:37.462001   75264 system_pods.go:89] "coredns-7c65d6cfc9-dmdfb" [8e4ffdb3-37e0-4498-bdf5-d6e7dbcad020] Running
	I0920 18:22:37.462006   75264 system_pods.go:89] "etcd-default-k8s-diff-port-553719" [e8de0e96-5f3a-4f3f-ae69-c5da7d3c2eb7] Running
	I0920 18:22:37.462011   75264 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-553719" [5164e26c-7fcd-41dc-865f-429046a9ad61] Running
	I0920 18:22:37.462015   75264 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-553719" [b2c00a76-9645-4c63-9812-43e6ed9f1d4f] Running
	I0920 18:22:37.462019   75264 system_pods.go:89] "kube-proxy-p9crq" [83e0f53d-6960-42c4-904d-ea85ba9160f4] Running
	I0920 18:22:37.462022   75264 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-553719" [42155f7b-95bb-467b-acf5-89eb4f80cf76] Running
	I0920 18:22:37.462028   75264 system_pods.go:89] "metrics-server-6867b74b74-vtl79" [29e0b6eb-22a9-4e37-97f9-83b48cc38193] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 18:22:37.462032   75264 system_pods.go:89] "storage-provisioner" [6fad2d07-f99e-45ac-9657-bce6d73d7fce] Running
	I0920 18:22:37.462040   75264 system_pods.go:126] duration metric: took 4.409042ms to wait for k8s-apps to be running ...
	I0920 18:22:37.462047   75264 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 18:22:37.462087   75264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:22:37.479350   75264 system_svc.go:56] duration metric: took 17.294926ms WaitForService to wait for kubelet
	I0920 18:22:37.479380   75264 kubeadm.go:582] duration metric: took 4m28.223143222s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 18:22:37.479402   75264 node_conditions.go:102] verifying NodePressure condition ...
	I0920 18:22:37.482497   75264 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 18:22:37.482517   75264 node_conditions.go:123] node cpu capacity is 2
	I0920 18:22:37.482532   75264 node_conditions.go:105] duration metric: took 3.125658ms to run NodePressure ...
	I0920 18:22:37.482543   75264 start.go:241] waiting for startup goroutines ...
	I0920 18:22:37.482549   75264 start.go:246] waiting for cluster config update ...
	I0920 18:22:37.482561   75264 start.go:255] writing updated cluster config ...
	I0920 18:22:37.482817   75264 ssh_runner.go:195] Run: rm -f paused
	I0920 18:22:37.532982   75264 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 18:22:37.534936   75264 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-553719" cluster and "default" namespace by default
	I0920 18:22:34.302204   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:36.802991   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:36.586809   75086 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 18:22:36.599904   75086 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0920 18:22:36.620050   75086 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 18:22:36.620133   75086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:22:36.620149   75086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-768431 minikube.k8s.io/updated_at=2024_09_20T18_22_36_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=0626f22cf0d915d75e291a5bce701f94395056e1 minikube.k8s.io/name=embed-certs-768431 minikube.k8s.io/primary=true
	I0920 18:22:36.636625   75086 ops.go:34] apiserver oom_adj: -16
	I0920 18:22:36.817434   75086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:22:37.318133   75086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:22:37.818515   75086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:22:38.318005   75086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:22:38.817759   75086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:22:39.318481   75086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:22:39.817943   75086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:22:40.317957   75086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:22:40.423342   75086 kubeadm.go:1113] duration metric: took 3.803277177s to wait for elevateKubeSystemPrivileges
	I0920 18:22:40.423389   75086 kubeadm.go:394] duration metric: took 4m59.290098215s to StartCluster
	I0920 18:22:40.423413   75086 settings.go:142] acquiring lock: {Name:mk2a13c58cdc0faf8cddca5d6716175d45db9bfd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:22:40.423516   75086 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19672-8777/kubeconfig
	I0920 18:22:40.426054   75086 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/kubeconfig: {Name:mkf32a4c736808e023459b2f0e40188618a38db1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:22:40.426360   75086 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.202 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 18:22:40.426456   75086 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0920 18:22:40.426546   75086 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-768431"
	I0920 18:22:40.426566   75086 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-768431"
	W0920 18:22:40.426578   75086 addons.go:243] addon storage-provisioner should already be in state true
	I0920 18:22:40.426576   75086 addons.go:69] Setting default-storageclass=true in profile "embed-certs-768431"
	I0920 18:22:40.426608   75086 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-768431"
	I0920 18:22:40.426622   75086 host.go:66] Checking if "embed-certs-768431" exists ...
	I0920 18:22:40.426621   75086 addons.go:69] Setting metrics-server=true in profile "embed-certs-768431"
	I0920 18:22:40.426662   75086 config.go:182] Loaded profile config "embed-certs-768431": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:22:40.426669   75086 addons.go:234] Setting addon metrics-server=true in "embed-certs-768431"
	W0920 18:22:40.426699   75086 addons.go:243] addon metrics-server should already be in state true
	I0920 18:22:40.426739   75086 host.go:66] Checking if "embed-certs-768431" exists ...
	I0920 18:22:40.427075   75086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:22:40.427116   75086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:22:40.427120   75086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:22:40.427155   75086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:22:40.427161   75086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:22:40.427251   75086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:22:40.427977   75086 out.go:177] * Verifying Kubernetes components...
	I0920 18:22:40.429647   75086 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:22:40.443468   75086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40103
	I0920 18:22:40.443663   75086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37871
	I0920 18:22:40.444055   75086 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:22:40.444062   75086 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:22:40.444562   75086 main.go:141] libmachine: Using API Version  1
	I0920 18:22:40.444586   75086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:22:40.444732   75086 main.go:141] libmachine: Using API Version  1
	I0920 18:22:40.444750   75086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:22:40.445016   75086 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:22:40.445110   75086 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:22:40.445584   75086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:22:40.445634   75086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:22:40.445652   75086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:22:40.445654   75086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38771
	I0920 18:22:40.445684   75086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:22:40.446227   75086 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:22:40.446872   75086 main.go:141] libmachine: Using API Version  1
	I0920 18:22:40.446892   75086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:22:40.447280   75086 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:22:40.447692   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetState
	I0920 18:22:40.451332   75086 addons.go:234] Setting addon default-storageclass=true in "embed-certs-768431"
	W0920 18:22:40.451355   75086 addons.go:243] addon default-storageclass should already be in state true
	I0920 18:22:40.451382   75086 host.go:66] Checking if "embed-certs-768431" exists ...
	I0920 18:22:40.451753   75086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:22:40.451803   75086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:22:40.463317   75086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39353
	I0920 18:22:40.463816   75086 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:22:40.464544   75086 main.go:141] libmachine: Using API Version  1
	I0920 18:22:40.464577   75086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:22:40.464961   75086 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:22:40.467445   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetState
	I0920 18:22:40.469290   75086 main.go:141] libmachine: (embed-certs-768431) Calling .DriverName
	I0920 18:22:40.471212   75086 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0920 18:22:40.471862   75086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40689
	I0920 18:22:40.472254   75086 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:22:40.472515   75086 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0920 18:22:40.472535   75086 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0920 18:22:40.472557   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHHostname
	I0920 18:22:40.472718   75086 main.go:141] libmachine: Using API Version  1
	I0920 18:22:40.472741   75086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:22:40.473259   75086 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:22:40.473477   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetState
	I0920 18:22:40.473753   75086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36495
	I0920 18:22:40.474173   75086 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:22:40.474778   75086 main.go:141] libmachine: Using API Version  1
	I0920 18:22:40.474796   75086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:22:40.475166   75086 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:22:40.475726   75086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:22:40.475766   75086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:22:40.475984   75086 main.go:141] libmachine: (embed-certs-768431) Calling .DriverName
	I0920 18:22:40.476244   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:22:40.476583   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:22:40.476605   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:22:40.476773   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHPort
	I0920 18:22:40.476953   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHKeyPath
	I0920 18:22:40.477053   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHUsername
	I0920 18:22:40.477196   75086 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/embed-certs-768431/id_rsa Username:docker}
	I0920 18:22:40.478244   75086 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:22:40.479443   75086 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 18:22:40.479459   75086 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 18:22:40.479476   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHHostname
	I0920 18:22:40.482119   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:22:40.482400   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:22:40.482423   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:22:40.482639   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHPort
	I0920 18:22:40.482840   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHKeyPath
	I0920 18:22:40.482968   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHUsername
	I0920 18:22:40.483145   75086 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/embed-certs-768431/id_rsa Username:docker}
	I0920 18:22:40.494454   75086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35971
	I0920 18:22:40.494948   75086 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:22:40.495425   75086 main.go:141] libmachine: Using API Version  1
	I0920 18:22:40.495444   75086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:22:40.495798   75086 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:22:40.495990   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetState
	I0920 18:22:40.497591   75086 main.go:141] libmachine: (embed-certs-768431) Calling .DriverName
	I0920 18:22:40.497878   75086 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 18:22:40.497894   75086 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 18:22:40.497913   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHHostname
	I0920 18:22:40.500755   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:22:40.501130   75086 main.go:141] libmachine: (embed-certs-768431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f2:e2", ip: ""} in network mk-embed-certs-768431: {Iface:virbr3 ExpiryTime:2024-09-20 19:17:26 +0000 UTC Type:0 Mac:52:54:00:d2:f2:e2 Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:embed-certs-768431 Clientid:01:52:54:00:d2:f2:e2}
	I0920 18:22:40.501153   75086 main.go:141] libmachine: (embed-certs-768431) DBG | domain embed-certs-768431 has defined IP address 192.168.61.202 and MAC address 52:54:00:d2:f2:e2 in network mk-embed-certs-768431
	I0920 18:22:40.501355   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHPort
	I0920 18:22:40.501548   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHKeyPath
	I0920 18:22:40.501725   75086 main.go:141] libmachine: (embed-certs-768431) Calling .GetSSHUsername
	I0920 18:22:40.502091   75086 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/embed-certs-768431/id_rsa Username:docker}
	I0920 18:22:40.646738   75086 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:22:40.673285   75086 node_ready.go:35] waiting up to 6m0s for node "embed-certs-768431" to be "Ready" ...
	I0920 18:22:40.684618   75086 node_ready.go:49] node "embed-certs-768431" has status "Ready":"True"
	I0920 18:22:40.684643   75086 node_ready.go:38] duration metric: took 11.298825ms for node "embed-certs-768431" to be "Ready" ...
	I0920 18:22:40.684653   75086 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:22:40.689151   75086 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-768431" in "kube-system" namespace to be "Ready" ...
	I0920 18:22:40.734882   75086 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 18:22:40.744479   75086 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 18:22:40.884534   75086 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0920 18:22:40.884556   75086 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0920 18:22:41.018287   75086 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0920 18:22:41.018313   75086 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0920 18:22:41.091216   75086 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 18:22:41.091247   75086 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0920 18:22:41.132887   75086 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 18:22:41.538445   75086 main.go:141] libmachine: Making call to close driver server
	I0920 18:22:41.538472   75086 main.go:141] libmachine: (embed-certs-768431) Calling .Close
	I0920 18:22:41.538774   75086 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:22:41.538795   75086 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:22:41.538806   75086 main.go:141] libmachine: Making call to close driver server
	I0920 18:22:41.538815   75086 main.go:141] libmachine: (embed-certs-768431) Calling .Close
	I0920 18:22:41.539010   75086 main.go:141] libmachine: Making call to close driver server
	I0920 18:22:41.539027   75086 main.go:141] libmachine: (embed-certs-768431) Calling .Close
	I0920 18:22:41.539118   75086 main.go:141] libmachine: (embed-certs-768431) DBG | Closing plugin on server side
	I0920 18:22:41.539137   75086 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:22:41.539205   75086 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:22:41.539273   75086 main.go:141] libmachine: (embed-certs-768431) DBG | Closing plugin on server side
	I0920 18:22:41.539286   75086 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:22:41.539300   75086 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:22:41.539309   75086 main.go:141] libmachine: Making call to close driver server
	I0920 18:22:41.539348   75086 main.go:141] libmachine: (embed-certs-768431) Calling .Close
	I0920 18:22:41.539629   75086 main.go:141] libmachine: (embed-certs-768431) DBG | Closing plugin on server side
	I0920 18:22:41.539693   75086 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:22:41.539712   75086 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:22:41.559164   75086 main.go:141] libmachine: Making call to close driver server
	I0920 18:22:41.559191   75086 main.go:141] libmachine: (embed-certs-768431) Calling .Close
	I0920 18:22:41.559517   75086 main.go:141] libmachine: (embed-certs-768431) DBG | Closing plugin on server side
	I0920 18:22:41.559580   75086 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:22:41.559596   75086 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:22:41.880601   75086 main.go:141] libmachine: Making call to close driver server
	I0920 18:22:41.880640   75086 main.go:141] libmachine: (embed-certs-768431) Calling .Close
	I0920 18:22:41.880998   75086 main.go:141] libmachine: (embed-certs-768431) DBG | Closing plugin on server side
	I0920 18:22:41.881026   75086 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:22:41.881039   75086 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:22:41.881047   75086 main.go:141] libmachine: Making call to close driver server
	I0920 18:22:41.881052   75086 main.go:141] libmachine: (embed-certs-768431) Calling .Close
	I0920 18:22:41.881304   75086 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:22:41.881322   75086 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:22:41.881336   75086 addons.go:475] Verifying addon metrics-server=true in "embed-certs-768431"
	I0920 18:22:41.883315   75086 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0920 18:22:39.302453   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:41.802428   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:41.885367   75086 addons.go:510] duration metric: took 1.45891561s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0920 18:22:42.696266   75086 pod_ready.go:103] pod "etcd-embed-certs-768431" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:44.300965   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:46.301969   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:45.195160   75086 pod_ready.go:103] pod "etcd-embed-certs-768431" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:46.200142   75086 pod_ready.go:93] pod "etcd-embed-certs-768431" in "kube-system" namespace has status "Ready":"True"
	I0920 18:22:46.200166   75086 pod_ready.go:82] duration metric: took 5.510989222s for pod "etcd-embed-certs-768431" in "kube-system" namespace to be "Ready" ...
	I0920 18:22:46.200175   75086 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-768431" in "kube-system" namespace to be "Ready" ...
	I0920 18:22:46.205619   75086 pod_ready.go:93] pod "kube-apiserver-embed-certs-768431" in "kube-system" namespace has status "Ready":"True"
	I0920 18:22:46.205647   75086 pod_ready.go:82] duration metric: took 5.464252ms for pod "kube-apiserver-embed-certs-768431" in "kube-system" namespace to be "Ready" ...
	I0920 18:22:46.205659   75086 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-768431" in "kube-system" namespace to be "Ready" ...
	I0920 18:22:46.210040   75086 pod_ready.go:93] pod "kube-controller-manager-embed-certs-768431" in "kube-system" namespace has status "Ready":"True"
	I0920 18:22:46.210066   75086 pod_ready.go:82] duration metric: took 4.397776ms for pod "kube-controller-manager-embed-certs-768431" in "kube-system" namespace to be "Ready" ...
	I0920 18:22:46.210079   75086 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-768431" in "kube-system" namespace to be "Ready" ...
	I0920 18:22:48.216587   75086 pod_ready.go:103] pod "kube-scheduler-embed-certs-768431" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:49.217996   75086 pod_ready.go:93] pod "kube-scheduler-embed-certs-768431" in "kube-system" namespace has status "Ready":"True"
	I0920 18:22:49.218019   75086 pod_ready.go:82] duration metric: took 3.007931797s for pod "kube-scheduler-embed-certs-768431" in "kube-system" namespace to be "Ready" ...
	I0920 18:22:49.218025   75086 pod_ready.go:39] duration metric: took 8.533358652s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:22:49.218039   75086 api_server.go:52] waiting for apiserver process to appear ...
	I0920 18:22:49.218100   75086 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:22:49.235456   75086 api_server.go:72] duration metric: took 8.809056301s to wait for apiserver process to appear ...
	I0920 18:22:49.235480   75086 api_server.go:88] waiting for apiserver healthz status ...
	I0920 18:22:49.235499   75086 api_server.go:253] Checking apiserver healthz at https://192.168.61.202:8443/healthz ...
	I0920 18:22:49.239562   75086 api_server.go:279] https://192.168.61.202:8443/healthz returned 200:
	ok
	I0920 18:22:49.240999   75086 api_server.go:141] control plane version: v1.31.1
	I0920 18:22:49.241024   75086 api_server.go:131] duration metric: took 5.537177ms to wait for apiserver health ...
	I0920 18:22:49.241033   75086 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 18:22:49.246297   75086 system_pods.go:59] 9 kube-system pods found
	I0920 18:22:49.246335   75086 system_pods.go:61] "coredns-7c65d6cfc9-g5tkc" [8877c0e8-6c8f-4a62-94bd-508982faee3c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0920 18:22:49.246342   75086 system_pods.go:61] "coredns-7c65d6cfc9-jkkdn" [f64288e4-009c-4ba8-93e7-30ca5296af46] Running
	I0920 18:22:49.246348   75086 system_pods.go:61] "etcd-embed-certs-768431" [f0ba7c9a-cc8b-43d4-aa48-a8f923d3c515] Running
	I0920 18:22:49.246352   75086 system_pods.go:61] "kube-apiserver-embed-certs-768431" [dba3b6cc-95db-47a0-ba41-3aa3ed3e8adb] Running
	I0920 18:22:49.246356   75086 system_pods.go:61] "kube-controller-manager-embed-certs-768431" [ff933e33-4c5c-4a8c-8ee4-6f0cb3ea4c3a] Running
	I0920 18:22:49.246359   75086 system_pods.go:61] "kube-proxy-c4527" [2e2d5102-0c42-4b87-8a27-dd53b8eb41f9] Running
	I0920 18:22:49.246362   75086 system_pods.go:61] "kube-scheduler-embed-certs-768431" [8cca3406-a395-4dcd-97ac-d8ce3e8deac6] Running
	I0920 18:22:49.246367   75086 system_pods.go:61] "metrics-server-6867b74b74-9snmf" [5fb654f5-5e73-436e-bc9d-04ef5077deb4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 18:22:49.246373   75086 system_pods.go:61] "storage-provisioner" [09227b45-ea2a-4fcc-b082-9978e5f00a4b] Running
	I0920 18:22:49.246379   75086 system_pods.go:74] duration metric: took 5.340615ms to wait for pod list to return data ...
	I0920 18:22:49.246388   75086 default_sa.go:34] waiting for default service account to be created ...
	I0920 18:22:49.249593   75086 default_sa.go:45] found service account: "default"
	I0920 18:22:49.249617   75086 default_sa.go:55] duration metric: took 3.222486ms for default service account to be created ...
	I0920 18:22:49.249626   75086 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 18:22:49.255747   75086 system_pods.go:86] 9 kube-system pods found
	I0920 18:22:49.255780   75086 system_pods.go:89] "coredns-7c65d6cfc9-g5tkc" [8877c0e8-6c8f-4a62-94bd-508982faee3c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0920 18:22:49.255790   75086 system_pods.go:89] "coredns-7c65d6cfc9-jkkdn" [f64288e4-009c-4ba8-93e7-30ca5296af46] Running
	I0920 18:22:49.255799   75086 system_pods.go:89] "etcd-embed-certs-768431" [f0ba7c9a-cc8b-43d4-aa48-a8f923d3c515] Running
	I0920 18:22:49.255805   75086 system_pods.go:89] "kube-apiserver-embed-certs-768431" [dba3b6cc-95db-47a0-ba41-3aa3ed3e8adb] Running
	I0920 18:22:49.255811   75086 system_pods.go:89] "kube-controller-manager-embed-certs-768431" [ff933e33-4c5c-4a8c-8ee4-6f0cb3ea4c3a] Running
	I0920 18:22:49.255817   75086 system_pods.go:89] "kube-proxy-c4527" [2e2d5102-0c42-4b87-8a27-dd53b8eb41f9] Running
	I0920 18:22:49.255826   75086 system_pods.go:89] "kube-scheduler-embed-certs-768431" [8cca3406-a395-4dcd-97ac-d8ce3e8deac6] Running
	I0920 18:22:49.255834   75086 system_pods.go:89] "metrics-server-6867b74b74-9snmf" [5fb654f5-5e73-436e-bc9d-04ef5077deb4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 18:22:49.255845   75086 system_pods.go:89] "storage-provisioner" [09227b45-ea2a-4fcc-b082-9978e5f00a4b] Running
	I0920 18:22:49.255852   75086 system_pods.go:126] duration metric: took 6.220419ms to wait for k8s-apps to be running ...
	I0920 18:22:49.255863   75086 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 18:22:49.255918   75086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:22:49.271916   75086 system_svc.go:56] duration metric: took 16.046857ms WaitForService to wait for kubelet
	I0920 18:22:49.271946   75086 kubeadm.go:582] duration metric: took 8.845551531s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 18:22:49.271962   75086 node_conditions.go:102] verifying NodePressure condition ...
	I0920 18:22:49.277150   75086 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 18:22:49.277190   75086 node_conditions.go:123] node cpu capacity is 2
	I0920 18:22:49.277203   75086 node_conditions.go:105] duration metric: took 5.234938ms to run NodePressure ...
	I0920 18:22:49.277216   75086 start.go:241] waiting for startup goroutines ...
	I0920 18:22:49.277226   75086 start.go:246] waiting for cluster config update ...
	I0920 18:22:49.277240   75086 start.go:255] writing updated cluster config ...
	I0920 18:22:49.278062   75086 ssh_runner.go:195] Run: rm -f paused
	I0920 18:22:49.329596   75086 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 18:22:49.331593   75086 out.go:177] * Done! kubectl is now configured to use "embed-certs-768431" cluster and "default" namespace by default
	I0920 18:22:48.802304   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:51.301288   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:53.801957   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:56.301310   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:22:58.302165   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:23:00.801931   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:23:03.301289   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:23:05.801777   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:23:08.301539   74753 pod_ready.go:103] pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace has status "Ready":"False"
	I0920 18:23:08.801990   74753 pod_ready.go:82] duration metric: took 4m0.006972987s for pod "metrics-server-6867b74b74-tfsff" in "kube-system" namespace to be "Ready" ...
	E0920 18:23:08.802010   74753 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0920 18:23:08.802019   74753 pod_ready.go:39] duration metric: took 4m2.40678087s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:23:08.802035   74753 api_server.go:52] waiting for apiserver process to appear ...
	I0920 18:23:08.802062   74753 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:23:08.802110   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:23:08.848687   74753 cri.go:89] found id: "334e4df5baa4f8f7d58af2308fab7f78de22f7e07d6dfbcbeaa0df2a52f6b207"
	I0920 18:23:08.848709   74753 cri.go:89] found id: ""
	I0920 18:23:08.848716   74753 logs.go:276] 1 containers: [334e4df5baa4f8f7d58af2308fab7f78de22f7e07d6dfbcbeaa0df2a52f6b207]
	I0920 18:23:08.848764   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:08.853180   74753 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:23:08.853239   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:23:08.889768   74753 cri.go:89] found id: "98aa96314cf8f863121b9953ad6335628481f536fcab49b7c00505a17eb35ff2"
	I0920 18:23:08.889793   74753 cri.go:89] found id: ""
	I0920 18:23:08.889803   74753 logs.go:276] 1 containers: [98aa96314cf8f863121b9953ad6335628481f536fcab49b7c00505a17eb35ff2]
	I0920 18:23:08.889869   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:08.893865   74753 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:23:08.893937   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:23:08.936592   74753 cri.go:89] found id: "35f0d8dd053d4c376ccf21215492d8509ff701e7d09b345fdbe06d505ba2d6b4"
	I0920 18:23:08.936619   74753 cri.go:89] found id: ""
	I0920 18:23:08.936629   74753 logs.go:276] 1 containers: [35f0d8dd053d4c376ccf21215492d8509ff701e7d09b345fdbe06d505ba2d6b4]
	I0920 18:23:08.936768   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:08.941222   74753 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:23:08.941311   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:23:08.978894   74753 cri.go:89] found id: "8153479cebb05353bec31c41778746c485cb294bf41c3f98e30559ad7da64c64"
	I0920 18:23:08.978918   74753 cri.go:89] found id: ""
	I0920 18:23:08.978929   74753 logs.go:276] 1 containers: [8153479cebb05353bec31c41778746c485cb294bf41c3f98e30559ad7da64c64]
	I0920 18:23:08.978977   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:08.982783   74753 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:23:08.982845   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:23:09.018400   74753 cri.go:89] found id: "6df198ca54e8004a532de9b15cfe7a48a659ae06623c00a78cad528762944497"
	I0920 18:23:09.018422   74753 cri.go:89] found id: ""
	I0920 18:23:09.018430   74753 logs.go:276] 1 containers: [6df198ca54e8004a532de9b15cfe7a48a659ae06623c00a78cad528762944497]
	I0920 18:23:09.018485   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:09.022403   74753 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:23:09.022470   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:23:09.062315   74753 cri.go:89] found id: "3ebf4c520d684127cd0b6146d44e863090b9c2db3ac866e9ce3ef2d10d2d6da1"
	I0920 18:23:09.062385   74753 cri.go:89] found id: ""
	I0920 18:23:09.062397   74753 logs.go:276] 1 containers: [3ebf4c520d684127cd0b6146d44e863090b9c2db3ac866e9ce3ef2d10d2d6da1]
	I0920 18:23:09.062453   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:09.066976   74753 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:23:09.067049   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:23:09.107467   74753 cri.go:89] found id: ""
	I0920 18:23:09.107504   74753 logs.go:276] 0 containers: []
	W0920 18:23:09.107515   74753 logs.go:278] No container was found matching "kindnet"
	I0920 18:23:09.107523   74753 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0920 18:23:09.107583   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0920 18:23:09.150489   74753 cri.go:89] found id: "179e4a02f3459a613d3307806b477d54581555b18babca62d0c5e553b9562442"
	I0920 18:23:09.150519   74753 cri.go:89] found id: "3eb9abdf57de51979118cfa77b613c22e28122fcb1da9e72fbf6f3a946ed3531"
	I0920 18:23:09.150526   74753 cri.go:89] found id: ""
	I0920 18:23:09.150536   74753 logs.go:276] 2 containers: [179e4a02f3459a613d3307806b477d54581555b18babca62d0c5e553b9562442 3eb9abdf57de51979118cfa77b613c22e28122fcb1da9e72fbf6f3a946ed3531]
	I0920 18:23:09.150589   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:09.154618   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:09.158682   74753 logs.go:123] Gathering logs for container status ...
	I0920 18:23:09.158708   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:23:09.198393   74753 logs.go:123] Gathering logs for kubelet ...
	I0920 18:23:09.198420   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:23:09.266161   74753 logs.go:123] Gathering logs for kube-apiserver [334e4df5baa4f8f7d58af2308fab7f78de22f7e07d6dfbcbeaa0df2a52f6b207] ...
	I0920 18:23:09.266197   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 334e4df5baa4f8f7d58af2308fab7f78de22f7e07d6dfbcbeaa0df2a52f6b207"
	I0920 18:23:09.311406   74753 logs.go:123] Gathering logs for etcd [98aa96314cf8f863121b9953ad6335628481f536fcab49b7c00505a17eb35ff2] ...
	I0920 18:23:09.311438   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 98aa96314cf8f863121b9953ad6335628481f536fcab49b7c00505a17eb35ff2"
	I0920 18:23:09.353233   74753 logs.go:123] Gathering logs for kube-controller-manager [3ebf4c520d684127cd0b6146d44e863090b9c2db3ac866e9ce3ef2d10d2d6da1] ...
	I0920 18:23:09.353271   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ebf4c520d684127cd0b6146d44e863090b9c2db3ac866e9ce3ef2d10d2d6da1"
	I0920 18:23:09.415966   74753 logs.go:123] Gathering logs for storage-provisioner [3eb9abdf57de51979118cfa77b613c22e28122fcb1da9e72fbf6f3a946ed3531] ...
	I0920 18:23:09.416010   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3eb9abdf57de51979118cfa77b613c22e28122fcb1da9e72fbf6f3a946ed3531"
	I0920 18:23:09.462018   74753 logs.go:123] Gathering logs for storage-provisioner [179e4a02f3459a613d3307806b477d54581555b18babca62d0c5e553b9562442] ...
	I0920 18:23:09.462054   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 179e4a02f3459a613d3307806b477d54581555b18babca62d0c5e553b9562442"
	I0920 18:23:09.499851   74753 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:23:09.499883   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:23:10.018536   74753 logs.go:123] Gathering logs for dmesg ...
	I0920 18:23:10.018586   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:23:10.033607   74753 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:23:10.033639   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 18:23:10.167598   74753 logs.go:123] Gathering logs for coredns [35f0d8dd053d4c376ccf21215492d8509ff701e7d09b345fdbe06d505ba2d6b4] ...
	I0920 18:23:10.167645   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35f0d8dd053d4c376ccf21215492d8509ff701e7d09b345fdbe06d505ba2d6b4"
	I0920 18:23:10.201819   74753 logs.go:123] Gathering logs for kube-scheduler [8153479cebb05353bec31c41778746c485cb294bf41c3f98e30559ad7da64c64] ...
	I0920 18:23:10.201856   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8153479cebb05353bec31c41778746c485cb294bf41c3f98e30559ad7da64c64"
	I0920 18:23:10.237079   74753 logs.go:123] Gathering logs for kube-proxy [6df198ca54e8004a532de9b15cfe7a48a659ae06623c00a78cad528762944497] ...
	I0920 18:23:10.237108   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6df198ca54e8004a532de9b15cfe7a48a659ae06623c00a78cad528762944497"
	I0920 18:23:12.778360   74753 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:23:12.796411   74753 api_server.go:72] duration metric: took 4m13.647076581s to wait for apiserver process to appear ...
	I0920 18:23:12.796438   74753 api_server.go:88] waiting for apiserver healthz status ...
	I0920 18:23:12.796475   74753 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:23:12.796525   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:23:12.833259   74753 cri.go:89] found id: "334e4df5baa4f8f7d58af2308fab7f78de22f7e07d6dfbcbeaa0df2a52f6b207"
	I0920 18:23:12.833279   74753 cri.go:89] found id: ""
	I0920 18:23:12.833287   74753 logs.go:276] 1 containers: [334e4df5baa4f8f7d58af2308fab7f78de22f7e07d6dfbcbeaa0df2a52f6b207]
	I0920 18:23:12.833334   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:12.837497   74753 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:23:12.837568   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:23:12.877602   74753 cri.go:89] found id: "98aa96314cf8f863121b9953ad6335628481f536fcab49b7c00505a17eb35ff2"
	I0920 18:23:12.877628   74753 cri.go:89] found id: ""
	I0920 18:23:12.877639   74753 logs.go:276] 1 containers: [98aa96314cf8f863121b9953ad6335628481f536fcab49b7c00505a17eb35ff2]
	I0920 18:23:12.877688   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:12.881869   74753 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:23:12.881951   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:23:12.924617   74753 cri.go:89] found id: "35f0d8dd053d4c376ccf21215492d8509ff701e7d09b345fdbe06d505ba2d6b4"
	I0920 18:23:12.924641   74753 cri.go:89] found id: ""
	I0920 18:23:12.924650   74753 logs.go:276] 1 containers: [35f0d8dd053d4c376ccf21215492d8509ff701e7d09b345fdbe06d505ba2d6b4]
	I0920 18:23:12.924710   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:12.929317   74753 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:23:12.929381   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:23:12.968642   74753 cri.go:89] found id: "8153479cebb05353bec31c41778746c485cb294bf41c3f98e30559ad7da64c64"
	I0920 18:23:12.968672   74753 cri.go:89] found id: ""
	I0920 18:23:12.968682   74753 logs.go:276] 1 containers: [8153479cebb05353bec31c41778746c485cb294bf41c3f98e30559ad7da64c64]
	I0920 18:23:12.968745   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:12.974588   74753 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:23:12.974665   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:23:13.010036   74753 cri.go:89] found id: "6df198ca54e8004a532de9b15cfe7a48a659ae06623c00a78cad528762944497"
	I0920 18:23:13.010067   74753 cri.go:89] found id: ""
	I0920 18:23:13.010078   74753 logs.go:276] 1 containers: [6df198ca54e8004a532de9b15cfe7a48a659ae06623c00a78cad528762944497]
	I0920 18:23:13.010136   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:13.014300   74753 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:23:13.014358   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:23:13.055560   74753 cri.go:89] found id: "3ebf4c520d684127cd0b6146d44e863090b9c2db3ac866e9ce3ef2d10d2d6da1"
	I0920 18:23:13.055591   74753 cri.go:89] found id: ""
	I0920 18:23:13.055601   74753 logs.go:276] 1 containers: [3ebf4c520d684127cd0b6146d44e863090b9c2db3ac866e9ce3ef2d10d2d6da1]
	I0920 18:23:13.055663   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:13.059828   74753 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:23:13.059887   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:23:13.100161   74753 cri.go:89] found id: ""
	I0920 18:23:13.100189   74753 logs.go:276] 0 containers: []
	W0920 18:23:13.100199   74753 logs.go:278] No container was found matching "kindnet"
	I0920 18:23:13.100207   74753 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0920 18:23:13.100269   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0920 18:23:13.136788   74753 cri.go:89] found id: "179e4a02f3459a613d3307806b477d54581555b18babca62d0c5e553b9562442"
	I0920 18:23:13.136818   74753 cri.go:89] found id: "3eb9abdf57de51979118cfa77b613c22e28122fcb1da9e72fbf6f3a946ed3531"
	I0920 18:23:13.136824   74753 cri.go:89] found id: ""
	I0920 18:23:13.136831   74753 logs.go:276] 2 containers: [179e4a02f3459a613d3307806b477d54581555b18babca62d0c5e553b9562442 3eb9abdf57de51979118cfa77b613c22e28122fcb1da9e72fbf6f3a946ed3531]
	I0920 18:23:13.136894   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:13.140956   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:13.145991   74753 logs.go:123] Gathering logs for container status ...
	I0920 18:23:13.146023   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:23:13.187274   74753 logs.go:123] Gathering logs for kubelet ...
	I0920 18:23:13.187303   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:23:13.257111   74753 logs.go:123] Gathering logs for dmesg ...
	I0920 18:23:13.257147   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:23:13.271678   74753 logs.go:123] Gathering logs for kube-controller-manager [3ebf4c520d684127cd0b6146d44e863090b9c2db3ac866e9ce3ef2d10d2d6da1] ...
	I0920 18:23:13.271704   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ebf4c520d684127cd0b6146d44e863090b9c2db3ac866e9ce3ef2d10d2d6da1"
	I0920 18:23:13.327163   74753 logs.go:123] Gathering logs for storage-provisioner [179e4a02f3459a613d3307806b477d54581555b18babca62d0c5e553b9562442] ...
	I0920 18:23:13.327191   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 179e4a02f3459a613d3307806b477d54581555b18babca62d0c5e553b9562442"
	I0920 18:23:13.362002   74753 logs.go:123] Gathering logs for storage-provisioner [3eb9abdf57de51979118cfa77b613c22e28122fcb1da9e72fbf6f3a946ed3531] ...
	I0920 18:23:13.362039   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3eb9abdf57de51979118cfa77b613c22e28122fcb1da9e72fbf6f3a946ed3531"
	I0920 18:23:13.409907   74753 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:23:13.409938   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:23:13.840233   74753 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:23:13.840275   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 18:23:13.956466   74753 logs.go:123] Gathering logs for kube-apiserver [334e4df5baa4f8f7d58af2308fab7f78de22f7e07d6dfbcbeaa0df2a52f6b207] ...
	I0920 18:23:13.956499   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 334e4df5baa4f8f7d58af2308fab7f78de22f7e07d6dfbcbeaa0df2a52f6b207"
	I0920 18:23:14.006028   74753 logs.go:123] Gathering logs for etcd [98aa96314cf8f863121b9953ad6335628481f536fcab49b7c00505a17eb35ff2] ...
	I0920 18:23:14.006063   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 98aa96314cf8f863121b9953ad6335628481f536fcab49b7c00505a17eb35ff2"
	I0920 18:23:14.051354   74753 logs.go:123] Gathering logs for coredns [35f0d8dd053d4c376ccf21215492d8509ff701e7d09b345fdbe06d505ba2d6b4] ...
	I0920 18:23:14.051382   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35f0d8dd053d4c376ccf21215492d8509ff701e7d09b345fdbe06d505ba2d6b4"
	I0920 18:23:14.086486   74753 logs.go:123] Gathering logs for kube-scheduler [8153479cebb05353bec31c41778746c485cb294bf41c3f98e30559ad7da64c64] ...
	I0920 18:23:14.086518   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8153479cebb05353bec31c41778746c485cb294bf41c3f98e30559ad7da64c64"
	I0920 18:23:14.128119   74753 logs.go:123] Gathering logs for kube-proxy [6df198ca54e8004a532de9b15cfe7a48a659ae06623c00a78cad528762944497] ...
	I0920 18:23:14.128149   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6df198ca54e8004a532de9b15cfe7a48a659ae06623c00a78cad528762944497"
	I0920 18:23:16.668901   74753 api_server.go:253] Checking apiserver healthz at https://192.168.50.47:8443/healthz ...
	I0920 18:23:16.674235   74753 api_server.go:279] https://192.168.50.47:8443/healthz returned 200:
	ok
	I0920 18:23:16.675431   74753 api_server.go:141] control plane version: v1.31.1
	I0920 18:23:16.675449   74753 api_server.go:131] duration metric: took 3.879005012s to wait for apiserver health ...
	I0920 18:23:16.675456   74753 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 18:23:16.675481   74753 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:23:16.675527   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:23:16.711645   74753 cri.go:89] found id: "334e4df5baa4f8f7d58af2308fab7f78de22f7e07d6dfbcbeaa0df2a52f6b207"
	I0920 18:23:16.711673   74753 cri.go:89] found id: ""
	I0920 18:23:16.711683   74753 logs.go:276] 1 containers: [334e4df5baa4f8f7d58af2308fab7f78de22f7e07d6dfbcbeaa0df2a52f6b207]
	I0920 18:23:16.711743   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:16.716466   74753 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:23:16.716536   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:23:16.763061   74753 cri.go:89] found id: "98aa96314cf8f863121b9953ad6335628481f536fcab49b7c00505a17eb35ff2"
	I0920 18:23:16.763085   74753 cri.go:89] found id: ""
	I0920 18:23:16.763095   74753 logs.go:276] 1 containers: [98aa96314cf8f863121b9953ad6335628481f536fcab49b7c00505a17eb35ff2]
	I0920 18:23:16.763155   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:16.767018   74753 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:23:16.767075   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:23:16.801111   74753 cri.go:89] found id: "35f0d8dd053d4c376ccf21215492d8509ff701e7d09b345fdbe06d505ba2d6b4"
	I0920 18:23:16.801131   74753 cri.go:89] found id: ""
	I0920 18:23:16.801138   74753 logs.go:276] 1 containers: [35f0d8dd053d4c376ccf21215492d8509ff701e7d09b345fdbe06d505ba2d6b4]
	I0920 18:23:16.801192   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:16.805372   74753 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:23:16.805436   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:23:16.843368   74753 cri.go:89] found id: "8153479cebb05353bec31c41778746c485cb294bf41c3f98e30559ad7da64c64"
	I0920 18:23:16.843399   74753 cri.go:89] found id: ""
	I0920 18:23:16.843410   74753 logs.go:276] 1 containers: [8153479cebb05353bec31c41778746c485cb294bf41c3f98e30559ad7da64c64]
	I0920 18:23:16.843461   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:16.847284   74753 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:23:16.847341   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:23:16.882749   74753 cri.go:89] found id: "6df198ca54e8004a532de9b15cfe7a48a659ae06623c00a78cad528762944497"
	I0920 18:23:16.882770   74753 cri.go:89] found id: ""
	I0920 18:23:16.882777   74753 logs.go:276] 1 containers: [6df198ca54e8004a532de9b15cfe7a48a659ae06623c00a78cad528762944497]
	I0920 18:23:16.882838   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:16.887003   74753 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:23:16.887083   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:23:16.923261   74753 cri.go:89] found id: "3ebf4c520d684127cd0b6146d44e863090b9c2db3ac866e9ce3ef2d10d2d6da1"
	I0920 18:23:16.923289   74753 cri.go:89] found id: ""
	I0920 18:23:16.923303   74753 logs.go:276] 1 containers: [3ebf4c520d684127cd0b6146d44e863090b9c2db3ac866e9ce3ef2d10d2d6da1]
	I0920 18:23:16.923357   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:16.927722   74753 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:23:16.927782   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:23:16.963753   74753 cri.go:89] found id: ""
	I0920 18:23:16.963780   74753 logs.go:276] 0 containers: []
	W0920 18:23:16.963791   74753 logs.go:278] No container was found matching "kindnet"
	I0920 18:23:16.963799   74753 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0920 18:23:16.963866   74753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0920 18:23:17.000548   74753 cri.go:89] found id: "179e4a02f3459a613d3307806b477d54581555b18babca62d0c5e553b9562442"
	I0920 18:23:17.000566   74753 cri.go:89] found id: "3eb9abdf57de51979118cfa77b613c22e28122fcb1da9e72fbf6f3a946ed3531"
	I0920 18:23:17.000571   74753 cri.go:89] found id: ""
	I0920 18:23:17.000578   74753 logs.go:276] 2 containers: [179e4a02f3459a613d3307806b477d54581555b18babca62d0c5e553b9562442 3eb9abdf57de51979118cfa77b613c22e28122fcb1da9e72fbf6f3a946ed3531]
	I0920 18:23:17.000627   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:17.004708   74753 ssh_runner.go:195] Run: which crictl
	I0920 18:23:17.008489   74753 logs.go:123] Gathering logs for coredns [35f0d8dd053d4c376ccf21215492d8509ff701e7d09b345fdbe06d505ba2d6b4] ...
	I0920 18:23:17.008518   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35f0d8dd053d4c376ccf21215492d8509ff701e7d09b345fdbe06d505ba2d6b4"
	I0920 18:23:17.045971   74753 logs.go:123] Gathering logs for kube-scheduler [8153479cebb05353bec31c41778746c485cb294bf41c3f98e30559ad7da64c64] ...
	I0920 18:23:17.046005   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8153479cebb05353bec31c41778746c485cb294bf41c3f98e30559ad7da64c64"
	I0920 18:23:17.083976   74753 logs.go:123] Gathering logs for kube-controller-manager [3ebf4c520d684127cd0b6146d44e863090b9c2db3ac866e9ce3ef2d10d2d6da1] ...
	I0920 18:23:17.084005   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ebf4c520d684127cd0b6146d44e863090b9c2db3ac866e9ce3ef2d10d2d6da1"
	I0920 18:23:17.137192   74753 logs.go:123] Gathering logs for storage-provisioner [179e4a02f3459a613d3307806b477d54581555b18babca62d0c5e553b9562442] ...
	I0920 18:23:17.137228   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 179e4a02f3459a613d3307806b477d54581555b18babca62d0c5e553b9562442"
	I0920 18:23:17.177203   74753 logs.go:123] Gathering logs for dmesg ...
	I0920 18:23:17.177234   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:23:17.192612   74753 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:23:17.192639   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 18:23:17.300681   74753 logs.go:123] Gathering logs for kube-apiserver [334e4df5baa4f8f7d58af2308fab7f78de22f7e07d6dfbcbeaa0df2a52f6b207] ...
	I0920 18:23:17.300714   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 334e4df5baa4f8f7d58af2308fab7f78de22f7e07d6dfbcbeaa0df2a52f6b207"
	I0920 18:23:17.346835   74753 logs.go:123] Gathering logs for storage-provisioner [3eb9abdf57de51979118cfa77b613c22e28122fcb1da9e72fbf6f3a946ed3531] ...
	I0920 18:23:17.346866   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3eb9abdf57de51979118cfa77b613c22e28122fcb1da9e72fbf6f3a946ed3531"
	I0920 18:23:17.382625   74753 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:23:17.382650   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:23:17.750803   74753 logs.go:123] Gathering logs for container status ...
	I0920 18:23:17.750853   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:23:17.801114   74753 logs.go:123] Gathering logs for kubelet ...
	I0920 18:23:17.801142   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:23:17.866547   74753 logs.go:123] Gathering logs for etcd [98aa96314cf8f863121b9953ad6335628481f536fcab49b7c00505a17eb35ff2] ...
	I0920 18:23:17.866589   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 98aa96314cf8f863121b9953ad6335628481f536fcab49b7c00505a17eb35ff2"
	I0920 18:23:17.909489   74753 logs.go:123] Gathering logs for kube-proxy [6df198ca54e8004a532de9b15cfe7a48a659ae06623c00a78cad528762944497] ...
	I0920 18:23:17.909519   74753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6df198ca54e8004a532de9b15cfe7a48a659ae06623c00a78cad528762944497"
	I0920 18:23:20.456014   74753 system_pods.go:59] 8 kube-system pods found
	I0920 18:23:20.456055   74753 system_pods.go:61] "coredns-7c65d6cfc9-j2t5h" [dd2636f1-3200-4f22-957c-046277c9be8c] Running
	I0920 18:23:20.456063   74753 system_pods.go:61] "etcd-no-preload-956403" [daf84be0-e673-4685-a998-47ec64664484] Running
	I0920 18:23:20.456068   74753 system_pods.go:61] "kube-apiserver-no-preload-956403" [416217e6-3c91-49ec-bd69-812a49b4afd9] Running
	I0920 18:23:20.456074   74753 system_pods.go:61] "kube-controller-manager-no-preload-956403" [30fae740-b073-4619-bca5-010ca90b2667] Running
	I0920 18:23:20.456079   74753 system_pods.go:61] "kube-proxy-sz4bm" [269600fb-ef65-4b17-8c07-76c79e35f5a8] Running
	I0920 18:23:20.456085   74753 system_pods.go:61] "kube-scheduler-no-preload-956403" [46432927-e506-4665-8825-c5f6a2ed0458] Running
	I0920 18:23:20.456095   74753 system_pods.go:61] "metrics-server-6867b74b74-tfsff" [599ba06a-6d4d-483b-b390-a3595a814757] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 18:23:20.456101   74753 system_pods.go:61] "storage-provisioner" [df661627-7d32-4467-9805-1ae65d4fa35c] Running
	I0920 18:23:20.456109   74753 system_pods.go:74] duration metric: took 3.780646412s to wait for pod list to return data ...
	I0920 18:23:20.456116   74753 default_sa.go:34] waiting for default service account to be created ...
	I0920 18:23:20.459571   74753 default_sa.go:45] found service account: "default"
	I0920 18:23:20.459598   74753 default_sa.go:55] duration metric: took 3.475906ms for default service account to be created ...
	I0920 18:23:20.459607   74753 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 18:23:20.464453   74753 system_pods.go:86] 8 kube-system pods found
	I0920 18:23:20.464489   74753 system_pods.go:89] "coredns-7c65d6cfc9-j2t5h" [dd2636f1-3200-4f22-957c-046277c9be8c] Running
	I0920 18:23:20.464495   74753 system_pods.go:89] "etcd-no-preload-956403" [daf84be0-e673-4685-a998-47ec64664484] Running
	I0920 18:23:20.464499   74753 system_pods.go:89] "kube-apiserver-no-preload-956403" [416217e6-3c91-49ec-bd69-812a49b4afd9] Running
	I0920 18:23:20.464544   74753 system_pods.go:89] "kube-controller-manager-no-preload-956403" [30fae740-b073-4619-bca5-010ca90b2667] Running
	I0920 18:23:20.464559   74753 system_pods.go:89] "kube-proxy-sz4bm" [269600fb-ef65-4b17-8c07-76c79e35f5a8] Running
	I0920 18:23:20.464563   74753 system_pods.go:89] "kube-scheduler-no-preload-956403" [46432927-e506-4665-8825-c5f6a2ed0458] Running
	I0920 18:23:20.464570   74753 system_pods.go:89] "metrics-server-6867b74b74-tfsff" [599ba06a-6d4d-483b-b390-a3595a814757] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 18:23:20.464577   74753 system_pods.go:89] "storage-provisioner" [df661627-7d32-4467-9805-1ae65d4fa35c] Running
	I0920 18:23:20.464583   74753 system_pods.go:126] duration metric: took 4.971732ms to wait for k8s-apps to be running ...
	I0920 18:23:20.464592   74753 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 18:23:20.464637   74753 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:23:20.483425   74753 system_svc.go:56] duration metric: took 18.826162ms WaitForService to wait for kubelet
	I0920 18:23:20.483452   74753 kubeadm.go:582] duration metric: took 4m21.334124064s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 18:23:20.483469   74753 node_conditions.go:102] verifying NodePressure condition ...
	I0920 18:23:20.486703   74753 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 18:23:20.486725   74753 node_conditions.go:123] node cpu capacity is 2
	I0920 18:23:20.486736   74753 node_conditions.go:105] duration metric: took 3.263035ms to run NodePressure ...
	I0920 18:23:20.486745   74753 start.go:241] waiting for startup goroutines ...
	I0920 18:23:20.486752   74753 start.go:246] waiting for cluster config update ...
	I0920 18:23:20.486763   74753 start.go:255] writing updated cluster config ...
	I0920 18:23:20.487023   74753 ssh_runner.go:195] Run: rm -f paused
	I0920 18:23:20.538569   74753 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 18:23:20.540783   74753 out.go:177] * Done! kubectl is now configured to use "no-preload-956403" cluster and "default" namespace by default
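At this point the no-preload-956403 profile is up and every kube-system pod is Running except metrics-server-6867b74b74-tfsff, which is still Pending with ContainersNotReady. A quick way to look at that pod directly would be the following sketch; it only reuses the context name, namespace and pod name that appear in the log above:

    kubectl --context no-preload-956403 -n kube-system get pods
    kubectl --context no-preload-956403 -n kube-system describe pod metrics-server-6867b74b74-tfsff

The describe output would show why the metrics-server container never becomes ready, which is presumably the condition the later AddonExistsAfterStop / UserAppExistsAfterStop checks time out waiting for.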
	I0920 18:24:23.891635   75577 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0920 18:24:23.891735   75577 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0920 18:24:23.893591   75577 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0920 18:24:23.893681   75577 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 18:24:23.893785   75577 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 18:24:23.893913   75577 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 18:24:23.894056   75577 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0920 18:24:23.894134   75577 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 18:24:23.895929   75577 out.go:235]   - Generating certificates and keys ...
	I0920 18:24:23.896024   75577 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 18:24:23.896127   75577 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 18:24:23.896245   75577 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0920 18:24:23.896329   75577 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0920 18:24:23.896426   75577 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0920 18:24:23.896510   75577 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0920 18:24:23.896603   75577 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0920 18:24:23.896686   75577 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0920 18:24:23.896783   75577 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0920 18:24:23.896865   75577 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0920 18:24:23.896916   75577 kubeadm.go:310] [certs] Using the existing "sa" key
	I0920 18:24:23.896984   75577 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 18:24:23.897054   75577 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 18:24:23.897112   75577 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 18:24:23.897174   75577 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 18:24:23.897237   75577 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 18:24:23.897367   75577 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 18:24:23.897488   75577 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 18:24:23.897554   75577 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 18:24:23.897660   75577 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 18:24:23.899056   75577 out.go:235]   - Booting up control plane ...
	I0920 18:24:23.899173   75577 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 18:24:23.899287   75577 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 18:24:23.899371   75577 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 18:24:23.899460   75577 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 18:24:23.899639   75577 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0920 18:24:23.899719   75577 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0920 18:24:23.899823   75577 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:24:23.900047   75577 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:24:23.900124   75577 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:24:23.900322   75577 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:24:23.900392   75577 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:24:23.900556   75577 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:24:23.900634   75577 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:24:23.900826   75577 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:24:23.900884   75577 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:24:23.901044   75577 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:24:23.901052   75577 kubeadm.go:310] 
	I0920 18:24:23.901094   75577 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0920 18:24:23.901129   75577 kubeadm.go:310] 		timed out waiting for the condition
	I0920 18:24:23.901135   75577 kubeadm.go:310] 
	I0920 18:24:23.901179   75577 kubeadm.go:310] 	This error is likely caused by:
	I0920 18:24:23.901209   75577 kubeadm.go:310] 		- The kubelet is not running
	I0920 18:24:23.901314   75577 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0920 18:24:23.901335   75577 kubeadm.go:310] 
	I0920 18:24:23.901458   75577 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0920 18:24:23.901515   75577 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0920 18:24:23.901547   75577 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0920 18:24:23.901556   75577 kubeadm.go:310] 
	I0920 18:24:23.901719   75577 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0920 18:24:23.901859   75577 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0920 18:24:23.901870   75577 kubeadm.go:310] 
	I0920 18:24:23.902030   75577 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0920 18:24:23.902122   75577 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0920 18:24:23.902186   75577 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0920 18:24:23.902264   75577 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0920 18:24:23.902299   75577 kubeadm.go:310] 
	W0920 18:24:23.902388   75577 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0920 18:24:23.902435   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0920 18:24:24.370490   75577 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:24:24.385383   75577 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 18:24:24.395443   75577 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 18:24:24.395463   75577 kubeadm.go:157] found existing configuration files:
	
	I0920 18:24:24.395505   75577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 18:24:24.404947   75577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 18:24:24.405021   75577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 18:24:24.415145   75577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 18:24:24.424199   75577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 18:24:24.424279   75577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 18:24:24.435108   75577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 18:24:24.446684   75577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 18:24:24.446742   75577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 18:24:24.457199   75577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 18:24:24.467240   75577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 18:24:24.467315   75577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 18:24:24.479293   75577 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 18:24:24.704601   75577 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 18:26:21.167677   75577 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0920 18:26:21.167760   75577 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0920 18:26:21.170176   75577 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0920 18:26:21.170249   75577 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 18:26:21.170353   75577 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 18:26:21.170485   75577 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 18:26:21.170653   75577 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0920 18:26:21.170778   75577 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 18:26:21.172479   75577 out.go:235]   - Generating certificates and keys ...
	I0920 18:26:21.172559   75577 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 18:26:21.172623   75577 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 18:26:21.172734   75577 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0920 18:26:21.172820   75577 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0920 18:26:21.172901   75577 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0920 18:26:21.172948   75577 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0920 18:26:21.173054   75577 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0920 18:26:21.173145   75577 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0920 18:26:21.173238   75577 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0920 18:26:21.173325   75577 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0920 18:26:21.173381   75577 kubeadm.go:310] [certs] Using the existing "sa" key
	I0920 18:26:21.173468   75577 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 18:26:21.173517   75577 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 18:26:21.173562   75577 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 18:26:21.173657   75577 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 18:26:21.173753   75577 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 18:26:21.173931   75577 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 18:26:21.174042   75577 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 18:26:21.174120   75577 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 18:26:21.174226   75577 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 18:26:21.175785   75577 out.go:235]   - Booting up control plane ...
	I0920 18:26:21.175897   75577 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 18:26:21.176062   75577 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 18:26:21.176179   75577 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 18:26:21.176283   75577 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 18:26:21.176482   75577 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0920 18:26:21.176567   75577 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0920 18:26:21.176654   75577 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:26:21.176850   75577 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:26:21.176952   75577 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:26:21.177146   75577 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:26:21.177255   75577 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:26:21.177460   75577 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:26:21.177527   75577 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:26:21.177681   75577 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:26:21.177758   75577 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:26:21.177952   75577 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:26:21.177966   75577 kubeadm.go:310] 
	I0920 18:26:21.178001   75577 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0920 18:26:21.178035   75577 kubeadm.go:310] 		timed out waiting for the condition
	I0920 18:26:21.178041   75577 kubeadm.go:310] 
	I0920 18:26:21.178070   75577 kubeadm.go:310] 	This error is likely caused by:
	I0920 18:26:21.178099   75577 kubeadm.go:310] 		- The kubelet is not running
	I0920 18:26:21.178239   75577 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0920 18:26:21.178251   75577 kubeadm.go:310] 
	I0920 18:26:21.178348   75577 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0920 18:26:21.178378   75577 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0920 18:26:21.178415   75577 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0920 18:26:21.178421   75577 kubeadm.go:310] 
	I0920 18:26:21.178505   75577 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0920 18:26:21.178581   75577 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0920 18:26:21.178590   75577 kubeadm.go:310] 
	I0920 18:26:21.178695   75577 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0920 18:26:21.178770   75577 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0920 18:26:21.178832   75577 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0920 18:26:21.178893   75577 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0920 18:26:21.178950   75577 kubeadm.go:394] duration metric: took 7m57.80706374s to StartCluster
	I0920 18:26:21.178961   75577 kubeadm.go:310] 
	I0920 18:26:21.178989   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:26:21.179047   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:26:21.229528   75577 cri.go:89] found id: ""
	I0920 18:26:21.229554   75577 logs.go:276] 0 containers: []
	W0920 18:26:21.229564   75577 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:26:21.229572   75577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:26:21.229644   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:26:21.265997   75577 cri.go:89] found id: ""
	I0920 18:26:21.266021   75577 logs.go:276] 0 containers: []
	W0920 18:26:21.266029   75577 logs.go:278] No container was found matching "etcd"
	I0920 18:26:21.266034   75577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:26:21.266081   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:26:21.309358   75577 cri.go:89] found id: ""
	I0920 18:26:21.309391   75577 logs.go:276] 0 containers: []
	W0920 18:26:21.309402   75577 logs.go:278] No container was found matching "coredns"
	I0920 18:26:21.309409   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:26:21.309469   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:26:21.349418   75577 cri.go:89] found id: ""
	I0920 18:26:21.349442   75577 logs.go:276] 0 containers: []
	W0920 18:26:21.349451   75577 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:26:21.349457   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:26:21.349537   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:26:21.388067   75577 cri.go:89] found id: ""
	I0920 18:26:21.388092   75577 logs.go:276] 0 containers: []
	W0920 18:26:21.388105   75577 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:26:21.388111   75577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:26:21.388165   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:26:21.428474   75577 cri.go:89] found id: ""
	I0920 18:26:21.428501   75577 logs.go:276] 0 containers: []
	W0920 18:26:21.428512   75577 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:26:21.428519   75577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:26:21.428580   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:26:21.466188   75577 cri.go:89] found id: ""
	I0920 18:26:21.466215   75577 logs.go:276] 0 containers: []
	W0920 18:26:21.466225   75577 logs.go:278] No container was found matching "kindnet"
	I0920 18:26:21.466232   75577 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:26:21.466291   75577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:26:21.504396   75577 cri.go:89] found id: ""
	I0920 18:26:21.504422   75577 logs.go:276] 0 containers: []
	W0920 18:26:21.504432   75577 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 18:26:21.504443   75577 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:26:21.504457   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:26:21.608057   75577 logs.go:123] Gathering logs for container status ...
	I0920 18:26:21.608098   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:26:21.672536   75577 logs.go:123] Gathering logs for kubelet ...
	I0920 18:26:21.672564   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:26:21.723219   75577 logs.go:123] Gathering logs for dmesg ...
	I0920 18:26:21.723251   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:26:21.736436   75577 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:26:21.736463   75577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:26:21.809055   75577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0920 18:26:21.809079   75577 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0920 18:26:21.809128   75577 out.go:270] * 
	W0920 18:26:21.809197   75577 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0920 18:26:21.809217   75577 out.go:270] * 
	W0920 18:26:21.810193   75577 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 18:26:21.814385   75577 out.go:201] 
	W0920 18:26:21.816118   75577 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0920 18:26:21.816186   75577 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0920 18:26:21.816225   75577 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0920 18:26:21.818478   75577 out.go:201] 
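The failed old-k8s-version start ends with minikube's own troubleshooting suggestion above. Translated into commands against the node, a rough sketch looks like the following; it assumes the profile name old-k8s-version-744025 that appears in the CRI-O log below, and otherwise reuses exactly the commands the kubeadm output already quotes:

    minikube ssh -p old-k8s-version-744025 -- sudo systemctl status kubelet
    minikube ssh -p old-k8s-version-744025 -- sudo journalctl -xeu kubelet
    minikube ssh -p old-k8s-version-744025 -- sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a
    minikube start -p old-k8s-version-744025 --extra-config=kubelet.cgroup-driver=systemd

The empty ListContainersResponse entries in the CRI-O log that follows are consistent with the wait-control-plane timeout: the kubelet never created the static control-plane pods, so there are no containers to list.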
	
	
	==> CRI-O <==
	Sep 20 18:37:54 old-k8s-version-744025 crio[628]: time="2024-09-20 18:37:54.063213878Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857474063188431,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b5fec65d-1a74-4546-afd2-f9e18d71a35a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:37:54 old-k8s-version-744025 crio[628]: time="2024-09-20 18:37:54.063799919Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1d5aefa2-aa50-4724-8408-9667917f1027 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:37:54 old-k8s-version-744025 crio[628]: time="2024-09-20 18:37:54.063881517Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1d5aefa2-aa50-4724-8408-9667917f1027 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:37:54 old-k8s-version-744025 crio[628]: time="2024-09-20 18:37:54.063933408Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=1d5aefa2-aa50-4724-8408-9667917f1027 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:37:54 old-k8s-version-744025 crio[628]: time="2024-09-20 18:37:54.097255332Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=26dc756a-3247-4234-87f2-7c452e9fb094 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:37:54 old-k8s-version-744025 crio[628]: time="2024-09-20 18:37:54.097361585Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=26dc756a-3247-4234-87f2-7c452e9fb094 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:37:54 old-k8s-version-744025 crio[628]: time="2024-09-20 18:37:54.098782799Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0ef9cdc3-619c-465f-883d-de3f30ae9d4d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:37:54 old-k8s-version-744025 crio[628]: time="2024-09-20 18:37:54.099278431Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857474099246429,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0ef9cdc3-619c-465f-883d-de3f30ae9d4d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:37:54 old-k8s-version-744025 crio[628]: time="2024-09-20 18:37:54.099859957Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=85303903-565c-478d-8d94-532f1a01ac69 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:37:54 old-k8s-version-744025 crio[628]: time="2024-09-20 18:37:54.099921966Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=85303903-565c-478d-8d94-532f1a01ac69 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:37:54 old-k8s-version-744025 crio[628]: time="2024-09-20 18:37:54.099980393Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=85303903-565c-478d-8d94-532f1a01ac69 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:37:54 old-k8s-version-744025 crio[628]: time="2024-09-20 18:37:54.136599693Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f3372334-69fb-4e62-b436-85f6b95c89ea name=/runtime.v1.RuntimeService/Version
	Sep 20 18:37:54 old-k8s-version-744025 crio[628]: time="2024-09-20 18:37:54.136682298Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f3372334-69fb-4e62-b436-85f6b95c89ea name=/runtime.v1.RuntimeService/Version
	Sep 20 18:37:54 old-k8s-version-744025 crio[628]: time="2024-09-20 18:37:54.138570818Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0c9114c7-f381-49ed-9eee-e30082b7cd39 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:37:54 old-k8s-version-744025 crio[628]: time="2024-09-20 18:37:54.139028717Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857474138998280,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0c9114c7-f381-49ed-9eee-e30082b7cd39 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:37:54 old-k8s-version-744025 crio[628]: time="2024-09-20 18:37:54.139702741Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7aea320d-3d33-4f19-9e3f-07e9b3cf264b name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:37:54 old-k8s-version-744025 crio[628]: time="2024-09-20 18:37:54.139788186Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7aea320d-3d33-4f19-9e3f-07e9b3cf264b name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:37:54 old-k8s-version-744025 crio[628]: time="2024-09-20 18:37:54.139841345Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=7aea320d-3d33-4f19-9e3f-07e9b3cf264b name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:37:54 old-k8s-version-744025 crio[628]: time="2024-09-20 18:37:54.174124447Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=53e903a8-b4cb-4096-bab6-8dcc64c01b70 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:37:54 old-k8s-version-744025 crio[628]: time="2024-09-20 18:37:54.174216087Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=53e903a8-b4cb-4096-bab6-8dcc64c01b70 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:37:54 old-k8s-version-744025 crio[628]: time="2024-09-20 18:37:54.175354017Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8ed2272b-9c8d-457f-b8b8-549cb444919a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:37:54 old-k8s-version-744025 crio[628]: time="2024-09-20 18:37:54.175803045Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857474175780327,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8ed2272b-9c8d-457f-b8b8-549cb444919a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:37:54 old-k8s-version-744025 crio[628]: time="2024-09-20 18:37:54.176377205Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2c4d3b73-2f8c-4b0c-acb6-bb571794ee26 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:37:54 old-k8s-version-744025 crio[628]: time="2024-09-20 18:37:54.176445426Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2c4d3b73-2f8c-4b0c-acb6-bb571794ee26 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:37:54 old-k8s-version-744025 crio[628]: time="2024-09-20 18:37:54.176482091Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=2c4d3b73-2f8c-4b0c-acb6-bb571794ee26 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Sep20 18:17] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050746] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037709] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Sep20 18:18] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.985932] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.595098] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.239386] systemd-fstab-generator[555]: Ignoring "noauto" option for root device
	[  +0.063752] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.070306] systemd-fstab-generator[567]: Ignoring "noauto" option for root device
	[  +0.206728] systemd-fstab-generator[581]: Ignoring "noauto" option for root device
	[  +0.121183] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.259648] systemd-fstab-generator[619]: Ignoring "noauto" option for root device
	[  +6.744745] systemd-fstab-generator[874]: Ignoring "noauto" option for root device
	[  +0.100047] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.166710] systemd-fstab-generator[998]: Ignoring "noauto" option for root device
	[ +12.315396] kauditd_printk_skb: 46 callbacks suppressed
	[Sep20 18:22] systemd-fstab-generator[4992]: Ignoring "noauto" option for root device
	[Sep20 18:24] systemd-fstab-generator[5273]: Ignoring "noauto" option for root device
	[  +0.069847] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 18:37:54 up 19 min,  0 users,  load average: 0.08, 0.02, 0.03
	Linux old-k8s-version-744025 5.10.207 #1 SMP Fri Sep 20 03:13:51 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Sep 20 18:37:51 old-k8s-version-744025 kubelet[6782]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc00009e0c0, 0xc000b53e60)
	Sep 20 18:37:51 old-k8s-version-744025 kubelet[6782]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:130 +0x34
	Sep 20 18:37:51 old-k8s-version-744025 kubelet[6782]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	Sep 20 18:37:51 old-k8s-version-744025 kubelet[6782]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129 +0xa5
	Sep 20 18:37:51 old-k8s-version-744025 kubelet[6782]: goroutine 158 [select]:
	Sep 20 18:37:51 old-k8s-version-744025 kubelet[6782]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000d89ef0, 0x4f0ac20, 0xc000c5bdb0, 0x1, 0xc00009e0c0)
	Sep 20 18:37:51 old-k8s-version-744025 kubelet[6782]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	Sep 20 18:37:51 old-k8s-version-744025 kubelet[6782]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc000c4e620, 0xc00009e0c0)
	Sep 20 18:37:51 old-k8s-version-744025 kubelet[6782]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Sep 20 18:37:51 old-k8s-version-744025 kubelet[6782]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Sep 20 18:37:51 old-k8s-version-744025 kubelet[6782]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Sep 20 18:37:51 old-k8s-version-744025 kubelet[6782]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000457710, 0xc000d80160)
	Sep 20 18:37:51 old-k8s-version-744025 kubelet[6782]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Sep 20 18:37:51 old-k8s-version-744025 kubelet[6782]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Sep 20 18:37:51 old-k8s-version-744025 kubelet[6782]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Sep 20 18:37:51 old-k8s-version-744025 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Sep 20 18:37:51 old-k8s-version-744025 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Sep 20 18:37:51 old-k8s-version-744025 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 140.
	Sep 20 18:37:51 old-k8s-version-744025 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Sep 20 18:37:51 old-k8s-version-744025 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Sep 20 18:37:51 old-k8s-version-744025 kubelet[6793]: I0920 18:37:51.896157    6793 server.go:416] Version: v1.20.0
	Sep 20 18:37:51 old-k8s-version-744025 kubelet[6793]: I0920 18:37:51.896461    6793 server.go:837] Client rotation is on, will bootstrap in background
	Sep 20 18:37:51 old-k8s-version-744025 kubelet[6793]: I0920 18:37:51.898509    6793 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Sep 20 18:37:51 old-k8s-version-744025 kubelet[6793]: I0920 18:37:51.899501    6793 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Sep 20 18:37:51 old-k8s-version-744025 kubelet[6793]: W0920 18:37:51.899520    6793 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-744025 -n old-k8s-version-744025
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-744025 -n old-k8s-version-744025: exit status 2 (239.323934ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-744025" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (146.68s)
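The kubeadm and minikube suggestions quoted in the log above amount to a short manual triage loop on the affected node. The following is a minimal sketch, not something this test run executed, and it assumes shell access to the old-k8s-version-744025 VM (for example via 'minikube ssh -p old-k8s-version-744025'):

	# Is the kubelet running, and why did it last exit? (the restart counter was at 140 above)
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet | tail -n 100

	# List Kubernetes containers known to CRI-O, then inspect the failing one
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID

	# If the kubelet log points at a cgroup-driver mismatch (note the "Cannot detect current
	# cgroup on cgroup v2" warning above), the retry suggested by minikube is:
	minikube start -p old-k8s-version-744025 --extra-config=kubelet.cgroup-driver=systemd

CONTAINERID is the placeholder used by the kubeadm output; substitute the ID reported by crictl.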

                                                
                                    

Test pass (242/311)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 25.1
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.13
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.1/json-events 13.14
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.06
18 TestDownloadOnly/v1.31.1/DeleteAll 0.13
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.59
22 TestOffline 76.06
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 202.7
31 TestAddons/serial/GCPAuth/Namespaces 2.14
35 TestAddons/parallel/InspektorGadget 11.78
38 TestAddons/parallel/CSI 63.29
39 TestAddons/parallel/Headlamp 17.94
40 TestAddons/parallel/CloudSpanner 6.83
41 TestAddons/parallel/LocalPath 12.77
42 TestAddons/parallel/NvidiaDevicePlugin 6.8
43 TestAddons/parallel/Yakd 11.92
44 TestAddons/StoppedEnableDisable 92.92
45 TestCertOptions 69.44
46 TestCertExpiration 367.47
48 TestForceSystemdFlag 83.46
49 TestForceSystemdEnv 59.28
51 TestKVMDriverInstallOrUpdate 4.14
55 TestErrorSpam/setup 42.81
56 TestErrorSpam/start 0.34
57 TestErrorSpam/status 0.73
58 TestErrorSpam/pause 1.64
59 TestErrorSpam/unpause 1.68
60 TestErrorSpam/stop 5.16
63 TestFunctional/serial/CopySyncFile 0
64 TestFunctional/serial/StartWithProxy 56.55
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 33.73
67 TestFunctional/serial/KubeContext 0.04
68 TestFunctional/serial/KubectlGetPods 0.08
71 TestFunctional/serial/CacheCmd/cache/add_remote 4.41
72 TestFunctional/serial/CacheCmd/cache/add_local 2.15
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
74 TestFunctional/serial/CacheCmd/cache/list 0.05
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.23
76 TestFunctional/serial/CacheCmd/cache/cache_reload 1.77
77 TestFunctional/serial/CacheCmd/cache/delete 0.1
78 TestFunctional/serial/MinikubeKubectlCmd 0.16
79 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
80 TestFunctional/serial/ExtraConfig 34.83
81 TestFunctional/serial/ComponentHealth 0.07
82 TestFunctional/serial/LogsCmd 1.39
83 TestFunctional/serial/LogsFileCmd 1.38
84 TestFunctional/serial/InvalidService 3.95
86 TestFunctional/parallel/ConfigCmd 0.39
87 TestFunctional/parallel/DashboardCmd 22.76
88 TestFunctional/parallel/DryRun 0.3
89 TestFunctional/parallel/InternationalLanguage 0.17
90 TestFunctional/parallel/StatusCmd 1.09
94 TestFunctional/parallel/ServiceCmdConnect 11.65
95 TestFunctional/parallel/AddonsCmd 0.13
96 TestFunctional/parallel/PersistentVolumeClaim 38.48
98 TestFunctional/parallel/SSHCmd 0.48
99 TestFunctional/parallel/CpCmd 1.35
100 TestFunctional/parallel/MySQL 28.15
101 TestFunctional/parallel/FileSync 0.24
102 TestFunctional/parallel/CertSync 1.54
106 TestFunctional/parallel/NodeLabels 0.07
108 TestFunctional/parallel/NonActiveRuntimeDisabled 0.42
110 TestFunctional/parallel/License 0.59
111 TestFunctional/parallel/ServiceCmd/DeployApp 11.26
121 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
122 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
123 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
124 TestFunctional/parallel/ProfileCmd/profile_not_create 0.38
125 TestFunctional/parallel/ProfileCmd/profile_list 0.42
126 TestFunctional/parallel/ProfileCmd/profile_json_output 0.32
127 TestFunctional/parallel/MountCmd/any-port 15.71
128 TestFunctional/parallel/ServiceCmd/List 0.28
129 TestFunctional/parallel/ServiceCmd/JSONOutput 0.68
130 TestFunctional/parallel/ServiceCmd/HTTPS 0.34
131 TestFunctional/parallel/ServiceCmd/Format 0.35
132 TestFunctional/parallel/ServiceCmd/URL 0.28
133 TestFunctional/parallel/Version/short 0.06
134 TestFunctional/parallel/Version/components 0.57
135 TestFunctional/parallel/ImageCommands/ImageListShort 0.21
136 TestFunctional/parallel/ImageCommands/ImageListTable 0.3
137 TestFunctional/parallel/ImageCommands/ImageListJson 0.35
138 TestFunctional/parallel/ImageCommands/ImageListYaml 0.27
139 TestFunctional/parallel/ImageCommands/ImageBuild 5.27
140 TestFunctional/parallel/ImageCommands/Setup 1.74
141 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.44
142 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.88
143 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.91
144 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.89
145 TestFunctional/parallel/ImageCommands/ImageRemove 1.8
146 TestFunctional/parallel/MountCmd/specific-port 2.09
148 TestFunctional/parallel/MountCmd/VerifyCleanup 1.62
149 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 2.7
150 TestFunctional/delete_echo-server_images 0.03
151 TestFunctional/delete_my-image_image 0.02
152 TestFunctional/delete_minikube_cached_images 0.02
156 TestMultiControlPlane/serial/StartCluster 198.55
157 TestMultiControlPlane/serial/DeployApp 6.82
158 TestMultiControlPlane/serial/PingHostFromPods 1.2
159 TestMultiControlPlane/serial/AddWorkerNode 53.06
160 TestMultiControlPlane/serial/NodeLabels 0.07
161 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.84
162 TestMultiControlPlane/serial/CopyFile 12.65
166 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 4.15
168 TestMultiControlPlane/serial/DeleteSecondaryNode 16.77
169 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.62
171 TestMultiControlPlane/serial/RestartCluster 350.54
172 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.65
173 TestMultiControlPlane/serial/AddSecondaryNode 80.78
174 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.85
178 TestJSONOutput/start/Command 79.19
179 TestJSONOutput/start/Audit 0
181 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
182 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
184 TestJSONOutput/pause/Command 0.71
185 TestJSONOutput/pause/Audit 0
187 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/unpause/Command 0.62
191 TestJSONOutput/unpause/Audit 0
193 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/stop/Command 6.62
197 TestJSONOutput/stop/Audit 0
199 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
201 TestErrorJSONOutput 0.2
206 TestMainNoArgs 0.05
207 TestMinikubeProfile 87.56
210 TestMountStart/serial/StartWithMountFirst 31.33
211 TestMountStart/serial/VerifyMountFirst 0.43
212 TestMountStart/serial/StartWithMountSecond 28.16
213 TestMountStart/serial/VerifyMountSecond 0.38
214 TestMountStart/serial/DeleteFirst 0.92
215 TestMountStart/serial/VerifyMountPostDelete 0.39
216 TestMountStart/serial/Stop 1.52
217 TestMountStart/serial/RestartStopped 22.81
218 TestMountStart/serial/VerifyMountPostStop 0.38
221 TestMultiNode/serial/FreshStart2Nodes 113.22
222 TestMultiNode/serial/DeployApp2Nodes 6.27
223 TestMultiNode/serial/PingHostFrom2Pods 0.84
224 TestMultiNode/serial/AddNode 52.95
225 TestMultiNode/serial/MultiNodeLabels 0.06
226 TestMultiNode/serial/ProfileList 0.59
227 TestMultiNode/serial/CopyFile 7.34
228 TestMultiNode/serial/StopNode 2.3
229 TestMultiNode/serial/StartAfterStop 39.59
231 TestMultiNode/serial/DeleteNode 2.28
233 TestMultiNode/serial/RestartMultiNode 204.94
234 TestMultiNode/serial/ValidateNameConflict 41.76
241 TestScheduledStopUnix 113.49
245 TestRunningBinaryUpgrade 123.76
249 TestStoppedBinaryUpgrade/Setup 2.33
253 TestStoppedBinaryUpgrade/Upgrade 186.26
258 TestNetworkPlugins/group/false 3.14
262 TestStoppedBinaryUpgrade/MinikubeLogs 0.93
271 TestPause/serial/Start 94.04
274 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
275 TestNoKubernetes/serial/StartWithK8s 56.08
276 TestNetworkPlugins/group/auto/Start 102.04
277 TestNetworkPlugins/group/kindnet/Start 79.2
278 TestNoKubernetes/serial/StartWithStopK8s 70.03
279 TestNoKubernetes/serial/Start 28.83
280 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
281 TestNetworkPlugins/group/auto/KubeletFlags 0.21
282 TestNetworkPlugins/group/auto/NetCatPod 12.3
283 TestNetworkPlugins/group/kindnet/KubeletFlags 0.33
284 TestNetworkPlugins/group/kindnet/NetCatPod 10.29
285 TestNetworkPlugins/group/auto/DNS 0.17
286 TestNetworkPlugins/group/auto/Localhost 0.16
287 TestNetworkPlugins/group/auto/HairPin 0.17
288 TestNetworkPlugins/group/calico/Start 85.12
289 TestNetworkPlugins/group/kindnet/DNS 0.16
290 TestNetworkPlugins/group/kindnet/Localhost 0.13
291 TestNetworkPlugins/group/kindnet/HairPin 0.13
292 TestNoKubernetes/serial/VerifyK8sNotRunning 0.25
293 TestNoKubernetes/serial/ProfileList 1.68
294 TestNoKubernetes/serial/Stop 1.7
295 TestNoKubernetes/serial/StartNoArgs 41.64
296 TestNetworkPlugins/group/custom-flannel/Start 109.03
297 TestNetworkPlugins/group/enable-default-cni/Start 146.19
298 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.22
299 TestNetworkPlugins/group/flannel/Start 145.17
300 TestNetworkPlugins/group/calico/ControllerPod 6.01
301 TestNetworkPlugins/group/calico/KubeletFlags 0.2
302 TestNetworkPlugins/group/calico/NetCatPod 11.26
303 TestNetworkPlugins/group/calico/DNS 0.16
304 TestNetworkPlugins/group/calico/Localhost 0.15
305 TestNetworkPlugins/group/calico/HairPin 0.15
306 TestNetworkPlugins/group/bridge/Start 69.7
307 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.21
308 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.24
309 TestNetworkPlugins/group/custom-flannel/DNS 0.17
310 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
311 TestNetworkPlugins/group/custom-flannel/HairPin 0.13
314 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.26
315 TestNetworkPlugins/group/enable-default-cni/NetCatPod 14.29
316 TestNetworkPlugins/group/enable-default-cni/DNS 0.19
317 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
318 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
319 TestNetworkPlugins/group/bridge/KubeletFlags 0.23
320 TestNetworkPlugins/group/bridge/NetCatPod 12.28
322 TestStartStop/group/no-preload/serial/FirstStart 78.71
323 TestNetworkPlugins/group/flannel/ControllerPod 6.01
324 TestNetworkPlugins/group/bridge/DNS 0.17
325 TestNetworkPlugins/group/bridge/Localhost 0.13
326 TestNetworkPlugins/group/bridge/HairPin 0.12
327 TestNetworkPlugins/group/flannel/KubeletFlags 0.22
328 TestNetworkPlugins/group/flannel/NetCatPod 11.24
329 TestNetworkPlugins/group/flannel/DNS 0.18
330 TestNetworkPlugins/group/flannel/Localhost 0.14
331 TestNetworkPlugins/group/flannel/HairPin 0.14
333 TestStartStop/group/embed-certs/serial/FirstStart 92.12
335 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 97.65
336 TestStartStop/group/no-preload/serial/DeployApp 10.37
337 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.36
339 TestStartStop/group/embed-certs/serial/DeployApp 11.27
340 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.96
342 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 11.29
343 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.02
348 TestStartStop/group/no-preload/serial/SecondStart 643.21
350 TestStartStop/group/embed-certs/serial/SecondStart 575.64
352 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 540.81
353 TestStartStop/group/old-k8s-version/serial/Stop 1.31
354 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
365 TestStartStop/group/newest-cni/serial/FirstStart 52.27
366 TestStartStop/group/newest-cni/serial/DeployApp 0
367 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.1
368 TestStartStop/group/newest-cni/serial/Stop 10.64
369 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
370 TestStartStop/group/newest-cni/serial/SecondStart 36.15
371 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
372 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
373 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23
374 TestStartStop/group/newest-cni/serial/Pause 4.24
x
+
TestDownloadOnly/v1.20.0/json-events (25.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-858543 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-858543 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (25.096937377s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (25.10s)
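This pass only exercises minikube's download path; no cluster is booted. A minimal sketch of reproducing the cache population by hand, assuming the same binary and MINIKUBE_HOME layout used by this job (/home/jenkins/minikube-integration/19672-8777/.minikube, verified by the preload-exists check below):

	# Fetch the ISO, the v1.20.0 cri-o preload tarball and kubectl without starting a cluster
	out/minikube-linux-amd64 start -o=json --download-only -p download-only-858543 --force \
	  --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2

	# The preload tarball should now be present in the shared cache
	ls -lh /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4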

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0920 16:44:03.487249   15973 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I0920 16:44:03.487334   15973 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-858543
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-858543: exit status 85 (57.446257ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-858543 | jenkins | v1.34.0 | 20 Sep 24 16:43 UTC |          |
	|         | -p download-only-858543        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 16:43:38
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 16:43:38.427287   15985 out.go:345] Setting OutFile to fd 1 ...
	I0920 16:43:38.427395   15985 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 16:43:38.427406   15985 out.go:358] Setting ErrFile to fd 2...
	I0920 16:43:38.427411   15985 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 16:43:38.427628   15985 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-8777/.minikube/bin
	W0920 16:43:38.427797   15985 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19672-8777/.minikube/config/config.json: open /home/jenkins/minikube-integration/19672-8777/.minikube/config/config.json: no such file or directory
	I0920 16:43:38.428435   15985 out.go:352] Setting JSON to true
	I0920 16:43:38.429363   15985 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1561,"bootTime":1726849057,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 16:43:38.429462   15985 start.go:139] virtualization: kvm guest
	I0920 16:43:38.431974   15985 out.go:97] [download-only-858543] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 16:43:38.432153   15985 notify.go:220] Checking for updates...
	W0920 16:43:38.432142   15985 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball: no such file or directory
	I0920 16:43:38.433583   15985 out.go:169] MINIKUBE_LOCATION=19672
	I0920 16:43:38.434962   15985 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 16:43:38.436375   15985 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19672-8777/kubeconfig
	I0920 16:43:38.437748   15985 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-8777/.minikube
	I0920 16:43:38.439194   15985 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0920 16:43:38.441947   15985 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0920 16:43:38.442225   15985 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 16:43:38.549180   15985 out.go:97] Using the kvm2 driver based on user configuration
	I0920 16:43:38.549211   15985 start.go:297] selected driver: kvm2
	I0920 16:43:38.549218   15985 start.go:901] validating driver "kvm2" against <nil>
	I0920 16:43:38.549600   15985 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 16:43:38.549741   15985 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19672-8777/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 16:43:38.565867   15985 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 16:43:38.565929   15985 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 16:43:38.566483   15985 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0920 16:43:38.566666   15985 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0920 16:43:38.566694   15985 cni.go:84] Creating CNI manager for ""
	I0920 16:43:38.566753   15985 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 16:43:38.566768   15985 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0920 16:43:38.566826   15985 start.go:340] cluster config:
	{Name:download-only-858543 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-858543 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 16:43:38.567041   15985 iso.go:125] acquiring lock: {Name:mkba95ef0488e46f622333e9f317f43def93040b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 16:43:38.569237   15985 out.go:97] Downloading VM boot image ...
	I0920 16:43:38.569274   15985 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19672-8777/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso
	I0920 16:43:50.773516   15985 out.go:97] Starting "download-only-858543" primary control-plane node in "download-only-858543" cluster
	I0920 16:43:50.773556   15985 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0920 16:43:50.869156   15985 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0920 16:43:50.869180   15985 cache.go:56] Caching tarball of preloaded images
	I0920 16:43:50.869355   15985 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0920 16:43:50.871395   15985 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0920 16:43:50.871422   15985 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0920 16:43:50.972406   15985 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0920 16:44:01.491796   15985 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0920 16:44:01.491891   15985 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0920 16:44:02.694172   15985 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0920 16:44:02.694549   15985 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/download-only-858543/config.json ...
	I0920 16:44:02.694579   15985 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/download-only-858543/config.json: {Name:mk7d70c2e923e502dcfb0a3f82ebc145c64e7a11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 16:44:02.694745   15985 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0920 16:44:02.694945   15985 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19672-8777/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-858543 host does not exist
	  To start a cluster, run: "minikube start -p download-only-858543"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-858543
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/json-events (13.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-349545 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-349545 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (13.141360592s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (13.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I0920 16:44:16.948083   15973 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
I0920 16:44:16.948118   15973 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-349545
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-349545: exit status 85 (56.966826ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-858543 | jenkins | v1.34.0 | 20 Sep 24 16:43 UTC |                     |
	|         | -p download-only-858543        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 20 Sep 24 16:44 UTC | 20 Sep 24 16:44 UTC |
	| delete  | -p download-only-858543        | download-only-858543 | jenkins | v1.34.0 | 20 Sep 24 16:44 UTC | 20 Sep 24 16:44 UTC |
	| start   | -o=json --download-only        | download-only-349545 | jenkins | v1.34.0 | 20 Sep 24 16:44 UTC |                     |
	|         | -p download-only-349545        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 16:44:03
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 16:44:03.843843   16243 out.go:345] Setting OutFile to fd 1 ...
	I0920 16:44:03.843951   16243 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 16:44:03.843960   16243 out.go:358] Setting ErrFile to fd 2...
	I0920 16:44:03.843965   16243 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 16:44:03.844156   16243 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-8777/.minikube/bin
	I0920 16:44:03.844724   16243 out.go:352] Setting JSON to true
	I0920 16:44:03.845536   16243 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1587,"bootTime":1726849057,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 16:44:03.845629   16243 start.go:139] virtualization: kvm guest
	I0920 16:44:03.848152   16243 out.go:97] [download-only-349545] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 16:44:03.848313   16243 notify.go:220] Checking for updates...
	I0920 16:44:03.849772   16243 out.go:169] MINIKUBE_LOCATION=19672
	I0920 16:44:03.851263   16243 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 16:44:03.852869   16243 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19672-8777/kubeconfig
	I0920 16:44:03.854864   16243 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-8777/.minikube
	I0920 16:44:03.856798   16243 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0920 16:44:03.859494   16243 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0920 16:44:03.859699   16243 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 16:44:03.891312   16243 out.go:97] Using the kvm2 driver based on user configuration
	I0920 16:44:03.891340   16243 start.go:297] selected driver: kvm2
	I0920 16:44:03.891345   16243 start.go:901] validating driver "kvm2" against <nil>
	I0920 16:44:03.891647   16243 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 16:44:03.891724   16243 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19672-8777/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 16:44:03.906775   16243 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 16:44:03.906833   16243 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 16:44:03.907356   16243 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0920 16:44:03.907495   16243 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0920 16:44:03.907518   16243 cni.go:84] Creating CNI manager for ""
	I0920 16:44:03.907562   16243 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 16:44:03.907570   16243 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0920 16:44:03.907623   16243 start.go:340] cluster config:
	{Name:download-only-349545 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-349545 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 16:44:03.907711   16243 iso.go:125] acquiring lock: {Name:mkba95ef0488e46f622333e9f317f43def93040b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 16:44:03.909436   16243 out.go:97] Starting "download-only-349545" primary control-plane node in "download-only-349545" cluster
	I0920 16:44:03.909460   16243 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 16:44:04.043689   16243 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0920 16:44:04.043719   16243 cache.go:56] Caching tarball of preloaded images
	I0920 16:44:04.043871   16243 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 16:44:04.046567   16243 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0920 16:44:04.046593   16243 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 ...
	I0920 16:44:04.593497   16243 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:aa79045e4550b9510ee496fee0d50abb -> /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-349545 host does not exist
	  To start a cluster, run: "minikube start -p download-only-349545"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-349545
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestBinaryMirror (0.59s)

                                                
                                                
=== RUN   TestBinaryMirror
I0920 16:44:17.505225   15973 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-811854 --alsologtostderr --binary-mirror http://127.0.0.1:34057 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-811854" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-811854
--- PASS: TestBinaryMirror (0.59s)

                                                
                                    
x
+
TestOffline (76.06s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-312889 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-312889 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m14.93390692s)
helpers_test.go:175: Cleaning up "offline-crio-312889" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-312889
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-312889: (1.124069745s)
--- PASS: TestOffline (76.06s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:975: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-489802
addons_test.go:975: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-489802: exit status 85 (50.194058ms)

                                                
                                                
-- stdout --
	* Profile "addons-489802" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-489802"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:986: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-489802
addons_test.go:986: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-489802: exit status 85 (50.698885ms)

                                                
                                                
-- stdout --
	* Profile "addons-489802" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-489802"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/Setup (202.7s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-489802 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-489802 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns: (3m22.702273725s)
--- PASS: TestAddons/Setup (202.70s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (2.14s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:594: (dbg) Run:  kubectl --context addons-489802 create ns new-namespace
addons_test.go:608: (dbg) Run:  kubectl --context addons-489802 get secret gcp-auth -n new-namespace
addons_test.go:608: (dbg) Non-zero exit: kubectl --context addons-489802 get secret gcp-auth -n new-namespace: exit status 1 (104.650203ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): secrets "gcp-auth" not found

                                                
                                                
** /stderr **
addons_test.go:600: (dbg) Run:  kubectl --context addons-489802 logs -l app=gcp-auth -n gcp-auth
I0920 16:47:41.135147   15973 retry.go:31] will retry after 1.785521849s: %!w(<nil>): gcp-auth container logs: 
-- stdout --
	2024/09/20 16:47:40 GCP Auth Webhook started!
	2024/09/20 16:47:40 Ready to marshal response ...
	2024/09/20 16:47:40 Ready to write response ...

                                                
                                                
-- /stdout --
addons_test.go:608: (dbg) Run:  kubectl --context addons-489802 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (2.14s)
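The check above can be replayed by hand; a minimal sketch, assuming the addons-489802 context and that the gcp-auth component copies its secret into newly created namespaces shortly after creation (hence the short delay):

kubectl --context addons-489802 create ns new-namespace
sleep 2   # the secret is populated asynchronously, so allow a brief delay or retry
kubectl --context addons-489802 get secret gcp-auth -n new-namespace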

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (11.78s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-s52g6" [ade64dae-8767-49d0-95ef-d2ca5f9309a5] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.0052251s
addons_test.go:789: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-489802
addons_test.go:789: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-489802: (5.770777686s)
--- PASS: TestAddons/parallel/InspektorGadget (11.78s)

                                                
                                    
x
+
TestAddons/parallel/CSI (63.29s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I0920 16:56:03.932876   15973 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0920 16:56:03.939206   15973 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0920 16:56:03.939246   15973 kapi.go:107] duration metric: took 6.450704ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:505: csi-hostpath-driver pods stabilized in 6.505251ms
addons_test.go:508: (dbg) Run:  kubectl --context addons-489802 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:513: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-489802 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-489802 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-489802 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-489802 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-489802 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-489802 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-489802 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-489802 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-489802 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-489802 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-489802 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-489802 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-489802 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-489802 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-489802 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-489802 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-489802 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-489802 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-489802 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-489802 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:518: (dbg) Run:  kubectl --context addons-489802 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:523: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [9ca46679-c1e3-422e-84a1-ecb9b07f574b] Pending
helpers_test.go:344: "task-pv-pod" [9ca46679-c1e3-422e-84a1-ecb9b07f574b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [9ca46679-c1e3-422e-84a1-ecb9b07f574b] Running
addons_test.go:523: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 17.00461392s
addons_test.go:528: (dbg) Run:  kubectl --context addons-489802 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:533: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-489802 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-489802 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:538: (dbg) Run:  kubectl --context addons-489802 delete pod task-pv-pod
addons_test.go:538: (dbg) Done: kubectl --context addons-489802 delete pod task-pv-pod: (1.108920835s)
addons_test.go:544: (dbg) Run:  kubectl --context addons-489802 delete pvc hpvc
addons_test.go:550: (dbg) Run:  kubectl --context addons-489802 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-489802 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-489802 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-489802 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-489802 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-489802 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-489802 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-489802 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-489802 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-489802 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:560: (dbg) Run:  kubectl --context addons-489802 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [5f951af3-0fc4-4606-9f2e-556adaa494f1] Pending
helpers_test.go:344: "task-pv-pod-restore" [5f951af3-0fc4-4606-9f2e-556adaa494f1] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [5f951af3-0fc4-4606-9f2e-556adaa494f1] Running
addons_test.go:565: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.008416924s
addons_test.go:570: (dbg) Run:  kubectl --context addons-489802 delete pod task-pv-pod-restore
addons_test.go:574: (dbg) Run:  kubectl --context addons-489802 delete pvc hpvc-restore
addons_test.go:578: (dbg) Run:  kubectl --context addons-489802 delete volumesnapshot new-snapshot-demo
addons_test.go:582: (dbg) Run:  out/minikube-linux-amd64 -p addons-489802 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:582: (dbg) Done: out/minikube-linux-amd64 -p addons-489802 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.164979237s)
addons_test.go:586: (dbg) Run:  out/minikube-linux-amd64 -p addons-489802 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (63.29s)
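Condensed, the CSI workflow exercised above is: provision a PVC, run a pod against it, snapshot the volume, delete the original claim, then restore a new claim and pod from the snapshot. A sketch of the same sequence, assuming the testdata manifests from the minikube repository, the csi-hostpath-driver and volumesnapshots addons enabled, and a kubectl new enough for jsonpath waits:

kubectl --context addons-489802 create -f testdata/csi-hostpath-driver/pvc.yaml
kubectl --context addons-489802 wait --for=jsonpath='{.status.phase}'=Bound pvc/hpvc --timeout=6m
kubectl --context addons-489802 create -f testdata/csi-hostpath-driver/pv-pod.yaml
kubectl --context addons-489802 wait --for=condition=Ready pod/task-pv-pod --timeout=6m
kubectl --context addons-489802 create -f testdata/csi-hostpath-driver/snapshot.yaml
kubectl --context addons-489802 delete pod task-pv-pod
kubectl --context addons-489802 delete pvc hpvc
kubectl --context addons-489802 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
kubectl --context addons-489802 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
kubectl --context addons-489802 wait --for=condition=Ready pod/task-pv-pod-restore --timeout=6m
kubectl --context addons-489802 delete pod task-pv-pod-restore
kubectl --context addons-489802 delete pvc hpvc-restore
kubectl --context addons-489802 delete volumesnapshot new-snapshot-demo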

                                                
                                    
x
+
TestAddons/parallel/Headlamp (17.94s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:768: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-489802 --alsologtostderr -v=1
addons_test.go:768: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-489802 --alsologtostderr -v=1: (1.164442239s)
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-lbmjv" [78f3e0b2-7e45-404c-afc1-2ad4f459861b] Pending
helpers_test.go:344: "headlamp-7b5c95b59d-lbmjv" [78f3e0b2-7e45-404c-afc1-2ad4f459861b] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-lbmjv" [78f3e0b2-7e45-404c-afc1-2ad4f459861b] Running
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.004800624s
addons_test.go:777: (dbg) Run:  out/minikube-linux-amd64 -p addons-489802 addons disable headlamp --alsologtostderr -v=1
addons_test.go:777: (dbg) Done: out/minikube-linux-amd64 -p addons-489802 addons disable headlamp --alsologtostderr -v=1: (5.771274846s)
--- PASS: TestAddons/parallel/Headlamp (17.94s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (6.83s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-2xs98" [48614238-3baf-4435-8512-b1c655b18893] Running
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.070714259s
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-489802
--- PASS: TestAddons/parallel/CloudSpanner (6.83s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (12.77s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:920: (dbg) Run:  kubectl --context addons-489802 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:926: (dbg) Run:  kubectl --context addons-489802 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:930: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-489802 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-489802 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-489802 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-489802 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-489802 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-489802 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-489802 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [5a52579e-aa38-4262-8d40-663925dc3ec1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [5a52579e-aa38-4262-8d40-663925dc3ec1] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [5a52579e-aa38-4262-8d40-663925dc3ec1] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.003948245s
addons_test.go:938: (dbg) Run:  kubectl --context addons-489802 get pvc test-pvc -o=json
addons_test.go:947: (dbg) Run:  out/minikube-linux-amd64 -p addons-489802 ssh "cat /opt/local-path-provisioner/pvc-b8225ab7-cae8-4ab5-8ca1-5e74b7712f98_default_test-pvc/file1"
addons_test.go:959: (dbg) Run:  kubectl --context addons-489802 delete pod test-local-path
addons_test.go:963: (dbg) Run:  kubectl --context addons-489802 delete pvc test-pvc
addons_test.go:967: (dbg) Run:  out/minikube-linux-amd64 -p addons-489802 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (12.77s)
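The local-path flow above writes a file from a pod into a host directory derived from the PV name. A sketch of the same round trip, assuming the testdata manifests and that the provisioner keeps its default /opt/local-path-provisioner layout:

kubectl --context addons-489802 apply -f testdata/storage-provisioner-rancher/pvc.yaml
kubectl --context addons-489802 apply -f testdata/storage-provisioner-rancher/pod.yaml
kubectl --context addons-489802 wait --for=jsonpath='{.status.phase}'=Bound pvc/test-pvc --timeout=5m
kubectl --context addons-489802 wait --for=jsonpath='{.status.phase}'=Succeeded pod/test-local-path --timeout=3m
# the host directory embeds the PV name, so look it up before reading the file back
PV=$(kubectl --context addons-489802 get pvc test-pvc -o jsonpath='{.spec.volumeName}')
out/minikube-linux-amd64 -p addons-489802 ssh "cat /opt/local-path-provisioner/${PV}_default_test-pvc/file1"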

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.8s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-54hhx" [b022f644-f7de-4d74-aed4-63ad47ef0b71] Running
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.078374848s
addons_test.go:1002: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-489802
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.80s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (11.92s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-rs2v4" [0db1b32f-ce73-42d7-948a-296195235d9e] Running
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004833093s
addons_test.go:1014: (dbg) Run:  out/minikube-linux-amd64 -p addons-489802 addons disable yakd --alsologtostderr -v=1
addons_test.go:1014: (dbg) Done: out/minikube-linux-amd64 -p addons-489802 addons disable yakd --alsologtostderr -v=1: (5.915723345s)
--- PASS: TestAddons/parallel/Yakd (11.92s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (92.92s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-489802
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-489802: (1m32.643859292s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-489802
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-489802
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-489802
--- PASS: TestAddons/StoppedEnableDisable (92.92s)

                                                
                                    
x
+
TestCertOptions (69.44s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-815898 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-815898 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m7.990626009s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-815898 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-815898 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-815898 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-815898" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-815898
--- PASS: TestCertOptions (69.44s)
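What the test verifies is that the extra SANs and the non-default API server port end up in the generated certificate and kubeconfig. A sketch of the same checks, run before the profile is deleted:

out/minikube-linux-amd64 -p cert-options-815898 ssh \
  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A2 'Subject Alternative Name'
kubectl --context cert-options-815898 config view --minify -o jsonpath='{.clusters[0].cluster.server}'   # expect port 8555
out/minikube-linux-amd64 ssh -p cert-options-815898 -- "sudo grep server: /etc/kubernetes/admin.conf"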

                                                
                                    
x
+
TestCertExpiration (367.47s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-452691 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-452691 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m8.918034393s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-452691 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-452691 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (1m57.503596718s)
helpers_test.go:175: Cleaning up "cert-expiration-452691" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-452691
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-452691: (1.048630448s)
--- PASS: TestCertExpiration (367.47s)
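The two starts above are the point of the test: the first issues certificates with a deliberately short 3m lifetime, and the second, run after that lifetime has elapsed, is expected to detect the expired certificates and regenerate them with the longer 8760h (one-year) lifetime rather than fail. A sketch of the same flow:

out/minikube-linux-amd64 start -p cert-expiration-452691 --memory=2048 --cert-expiration=3m --driver=kvm2 --container-runtime=crio
sleep 180   # let the short-lived certificates expire
out/minikube-linux-amd64 start -p cert-expiration-452691 --memory=2048 --cert-expiration=8760h --driver=kvm2 --container-runtime=crio
out/minikube-linux-amd64 delete -p cert-expiration-452691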

                                                
                                    
x
+
TestForceSystemdFlag (83.46s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-956160 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-956160 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m22.266389978s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-956160 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-956160" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-956160
--- PASS: TestForceSystemdFlag (83.46s)
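The ssh step above reads CRI-O's drop-in configuration; with --force-systemd the expectation is that the systemd cgroup manager is selected. A sketch of the check:

out/minikube-linux-amd64 -p force-systemd-flag-956160 ssh "cat /etc/crio/crio.conf.d/02-crio.conf" | grep cgroup_manager
# expected to show: cgroup_manager = "systemd"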

                                                
                                    
x
+
TestForceSystemdEnv (59.28s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-030548 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-030548 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (58.308086423s)
helpers_test.go:175: Cleaning up "force-systemd-env-030548" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-030548
--- PASS: TestForceSystemdEnv (59.28s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (4.14s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I0920 17:58:54.701864   15973 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0920 17:58:54.702124   15973 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0920 17:58:54.763731   15973 install.go:62] docker-machine-driver-kvm2: exit status 1
W0920 17:58:54.764768   15973 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0920 17:58:54.764885   15973 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3105688446/001/docker-machine-driver-kvm2
I0920 17:58:54.985751   15973 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate3105688446/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x4665640 0x4665640 0x4665640 0x4665640 0x4665640 0x4665640 0x4665640] Decompressors:map[bz2:0xc00070b650 gz:0xc00070b658 tar:0xc00070b600 tar.bz2:0xc00070b610 tar.gz:0xc00070b620 tar.xz:0xc00070b630 tar.zst:0xc00070b640 tbz2:0xc00070b610 tgz:0xc00070b620 txz:0xc00070b630 tzst:0xc00070b640 xz:0xc00070b660 zip:0xc00070b670 zst:0xc00070b668] Getters:map[file:0xc001b6edd0 http:0xc0019033b0 https:0xc001903400] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0920 17:58:54.985809   15973 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3105688446/001/docker-machine-driver-kvm2
I0920 17:58:56.689593   15973 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0920 17:58:56.689693   15973 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0920 17:58:56.736920   15973 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0920 17:58:56.736960   15973 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0920 17:58:56.737040   15973 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0920 17:58:56.737077   15973 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3105688446/002/docker-machine-driver-kvm2
I0920 17:58:56.781629   15973 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate3105688446/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x4665640 0x4665640 0x4665640 0x4665640 0x4665640 0x4665640 0x4665640] Decompressors:map[bz2:0xc00070b650 gz:0xc00070b658 tar:0xc00070b600 tar.bz2:0xc00070b610 tar.gz:0xc00070b620 tar.xz:0xc00070b630 tar.zst:0xc00070b640 tbz2:0xc00070b610 tgz:0xc00070b620 txz:0xc00070b630 tzst:0xc00070b640 xz:0xc00070b660 zip:0xc00070b670 zst:0xc00070b668] Getters:map[file:0xc000a283e0 http:0xc0000de370 https:0xc0000de3c0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0920 17:58:56.781673   15973 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3105688446/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (4.14s)

                                                
                                    
x
+
TestErrorSpam/setup (42.81s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-084981 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-084981 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-084981 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-084981 --driver=kvm2  --container-runtime=crio: (42.807297721s)
--- PASS: TestErrorSpam/setup (42.81s)

                                                
                                    
x
+
TestErrorSpam/start (0.34s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-084981 --log_dir /tmp/nospam-084981 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-084981 --log_dir /tmp/nospam-084981 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-084981 --log_dir /tmp/nospam-084981 start --dry-run
--- PASS: TestErrorSpam/start (0.34s)

                                                
                                    
x
+
TestErrorSpam/status (0.73s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-084981 --log_dir /tmp/nospam-084981 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-084981 --log_dir /tmp/nospam-084981 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-084981 --log_dir /tmp/nospam-084981 status
--- PASS: TestErrorSpam/status (0.73s)

                                                
                                    
x
+
TestErrorSpam/pause (1.64s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-084981 --log_dir /tmp/nospam-084981 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-084981 --log_dir /tmp/nospam-084981 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-084981 --log_dir /tmp/nospam-084981 pause
--- PASS: TestErrorSpam/pause (1.64s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.68s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-084981 --log_dir /tmp/nospam-084981 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-084981 --log_dir /tmp/nospam-084981 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-084981 --log_dir /tmp/nospam-084981 unpause
--- PASS: TestErrorSpam/unpause (1.68s)

                                                
                                    
x
+
TestErrorSpam/stop (5.16s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-084981 --log_dir /tmp/nospam-084981 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-084981 --log_dir /tmp/nospam-084981 stop: (1.597865197s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-084981 --log_dir /tmp/nospam-084981 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-084981 --log_dir /tmp/nospam-084981 stop: (2.073392654s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-084981 --log_dir /tmp/nospam-084981 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-084981 --log_dir /tmp/nospam-084981 stop: (1.486128379s)
--- PASS: TestErrorSpam/stop (5.16s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19672-8777/.minikube/files/etc/test/nested/copy/15973/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (56.55s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-945494 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-945494 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (56.553560821s)
--- PASS: TestFunctional/serial/StartWithProxy (56.55s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (33.73s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I0920 17:05:15.179254   15973 config.go:182] Loaded profile config "functional-945494": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-945494 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-945494 --alsologtostderr -v=8: (33.728285268s)
functional_test.go:663: soft start took 33.729252948s for "functional-945494" cluster.
I0920 17:05:48.907950   15973 config.go:182] Loaded profile config "functional-945494": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (33.73s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-945494 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (4.41s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-945494 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-945494 cache add registry.k8s.io/pause:3.1: (1.481205673s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-945494 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-945494 cache add registry.k8s.io/pause:3.3: (1.532411257s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-945494 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-945494 cache add registry.k8s.io/pause:latest: (1.394336423s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.41s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (2.15s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-945494 /tmp/TestFunctionalserialCacheCmdcacheadd_local2584465606/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-945494 cache add minikube-local-cache-test:functional-945494
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-945494 cache add minikube-local-cache-test:functional-945494: (1.802265054s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-945494 cache delete minikube-local-cache-test:functional-945494
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-945494
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.15s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-945494 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.77s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-945494 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-945494 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-945494 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (205.761766ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-945494 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-amd64 -p functional-945494 cache reload: (1.098216772s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-945494 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.77s)
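The reload sequence above can be replayed directly: remove the cached image from the node's container store, confirm it is gone, then repopulate it from minikube's on-host cache:

out/minikube-linux-amd64 -p functional-945494 ssh sudo crictl rmi registry.k8s.io/pause:latest
out/minikube-linux-amd64 -p functional-945494 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exits 1: image absent
out/minikube-linux-amd64 -p functional-945494 cache reload
out/minikube-linux-amd64 -p functional-945494 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exits 0 again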

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.16s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-945494 kubectl -- --context functional-945494 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.16s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-945494 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (34.83s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-945494 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-945494 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (34.828167616s)
functional_test.go:761: restart took 34.828337645s for "functional-945494" cluster.
I0920 17:06:32.893687   15973 config.go:182] Loaded profile config "functional-945494": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (34.83s)
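The restart above passes a component flag through to kubeadm via --extra-config. A sketch of confirming that the admission plugin actually reached the running apiserver, assuming the standard component=kube-apiserver label on the static pod:

out/minikube-linux-amd64 start -p functional-945494 \
  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
kubectl --context functional-945494 -n kube-system get pod -l component=kube-apiserver \
  -o jsonpath='{.items[0].spec.containers[0].command}' | tr ',' '\n' | grep enable-admission-plugins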

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-945494 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.39s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-945494 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-945494 logs: (1.38713912s)
--- PASS: TestFunctional/serial/LogsCmd (1.39s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.38s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-945494 logs --file /tmp/TestFunctionalserialLogsFileCmd1657506182/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-945494 logs --file /tmp/TestFunctionalserialLogsFileCmd1657506182/001/logs.txt: (1.380454472s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.38s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (3.95s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-945494 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-945494
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-945494: exit status 115 (278.563817ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.181:30708 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-945494 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.95s)

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-945494 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-945494 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-945494 config get cpus: exit status 14 (70.71004ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-945494 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-945494 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-945494 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-945494 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-945494 config get cpus: exit status 14 (56.586416ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.39s)
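For reference, the config round trip exercised above; a missing key makes `config get` exit with status 14:

out/minikube-linux-amd64 -p functional-945494 config set cpus 2
out/minikube-linux-amd64 -p functional-945494 config get cpus     # prints 2
out/minikube-linux-amd64 -p functional-945494 config unset cpus
out/minikube-linux-amd64 -p functional-945494 config get cpus     # exit status 14: key not found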

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (22.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-945494 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-945494 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 27347: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (22.76s)

TestFunctional/parallel/DryRun (0.3s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-945494 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-945494 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (144.988583ms)
-- stdout --
	* [functional-945494] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19672
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19672-8777/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-8777/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0920 17:06:51.984904   26218 out.go:345] Setting OutFile to fd 1 ...
	I0920 17:06:51.985063   26218 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:06:51.985076   26218 out.go:358] Setting ErrFile to fd 2...
	I0920 17:06:51.985083   26218 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:06:51.985345   26218 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-8777/.minikube/bin
	I0920 17:06:51.986104   26218 out.go:352] Setting JSON to false
	I0920 17:06:51.987318   26218 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2955,"bootTime":1726849057,"procs":223,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 17:06:51.987436   26218 start.go:139] virtualization: kvm guest
	I0920 17:06:51.990171   26218 out.go:177] * [functional-945494] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 17:06:51.991746   26218 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 17:06:51.991803   26218 notify.go:220] Checking for updates...
	I0920 17:06:51.994594   26218 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 17:06:51.996193   26218 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19672-8777/kubeconfig
	I0920 17:06:51.997772   26218 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-8777/.minikube
	I0920 17:06:51.999917   26218 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 17:06:52.001407   26218 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 17:06:52.003399   26218 config.go:182] Loaded profile config "functional-945494": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 17:06:52.003817   26218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:06:52.003895   26218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:06:52.020401   26218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46269
	I0920 17:06:52.021002   26218 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:06:52.021619   26218 main.go:141] libmachine: Using API Version  1
	I0920 17:06:52.021678   26218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:06:52.022062   26218 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:06:52.022247   26218 main.go:141] libmachine: (functional-945494) Calling .DriverName
	I0920 17:06:52.022504   26218 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 17:06:52.022920   26218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:06:52.022962   26218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:06:52.039401   26218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34493
	I0920 17:06:52.039934   26218 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:06:52.040401   26218 main.go:141] libmachine: Using API Version  1
	I0920 17:06:52.040420   26218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:06:52.040791   26218 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:06:52.040989   26218 main.go:141] libmachine: (functional-945494) Calling .DriverName
	I0920 17:06:52.074507   26218 out.go:177] * Using the kvm2 driver based on existing profile
	I0920 17:06:52.075995   26218 start.go:297] selected driver: kvm2
	I0920 17:06:52.076009   26218 start.go:901] validating driver "kvm2" against &{Name:functional-945494 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:functional-945494 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.181 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 17:06:52.076126   26218 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 17:06:52.078328   26218 out.go:201] 
	W0920 17:06:52.079606   26218 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0920 17:06:52.080871   26218 out.go:201] 
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-945494 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.30s)
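The dry-run failure above is minikube's pre-flight memory validation: a --memory request below the usable minimum of 1800MB is rejected with exit code 23 (RSRC_INSUFFICIENT_REQ_MEMORY) before anything is changed. A minimal reproduction sketch, assuming minikube is on PATH:

    minikube start -p functional-945494 --dry-run --memory 250MB --driver=kvm2 --container-runtime=crio
    echo $?    # 23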

TestFunctional/parallel/InternationalLanguage (0.17s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-945494 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-945494 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (171.462753ms)
-- stdout --
	* [functional-945494] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19672
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19672-8777/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-8777/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0920 17:06:52.303963   26303 out.go:345] Setting OutFile to fd 1 ...
	I0920 17:06:52.304316   26303 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:06:52.304328   26303 out.go:358] Setting ErrFile to fd 2...
	I0920 17:06:52.304336   26303 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:06:52.304710   26303 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-8777/.minikube/bin
	I0920 17:06:52.305357   26303 out.go:352] Setting JSON to false
	I0920 17:06:52.306475   26303 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2955,"bootTime":1726849057,"procs":228,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 17:06:52.306581   26303 start.go:139] virtualization: kvm guest
	I0920 17:06:52.308551   26303 out.go:177] * [functional-945494] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I0920 17:06:52.310219   26303 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 17:06:52.310279   26303 notify.go:220] Checking for updates...
	I0920 17:06:52.313197   26303 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 17:06:52.314813   26303 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19672-8777/kubeconfig
	I0920 17:06:52.316391   26303 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-8777/.minikube
	I0920 17:06:52.317719   26303 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 17:06:52.318975   26303 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 17:06:52.320874   26303 config.go:182] Loaded profile config "functional-945494": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 17:06:52.321397   26303 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:06:52.321473   26303 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:06:52.338373   26303 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34153
	I0920 17:06:52.338824   26303 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:06:52.339356   26303 main.go:141] libmachine: Using API Version  1
	I0920 17:06:52.339377   26303 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:06:52.339739   26303 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:06:52.339973   26303 main.go:141] libmachine: (functional-945494) Calling .DriverName
	I0920 17:06:52.340264   26303 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 17:06:52.340737   26303 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:06:52.340826   26303 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:06:52.360372   26303 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40227
	I0920 17:06:52.360925   26303 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:06:52.361489   26303 main.go:141] libmachine: Using API Version  1
	I0920 17:06:52.361516   26303 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:06:52.361968   26303 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:06:52.362180   26303 main.go:141] libmachine: (functional-945494) Calling .DriverName
	I0920 17:06:52.403023   26303 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0920 17:06:52.404423   26303 start.go:297] selected driver: kvm2
	I0920 17:06:52.404439   26303 start.go:901] validating driver "kvm2" against &{Name:functional-945494 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:functional-945494 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.181 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 17:06:52.404591   26303 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 17:06:52.407020   26303 out.go:201] 
	W0920 17:06:52.408976   26303 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0920 17:06:52.410718   26303 out.go:201] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.17s)

TestFunctional/parallel/StatusCmd (1.09s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-945494 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-945494 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-945494 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.09s)
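The status checks above exercise the different output modes of minikube status; a sketch, reusing this run's profile name (the single-field Go-template form is illustrative):

    minikube -p functional-945494 status                     # human-readable summary
    minikube -p functional-945494 status -o json             # machine-readable
    minikube -p functional-945494 status -f '{{.Host}}'      # single field via a Go template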

TestFunctional/parallel/ServiceCmdConnect (11.65s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-945494 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-945494 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-ttgmp" [82d57e4f-af4e-474a-85b0-215b6c2bb6ae] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-ttgmp" [82d57e4f-af4e-474a-85b0-215b6c2bb6ae] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.006940015s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-945494 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.181:30811
functional_test.go:1675: http://192.168.39.181:30811: success! body:
Hostname: hello-node-connect-67bdd5bbb4-ttgmp

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.181:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.181:30811
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.65s)
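The flow above is the usual NodePort round trip: create a Deployment, expose it on a node port, then ask minikube for a reachable URL. A sketch with the same commands; curl stands in for the test's HTTP check, and the profile and image names are taken from this run:

    kubectl create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
    kubectl expose deployment hello-node-connect --type=NodePort --port=8080
    URL=$(minikube -p functional-945494 service hello-node-connect --url)    # e.g. http://192.168.39.181:30811
    curl -s "$URL"    # echoserver answers with the request details shown above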

TestFunctional/parallel/AddonsCmd (0.13s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-945494 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-945494 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

TestFunctional/parallel/PersistentVolumeClaim (38.48s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [0da1f60a-fa61-4393-9ba9-528d8300e00c] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.005367858s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-945494 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-945494 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-945494 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-945494 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [61d7b992-1343-40c6-a805-75d1c76578a8] Pending
helpers_test.go:344: "sp-pod" [61d7b992-1343-40c6-a805-75d1c76578a8] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [61d7b992-1343-40c6-a805-75d1c76578a8] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.006394322s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-945494 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-945494 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-945494 delete -f testdata/storage-provisioner/pod.yaml: (2.089826469s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-945494 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [3aef59a1-2f45-4d7f-bf37-6212e9baf0cf] Pending
helpers_test.go:344: "sp-pod" [3aef59a1-2f45-4d7f-bf37-6212e9baf0cf] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [3aef59a1-2f45-4d7f-bf37-6212e9baf0cf] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 16.014459379s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-945494 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (38.48s)
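The test above checks that data on a PVC-backed volume outlives the pod: a file is written from the first pod, the pod is deleted, and a new pod mounting the same claim still sees it. A condensed sketch using the same testdata manifests (wait for sp-pod to be Running before each exec, as the test does):

    kubectl apply -f testdata/storage-provisioner/pvc.yaml
    kubectl apply -f testdata/storage-provisioner/pod.yaml
    kubectl exec sp-pod -- touch /tmp/mount/foo
    kubectl delete -f testdata/storage-provisioner/pod.yaml
    kubectl apply -f testdata/storage-provisioner/pod.yaml
    kubectl exec sp-pod -- ls /tmp/mount    # foo should still be listed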

TestFunctional/parallel/SSHCmd (0.48s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-945494 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-945494 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.48s)

TestFunctional/parallel/CpCmd (1.35s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-945494 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-945494 ssh -n functional-945494 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-945494 cp functional-945494:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd4073062529/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-945494 ssh -n functional-945494 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-945494 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-945494 ssh -n functional-945494 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.35s)

TestFunctional/parallel/MySQL (28.15s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-945494 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-fldl8" [54285573-5f1e-4cce-bf88-389eb44937df] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-fldl8" [54285573-5f1e-4cce-bf88-389eb44937df] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 24.008512135s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-945494 exec mysql-6cdb49bbb-fldl8 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-945494 exec mysql-6cdb49bbb-fldl8 -- mysql -ppassword -e "show databases;": exit status 1 (290.582612ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
I0920 17:07:17.554811   15973 retry.go:31] will retry after 1.170264439s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-945494 exec mysql-6cdb49bbb-fldl8 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-945494 exec mysql-6cdb49bbb-fldl8 -- mysql -ppassword -e "show databases;": exit status 1 (241.919208ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
I0920 17:07:18.967605   15973 retry.go:31] will retry after 1.984013125s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-945494 exec mysql-6cdb49bbb-fldl8 -- mysql -ppassword -e "show databases;"
2024/09/20 17:07:26 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/MySQL (28.15s)
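The two failed exec attempts above are expected: right after the mysql pod reports Running, mysqld is still initializing, so the harness retries until the query succeeds. A sketch of the same retry, with the pod name taken from this run:

    # retry until mysqld accepts connections (early attempts may fail with ERROR 1045/2002)
    until kubectl exec mysql-6cdb49bbb-fldl8 -- mysql -ppassword -e "show databases;"; do
      sleep 2
    done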

TestFunctional/parallel/FileSync (0.24s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/15973/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-945494 ssh "sudo cat /etc/test/nested/copy/15973/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.24s)

TestFunctional/parallel/CertSync (1.54s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/15973.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-945494 ssh "sudo cat /etc/ssl/certs/15973.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/15973.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-945494 ssh "sudo cat /usr/share/ca-certificates/15973.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-945494 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/159732.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-945494 ssh "sudo cat /etc/ssl/certs/159732.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/159732.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-945494 ssh "sudo cat /usr/share/ca-certificates/159732.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-945494 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.54s)

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-945494 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.42s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-945494 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-945494 ssh "sudo systemctl is-active docker": exit status 1 (207.970232ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-945494 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-945494 ssh "sudo systemctl is-active containerd": exit status 1 (212.431437ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.42s)
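The non-zero exits above are the point of the test: systemctl is-active prints "inactive" and exits with status 3 for a unit that is not running, so with cri-o as the configured runtime both docker and containerd should report inactive. A sketch, reusing this run's profile name (the crio line is an assumption added for contrast):

    minikube -p functional-945494 ssh "sudo systemctl is-active docker"        # "inactive", exit status 3
    minikube -p functional-945494 ssh "sudo systemctl is-active containerd"    # "inactive", exit status 3
    minikube -p functional-945494 ssh "sudo systemctl is-active crio"          # expected "active", exit status 0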

TestFunctional/parallel/License (0.59s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.59s)

TestFunctional/parallel/ServiceCmd/DeployApp (11.26s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-945494 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-945494 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-rjr9n" [ca8dce90-c5cc-4cfa-92e6-f6cae86244af] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-rjr9n" [ca8dce90-c5cc-4cfa-92e6-f6cae86244af] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.004400002s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.26s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-945494 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-945494 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-945494 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.38s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.38s)

TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "363.616809ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "52.322949ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.32s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "268.845463ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "46.735731ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.32s)

TestFunctional/parallel/MountCmd/any-port (15.71s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-945494 /tmp/TestFunctionalparallelMountCmdany-port4054701743/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1726852004773880898" to /tmp/TestFunctionalparallelMountCmdany-port4054701743/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1726852004773880898" to /tmp/TestFunctionalparallelMountCmdany-port4054701743/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1726852004773880898" to /tmp/TestFunctionalparallelMountCmdany-port4054701743/001/test-1726852004773880898
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-945494 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-945494 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (201.844606ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I0920 17:06:44.976104   15973 retry.go:31] will retry after 599.434936ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-945494 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-945494 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 20 17:06 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 20 17:06 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 20 17:06 test-1726852004773880898
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-945494 ssh cat /mount-9p/test-1726852004773880898
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-945494 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [ddacf0da-738c-4498-a234-e3b7b32fdd26] Pending
helpers_test.go:344: "busybox-mount" [ddacf0da-738c-4498-a234-e3b7b32fdd26] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [ddacf0da-738c-4498-a234-e3b7b32fdd26] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [ddacf0da-738c-4498-a234-e3b7b32fdd26] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 13.004241942s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-945494 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-945494 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-945494 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-945494 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-945494 /tmp/TestFunctionalparallelMountCmdany-port4054701743/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (15.71s)
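The mount test runs minikube mount as a background process and verifies the 9p mount from inside the VM. A sketch of the same steps, assuming minikube is on PATH; the host directory below is illustrative:

    minikube -p functional-945494 mount /tmp/mount-demo:/mount-9p &    # keep running in the background
    minikube -p functional-945494 ssh "findmnt -T /mount-9p | grep 9p"
    minikube -p functional-945494 ssh -- ls -la /mount-9p
    minikube -p functional-945494 ssh "sudo umount -f /mount-9p"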

TestFunctional/parallel/ServiceCmd/List (0.28s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-945494 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.28s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.68s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-945494 service list -o json
functional_test.go:1494: Took "678.236442ms" to run "out/minikube-linux-amd64 -p functional-945494 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.68s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.34s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-945494 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.181:30694
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.34s)

TestFunctional/parallel/ServiceCmd/Format (0.35s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-945494 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.35s)

TestFunctional/parallel/ServiceCmd/URL (0.28s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-945494 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.181:30694
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.28s)

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-945494 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (0.57s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-945494 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.57s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-945494 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-945494 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-945494
localhost/kicbase/echo-server:functional-945494
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20240813-c6f155d6
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-945494 image ls --format short --alsologtostderr:
I0920 17:07:12.844449   27574 out.go:345] Setting OutFile to fd 1 ...
I0920 17:07:12.844714   27574 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 17:07:12.844723   27574 out.go:358] Setting ErrFile to fd 2...
I0920 17:07:12.844728   27574 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 17:07:12.844897   27574 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-8777/.minikube/bin
I0920 17:07:12.845511   27574 config.go:182] Loaded profile config "functional-945494": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0920 17:07:12.845611   27574 config.go:182] Loaded profile config "functional-945494": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0920 17:07:12.846009   27574 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 17:07:12.846051   27574 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 17:07:12.861263   27574 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46733
I0920 17:07:12.861731   27574 main.go:141] libmachine: () Calling .GetVersion
I0920 17:07:12.862375   27574 main.go:141] libmachine: Using API Version  1
I0920 17:07:12.862402   27574 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 17:07:12.862755   27574 main.go:141] libmachine: () Calling .GetMachineName
I0920 17:07:12.862936   27574 main.go:141] libmachine: (functional-945494) Calling .GetState
I0920 17:07:12.864650   27574 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 17:07:12.864686   27574 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 17:07:12.880380   27574 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35409
I0920 17:07:12.880923   27574 main.go:141] libmachine: () Calling .GetVersion
I0920 17:07:12.881401   27574 main.go:141] libmachine: Using API Version  1
I0920 17:07:12.881424   27574 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 17:07:12.881796   27574 main.go:141] libmachine: () Calling .GetMachineName
I0920 17:07:12.882002   27574 main.go:141] libmachine: (functional-945494) Calling .DriverName
I0920 17:07:12.882214   27574 ssh_runner.go:195] Run: systemctl --version
I0920 17:07:12.882248   27574 main.go:141] libmachine: (functional-945494) Calling .GetSSHHostname
I0920 17:07:12.885286   27574 main.go:141] libmachine: (functional-945494) DBG | domain functional-945494 has defined MAC address 52:54:00:50:b8:7d in network mk-functional-945494
I0920 17:07:12.885770   27574 main.go:141] libmachine: (functional-945494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:b8:7d", ip: ""} in network mk-functional-945494: {Iface:virbr1 ExpiryTime:2024-09-20 18:04:33 +0000 UTC Type:0 Mac:52:54:00:50:b8:7d Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:functional-945494 Clientid:01:52:54:00:50:b8:7d}
I0920 17:07:12.885805   27574 main.go:141] libmachine: (functional-945494) DBG | domain functional-945494 has defined IP address 192.168.39.181 and MAC address 52:54:00:50:b8:7d in network mk-functional-945494
I0920 17:07:12.885954   27574 main.go:141] libmachine: (functional-945494) Calling .GetSSHPort
I0920 17:07:12.886165   27574 main.go:141] libmachine: (functional-945494) Calling .GetSSHKeyPath
I0920 17:07:12.886388   27574 main.go:141] libmachine: (functional-945494) Calling .GetSSHUsername
I0920 17:07:12.886531   27574 sshutil.go:53] new ssh client: &{IP:192.168.39.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/functional-945494/id_rsa Username:docker}
I0920 17:07:12.964706   27574 ssh_runner.go:195] Run: sudo crictl images --output json
I0920 17:07:13.004104   27574 main.go:141] libmachine: Making call to close driver server
I0920 17:07:13.004120   27574 main.go:141] libmachine: (functional-945494) Calling .Close
I0920 17:07:13.004408   27574 main.go:141] libmachine: Successfully made call to close driver server
I0920 17:07:13.004431   27574 main.go:141] libmachine: (functional-945494) DBG | Closing plugin on server side
I0920 17:07:13.004446   27574 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 17:07:13.004466   27574 main.go:141] libmachine: Making call to close driver server
I0920 17:07:13.004476   27574 main.go:141] libmachine: (functional-945494) Calling .Close
I0920 17:07:13.004754   27574 main.go:141] libmachine: Successfully made call to close driver server
I0920 17:07:13.004772   27574 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-945494 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-945494 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/kube-proxy              | v1.31.1            | 60c005f310ff3 | 92.7MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-controller-manager | v1.31.1            | 175ffd71cce3d | 89.4MB |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| docker.io/library/nginx                 | latest             | 39286ab8a5e14 | 192MB  |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| localhost/kicbase/echo-server           | functional-945494  | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/etcd                    | 3.5.15-0           | 2e96e5913fc06 | 149MB  |
| registry.k8s.io/kube-apiserver          | v1.31.1            | 6bab7719df100 | 95.2MB |
| registry.k8s.io/kube-scheduler          | v1.31.1            | 9aa1fad941575 | 68.4MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| localhost/minikube-local-cache-test     | functional-945494  | 8a47a81631cf0 | 3.33kB |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/kindest/kindnetd              | v20240813-c6f155d6 | 12968670680f4 | 87.2MB |
| localhost/my-image                      | functional-945494  | a6267f13b19e0 | 1.47MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-945494 image ls --format table --alsologtostderr:
I0920 17:07:18.680753   27760 out.go:345] Setting OutFile to fd 1 ...
I0920 17:07:18.680877   27760 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 17:07:18.680888   27760 out.go:358] Setting ErrFile to fd 2...
I0920 17:07:18.680895   27760 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 17:07:18.681156   27760 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-8777/.minikube/bin
I0920 17:07:18.682058   27760 config.go:182] Loaded profile config "functional-945494": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0920 17:07:18.682203   27760 config.go:182] Loaded profile config "functional-945494": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0920 17:07:18.682830   27760 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 17:07:18.682884   27760 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 17:07:18.698726   27760 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43729
I0920 17:07:18.699259   27760 main.go:141] libmachine: () Calling .GetVersion
I0920 17:07:18.699999   27760 main.go:141] libmachine: Using API Version  1
I0920 17:07:18.700033   27760 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 17:07:18.700418   27760 main.go:141] libmachine: () Calling .GetMachineName
I0920 17:07:18.700607   27760 main.go:141] libmachine: (functional-945494) Calling .GetState
I0920 17:07:18.702501   27760 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 17:07:18.702580   27760 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 17:07:18.717883   27760 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43871
I0920 17:07:18.718291   27760 main.go:141] libmachine: () Calling .GetVersion
I0920 17:07:18.718985   27760 main.go:141] libmachine: Using API Version  1
I0920 17:07:18.719011   27760 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 17:07:18.719340   27760 main.go:141] libmachine: () Calling .GetMachineName
I0920 17:07:18.719544   27760 main.go:141] libmachine: (functional-945494) Calling .DriverName
I0920 17:07:18.719766   27760 ssh_runner.go:195] Run: systemctl --version
I0920 17:07:18.719808   27760 main.go:141] libmachine: (functional-945494) Calling .GetSSHHostname
I0920 17:07:18.723101   27760 main.go:141] libmachine: (functional-945494) DBG | domain functional-945494 has defined MAC address 52:54:00:50:b8:7d in network mk-functional-945494
I0920 17:07:18.723682   27760 main.go:141] libmachine: (functional-945494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:b8:7d", ip: ""} in network mk-functional-945494: {Iface:virbr1 ExpiryTime:2024-09-20 18:04:33 +0000 UTC Type:0 Mac:52:54:00:50:b8:7d Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:functional-945494 Clientid:01:52:54:00:50:b8:7d}
I0920 17:07:18.723728   27760 main.go:141] libmachine: (functional-945494) DBG | domain functional-945494 has defined IP address 192.168.39.181 and MAC address 52:54:00:50:b8:7d in network mk-functional-945494
I0920 17:07:18.723896   27760 main.go:141] libmachine: (functional-945494) Calling .GetSSHPort
I0920 17:07:18.724068   27760 main.go:141] libmachine: (functional-945494) Calling .GetSSHKeyPath
I0920 17:07:18.724220   27760 main.go:141] libmachine: (functional-945494) Calling .GetSSHUsername
I0920 17:07:18.724362   27760 sshutil.go:53] new ssh client: &{IP:192.168.39.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/functional-945494/id_rsa Username:docker}
I0920 17:07:18.848622   27760 ssh_runner.go:195] Run: sudo crictl images --output json
I0920 17:07:18.909822   27760 main.go:141] libmachine: Making call to close driver server
I0920 17:07:18.909858   27760 main.go:141] libmachine: (functional-945494) Calling .Close
I0920 17:07:18.910108   27760 main.go:141] libmachine: Successfully made call to close driver server
I0920 17:07:18.910121   27760 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 17:07:18.910135   27760 main.go:141] libmachine: Making call to close driver server
I0920 17:07:18.910142   27760 main.go:141] libmachine: (functional-945494) Calling .Close
I0920 17:07:18.910381   27760 main.go:141] libmachine: Successfully made call to close driver server
I0920 17:07:18.910399   27760 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.30s)
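For reference, the table listing exercised here does not depend on the test harness; a released minikube binary accepts the same subcommands as the locally built out/minikube-linux-amd64 used above. A minimal sketch, assuming a running profile (the name functional-945494 is simply the profile from this run):

# list the images known to the cluster's container runtime, one row per image
minikube -p functional-945494 image ls --format table
# the default (short) format prints only repository:tag strings
minikube -p functional-945494 image ls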

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-945494 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-945494 image ls --format json --alsologtostderr:
[{"id":"60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561","repoDigests":["registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44","registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"92733849"},{"id":"9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b","repoDigests":["registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0","registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"68420934"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"r
epoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"450261e15efa73229ffa9e5c8aef532f97e8ae18e895444421a70255ddc0f9a0","repoDigests":["docker.io/library/5ba70848c8cefb46a2650863d9e7cdb7aebfb9954a36d7fbc657007025858efb-tmp@sha256:bd0c237b5bc98af21044a93bab7c4982534dab381feb2a91964f8dc0160ce784"],"repoTags":[],"size":"1466018"},{"id":"a6267f13b19e00134342025a97c0f270f6ce06ebfd61ee1ad464f0e2cf2d4aee","repoDigests":["localhost/my-image@sha256:906e55d822358551c56d60a7ce7cda542ca69e03e7025ad2a4a2af2f9e9a1bf3"],"repoTags":["localhost/my-image:functional-945494"],"size":"1468600"},{"id":"8a47a81631cf03b034e7e73a0a3e65ac4b2625d7e8aaa3ce14655ff8844ee80b","repoDigests":["localhost/minikube-local-cache-test@sha256:c6db0a86b9f8598caf189c1898ac46687310961f71879bbd5759868e59a125fd"],"repoTags":["localhost/minikube-local-cache-test:functional-945494"],"size":"3330"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d6
1285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-945494"],"size":"4943877"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e","registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"63273227"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":["registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d","registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"149009664"},{"id":"175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6d
b6d998674b34181e1a1","registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"89437508"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3","repoDigests":["docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3","docker.io/library/nginx@sha256:88a0a069d5e9865fcaaf8c1e53ba6bf3d8d987b0fdc5e0135fec8ce8567d673e"],"repoTags":["docker.io/library/nginx:latest"],"size":"191853369"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/b
usybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e39
9310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee","repoDigests":["registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771","registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"95237600"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208
bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f","repoDigests":["docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b","docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"87190579"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:lates
t"],"size":"247077"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-945494 image ls --format json --alsologtostderr:
I0920 17:07:18.598308   27743 out.go:345] Setting OutFile to fd 1 ...
I0920 17:07:18.598428   27743 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 17:07:18.598438   27743 out.go:358] Setting ErrFile to fd 2...
I0920 17:07:18.598442   27743 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 17:07:18.598631   27743 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-8777/.minikube/bin
I0920 17:07:18.599224   27743 config.go:182] Loaded profile config "functional-945494": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0920 17:07:18.599321   27743 config.go:182] Loaded profile config "functional-945494": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0920 17:07:18.599677   27743 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 17:07:18.599716   27743 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 17:07:18.615060   27743 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44293
I0920 17:07:18.615568   27743 main.go:141] libmachine: () Calling .GetVersion
I0920 17:07:18.616335   27743 main.go:141] libmachine: Using API Version  1
I0920 17:07:18.616367   27743 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 17:07:18.616876   27743 main.go:141] libmachine: () Calling .GetMachineName
I0920 17:07:18.617104   27743 main.go:141] libmachine: (functional-945494) Calling .GetState
I0920 17:07:18.619520   27743 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 17:07:18.619571   27743 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 17:07:18.636513   27743 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45613
I0920 17:07:18.636874   27743 main.go:141] libmachine: () Calling .GetVersion
I0920 17:07:18.637419   27743 main.go:141] libmachine: Using API Version  1
I0920 17:07:18.637450   27743 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 17:07:18.637812   27743 main.go:141] libmachine: () Calling .GetMachineName
I0920 17:07:18.638012   27743 main.go:141] libmachine: (functional-945494) Calling .DriverName
I0920 17:07:18.638217   27743 ssh_runner.go:195] Run: systemctl --version
I0920 17:07:18.638285   27743 main.go:141] libmachine: (functional-945494) Calling .GetSSHHostname
I0920 17:07:18.641565   27743 main.go:141] libmachine: (functional-945494) DBG | domain functional-945494 has defined MAC address 52:54:00:50:b8:7d in network mk-functional-945494
I0920 17:07:18.642075   27743 main.go:141] libmachine: (functional-945494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:b8:7d", ip: ""} in network mk-functional-945494: {Iface:virbr1 ExpiryTime:2024-09-20 18:04:33 +0000 UTC Type:0 Mac:52:54:00:50:b8:7d Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:functional-945494 Clientid:01:52:54:00:50:b8:7d}
I0920 17:07:18.642137   27743 main.go:141] libmachine: (functional-945494) DBG | domain functional-945494 has defined IP address 192.168.39.181 and MAC address 52:54:00:50:b8:7d in network mk-functional-945494
I0920 17:07:18.642376   27743 main.go:141] libmachine: (functional-945494) Calling .GetSSHPort
I0920 17:07:18.642552   27743 main.go:141] libmachine: (functional-945494) Calling .GetSSHKeyPath
I0920 17:07:18.642670   27743 main.go:141] libmachine: (functional-945494) Calling .GetSSHUsername
I0920 17:07:18.642828   27743 sshutil.go:53] new ssh client: &{IP:192.168.39.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/functional-945494/id_rsa Username:docker}
I0920 17:07:18.773015   27743 ssh_runner.go:195] Run: sudo crictl images --output json
I0920 17:07:18.891574   27743 main.go:141] libmachine: Making call to close driver server
I0920 17:07:18.891587   27743 main.go:141] libmachine: (functional-945494) Calling .Close
I0920 17:07:18.891875   27743 main.go:141] libmachine: Successfully made call to close driver server
I0920 17:07:18.891895   27743 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 17:07:18.891969   27743 main.go:141] libmachine: Making call to close driver server
I0920 17:07:18.891985   27743 main.go:141] libmachine: (functional-945494) Calling .Close
I0920 17:07:18.892211   27743 main.go:141] libmachine: Successfully made call to close driver server
I0920 17:07:18.892226   27743 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 17:07:18.892265   27743 main.go:141] libmachine: (functional-945494) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.35s)
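The JSON variant emits a single array of image objects (id, repoDigests, repoTags, size), which is the easiest form to post-process. A minimal sketch, assuming jq is available on the host (jq is not part of this test run and is used only for illustration):

# print every repository:tag known to the runtime
minikube -p functional-945494 image ls --format json | jq -r '.[].repoTags[]'
# print image ids together with their reported sizes (sizes are strings, in bytes)
minikube -p functional-945494 image ls --format json | jq -r '.[] | "\(.id) \(.size)"'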

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-945494 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-945494 image ls --format yaml --alsologtostderr:
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f
repoDigests:
- docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "87190579"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"
- id: 9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0
- registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "68420934"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3
repoDigests:
- docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3
- docker.io/library/nginx@sha256:88a0a069d5e9865fcaaf8c1e53ba6bf3d8d987b0fdc5e0135fec8ce8567d673e
repoTags:
- docker.io/library/nginx:latest
size: "191853369"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests:
- registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "149009664"
- id: 175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1
- registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "89437508"
- id: 60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44
- registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "92733849"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771
- registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "95237600"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-945494
size: "4943877"
- id: 8a47a81631cf03b034e7e73a0a3e65ac4b2625d7e8aaa3ce14655ff8844ee80b
repoDigests:
- localhost/minikube-local-cache-test@sha256:c6db0a86b9f8598caf189c1898ac46687310961f71879bbd5759868e59a125fd
repoTags:
- localhost/minikube-local-cache-test:functional-945494
size: "3330"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-945494 image ls --format yaml --alsologtostderr:
I0920 17:07:13.058041   27598 out.go:345] Setting OutFile to fd 1 ...
I0920 17:07:13.058154   27598 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 17:07:13.058163   27598 out.go:358] Setting ErrFile to fd 2...
I0920 17:07:13.058168   27598 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 17:07:13.058382   27598 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-8777/.minikube/bin
I0920 17:07:13.059000   27598 config.go:182] Loaded profile config "functional-945494": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0920 17:07:13.059095   27598 config.go:182] Loaded profile config "functional-945494": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0920 17:07:13.059489   27598 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 17:07:13.059560   27598 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 17:07:13.074794   27598 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37057
I0920 17:07:13.075364   27598 main.go:141] libmachine: () Calling .GetVersion
I0920 17:07:13.076023   27598 main.go:141] libmachine: Using API Version  1
I0920 17:07:13.076061   27598 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 17:07:13.076432   27598 main.go:141] libmachine: () Calling .GetMachineName
I0920 17:07:13.076649   27598 main.go:141] libmachine: (functional-945494) Calling .GetState
I0920 17:07:13.078714   27598 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 17:07:13.078764   27598 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 17:07:13.094139   27598 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43023
I0920 17:07:13.094613   27598 main.go:141] libmachine: () Calling .GetVersion
I0920 17:07:13.095097   27598 main.go:141] libmachine: Using API Version  1
I0920 17:07:13.095126   27598 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 17:07:13.095489   27598 main.go:141] libmachine: () Calling .GetMachineName
I0920 17:07:13.095689   27598 main.go:141] libmachine: (functional-945494) Calling .DriverName
I0920 17:07:13.095918   27598 ssh_runner.go:195] Run: systemctl --version
I0920 17:07:13.095960   27598 main.go:141] libmachine: (functional-945494) Calling .GetSSHHostname
I0920 17:07:13.098910   27598 main.go:141] libmachine: (functional-945494) DBG | domain functional-945494 has defined MAC address 52:54:00:50:b8:7d in network mk-functional-945494
I0920 17:07:13.099425   27598 main.go:141] libmachine: (functional-945494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:b8:7d", ip: ""} in network mk-functional-945494: {Iface:virbr1 ExpiryTime:2024-09-20 18:04:33 +0000 UTC Type:0 Mac:52:54:00:50:b8:7d Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:functional-945494 Clientid:01:52:54:00:50:b8:7d}
I0920 17:07:13.099463   27598 main.go:141] libmachine: (functional-945494) DBG | domain functional-945494 has defined IP address 192.168.39.181 and MAC address 52:54:00:50:b8:7d in network mk-functional-945494
I0920 17:07:13.099633   27598 main.go:141] libmachine: (functional-945494) Calling .GetSSHPort
I0920 17:07:13.099792   27598 main.go:141] libmachine: (functional-945494) Calling .GetSSHKeyPath
I0920 17:07:13.099942   27598 main.go:141] libmachine: (functional-945494) Calling .GetSSHUsername
I0920 17:07:13.100073   27598 sshutil.go:53] new ssh client: &{IP:192.168.39.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/functional-945494/id_rsa Username:docker}
I0920 17:07:13.196920   27598 ssh_runner.go:195] Run: sudo crictl images --output json
I0920 17:07:13.275219   27598 main.go:141] libmachine: Making call to close driver server
I0920 17:07:13.275235   27598 main.go:141] libmachine: (functional-945494) Calling .Close
I0920 17:07:13.275537   27598 main.go:141] libmachine: (functional-945494) DBG | Closing plugin on server side
I0920 17:07:13.275583   27598 main.go:141] libmachine: Successfully made call to close driver server
I0920 17:07:13.275598   27598 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 17:07:13.275609   27598 main.go:141] libmachine: Making call to close driver server
I0920 17:07:13.275618   27598 main.go:141] libmachine: (functional-945494) Calling .Close
I0920 17:07:13.275811   27598 main.go:141] libmachine: Successfully made call to close driver server
I0920 17:07:13.275825   27598 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (5.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-945494 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-945494 ssh pgrep buildkitd: exit status 1 (197.324056ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-945494 image build -t localhost/my-image:functional-945494 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-945494 image build -t localhost/my-image:functional-945494 testdata/build --alsologtostderr: (4.569381163s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-945494 image build -t localhost/my-image:functional-945494 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 450261e15ef
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-945494
--> a6267f13b19
Successfully tagged localhost/my-image:functional-945494
a6267f13b19e00134342025a97c0f270f6ce06ebfd61ee1ad464f0e2cf2d4aee
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-945494 image build -t localhost/my-image:functional-945494 testdata/build --alsologtostderr:
I0920 17:07:13.519635   27652 out.go:345] Setting OutFile to fd 1 ...
I0920 17:07:13.519808   27652 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 17:07:13.519817   27652 out.go:358] Setting ErrFile to fd 2...
I0920 17:07:13.519821   27652 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 17:07:13.519999   27652 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-8777/.minikube/bin
I0920 17:07:13.520588   27652 config.go:182] Loaded profile config "functional-945494": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0920 17:07:13.521091   27652 config.go:182] Loaded profile config "functional-945494": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0920 17:07:13.521494   27652 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 17:07:13.521535   27652 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 17:07:13.536503   27652 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45753
I0920 17:07:13.537016   27652 main.go:141] libmachine: () Calling .GetVersion
I0920 17:07:13.537509   27652 main.go:141] libmachine: Using API Version  1
I0920 17:07:13.537527   27652 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 17:07:13.537828   27652 main.go:141] libmachine: () Calling .GetMachineName
I0920 17:07:13.538004   27652 main.go:141] libmachine: (functional-945494) Calling .GetState
I0920 17:07:13.539790   27652 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 17:07:13.539824   27652 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 17:07:13.554571   27652 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33787
I0920 17:07:13.555045   27652 main.go:141] libmachine: () Calling .GetVersion
I0920 17:07:13.555496   27652 main.go:141] libmachine: Using API Version  1
I0920 17:07:13.555515   27652 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 17:07:13.555861   27652 main.go:141] libmachine: () Calling .GetMachineName
I0920 17:07:13.556031   27652 main.go:141] libmachine: (functional-945494) Calling .DriverName
I0920 17:07:13.556218   27652 ssh_runner.go:195] Run: systemctl --version
I0920 17:07:13.556244   27652 main.go:141] libmachine: (functional-945494) Calling .GetSSHHostname
I0920 17:07:13.558779   27652 main.go:141] libmachine: (functional-945494) DBG | domain functional-945494 has defined MAC address 52:54:00:50:b8:7d in network mk-functional-945494
I0920 17:07:13.559140   27652 main.go:141] libmachine: (functional-945494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:b8:7d", ip: ""} in network mk-functional-945494: {Iface:virbr1 ExpiryTime:2024-09-20 18:04:33 +0000 UTC Type:0 Mac:52:54:00:50:b8:7d Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:functional-945494 Clientid:01:52:54:00:50:b8:7d}
I0920 17:07:13.559169   27652 main.go:141] libmachine: (functional-945494) DBG | domain functional-945494 has defined IP address 192.168.39.181 and MAC address 52:54:00:50:b8:7d in network mk-functional-945494
I0920 17:07:13.559301   27652 main.go:141] libmachine: (functional-945494) Calling .GetSSHPort
I0920 17:07:13.559438   27652 main.go:141] libmachine: (functional-945494) Calling .GetSSHKeyPath
I0920 17:07:13.559579   27652 main.go:141] libmachine: (functional-945494) Calling .GetSSHUsername
I0920 17:07:13.559671   27652 sshutil.go:53] new ssh client: &{IP:192.168.39.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/functional-945494/id_rsa Username:docker}
I0920 17:07:13.644224   27652 build_images.go:161] Building image from path: /tmp/build.539659066.tar
I0920 17:07:13.644327   27652 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0920 17:07:13.653948   27652 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.539659066.tar
I0920 17:07:13.659682   27652 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.539659066.tar: stat -c "%s %y" /var/lib/minikube/build/build.539659066.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.539659066.tar': No such file or directory
I0920 17:07:13.659714   27652 ssh_runner.go:362] scp /tmp/build.539659066.tar --> /var/lib/minikube/build/build.539659066.tar (3072 bytes)
I0920 17:07:13.685337   27652 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.539659066
I0920 17:07:13.695569   27652 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.539659066 -xf /var/lib/minikube/build/build.539659066.tar
I0920 17:07:13.705600   27652 crio.go:315] Building image: /var/lib/minikube/build/build.539659066
I0920 17:07:13.705662   27652 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-945494 /var/lib/minikube/build/build.539659066 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0920 17:07:17.988613   27652 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-945494 /var/lib/minikube/build/build.539659066 --cgroup-manager=cgroupfs: (4.282924251s)
I0920 17:07:17.988681   27652 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.539659066
I0920 17:07:18.018098   27652 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.539659066.tar
I0920 17:07:18.041456   27652 build_images.go:217] Built localhost/my-image:functional-945494 from /tmp/build.539659066.tar
I0920 17:07:18.041512   27652 build_images.go:133] succeeded building to: functional-945494
I0920 17:07:18.041521   27652 build_images.go:134] failed building to: 
I0920 17:07:18.041555   27652 main.go:141] libmachine: Making call to close driver server
I0920 17:07:18.041573   27652 main.go:141] libmachine: (functional-945494) Calling .Close
I0920 17:07:18.041854   27652 main.go:141] libmachine: Successfully made call to close driver server
I0920 17:07:18.041873   27652 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 17:07:18.041885   27652 main.go:141] libmachine: Making call to close driver server
I0920 17:07:18.041895   27652 main.go:141] libmachine: (functional-945494) Calling .Close
I0920 17:07:18.042196   27652 main.go:141] libmachine: Successfully made call to close driver server
I0920 17:07:18.042212   27652 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 17:07:18.042225   27652 main.go:141] libmachine: (functional-945494) DBG | Closing plugin on server side
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-945494 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (5.27s)
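The build runs inside the guest via podman (this is the crio runtime path), and the STEP lines above reveal the shape of the testdata/build context: a busybox base image, a no-op RUN, and a single ADD. A minimal sketch of an equivalent build, using a hypothetical context directory ./mybuild instead of the repository's testdata/build:

mkdir -p ./mybuild && echo hello > ./mybuild/content.txt
cat > ./mybuild/Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
EOF
# build inside the cluster and tag the result under localhost/
minikube -p functional-945494 image build -t localhost/my-image:functional-945494 ./mybuild
# confirm the image is visible to the runtime
minikube -p functional-945494 image ls | grep my-image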

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.713962259s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-945494
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.74s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-945494 image load --daemon kicbase/echo-server:functional-945494 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-945494 image load --daemon kicbase/echo-server:functional-945494 --alsologtostderr: (1.123214567s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-945494 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.44s)
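The load-from-daemon path copies an image from the host's docker daemon into the cluster's crio storage. A minimal sketch of the round trip, assuming docker is available on the host as it is in this run (the tag mirrors the Setup step above):

docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-945494
minikube -p functional-945494 image load --daemon kicbase/echo-server:functional-945494
# the image should now appear in the in-cluster listing
minikube -p functional-945494 image ls | grep echo-server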

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-945494 image load --daemon kicbase/echo-server:functional-945494 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-945494 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.88s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-945494
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-945494 image load --daemon kicbase/echo-server:functional-945494 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-945494 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.91s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-945494 image save kicbase/echo-server:functional-945494 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.89s)
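Saving to a tarball is the file-based counterpart of the daemon round trip; the archive written here can later be loaded back into a cluster (the ImageLoadFromFile test earlier in this report exercises that direction). A minimal sketch, with /tmp/echo-server-save.tar standing in for the workspace path used by the test:

minikube -p functional-945494 image save kicbase/echo-server:functional-945494 /tmp/echo-server-save.tar
# re-import the archive, for example after the image has been removed from the cluster
minikube -p functional-945494 image load /tmp/echo-server-save.tar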

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (1.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-945494 image rm kicbase/echo-server:functional-945494 --alsologtostderr
functional_test.go:392: (dbg) Done: out/minikube-linux-amd64 -p functional-945494 image rm kicbase/echo-server:functional-945494 --alsologtostderr: (1.24389145s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-945494 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (1.80s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-945494 /tmp/TestFunctionalparallelMountCmdspecific-port2020145018/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-945494 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-945494 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (286.787085ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0920 17:07:00.769623   15973 retry.go:31] will retry after 613.550822ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-945494 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-945494 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-945494 /tmp/TestFunctionalparallelMountCmdspecific-port2020145018/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-945494 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-945494 ssh "sudo umount -f /mount-9p": exit status 1 (275.937314ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-945494 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-945494 /tmp/TestFunctionalparallelMountCmdspecific-port2020145018/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.09s)
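The specific-port variant pins the 9p server to a fixed port and then checks the mount from inside the guest. A minimal sketch of the same flow, assuming a hypothetical host directory /tmp/hostdir and the port 46464 used in this run; minikube mount blocks, so it is backgrounded here:

minikube mount -p functional-945494 /tmp/hostdir:/mount-9p --port 46464 &
MOUNT_PID=$!
# verify from inside the guest that a 9p filesystem is mounted there
minikube -p functional-945494 ssh "findmnt -T /mount-9p | grep 9p"
minikube -p functional-945494 ssh -- ls -la /mount-9p
# tear the mount down again
kill $MOUNT_PID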

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-945494 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1107718131/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-945494 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1107718131/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-945494 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1107718131/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-945494 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-945494 ssh "findmnt -T" /mount1: exit status 1 (316.131067ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0920 17:07:02.885379   15973 retry.go:31] will retry after 412.972506ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-945494 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-945494 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-945494 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-945494 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-945494 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1107718131/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-945494 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1107718131/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-945494 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1107718131/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.62s)
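VerifyCleanup starts several mounts against one profile and relies on the --kill flag to reap them all at once, which is what the final mount --kill=true invocation above does. A minimal sketch (host paths hypothetical):

minikube mount -p functional-945494 /tmp/hostdir:/mount1 &
minikube mount -p functional-945494 /tmp/hostdir:/mount2 &
minikube mount -p functional-945494 /tmp/hostdir:/mount3 &
# terminate every mount process associated with the profile in one step
minikube mount -p functional-945494 --kill=true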

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-945494
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-945494 image save --daemon kicbase/echo-server:functional-945494 --alsologtostderr
functional_test.go:424: (dbg) Done: out/minikube-linux-amd64 -p functional-945494 image save --daemon kicbase/echo-server:functional-945494 --alsologtostderr: (2.659617243s)
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-945494
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.70s)
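image save --daemon is the inverse of the earlier load test: it exports the in-cluster image back into the host's docker daemon, where the test then inspects it. A minimal sketch mirroring the steps above:

# remove any stale host-side copy first, as the test does
docker rmi kicbase/echo-server:functional-945494 || true
minikube -p functional-945494 image save --daemon kicbase/echo-server:functional-945494
# the exported image shows up under the localhost/ prefix on the docker side
docker image inspect localhost/kicbase/echo-server:functional-945494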

                                                
                                    
TestFunctional/delete_echo-server_images (0.03s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-945494
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-945494
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-945494
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (198.55s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-135993 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0920 17:07:43.197755   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:07:43.204208   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:07:43.215577   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:07:43.236944   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:07:43.278327   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:07:43.359798   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:07:43.521324   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:07:43.843092   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:07:44.484816   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:07:45.766468   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:07:48.328633   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:07:53.450584   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:08:03.692763   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:08:24.174907   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:09:05.136674   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:10:27.058948   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-135993 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m17.88460705s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-135993 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (198.55s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (6.82s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-135993 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-135993 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-135993 -- rollout status deployment/busybox: (4.70458128s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-135993 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-135993 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-135993 -- exec busybox-7dff88458-cw8r4 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-135993 -- exec busybox-7dff88458-df429 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-135993 -- exec busybox-7dff88458-ksx56 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-135993 -- exec busybox-7dff88458-cw8r4 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-135993 -- exec busybox-7dff88458-df429 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-135993 -- exec busybox-7dff88458-ksx56 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-135993 -- exec busybox-7dff88458-cw8r4 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-135993 -- exec busybox-7dff88458-df429 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-135993 -- exec busybox-7dff88458-ksx56 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.82s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.2s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-135993 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-135993 -- exec busybox-7dff88458-cw8r4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-135993 -- exec busybox-7dff88458-cw8r4 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-135993 -- exec busybox-7dff88458-df429 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-135993 -- exec busybox-7dff88458-df429 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-135993 -- exec busybox-7dff88458-ksx56 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-135993 -- exec busybox-7dff88458-ksx56 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.20s)
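
Note on the pipeline above: it resolves host.minikube.internal inside the pod, keeps line 5 of the nslookup output, and takes the third space-separated field, i.e. the host-side IP that the follow-up ping targets. A minimal sketch of one iteration (busybox nslookup output layout assumed; other images print a different format):

    $ out/minikube-linux-amd64 kubectl -p ha-135993 -- exec busybox-7dff88458-cw8r4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    192.168.39.1
    # the extracted address is then pinged from the same pod: ping -c 1 192.168.39.1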

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (53.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-135993 -v=7 --alsologtostderr
E0920 17:11:39.931990   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/functional-945494/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:11:39.938381   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/functional-945494/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:11:39.949928   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/functional-945494/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:11:39.971412   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/functional-945494/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:11:40.012911   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/functional-945494/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:11:40.094367   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/functional-945494/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:11:40.256513   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/functional-945494/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:11:40.578255   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/functional-945494/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:11:41.220050   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/functional-945494/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:11:42.502644   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/functional-945494/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:11:45.064820   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/functional-945494/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-135993 -v=7 --alsologtostderr: (52.211271746s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-135993 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (53.06s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-135993 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.84s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.84s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (12.65s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-135993 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-135993 cp testdata/cp-test.txt ha-135993:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-135993 ssh -n ha-135993 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-135993 cp ha-135993:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2672215621/001/cp-test_ha-135993.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-135993 ssh -n ha-135993 "sudo cat /home/docker/cp-test.txt"
E0920 17:11:50.186357   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/functional-945494/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-135993 cp ha-135993:/home/docker/cp-test.txt ha-135993-m02:/home/docker/cp-test_ha-135993_ha-135993-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-135993 ssh -n ha-135993 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-135993 ssh -n ha-135993-m02 "sudo cat /home/docker/cp-test_ha-135993_ha-135993-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-135993 cp ha-135993:/home/docker/cp-test.txt ha-135993-m03:/home/docker/cp-test_ha-135993_ha-135993-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-135993 ssh -n ha-135993 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-135993 ssh -n ha-135993-m03 "sudo cat /home/docker/cp-test_ha-135993_ha-135993-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-135993 cp ha-135993:/home/docker/cp-test.txt ha-135993-m04:/home/docker/cp-test_ha-135993_ha-135993-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-135993 ssh -n ha-135993 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-135993 ssh -n ha-135993-m04 "sudo cat /home/docker/cp-test_ha-135993_ha-135993-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-135993 cp testdata/cp-test.txt ha-135993-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-135993 ssh -n ha-135993-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-135993 cp ha-135993-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2672215621/001/cp-test_ha-135993-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-135993 ssh -n ha-135993-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-135993 cp ha-135993-m02:/home/docker/cp-test.txt ha-135993:/home/docker/cp-test_ha-135993-m02_ha-135993.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-135993 ssh -n ha-135993-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-135993 ssh -n ha-135993 "sudo cat /home/docker/cp-test_ha-135993-m02_ha-135993.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-135993 cp ha-135993-m02:/home/docker/cp-test.txt ha-135993-m03:/home/docker/cp-test_ha-135993-m02_ha-135993-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-135993 ssh -n ha-135993-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-135993 ssh -n ha-135993-m03 "sudo cat /home/docker/cp-test_ha-135993-m02_ha-135993-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-135993 cp ha-135993-m02:/home/docker/cp-test.txt ha-135993-m04:/home/docker/cp-test_ha-135993-m02_ha-135993-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-135993 ssh -n ha-135993-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-135993 ssh -n ha-135993-m04 "sudo cat /home/docker/cp-test_ha-135993-m02_ha-135993-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-135993 cp testdata/cp-test.txt ha-135993-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-135993 ssh -n ha-135993-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-135993 cp ha-135993-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2672215621/001/cp-test_ha-135993-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-135993 ssh -n ha-135993-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-135993 cp ha-135993-m03:/home/docker/cp-test.txt ha-135993:/home/docker/cp-test_ha-135993-m03_ha-135993.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-135993 ssh -n ha-135993-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-135993 ssh -n ha-135993 "sudo cat /home/docker/cp-test_ha-135993-m03_ha-135993.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-135993 cp ha-135993-m03:/home/docker/cp-test.txt ha-135993-m02:/home/docker/cp-test_ha-135993-m03_ha-135993-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-135993 ssh -n ha-135993-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-135993 ssh -n ha-135993-m02 "sudo cat /home/docker/cp-test_ha-135993-m03_ha-135993-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-135993 cp ha-135993-m03:/home/docker/cp-test.txt ha-135993-m04:/home/docker/cp-test_ha-135993-m03_ha-135993-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-135993 ssh -n ha-135993-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-135993 ssh -n ha-135993-m04 "sudo cat /home/docker/cp-test_ha-135993-m03_ha-135993-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-135993 cp testdata/cp-test.txt ha-135993-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-135993 ssh -n ha-135993-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-135993 cp ha-135993-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2672215621/001/cp-test_ha-135993-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-135993 ssh -n ha-135993-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-135993 cp ha-135993-m04:/home/docker/cp-test.txt ha-135993:/home/docker/cp-test_ha-135993-m04_ha-135993.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-135993 ssh -n ha-135993-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-135993 ssh -n ha-135993 "sudo cat /home/docker/cp-test_ha-135993-m04_ha-135993.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-135993 cp ha-135993-m04:/home/docker/cp-test.txt ha-135993-m02:/home/docker/cp-test_ha-135993-m04_ha-135993-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-135993 ssh -n ha-135993-m04 "sudo cat /home/docker/cp-test.txt"
E0920 17:12:00.427745   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/functional-945494/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-135993 ssh -n ha-135993-m02 "sudo cat /home/docker/cp-test_ha-135993-m04_ha-135993-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-135993 cp ha-135993-m04:/home/docker/cp-test.txt ha-135993-m03:/home/docker/cp-test_ha-135993-m04_ha-135993-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-135993 ssh -n ha-135993-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-135993 ssh -n ha-135993-m03 "sudo cat /home/docker/cp-test_ha-135993-m04_ha-135993-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.65s)
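
Each CopyFile step above is the same round trip: push a file onto a node with "minikube cp", then read it back over SSH to check that it arrived. A minimal sketch of one iteration, reusing the profile and paths from this run:

    $ out/minikube-linux-amd64 -p ha-135993 cp testdata/cp-test.txt ha-135993-m02:/home/docker/cp-test.txt
    $ out/minikube-linux-amd64 -p ha-135993 ssh -n ha-135993-m02 "sudo cat /home/docker/cp-test.txt"
    # the ssh output is what the helper checks against the original testdata/cp-test.txt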

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (4.15s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (4.154303741s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (4.15s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (16.77s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-135993 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-135993 node delete m03 -v=7 --alsologtostderr: (16.027135514s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-135993 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (16.77s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.62s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.62s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (350.54s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-135993 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0920 17:24:06.264537   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:26:39.932372   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/functional-945494/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:27:43.197324   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:28:03.000516   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/functional-945494/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-135993 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (5m49.792562241s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-135993 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (350.54s)
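
The Ready check above uses a compact go-template; expanded for readability it walks every node, finds the condition whose type is "Ready", and prints that condition's status, one line per node:

    {{range .items}}
      {{range .status.conditions}}
        {{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}
      {{end}}
    {{end}}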

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.65s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.65s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (80.78s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-135993 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-135993 --control-plane -v=7 --alsologtostderr: (1m19.901372357s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-135993 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (80.78s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.85s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.85s)

                                                
                                    
TestJSONOutput/start/Command (79.19s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-099598 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E0920 17:31:39.931460   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/functional-945494/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-099598 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m19.191970372s)
--- PASS: TestJSONOutput/start/Command (79.19s)
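
With --output=json every line minikube prints is a CloudEvent; the step events carry data.currentstep, data.totalsteps and data.name (see the TestErrorJSONOutput stdout further down), which is what the DistinctCurrentSteps and IncreasingCurrentSteps subtests below check. An illustrative sketch, not part of the test, of pulling that progress out of the stream with jq:

    $ out/minikube-linux-amd64 start -p json-output-099598 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 --container-runtime=crio \
        | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | "\(.data.currentstep)/\(.data.totalsteps)  \(.data.name)"'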

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.71s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-099598 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.71s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.62s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-099598 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.62s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (6.62s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-099598 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-099598 --output=json --user=testUser: (6.616018937s)
--- PASS: TestJSONOutput/stop/Command (6.62s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.2s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-159703 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-159703 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (63.166353ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"cc1532c6-335a-4548-b6db-1e22cc5b37b7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-159703] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"42abde7e-2d3f-4299-b9b6-1848ffc62a1c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19672"}}
	{"specversion":"1.0","id":"7b94417a-ee65-4a4a-a27c-2a0631e122e5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"1209e27a-48db-4787-b354-1b0a0ce859e0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19672-8777/kubeconfig"}}
	{"specversion":"1.0","id":"3f08c059-e910-4940-af7d-826f7211963e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-8777/.minikube"}}
	{"specversion":"1.0","id":"415c84fa-840c-42d8-8640-a23d820cf09f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"f41aefd5-8f65-482d-82ad-4b02bf3f94e8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"3296db9e-3872-486d-9d19-d5a479f826ca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-159703" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-159703
--- PASS: TestErrorJSONOutput (0.20s)
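
The stdout block above shows what a failed start looks like under --output=json: the final CloudEvent has type io.k8s.sigs.minikube.error and carries the exit code and message in its data field. An illustrative sketch, not part of the test, of surfacing just that event with jq:

    $ out/minikube-linux-amd64 start -p json-output-error-159703 --memory=2200 --output=json --wait=true --driver=fail \
        | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | "exit \(.data.exitcode): \(.data.message)"'
    exit 56: The driver 'fail' is not supported on linux/amd64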

                                                
                                    
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (87.56s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-427808 --driver=kvm2  --container-runtime=crio
E0920 17:32:43.199203   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-427808 --driver=kvm2  --container-runtime=crio: (43.30319552s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-437942 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-437942 --driver=kvm2  --container-runtime=crio: (41.215819008s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-427808
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-437942
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-437942" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-437942
helpers_test.go:175: Cleaning up "first-427808" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-427808
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-427808: (1.007488532s)
--- PASS: TestMinikubeProfile (87.56s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (31.33s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-415952 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-415952 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (30.333431304s)
--- PASS: TestMountStart/serial/StartWithMountFirst (31.33s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.43s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-415952 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-415952 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.43s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (28.16s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-427861 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-427861 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.162264125s)
--- PASS: TestMountStart/serial/StartWithMountSecond (28.16s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-427861 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-427861 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.38s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.92s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-415952 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.92s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-427861 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-427861 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.39s)

                                                
                                    
TestMountStart/serial/Stop (1.52s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-427861
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-427861: (1.523639713s)
--- PASS: TestMountStart/serial/Stop (1.52s)

                                                
                                    
TestMountStart/serial/RestartStopped (22.81s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-427861
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-427861: (21.809651899s)
--- PASS: TestMountStart/serial/RestartStopped (22.81s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-427861 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-427861 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.38s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (113.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-592246 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0920 17:36:39.932204   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/functional-945494/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-592246 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m52.813922059s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-592246 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (113.22s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (6.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-592246 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-592246 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-592246 -- rollout status deployment/busybox: (4.731411997s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-592246 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-592246 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-592246 -- exec busybox-7dff88458-ts58h -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-592246 -- exec busybox-7dff88458-wpfrr -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-592246 -- exec busybox-7dff88458-ts58h -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-592246 -- exec busybox-7dff88458-wpfrr -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-592246 -- exec busybox-7dff88458-ts58h -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-592246 -- exec busybox-7dff88458-wpfrr -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.27s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.84s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-592246 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-592246 -- exec busybox-7dff88458-ts58h -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-592246 -- exec busybox-7dff88458-ts58h -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-592246 -- exec busybox-7dff88458-wpfrr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-592246 -- exec busybox-7dff88458-wpfrr -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.84s)

                                                
                                    
TestMultiNode/serial/AddNode (52.95s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-592246 -v 3 --alsologtostderr
E0920 17:37:43.197997   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-592246 -v 3 --alsologtostderr: (52.357411088s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-592246 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (52.95s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-592246 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.59s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.59s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.34s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-592246 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-592246 cp testdata/cp-test.txt multinode-592246:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-592246 ssh -n multinode-592246 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-592246 cp multinode-592246:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3830266903/001/cp-test_multinode-592246.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-592246 ssh -n multinode-592246 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-592246 cp multinode-592246:/home/docker/cp-test.txt multinode-592246-m02:/home/docker/cp-test_multinode-592246_multinode-592246-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-592246 ssh -n multinode-592246 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-592246 ssh -n multinode-592246-m02 "sudo cat /home/docker/cp-test_multinode-592246_multinode-592246-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-592246 cp multinode-592246:/home/docker/cp-test.txt multinode-592246-m03:/home/docker/cp-test_multinode-592246_multinode-592246-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-592246 ssh -n multinode-592246 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-592246 ssh -n multinode-592246-m03 "sudo cat /home/docker/cp-test_multinode-592246_multinode-592246-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-592246 cp testdata/cp-test.txt multinode-592246-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-592246 ssh -n multinode-592246-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-592246 cp multinode-592246-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3830266903/001/cp-test_multinode-592246-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-592246 ssh -n multinode-592246-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-592246 cp multinode-592246-m02:/home/docker/cp-test.txt multinode-592246:/home/docker/cp-test_multinode-592246-m02_multinode-592246.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-592246 ssh -n multinode-592246-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-592246 ssh -n multinode-592246 "sudo cat /home/docker/cp-test_multinode-592246-m02_multinode-592246.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-592246 cp multinode-592246-m02:/home/docker/cp-test.txt multinode-592246-m03:/home/docker/cp-test_multinode-592246-m02_multinode-592246-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-592246 ssh -n multinode-592246-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-592246 ssh -n multinode-592246-m03 "sudo cat /home/docker/cp-test_multinode-592246-m02_multinode-592246-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-592246 cp testdata/cp-test.txt multinode-592246-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-592246 ssh -n multinode-592246-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-592246 cp multinode-592246-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3830266903/001/cp-test_multinode-592246-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-592246 ssh -n multinode-592246-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-592246 cp multinode-592246-m03:/home/docker/cp-test.txt multinode-592246:/home/docker/cp-test_multinode-592246-m03_multinode-592246.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-592246 ssh -n multinode-592246-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-592246 ssh -n multinode-592246 "sudo cat /home/docker/cp-test_multinode-592246-m03_multinode-592246.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-592246 cp multinode-592246-m03:/home/docker/cp-test.txt multinode-592246-m02:/home/docker/cp-test_multinode-592246-m03_multinode-592246-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-592246 ssh -n multinode-592246-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-592246 ssh -n multinode-592246-m02 "sudo cat /home/docker/cp-test_multinode-592246-m03_multinode-592246-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.34s)

                                                
                                    
TestMultiNode/serial/StopNode (2.3s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-592246 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-592246 node stop m03: (1.431068164s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-592246 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-592246 status: exit status 7 (432.129508ms)

-- stdout --
	multinode-592246
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-592246-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-592246-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-592246 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-592246 status --alsologtostderr: exit status 7 (434.028624ms)

-- stdout --
	multinode-592246
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-592246-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-592246-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0920 17:38:22.862290   45144 out.go:345] Setting OutFile to fd 1 ...
	I0920 17:38:22.862403   45144 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:38:22.862412   45144 out.go:358] Setting ErrFile to fd 2...
	I0920 17:38:22.862417   45144 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:38:22.862623   45144 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-8777/.minikube/bin
	I0920 17:38:22.862832   45144 out.go:352] Setting JSON to false
	I0920 17:38:22.862872   45144 mustload.go:65] Loading cluster: multinode-592246
	I0920 17:38:22.862992   45144 notify.go:220] Checking for updates...
	I0920 17:38:22.863435   45144 config.go:182] Loaded profile config "multinode-592246": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 17:38:22.863456   45144 status.go:174] checking status of multinode-592246 ...
	I0920 17:38:22.863930   45144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:38:22.863974   45144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:38:22.881551   45144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43257
	I0920 17:38:22.882099   45144 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:38:22.882710   45144 main.go:141] libmachine: Using API Version  1
	I0920 17:38:22.882735   45144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:38:22.883115   45144 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:38:22.883352   45144 main.go:141] libmachine: (multinode-592246) Calling .GetState
	I0920 17:38:22.885056   45144 status.go:364] multinode-592246 host status = "Running" (err=<nil>)
	I0920 17:38:22.885072   45144 host.go:66] Checking if "multinode-592246" exists ...
	I0920 17:38:22.885412   45144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:38:22.885460   45144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:38:22.901080   45144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42833
	I0920 17:38:22.901615   45144 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:38:22.902178   45144 main.go:141] libmachine: Using API Version  1
	I0920 17:38:22.902197   45144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:38:22.902563   45144 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:38:22.902793   45144 main.go:141] libmachine: (multinode-592246) Calling .GetIP
	I0920 17:38:22.906007   45144 main.go:141] libmachine: (multinode-592246) DBG | domain multinode-592246 has defined MAC address 52:54:00:15:f5:5b in network mk-multinode-592246
	I0920 17:38:22.906516   45144 main.go:141] libmachine: (multinode-592246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:f5:5b", ip: ""} in network mk-multinode-592246: {Iface:virbr1 ExpiryTime:2024-09-20 18:35:34 +0000 UTC Type:0 Mac:52:54:00:15:f5:5b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:multinode-592246 Clientid:01:52:54:00:15:f5:5b}
	I0920 17:38:22.906564   45144 main.go:141] libmachine: (multinode-592246) DBG | domain multinode-592246 has defined IP address 192.168.39.115 and MAC address 52:54:00:15:f5:5b in network mk-multinode-592246
	I0920 17:38:22.906664   45144 host.go:66] Checking if "multinode-592246" exists ...
	I0920 17:38:22.906981   45144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:38:22.907047   45144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:38:22.923001   45144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43559
	I0920 17:38:22.923469   45144 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:38:22.924027   45144 main.go:141] libmachine: Using API Version  1
	I0920 17:38:22.924051   45144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:38:22.924406   45144 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:38:22.924613   45144 main.go:141] libmachine: (multinode-592246) Calling .DriverName
	I0920 17:38:22.924802   45144 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 17:38:22.924835   45144 main.go:141] libmachine: (multinode-592246) Calling .GetSSHHostname
	I0920 17:38:22.928367   45144 main.go:141] libmachine: (multinode-592246) DBG | domain multinode-592246 has defined MAC address 52:54:00:15:f5:5b in network mk-multinode-592246
	I0920 17:38:22.928837   45144 main.go:141] libmachine: (multinode-592246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:f5:5b", ip: ""} in network mk-multinode-592246: {Iface:virbr1 ExpiryTime:2024-09-20 18:35:34 +0000 UTC Type:0 Mac:52:54:00:15:f5:5b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:multinode-592246 Clientid:01:52:54:00:15:f5:5b}
	I0920 17:38:22.928867   45144 main.go:141] libmachine: (multinode-592246) DBG | domain multinode-592246 has defined IP address 192.168.39.115 and MAC address 52:54:00:15:f5:5b in network mk-multinode-592246
	I0920 17:38:22.928992   45144 main.go:141] libmachine: (multinode-592246) Calling .GetSSHPort
	I0920 17:38:22.929203   45144 main.go:141] libmachine: (multinode-592246) Calling .GetSSHKeyPath
	I0920 17:38:22.929364   45144 main.go:141] libmachine: (multinode-592246) Calling .GetSSHUsername
	I0920 17:38:22.929502   45144 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/multinode-592246/id_rsa Username:docker}
	I0920 17:38:23.014022   45144 ssh_runner.go:195] Run: systemctl --version
	I0920 17:38:23.021847   45144 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 17:38:23.036825   45144 kubeconfig.go:125] found "multinode-592246" server: "https://192.168.39.115:8443"
	I0920 17:38:23.036868   45144 api_server.go:166] Checking apiserver status ...
	I0920 17:38:23.036909   45144 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 17:38:23.051639   45144 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1071/cgroup
	W0920 17:38:23.062473   45144 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1071/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0920 17:38:23.062556   45144 ssh_runner.go:195] Run: ls
	I0920 17:38:23.067893   45144 api_server.go:253] Checking apiserver healthz at https://192.168.39.115:8443/healthz ...
	I0920 17:38:23.072951   45144 api_server.go:279] https://192.168.39.115:8443/healthz returned 200:
	ok
	I0920 17:38:23.072987   45144 status.go:456] multinode-592246 apiserver status = Running (err=<nil>)
	I0920 17:38:23.072997   45144 status.go:176] multinode-592246 status: &{Name:multinode-592246 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 17:38:23.073013   45144 status.go:174] checking status of multinode-592246-m02 ...
	I0920 17:38:23.073443   45144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:38:23.073486   45144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:38:23.089257   45144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41469
	I0920 17:38:23.089747   45144 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:38:23.090284   45144 main.go:141] libmachine: Using API Version  1
	I0920 17:38:23.090312   45144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:38:23.090666   45144 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:38:23.090931   45144 main.go:141] libmachine: (multinode-592246-m02) Calling .GetState
	I0920 17:38:23.092791   45144 status.go:364] multinode-592246-m02 host status = "Running" (err=<nil>)
	I0920 17:38:23.092812   45144 host.go:66] Checking if "multinode-592246-m02" exists ...
	I0920 17:38:23.093185   45144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:38:23.093230   45144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:38:23.109072   45144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35381
	I0920 17:38:23.109563   45144 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:38:23.110076   45144 main.go:141] libmachine: Using API Version  1
	I0920 17:38:23.110097   45144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:38:23.110450   45144 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:38:23.110646   45144 main.go:141] libmachine: (multinode-592246-m02) Calling .GetIP
	I0920 17:38:23.113719   45144 main.go:141] libmachine: (multinode-592246-m02) DBG | domain multinode-592246-m02 has defined MAC address 52:54:00:8b:23:e6 in network mk-multinode-592246
	I0920 17:38:23.114173   45144 main.go:141] libmachine: (multinode-592246-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:23:e6", ip: ""} in network mk-multinode-592246: {Iface:virbr1 ExpiryTime:2024-09-20 18:36:36 +0000 UTC Type:0 Mac:52:54:00:8b:23:e6 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:multinode-592246-m02 Clientid:01:52:54:00:8b:23:e6}
	I0920 17:38:23.114213   45144 main.go:141] libmachine: (multinode-592246-m02) DBG | domain multinode-592246-m02 has defined IP address 192.168.39.170 and MAC address 52:54:00:8b:23:e6 in network mk-multinode-592246
	I0920 17:38:23.114338   45144 host.go:66] Checking if "multinode-592246-m02" exists ...
	I0920 17:38:23.114650   45144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:38:23.114689   45144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:38:23.130072   45144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38797
	I0920 17:38:23.130549   45144 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:38:23.131043   45144 main.go:141] libmachine: Using API Version  1
	I0920 17:38:23.131068   45144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:38:23.131428   45144 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:38:23.131707   45144 main.go:141] libmachine: (multinode-592246-m02) Calling .DriverName
	I0920 17:38:23.131920   45144 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 17:38:23.131945   45144 main.go:141] libmachine: (multinode-592246-m02) Calling .GetSSHHostname
	I0920 17:38:23.134880   45144 main.go:141] libmachine: (multinode-592246-m02) DBG | domain multinode-592246-m02 has defined MAC address 52:54:00:8b:23:e6 in network mk-multinode-592246
	I0920 17:38:23.135320   45144 main.go:141] libmachine: (multinode-592246-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:23:e6", ip: ""} in network mk-multinode-592246: {Iface:virbr1 ExpiryTime:2024-09-20 18:36:36 +0000 UTC Type:0 Mac:52:54:00:8b:23:e6 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:multinode-592246-m02 Clientid:01:52:54:00:8b:23:e6}
	I0920 17:38:23.135350   45144 main.go:141] libmachine: (multinode-592246-m02) DBG | domain multinode-592246-m02 has defined IP address 192.168.39.170 and MAC address 52:54:00:8b:23:e6 in network mk-multinode-592246
	I0920 17:38:23.135471   45144 main.go:141] libmachine: (multinode-592246-m02) Calling .GetSSHPort
	I0920 17:38:23.135679   45144 main.go:141] libmachine: (multinode-592246-m02) Calling .GetSSHKeyPath
	I0920 17:38:23.135865   45144 main.go:141] libmachine: (multinode-592246-m02) Calling .GetSSHUsername
	I0920 17:38:23.136005   45144 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/multinode-592246-m02/id_rsa Username:docker}
	I0920 17:38:23.217149   45144 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 17:38:23.231271   45144 status.go:176] multinode-592246-m02 status: &{Name:multinode-592246-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0920 17:38:23.231303   45144 status.go:174] checking status of multinode-592246-m03 ...
	I0920 17:38:23.231649   45144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:38:23.231689   45144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:38:23.247276   45144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44523
	I0920 17:38:23.247777   45144 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:38:23.248293   45144 main.go:141] libmachine: Using API Version  1
	I0920 17:38:23.248316   45144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:38:23.248651   45144 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:38:23.248927   45144 main.go:141] libmachine: (multinode-592246-m03) Calling .GetState
	I0920 17:38:23.250570   45144 status.go:364] multinode-592246-m03 host status = "Stopped" (err=<nil>)
	I0920 17:38:23.250587   45144 status.go:377] host is not running, skipping remaining checks
	I0920 17:38:23.250594   45144 status.go:176] multinode-592246-m03 status: &{Name:multinode-592246-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.30s)
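
As the output above shows, once m03 is stopped the status command exits with code 7 even though the remaining nodes are healthy, so the test asserts on that non-zero exit rather than treating it as an error. A minimal sketch of the same check, with the profile name taken from this run:

    # stop one worker, then query status; exit code 7 is expected while m03 reports Stopped
    out/minikube-linux-amd64 -p multinode-592246 node stop m03
    out/minikube-linux-amd64 -p multinode-592246 status --alsologtostderr
    echo $?    # 7 while any node in the profile is stopped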

TestMultiNode/serial/StartAfterStop (39.59s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-592246 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-592246 node start m03 -v=7 --alsologtostderr: (38.928432512s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-592246 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (39.59s)

TestMultiNode/serial/DeleteNode (2.28s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-592246 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-592246 node delete m03: (1.738175921s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-592246 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.28s)
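
The go-template query logged above is the readiness check: it walks every node's status.conditions and prints the value of the Ready condition, so after m03 is deleted the expected output is one True line per remaining node (two in this run). Sketch, reusing the exact template from the log:

    # one line per node with the value of its Ready condition
    kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"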

TestMultiNode/serial/RestartMultiNode (204.94s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-592246 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0920 17:47:43.198044   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-592246 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m24.402958848s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-592246 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (204.94s)
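
The restart here is simply minikube start re-run against the existing profile; --wait=true makes it block until all cluster components report healthy, which is why that single command accounts for most of the test's duration. Condensed from the commands logged above:

    # restart an existing multi-node profile, wait for every component, then re-check the nodes
    out/minikube-linux-amd64 start -p multinode-592246 --wait=true -v=8 --alsologtostderr --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 -p multinode-592246 status --alsologtostderr
    kubectl get nodes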

TestMultiNode/serial/ValidateNameConflict (41.76s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-592246
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-592246-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-592246-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (66.79792ms)

-- stdout --
	* [multinode-592246-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19672
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19672-8777/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-8777/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-592246-m02' is duplicated with machine name 'multinode-592246-m02' in profile 'multinode-592246'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-592246-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-592246-m03 --driver=kvm2  --container-runtime=crio: (40.404491026s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-592246
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-592246: exit status 80 (219.700216ms)

-- stdout --
	* Adding node m03 to cluster multinode-592246 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-592246-m03 already exists in multinode-592246-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-592246-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-592246-m03: (1.020379136s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (41.76s)
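
Two distinct conflicts are exercised above: a new profile whose name collides with a machine inside an existing multi-node profile is rejected during validation (MK_USAGE, exit 14), while node add fails later (GUEST_NODE_ADD, exit 80) because the next generated node name, multinode-592246-m03, already exists as a standalone profile. Sketch of both, commands as logged above:

    # exit 14: multinode-592246-m02 is already the second machine of profile multinode-592246
    out/minikube-linux-amd64 start -p multinode-592246-m02 --driver=kvm2 --container-runtime=crio
    # exit 80: adding a node would create multinode-592246-m03, which exists as its own profile
    out/minikube-linux-amd64 node add -p multinode-592246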

TestScheduledStopUnix (113.49s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-878923 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-878923 --memory=2048 --driver=kvm2  --container-runtime=crio: (41.890316959s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-878923 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-878923 -n scheduled-stop-878923
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-878923 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0920 17:56:27.377046   15973 retry.go:31] will retry after 137.279µs: open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/scheduled-stop-878923/pid: no such file or directory
I0920 17:56:27.378168   15973 retry.go:31] will retry after 131.697µs: open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/scheduled-stop-878923/pid: no such file or directory
I0920 17:56:27.379314   15973 retry.go:31] will retry after 270.542µs: open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/scheduled-stop-878923/pid: no such file or directory
I0920 17:56:27.380438   15973 retry.go:31] will retry after 213.703µs: open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/scheduled-stop-878923/pid: no such file or directory
I0920 17:56:27.381554   15973 retry.go:31] will retry after 382.758µs: open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/scheduled-stop-878923/pid: no such file or directory
I0920 17:56:27.382660   15973 retry.go:31] will retry after 1.118976ms: open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/scheduled-stop-878923/pid: no such file or directory
I0920 17:56:27.384856   15973 retry.go:31] will retry after 1.160476ms: open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/scheduled-stop-878923/pid: no such file or directory
I0920 17:56:27.387023   15973 retry.go:31] will retry after 1.029783ms: open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/scheduled-stop-878923/pid: no such file or directory
I0920 17:56:27.388138   15973 retry.go:31] will retry after 1.589799ms: open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/scheduled-stop-878923/pid: no such file or directory
I0920 17:56:27.390334   15973 retry.go:31] will retry after 4.001176ms: open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/scheduled-stop-878923/pid: no such file or directory
I0920 17:56:27.394529   15973 retry.go:31] will retry after 5.344966ms: open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/scheduled-stop-878923/pid: no such file or directory
I0920 17:56:27.400733   15973 retry.go:31] will retry after 8.305605ms: open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/scheduled-stop-878923/pid: no such file or directory
I0920 17:56:27.409965   15973 retry.go:31] will retry after 18.623633ms: open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/scheduled-stop-878923/pid: no such file or directory
I0920 17:56:27.429229   15973 retry.go:31] will retry after 13.693375ms: open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/scheduled-stop-878923/pid: no such file or directory
I0920 17:56:27.443489   15973 retry.go:31] will retry after 42.670427ms: open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/scheduled-stop-878923/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-878923 --cancel-scheduled
E0920 17:56:39.932075   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/functional-945494/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-878923 -n scheduled-stop-878923
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-878923
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-878923 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0920 17:57:26.272177   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-878923
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-878923: exit status 7 (63.267056ms)

-- stdout --
	scheduled-stop-878923
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-878923 -n scheduled-stop-878923
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-878923 -n scheduled-stop-878923: exit status 7 (64.417698ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-878923" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-878923
--- PASS: TestScheduledStopUnix (113.49s)
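
The scheduled-stop flow above works as follows: stop --schedule <duration> arms a background stop, status --format={{.TimeToStop}} exposes the pending schedule, and --cancel-scheduled disarms it; once the short 15s schedule is allowed to fire, a plain status returns exit 7 with everything reported Stopped. A condensed sketch with the profile name from this run:

    # arm a stop five minutes out, inspect the schedule, then cancel it
    out/minikube-linux-amd64 stop -p scheduled-stop-878923 --schedule 5m
    out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-878923 -n scheduled-stop-878923
    out/minikube-linux-amd64 stop -p scheduled-stop-878923 --cancel-scheduled
    # re-arm with a short window and let it fire; status then exits 7 with host/kubelet/apiserver Stopped
    out/minikube-linux-amd64 stop -p scheduled-stop-878923 --schedule 15s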

TestRunningBinaryUpgrade (123.76s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1805634545 start -p running-upgrade-267014 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1805634545 start -p running-upgrade-267014 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (49.413333142s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-267014 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-267014 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m10.744536777s)
helpers_test.go:175: Cleaning up "running-upgrade-267014" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-267014
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-267014: (1.234301244s)
--- PASS: TestRunningBinaryUpgrade (123.76s)

TestStoppedBinaryUpgrade/Setup (2.33s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.33s)

TestStoppedBinaryUpgrade/Upgrade (186.26s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.910370752 start -p stopped-upgrade-299391 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.910370752 start -p stopped-upgrade-299391 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m0.03459674s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.910370752 -p stopped-upgrade-299391 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.910370752 -p stopped-upgrade-299391 stop: (1.533411486s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-299391 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-299391 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m4.693296941s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (186.26s)
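
The upgrade path being verified is: provision the cluster with the archived v1.26.0 binary, stop it with that same binary, then start the stopped profile with the freshly built binary; the test passes if the new binary can adopt and bring up the old cluster. Sketch using the binaries and profile from this run:

    # the old binary creates and stops the cluster, the binary under test takes it over
    /tmp/minikube-v1.26.0.910370752 start -p stopped-upgrade-299391 --memory=2200 --vm-driver=kvm2 --container-runtime=crio
    /tmp/minikube-v1.26.0.910370752 -p stopped-upgrade-299391 stop
    out/minikube-linux-amd64 start -p stopped-upgrade-299391 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio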

TestNetworkPlugins/group/false (3.14s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-833505 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-833505 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (104.981404ms)

-- stdout --
	* [false-833505] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19672
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19672-8777/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-8777/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0920 17:57:41.647277   52891 out.go:345] Setting OutFile to fd 1 ...
	I0920 17:57:41.647438   52891 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:57:41.647450   52891 out.go:358] Setting ErrFile to fd 2...
	I0920 17:57:41.647457   52891 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:57:41.647713   52891 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-8777/.minikube/bin
	I0920 17:57:41.648529   52891 out.go:352] Setting JSON to false
	I0920 17:57:41.649803   52891 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6005,"bootTime":1726849057,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 17:57:41.649930   52891 start.go:139] virtualization: kvm guest
	I0920 17:57:41.652052   52891 out.go:177] * [false-833505] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 17:57:41.653912   52891 notify.go:220] Checking for updates...
	I0920 17:57:41.653978   52891 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 17:57:41.655397   52891 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 17:57:41.657147   52891 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19672-8777/kubeconfig
	I0920 17:57:41.658592   52891 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-8777/.minikube
	I0920 17:57:41.659901   52891 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 17:57:41.661329   52891 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 17:57:41.662951   52891 config.go:182] Loaded profile config "kubernetes-upgrade-299508": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0920 17:57:41.663059   52891 config.go:182] Loaded profile config "offline-crio-312889": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 17:57:41.663155   52891 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 17:57:41.698988   52891 out.go:177] * Using the kvm2 driver based on user configuration
	I0920 17:57:41.700536   52891 start.go:297] selected driver: kvm2
	I0920 17:57:41.700556   52891 start.go:901] validating driver "kvm2" against <nil>
	I0920 17:57:41.700568   52891 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 17:57:41.702727   52891 out.go:201] 
	W0920 17:57:41.704054   52891 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0920 17:57:41.705465   52891 out.go:201] 

** /stderr **
E0920 17:57:43.197368   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:88: 
----------------------- debugLogs start: false-833505 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-833505

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-833505

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-833505

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-833505

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-833505

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-833505

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-833505

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-833505

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-833505

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-833505

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-833505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-833505"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-833505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-833505"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-833505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-833505"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-833505

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-833505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-833505"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-833505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-833505"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-833505" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-833505" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-833505" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-833505" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-833505" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-833505" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-833505" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-833505" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-833505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-833505"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-833505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-833505"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-833505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-833505"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-833505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-833505"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-833505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-833505"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-833505" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-833505" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-833505" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-833505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-833505"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-833505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-833505"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-833505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-833505"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-833505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-833505"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-833505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-833505"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-833505

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-833505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-833505"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-833505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-833505"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-833505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-833505"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-833505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-833505"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-833505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-833505"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-833505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-833505"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-833505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-833505"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-833505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-833505"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-833505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-833505"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-833505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-833505"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-833505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-833505"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-833505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-833505"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-833505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-833505"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-833505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-833505"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-833505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-833505"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-833505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-833505"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-833505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-833505"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-833505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-833505"

                                                
                                                
----------------------- debugLogs end: false-833505 [took: 2.882213811s] --------------------------------
helpers_test.go:175: Cleaning up "false-833505" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-833505
--- PASS: TestNetworkPlugins/group/false (3.14s)
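
The false variant is expected to be rejected up front: with the crio runtime minikube requires a CNI, so --cni=false fails flag validation with MK_USAGE (exit 14) before any VM is created, and the debugLogs dump above only confirms that no context or profile was ever produced. Sketch of the rejected invocation, as logged above:

    # rejected during validation, no machine is created
    out/minikube-linux-amd64 start -p false-833505 --memory=2048 --cni=false --driver=kvm2 --container-runtime=crio
    # -> X Exiting due to MK_USAGE: The "crio" container runtime requires CNI   (exit status 14)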

TestStoppedBinaryUpgrade/MinikubeLogs (0.93s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-299391
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.93s)

TestPause/serial/Start (94.04s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-421146 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
E0920 18:01:23.003708   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/functional-945494/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:01:39.931917   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/functional-945494/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-421146 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m34.044095834s)
--- PASS: TestPause/serial/Start (94.04s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-246858 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-246858 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (75.549056ms)
-- stdout --
	* [NoKubernetes-246858] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19672
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19672-8777/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-8777/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
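
As the stderr above spells out, --kubernetes-version cannot be combined with --no-kubernetes. A minimal sketch of invocations that do pass validation, reusing the profile and flags from this run (the explicit version is only illustrative):
$ minikube config unset kubernetes-version
$ out/minikube-linux-amd64 start -p NoKubernetes-246858 --no-kubernetes --driver=kvm2  --container-runtime=crio
$ out/minikube-linux-amd64 start -p NoKubernetes-246858 --kubernetes-version=v1.31.1 --driver=kvm2  --container-runtime=crio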

TestNoKubernetes/serial/StartWithK8s (56.08s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-246858 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-246858 --driver=kvm2  --container-runtime=crio: (55.802100785s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-246858 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (56.08s)

TestNetworkPlugins/group/auto/Start (102.04s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-833505 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-833505 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m42.040467699s)
--- PASS: TestNetworkPlugins/group/auto/Start (102.04s)

TestNetworkPlugins/group/kindnet/Start (79.2s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-833505 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-833505 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m19.197036694s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (79.20s)

TestNoKubernetes/serial/StartWithStopK8s (70.03s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-246858 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-246858 --no-kubernetes --driver=kvm2  --container-runtime=crio: (1m8.777520618s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-246858 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-246858 status -o json: exit status 2 (225.007447ms)
-- stdout --
	{"Name":"NoKubernetes-246858","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-246858
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-246858: (1.029146577s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (70.03s)
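
Note that the status call above exits non-zero even though the test passes: with --no-kubernetes the host stays Running while the kubelet and apiserver are reported Stopped, as the captured JSON shows. A quick sketch for inspecting the same state by hand with the profile from this run:
$ out/minikube-linux-amd64 -p NoKubernetes-246858 status -o json
$ echo $?   # non-zero while kubelet/apiserver are stopped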

TestNoKubernetes/serial/Start (28.83s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-246858 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-246858 --no-kubernetes --driver=kvm2  --container-runtime=crio: (28.825076406s)
--- PASS: TestNoKubernetes/serial/Start (28.83s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-lg5ll" [f0645782-f7c4-49d6-a5c7-ee92a46aa028] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.00664463s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-833505 "pgrep -a kubelet"
I0920 18:05:04.659121   15973 config.go:182] Loaded profile config "auto-833505": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

TestNetworkPlugins/group/auto/NetCatPod (12.3s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-833505 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-jccm9" [573759b1-c82f-4342-8dbc-822a37e7d1df] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-jccm9" [573759b1-c82f-4342-8dbc-822a37e7d1df] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.006103896s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.30s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-833505 "pgrep -a kubelet"
I0920 18:05:09.117878   15973 config.go:182] Loaded profile config "kindnet-833505": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.33s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.29s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-833505 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-zzrtf" [d87484a3-58e1-453a-977e-baced353c44e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-zzrtf" [d87484a3-58e1-453a-977e-baced353c44e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.005582161s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.29s)

TestNetworkPlugins/group/auto/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-833505 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)

TestNetworkPlugins/group/auto/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-833505 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

TestNetworkPlugins/group/auto/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-833505 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.17s)
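
The NetCatPod, DNS, Localhost and HairPin steps above form one connectivity sequence against the netcat deployment. A rough sketch of the same checks run by hand against this profile; kubectl wait is used here in place of the harness's own readiness polling:
$ kubectl --context auto-833505 replace --force -f testdata/netcat-deployment.yaml
$ kubectl --context auto-833505 wait --for=condition=Available deployment/netcat --timeout=15m
$ kubectl --context auto-833505 exec deployment/netcat -- nslookup kubernetes.default
$ kubectl --context auto-833505 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
$ kubectl --context auto-833505 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"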

TestNetworkPlugins/group/calico/Start (85.12s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-833505 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-833505 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m25.116306954s)
--- PASS: TestNetworkPlugins/group/calico/Start (85.12s)

TestNetworkPlugins/group/kindnet/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-833505 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.16s)

TestNetworkPlugins/group/kindnet/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-833505 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

TestNetworkPlugins/group/kindnet/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-833505 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.13s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.25s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-246858 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-246858 "sudo systemctl is-active --quiet service kubelet": exit status 1 (250.8415ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.25s)
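
The exit status here comes from systemctl: is-active exits non-zero (status 3 in the stderr above) when the unit is not active, which is exactly what a --no-kubernetes profile should report for the kubelet. A sketch of the same probe by hand:
$ out/minikube-linux-amd64 ssh -p NoKubernetes-246858 "sudo systemctl is-active --quiet service kubelet"
$ echo $?   # expected to be non-zero, since the kubelet is not running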

TestNoKubernetes/serial/ProfileList (1.68s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.68s)

TestNoKubernetes/serial/Stop (1.7s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-246858
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-246858: (1.69727411s)
--- PASS: TestNoKubernetes/serial/Stop (1.70s)

TestNoKubernetes/serial/StartNoArgs (41.64s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-246858 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-246858 --driver=kvm2  --container-runtime=crio: (41.640427951s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (41.64s)

TestNetworkPlugins/group/custom-flannel/Start (109.03s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-833505 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-833505 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m49.033553826s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (109.03s)

TestNetworkPlugins/group/enable-default-cni/Start (146.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-833505 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-833505 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (2m26.188263627s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (146.19s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-246858 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-246858 "sudo systemctl is-active --quiet service kubelet": exit status 1 (216.147535ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)

TestNetworkPlugins/group/flannel/Start (145.17s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-833505 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
E0920 18:06:39.932206   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/functional-945494/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-833505 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (2m25.169411963s)
--- PASS: TestNetworkPlugins/group/flannel/Start (145.17s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-wwwqj" [e73c160c-03ea-4402-9c35-d634c4452f57] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005334434s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.2s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-833505 "pgrep -a kubelet"
I0920 18:06:50.363979   15973 config.go:182] Loaded profile config "calico-833505": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.20s)

TestNetworkPlugins/group/calico/NetCatPod (11.26s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-833505 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-js788" [ba9903e1-46b0-4b10-8d46-6c09bad38cc1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-js788" [ba9903e1-46b0-4b10-8d46-6c09bad38cc1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.004562482s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.26s)

TestNetworkPlugins/group/calico/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-833505 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.16s)

TestNetworkPlugins/group/calico/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-833505 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)

TestNetworkPlugins/group/calico/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-833505 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.15s)

TestNetworkPlugins/group/bridge/Start (69.7s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-833505 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-833505 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m9.701491454s)
--- PASS: TestNetworkPlugins/group/bridge/Start (69.70s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-833505 "pgrep -a kubelet"
I0920 18:07:23.290071   15973 config.go:182] Loaded profile config "custom-flannel-833505": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.24s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-833505 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-v8jxd" [cb3be570-619d-4b93-83dc-7aa16d3da3b8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-v8jxd" [cb3be570-619d-4b93-83dc-7aa16d3da3b8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.004008028s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.24s)

TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-833505 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-833505 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-833505 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-833505 "pgrep -a kubelet"
I0920 18:08:03.847101   15973 config.go:182] Loaded profile config "enable-default-cni-833505": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.26s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (14.29s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-833505 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-dgv5p" [39f2e572-6047-4f6f-904e-f2be2b4b3dc6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-dgv5p" [39f2e572-6047-4f6f-904e-f2be2b4b3dc6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 14.006270483s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (14.29s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-833505 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-833505 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-833505 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-833505 "pgrep -a kubelet"
I0920 18:08:29.976486   15973 config.go:182] Loaded profile config "bridge-833505": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.23s)

TestNetworkPlugins/group/bridge/NetCatPod (12.28s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-833505 replace --force -f testdata/netcat-deployment.yaml
I0920 18:08:30.246292   15973 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-bvsrm" [c799b274-6331-472c-8d27-efb839dcff36] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-bvsrm" [c799b274-6331-472c-8d27-efb839dcff36] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.00436741s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.28s)

TestStartStop/group/no-preload/serial/FirstStart (78.71s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-956403 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-956403 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (1m18.705579978s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (78.71s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-2hglg" [e44294b5-e4c8-484e-8761-f017fb43a5fb] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003501609s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/bridge/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-833505 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

TestNetworkPlugins/group/bridge/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-833505 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

TestNetworkPlugins/group/bridge/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-833505 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-833505 "pgrep -a kubelet"
I0920 18:08:45.927611   15973 config.go:182] Loaded profile config "flannel-833505": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

TestNetworkPlugins/group/flannel/NetCatPod (11.24s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-833505 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-6c8f9" [68eac507-1e59-4657-9ffd-f2a3de4ab085] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-6c8f9" [68eac507-1e59-4657-9ffd-f2a3de4ab085] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.005119588s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.24s)

TestNetworkPlugins/group/flannel/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-833505 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.18s)

TestNetworkPlugins/group/flannel/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-833505 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

TestNetworkPlugins/group/flannel/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-833505 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)
E0920 18:38:39.707707   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/flannel-833505/client.crt: no such file or directory" logger="UnhandledError"

TestStartStop/group/embed-certs/serial/FirstStart (92.12s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-768431 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-768431 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (1m32.117067592s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (92.12s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (97.65s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-553719 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-553719 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (1m37.645628893s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (97.65s)

TestStartStop/group/no-preload/serial/DeployApp (10.37s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-956403 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [a15af88e-18f6-4284-b32a-0cb1b432b683] Pending
helpers_test.go:344: "busybox" [a15af88e-18f6-4284-b32a-0cb1b432b683] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [a15af88e-18f6-4284-b32a-0cb1b432b683] Running
E0920 18:10:02.780249   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/kindnet-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:10:02.786663   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/kindnet-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:10:02.798058   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/kindnet-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:10:02.819426   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/kindnet-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:10:02.860883   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/kindnet-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:10:02.942362   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/kindnet-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:10:03.103782   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/kindnet-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:10:03.425853   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/kindnet-833505/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.080159846s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-956403 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.37s)
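
The busybox smoke test above can be reproduced by hand against the same context; kubectl wait stands in for the harness's readiness polling (a sketch, using the manifest from the test data):
$ kubectl --context no-preload-956403 create -f testdata/busybox.yaml
$ kubectl --context no-preload-956403 wait --for=condition=Ready pod/busybox --timeout=8m
$ kubectl --context no-preload-956403 exec busybox -- /bin/sh -c "ulimit -n"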

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.36s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-956403 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0920 18:10:04.067512   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/kindnet-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:10:04.941956   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/auto-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:10:04.948372   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/auto-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:10:04.959855   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/auto-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:10:04.981362   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/auto-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:10:05.022903   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/auto-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:10:05.104424   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/auto-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:10:05.265989   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/auto-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:10:05.349581   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/kindnet-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:10:05.588209   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/auto-833505/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-956403 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.27429146s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-956403 describe deploy/metrics-server -n kube-system
E0920 18:10:06.229858   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/auto-833505/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.36s)
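
Here metrics-server is enabled with its image and registry overridden (the registry is deliberately pointed at fake.domain), and the follow-up describe inspects the resulting deployment. The same two steps by hand, as a sketch:
$ out/minikube-linux-amd64 addons enable metrics-server -p no-preload-956403 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
$ kubectl --context no-preload-956403 describe deploy/metrics-server -n kube-system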

TestStartStop/group/embed-certs/serial/DeployApp (11.27s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-768431 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [cd3b2c28-6095-4ce5-8d7f-8cac95e59e68] Pending
helpers_test.go:344: "busybox" [cd3b2c28-6095-4ce5-8d7f-8cac95e59e68] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [cd3b2c28-6095-4ce5-8d7f-8cac95e59e68] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.00428621s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-768431 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.27s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.96s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-768431 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-768431 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.96s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.29s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-553719 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [03376c58-8368-41cb-8d71-ec5f2ff84ab5] Pending
helpers_test.go:344: "busybox" [03376c58-8368-41cb-8d71-ec5f2ff84ab5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [03376c58-8368-41cb-8d71-ec5f2ff84ab5] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 11.004481653s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-553719 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.29s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.02s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-553719 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-553719 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.02s)

TestStartStop/group/no-preload/serial/SecondStart (643.21s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-956403 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-956403 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (10m42.957551666s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-956403 -n no-preload-956403
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (643.21s)

TestStartStop/group/embed-certs/serial/SecondStart (575.64s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-768431 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
E0920 18:13:14.378424   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/enable-default-cni-833505/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-768431 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (9m35.395949918s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-768431 -n embed-certs-768431
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (575.64s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (540.81s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-553719 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
E0920 18:13:39.707198   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/flannel-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:13:39.713674   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/flannel-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:13:39.725023   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/flannel-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:13:39.746489   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/flannel-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:13:39.787882   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/flannel-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:13:39.869388   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/flannel-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:13:40.031253   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/flannel-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:13:40.353154   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/flannel-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:13:40.487732   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/bridge-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:13:40.994555   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/flannel-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:13:42.275972   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/flannel-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:13:44.837921   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/flannel-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:13:45.101757   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/enable-default-cni-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:13:45.458468   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/custom-flannel-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:13:49.960180   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/flannel-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:13:50.729704   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/bridge-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:14:00.201611   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/flannel-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:14:06.273591   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-553719 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (9m0.559588825s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-553719 -n default-k8s-diff-port-553719
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (540.81s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (1.31s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-744025 --alsologtostderr -v=3
E0920 18:14:11.211437   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/bridge-833505/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-744025 --alsologtostderr -v=3: (1.309510415s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (1.31s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-744025 -n old-k8s-version-744025
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-744025 -n old-k8s-version-744025: exit status 7 (63.314606ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-744025 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)
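The exit status 7 above corresponds to the Stopped host state shown on stdout; the test tolerates it and then enables the dashboard addon against the stopped profile. A hedged sketch of the same sequence, using only commands from the log (the || handling is illustrative, not the test's own code):

    # a stopped host makes status exit non-zero (7 here); the test treats that as acceptable
    out/minikube-linux-amd64 status --format='{{.Host}}' -p old-k8s-version-744025 -n old-k8s-version-744025 || echo "status exited $? (may be ok while Stopped)"
    # addons can still be enabled while the cluster is stopped
    out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-744025 --images=MetricsScraper=registry.k8s.io/echoserver:1.4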

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (52.27s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-803958 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
E0920 18:38:04.124864   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/enable-default-cni-833505/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:38:30.234781   15973 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/bridge-833505/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-803958 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (52.270715099s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (52.27s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.1s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-803958 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-803958 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.102693622s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.10s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (10.64s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-803958 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-803958 --alsologtostderr -v=3: (10.642699369s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.64s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-803958 -n newest-cni-803958
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-803958 -n newest-cni-803958: exit status 7 (72.165261ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-803958 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (36.15s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-803958 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-803958 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (35.792918893s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-803958 -n newest-cni-803958
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (36.15s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-803958 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (4.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-803958 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 pause -p newest-cni-803958 --alsologtostderr -v=1: (1.847227085s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-803958 -n newest-cni-803958
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-803958 -n newest-cni-803958: exit status 2 (354.844745ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-803958 -n newest-cni-803958
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-803958 -n newest-cni-803958: exit status 2 (250.426883ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-803958 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 unpause -p newest-cni-803958 --alsologtostderr -v=1: (1.007102015s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-803958 -n newest-cni-803958
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-803958 -n newest-cni-803958
--- PASS: TestStartStop/group/newest-cni/serial/Pause (4.24s)
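The Pause step cycles pause → status → unpause → status: while paused, {{.APIServer}} reads Paused and {{.Kubelet}} reads Stopped, each with exit status 2, and both checks succeed again after unpause. A reproduction sketch built only from commands copied out of the log:

    out/minikube-linux-amd64 pause -p newest-cni-803958 --alsologtostderr -v=1
    # while paused, these report Paused / Stopped and exit with status 2
    out/minikube-linux-amd64 status --format='{{.APIServer}}' -p newest-cni-803958 -n newest-cni-803958 || true
    out/minikube-linux-amd64 status --format='{{.Kubelet}}' -p newest-cni-803958 -n newest-cni-803958 || true
    out/minikube-linux-amd64 unpause -p newest-cni-803958 --alsologtostderr -v=1
    # after unpausing, the same checks exit 0
    out/minikube-linux-amd64 status --format='{{.APIServer}}' -p newest-cni-803958 -n newest-cni-803958
    out/minikube-linux-amd64 status --format='{{.Kubelet}}' -p newest-cni-803958 -n newest-cni-803958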

                                                
                                    

Test skip (37/311)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.31.1/cached-images 0
15 TestDownloadOnly/v1.31.1/binaries 0
16 TestDownloadOnly/v1.31.1/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0
37 TestAddons/parallel/Olm 0
47 TestDockerFlags 0
50 TestDockerEnvContainerd 0
52 TestHyperKitDriverInstallOrUpdate 0
53 TestHyperkitDriverSkipUpgrade 0
104 TestFunctional/parallel/DockerEnv 0
105 TestFunctional/parallel/PodmanEnv 0
113 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
114 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
115 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
116 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
117 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
118 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
119 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
120 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
153 TestGvisorAddon 0
175 TestImageBuild 0
202 TestKicCustomNetwork 0
203 TestKicExistingNetwork 0
204 TestKicCustomSubnet 0
205 TestKicStaticIP 0
237 TestChangeNoneUser 0
240 TestScheduledStopWindows 0
242 TestSkaffold 0
244 TestInsufficientStorage 0
248 TestMissingContainerUpgrade 0
252 TestNetworkPlugins/group/kubenet 2.96
261 TestNetworkPlugins/group/cilium 3.36
268 TestStartStop/group/disable-driver-mounts 0.19
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:817: skipping: crio not supported
--- SKIP: TestAddons/serial/Volcano (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:438: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (2.96s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-833505 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-833505

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-833505

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-833505

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-833505

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-833505

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-833505

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-833505

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-833505

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-833505

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-833505

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-833505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-833505"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-833505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-833505"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-833505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-833505"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-833505

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-833505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-833505"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-833505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-833505"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-833505" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-833505" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-833505" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-833505" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-833505" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-833505" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-833505" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-833505" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-833505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-833505"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-833505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-833505"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-833505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-833505"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-833505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-833505"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-833505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-833505"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-833505" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-833505" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-833505" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-833505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-833505"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-833505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-833505"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-833505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-833505"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-833505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-833505"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-833505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-833505"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-833505

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-833505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-833505"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-833505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-833505"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-833505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-833505"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-833505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-833505"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-833505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-833505"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-833505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-833505"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-833505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-833505"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-833505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-833505"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-833505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-833505"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-833505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-833505"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-833505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-833505"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-833505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-833505"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-833505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-833505"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-833505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-833505"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-833505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-833505"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-833505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-833505"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-833505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-833505"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-833505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-833505"

                                                
                                                
----------------------- debugLogs end: kubenet-833505 [took: 2.823477573s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-833505" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-833505
--- SKIP: TestNetworkPlugins/group/kubenet (2.96s)
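The repeated "context was not found" and "Profile ... not found" messages in the debugLogs above are expected: the kubenet plugin test is skipped before any cluster is created, yet the log collector still probes the nonexistent kubenet-833505 profile, so every kubectl and host command fails. A quick local check, assuming the commands the log itself suggests (the get-contexts probe is an added illustration, not from the test):

    # the profile was never created, so it will not appear here
    out/minikube-linux-amd64 profile list
    # and kubectl has no matching context
    kubectl config get-contexts kubenet-833505 || echo "no such context (expected for a skipped plugin)"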

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-833505 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-833505

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-833505

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-833505

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-833505

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-833505

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-833505

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-833505

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-833505

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-833505

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-833505

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-833505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-833505"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-833505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-833505"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-833505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-833505"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-833505

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-833505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-833505"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-833505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-833505"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-833505" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-833505" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-833505" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-833505" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-833505" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-833505" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-833505" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-833505" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-833505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-833505"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-833505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-833505"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-833505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-833505"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-833505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-833505"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-833505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-833505"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-833505

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-833505

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-833505" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-833505" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-833505

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-833505

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-833505" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-833505" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-833505" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-833505" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-833505" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-833505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-833505"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-833505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-833505"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-833505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-833505"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-833505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-833505"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-833505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-833505"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
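The kubectl config dump above is empty: no clusters, no contexts, and current-context is "". That is why every kubectl-based collector in this debug pass reports "context was not found" or "does not exist": the cilium-833505 profile was never started, so nothing ever wrote a kubeconfig entry for it. As a minimal sketch of how that context would normally come into existence, following the hints the log itself prints (only meaningful if you actually want to bring this profile up):

  # list known minikube profiles; cilium-833505 is absent in this run
  minikube profile list

  # starting the profile creates the cluster and writes its kubeconfig entry
  minikube start -p cilium-833505

  # the context should then exist and be selectable
  kubectl config get-contexts
  kubectl config use-context cilium-833505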

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-833505

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-833505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-833505"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-833505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-833505"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-833505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-833505"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-833505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-833505"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-833505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-833505"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-833505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-833505"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-833505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-833505"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-833505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-833505"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-833505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-833505"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-833505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-833505"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-833505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-833505"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-833505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-833505"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-833505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-833505"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-833505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-833505"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-833505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-833505"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-833505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-833505"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-833505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-833505"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-833505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-833505"

                                                
                                                
----------------------- debugLogs end: cilium-833505 [took: 3.220469468s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-833505" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-833505
--- SKIP: TestNetworkPlugins/group/cilium (3.36s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-739804" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-739804
--- SKIP: TestStartStop/group/disable-driver-mounts (0.19s)
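This group is gated on the VirtualBox driver, so on this KVM job it is skipped before any cluster is created and only the placeholder profile is cleaned up. As a rough sketch of the manual equivalent on a host that does have VirtualBox (the exact flags the test passes are an assumption, not taken from this log):

  # hypothetical manual start matching the test's intent
  minikube start -p disable-driver-mounts-739804 --driver=virtualbox --disable-driver-mounts

  # tear down afterwards, mirroring the cleanup step above
  minikube delete -p disable-driver-mounts-739804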

                                                
                                    